text: string (length 454 to 608k)
url: string (length 17 to 896)
dump: string (91 classes)
source: string (1 value)
word_count: int64 (101 to 114k)
flesch_reading_ease: float64 (50 to 104)
The following examples demonstrate the problem:

== bad.py ==
from package import util as local_util
import someother
import util
== Reimport 'util' (imported line 3) ==

== good.py ==
from package import util as localutil
import someother
import util
== No (relevant) errors ==

Basically, the problem comes down to the non-deterministic dict order and the use of context.values() in the get_first_import() function. Changing context.values() => context.asList() fixes this problem.

Ticket #22277 - created on 2010/03/23 by Shahms King, latest update on 2013/04/17
comment - 2013/02/26 22:37, written by sthenault: Fixed by #112667
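A minimal sketch of the failure mode, in Python (the names and data structures here are illustrative, not pylint's actual internals): if the already-seen imports are kept in a dict, which "first import" the checker finds depends on dict iteration order, which was non-deterministic on the Python versions of that era, while an ordered list gives a stable answer.

# Illustrative only: dict iteration order vs. an ordered list.
seen_imports = {'local_util': 1, 'someother': 2, 'util': 3}  # name -> line number

def first_import_from_dict():
    # arbitrary order before Python 3.7, so results varied between runs
    return next(iter(seen_imports.items()))

def first_import_from_list(entries):
    # deterministic: entries are kept in the order they appeared,
    # analogous to changing context.values() to context.asList()
    return entries[0]

print(first_import_from_list(sorted(seen_imports.items(), key=lambda e: e[1])))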
https://www.logilab.org/ticket/22277
CC-MAIN-2016-44
refinedweb
132
54.32
Post is a simple class with four properties. The setter for $id isn't actually generated. Doctrine populates the $id instance variable directly in the entity hydration phase. We will see later how we delegate the ID generation to the DBMS. Doctrine annotations are imported from the \Doctrine\ORM\Mapping namespace with use statements. They are used in DocBlocks to add mapping information to the class and its properties. DocBlocks are just a special kind of comment starting with /**. The @Entity annotation is used in the class-level DocBlock to specify that this class is an entity class. The most important attribute of this annotation is repositoryClass. It allows specifying a custom ...
https://www.safaribooksonline.com/library/view/persistence-in-php/9781782164104/ch02s03.html
CC-MAIN-2018-34
refinedweb
116
52.66
A good Bayes's theorem problem should (1) pose an interesting question that seems hard to solve directly, (2) be easier to solve with Bayes's theorem than without it, and (3) have some element of surprise, or at least a non-obvious outcome. Several years ago I posted some of my favorites in this article. Last week I posted a problem one of my students posed (Why is My Cat Orange?). This week I have another student-written problem and two related problems that I wrote. I'll post solutions later in the week.

The sock drawer problem. Posed by? [For this one, you can compute an approximate solution assuming socks are selected with replacement, or an exact solution assuming, more realistically, that they are selected without replacement.]

The Alien Blaster problem. In?

The Skeet Shooting problem. At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. After 25 shots, they were tied, sending the match into sudden death. In each round of sudden death, each competitor shoots at two targets. In the first three rounds, Rhode and Wei hit the same number of targets. Finally, in the fourth round, Rhode hit more targets, so she won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games. Based on this information, should we infer that Rhode and Wei had an unusually good or bad day? As background information, you can assume that anyone in the Olympic final has about the same probability of hitting 13, 14, 15, or 16 out of 25 targets.

Solutions

For the sock problem, we have to compute the likelihood of the data (getting a pair) under each hypothesis. If it's Drawer 1, with 40 white socks and 10 black, the probability of getting a pair is approximately (4/5)² + (1/5)². If it's Drawer 2, with 20 white and 30 black socks, the probability of a pair is (2/5)² + (3/5)². In both cases I am pretending that we replace the first sock (and stir) before choosing the second, so the result is only approximate, but it is pretty close. I'll leave the exact solution as an exercise :) Now we can fill in the Bayesian update worksheet. In general, the probability of getting a pair is highest if the drawer contains only one color of sock, and lowest if the proportion is 50:50. So getting a pair is evidence that the drawer is more likely to have a high (or low) proportion of one color, and less likely to be balanced.

We can write a more general solution using a Jupyter notebook. I'll represent the sock drawers with Hist objects, defined in the thinkbayes2 library:

In [2]: drawer1 = Hist(dict(W=40, B=10), label='Drawer 1') drawer2 = Hist(dict(W=20, B=30), label='Drawer 2') drawer1.Print()
B 10
W 40

Now I can make a Pmf that represents the two hypotheses:

In [3]: pmf = Pmf([drawer1, drawer2]) pmf.Print()
Drawer 2 0.5
Drawer 1 0.5

This function computes the likelihood of the data for a given hypothesis:

In [4]: def Likelihood(data, hypo): """Likelihood of the data under the hypothesis. data: string 'same' or 'different' hypo: Hist object with the number of each color returns: float likelihood """ probs = Pmf(hypo) prob_same = probs['W']**2 + probs['B']**2 if data == 'same': return prob_same else: return 1 - prob_same

Now we can update pmf with these likelihoods:

In [5]: data = 'same' pmf[drawer1] *= Likelihood(data, drawer1) pmf[drawer2] *= Likelihood(data, drawer2) pmf.Normalize()
Out[5]: 0.6000000000000001

The return value from Normalize is the total probability of the data, the denominator of Bayes's theorem, also known as the normalizing constant. And here's the posterior distribution:

In [6]: pmf.Print()
Drawer 2 0.433333333333
Drawer 1 0.566666666667

The result is the same as what we got by hand.
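As a sketch of the exact solution left as an exercise above: without replacement, the probability of a matching pair from a drawer with w white and b black socks is w/(w+b) · (w-1)/(w+b-1) + b/(w+b) · (b-1)/(w+b-1). A small standalone check (this code is mine, not from the post, and doesn't use thinkbayes2):

# Exact without-replacement likelihood of drawing a matching pair.
from fractions import Fraction

def prob_pair(white, black):
    total = white + black
    p_ww = Fraction(white, total) * Fraction(white - 1, total - 1)
    p_bb = Fraction(black, total) * Fraction(black - 1, total - 1)
    return p_ww + p_bb

like1 = prob_pair(40, 10)       # Drawer 1: 33/49, about 0.673 (approximation: 0.68)
like2 = prob_pair(20, 30)       # Drawer 2: 25/49, about 0.510 (approximation: 0.52)
print(like1 / (like1 + like2))  # exact posterior for Drawer 1: 33/58, about 0.569

So the exact posterior for Drawer 1 (about 0.569) is very close to the approximate 0.567 computed above.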
http://allendowney.blogspot.com/2016/10/in-my-bayesian-statistics-class-this.html
CC-MAIN-2019-09
refinedweb
678
60.14
* Image not getting painted. Ken Blair Ranch Hand Joined: Jul 15, 2003 Posts: 1078 posted Aug 05, 2003 04:36:00 0 I have a JFrame in which I need to have a TIFF image displayed and scaled. When a component gains focus I need to zoom in on a specific area of that image. For lack of a better method I simply created two PlanarImages, one scaled to the appropriate size for the zoom out and one scaled to the size I would need when zoomed in, and then converted those to BufferedImage. I use the BufferedImage in an ImageIcon which is in turn used in a JLabel which is in turn used in a JFrame. When I need to zoom in I get a subimage of the BufferedImage that has the appropriate scaling and set that as the ImageIcon's image. It works... except for one thing: when I do the zoom in, the image doesn't appear; well, it will appear as soon as I change the size of the frame at all with my mouse. I'm not sure what I'm doing wrong; my code is getting called in the event handling thread and should be thread safe, and I don't know why it doesn't wait for it to load. I tried using a MediaTracker and waitForAll on the image but that doesn't help. public void zoomTo(int x, int y, int width, int height) { icon.setImage((BufferedImage)zoomImage.getSubimage(x, y, width, height)); this.setSize(width + 10, height + 40); } Nathan Pruett Bartender Joined: Oct 18, 2000 Posts: 4121 I like... posted Aug 05, 2003 08:40:00 0 Call validate() or invalidate() on the component that is drawing the image... if the images are different sizes the layout manager may need to lay out the component again. validate() and invalidate() force the layout manager to run again. -Nate Write once, run anywhere, because there's nowhere to hide! - /. A.C. Ken Blair Ranch Hand Joined: Jul 15, 2003 Posts: 1078 posted Aug 05, 2003 11:50:00 0 Tried that. Actually I tried revalidate() before, but now I've tried validate() and invalidate() and they don't seem to help. I changed the alignment of the icon to the top left because I discovered that it was getting painted, just in a part of the frame that isn't showing on the screen, apparently. Even so, most of the image shows up but the far right 50 or so pixels are gray for some unknown reason. Even as I change the image when each field gains focus, that blank area is still there. The thing is, as soon as I touch the resize with my mouse it works, and will work perfectly the rest of the time the application is running. Hell, even if I resize the big zoomed-out picture in the first place and then go through the fields, which makes it change to the zoomed-in versions, it works perfectly. I don't get it. public void zoomTo(final int x, final int y, final int width, final int height) { Runnable updateImage = new Runnable() { public void run() { setSize(width + 10, height + 35); icon.setImage((BufferedImage)zoomImage.getSubimage(x, y, width, height)); label.invalidate(); repaint(); } }; SwingUtilities.invokeLater(updateImage); } Another thing to add: I tried removing the alignment to the top left in the initialization and the image doesn't show up again. However, I added code to the above, just before the invalidate, that changes the label's size to the appropriate size (as it seemed to me that the label wasn't getting its size updated before the layout and paint occurred) and it works, except I STILL have that 50-100 pixels of blank space until I resize it at some point, even if it's before the point at which I zoomed in on the images! Oh...
and that's using a size of 500x100, and every one of my images that uses that width/height has the problem; however, the smaller ones that are 300x100 display perfectly. [ August 05, 2003: Message edited by: Ken Blair ] Nathan Pruett Bartender Joined: Oct 18, 2000 Posts: 4121 I like... posted Aug 06, 2003 09:28:00 0 I really can't figure out why you are having this problem... Take a look at the code I used to try and re-create the problem. It seems to be working OK... Is this something like what you were trying to do? import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.geom.AffineTransform; import java.awt.image.AffineTransformOp; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.ArrayList; import javax.media.jai.*; import javax.swing.*; import com.sun.media.jai.codec.*; public class ZoomTiff extends JFrame { private BufferedImage image; private JLabel display; private JButton zoom100; private JButton zoom50; private JButton zoom25; public ZoomTiff() { super( "ZoomTIFF" ); display = new JLabel(); display.setHorizontalAlignment( JLabel.CENTER ); display.setVerticalAlignment( JLabel.CENTER ); try { SeekableStream in = new FileSeekableStream( new File( "aeyakaversion3.tif" ) ); TIFFDecodeParam param = new TIFFDecodeParam(); // decode the TIFF pages into a list of PlanarImages ImageDecoder decoder = ImageCodec.createImageDecoder( "tiff", in, param ); ArrayList images = new ArrayList(); for ( int page = 0; page < decoder.getNumPages(); page++ ) { images.add( PlanarImage.wrapRenderedImage( decoder.decodeAsRenderedImage( page ) ) ); } if ( images.size() > 0 ) { PlanarImage pImage = (PlanarImage)images.get( 0 ); image = pImage.getAsBufferedImage(); ImageIcon i = new ImageIcon( image ); display.setIcon( i ); } } catch (IOException e) { e.printStackTrace(); } getContentPane().add( display ); JPanel p = new JPanel(); zoom100 = new JButton( "100%" ); zoom100.addActionListener( new ZoomAction( 100 ) ); zoom50 = new JButton( "50%" ); zoom50.addActionListener( new ZoomAction( 50 ) ); zoom25 = new JButton( "25%" ); zoom25.addActionListener( new ZoomAction( 25 ) ); p.setLayout( new BoxLayout( p, BoxLayout.Y_AXIS ) ); p.add( new JLabel( "Zoom" ) ); p.add( zoom100 ); p.add( zoom50 ); p.add( zoom25 ); getContentPane().add( p, BorderLayout.WEST ); } public static void main( String[] args ) { JFrame frame = new ZoomTiff(); frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE ); frame.pack(); frame.show(); } private class ZoomAction implements ActionListener { private float percent; public ZoomAction( int percent ) { this.percent = (float)percent / 100F; } public void actionPerformed( ActionEvent e ) { Runnable r = new Runnable() { public void run() { int w = image.getWidth(); int h = image.getHeight(); Point center = new Point( w / 2, h / 2 ); w = Math.round( w * percent ); h = Math.round( h * percent ); BufferedImage zoomImage = image.getSubimage( center.x - ( w / 2), center.y - ( h / 2 ), w, h ); BufferedImage scaledImage = new BufferedImage( image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB ); Graphics g = scaledImage.getGraphics(); g.setColor( Color.green ); g.fillRect( 0, 0, scaledImage.getWidth(), scaledImage.getHeight() ); AffineTransform transform = new AffineTransform(); transform.setToScale( ( 1 / percent ), ( 1 / percent ) ); AffineTransformOp scale = new AffineTransformOp( transform, null ); scale.filter( zoomImage, scaledImage ); ImageIcon zoomIcon = new ImageIcon( scaledImage ); display.setIcon( zoomIcon ); } }; SwingUtilities.invokeLater( r ); } } } Ken Blair Ranch Hand Joined: Jul 15, 2003 Posts: 1078 posted Aug 06, 2003 19:12:00 0 import java.awt.BorderLayout;
import java.awt.image.BufferedImage; import java.awt.image.renderable.ParameterBlock; import javax.media.jai.InterpolationBilinear; import javax.media.jai.JAI; import javax.media.jai.PlanarImage; import javax.swing.ImageIcon; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.SwingUtilities; public class ImageViewer extends JFrame { private BufferedImage zoomImage; private BufferedImage originalImage; private ImageIcon icon; private JLabel label; private int prefWidth; private int prefHeight; public ImageViewer(String fileName) { super("Bill Image"); initComponents(fileName, 800, 600); } public void zoomOut() { Runnable updateImage = new Runnable() { public void run() { setSize(originalImage.getWidth() + 10, originalImage.getHeight() + 35); label.setSize(originalImage.getWidth(), originalImage.getHeight()); icon.setImage(originalImage); label.invalidate(); repaint(); } }; SwingUtilities.invokeLater(updateImage); } // Focuses on the area at x,y using the specified width & height. public void zoomTo(final int x, final int y, int width, int height) { final int width2, height2; // Ensure the specified subimage is within the image's bounds. if (x + width > zoomImage.getWidth()) { width2 = zoomImage.getWidth() - x; } else { width2 = width; } if (y + height > zoomImage.getHeight()) { height2 = zoomImage.getHeight() - y; } else { height2 = height; } Runnable updateImage = new Runnable() { public void run() { setSize(width2 + 10, height2 + 35); label.setSize(width2, height2); icon.setImage((BufferedImage)zoomImage.getSubimage(x, y, width2, height2)); label.invalidate(); repaint(); } }; SwingUtilities.invokeLater(updateImage); } public void setPrefWidth(int i) { this.prefWidth = i; } public void setPrefHeight(int i) { this.prefHeight = i; } private void initComponents(String fileName, int prefWidth, int prefHeight) { float scaleX; float scaleY; this.prefWidth = prefWidth; this.prefHeight = prefHeight; PlanarImage source = JAI.create("fileload", fileName); ParameterBlock params = new ParameterBlock(); InterpolationBilinear inter = new InterpolationBilinear(); params.addSource(source); params.add(0.5F); params.add(0.5F); params.add(0.0F); params.add(0.0F); params.add(inter); PlanarImage scaledImage = JAI.create("scale", params); zoomImage = scaledImage.getAsBufferedImage(); scaledImage = null; scaleX = (float) prefWidth / source.getWidth(); scaleY = (float) prefHeight / source.getHeight(); if (scaleX < scaleY) { scaleY = scaleX; } else { scaleX = scaleY; } params.set(scaleX, 0); params.set(scaleY, 1); scaledImage = JAI.create("scale", params); source = null; originalImage = scaledImage.getAsBufferedImage(); scaledImage = null; icon = new ImageIcon(originalImage); label = new JLabel(icon); this.setSize(originalImage.getWidth() + 10, originalImage.getHeight() + 40); getContentPane().add(label); } } I've noticed that the 'blank' part is the area of the frame that wasn't viewable before the resize. What I mean is that initially the frame ends up at 460x600 due to the scaling; when I resize, it goes to 500x100. Those 40 pixels on the right are the 'blank' area. Also, when I go back to 460x600 from the 500x100 (when I zoom out), everything below the 100 pixels is 'blank' as well. To be honest I don't really understand your code. Two months ago I'd never programmed before, so I'm kind of new and I don't really know much about imaging. What am I doing wrong? Also, is there a need/reason to go through all the hassle of what you did to generate the images?
[ August 06, 2003: Message edited by: Ken Blair ]
http://www.coderanch.com/t/336231/GUI/java/Image-painted
CC-MAIN-2014-52
refinedweb
1,651
60.61
Prime Number Program in Java In this example, we will see a Java program to find prime numbers. A number is called a prime number if it is greater than 1 and divisible only by 1 and itself. For example, 2, 3, 5, 7, 11, etc. are prime numbers. Program:

public class Main {
    public static void main(String args[]) {
        int i, m = 0, flag = 0;
        int n = 17; // the number to be checked
        m = n / 2;
        if (n == 0 || n == 1) {
            System.out.println(n + " is not a prime number");
        } else {
            for (i = 2; i <= m; i++) {
                if (n % i == 0) {
                    System.out.println(n + " is not a prime number");
                    flag = 1;
                    break;
                }
            }
            if (flag == 0) {
                System.out.println(n + " is a prime number");
            }
        } // end of else
    }
}
https://www.phptpoint.com/prime-number-program-in-java/
CC-MAIN-2021-39
refinedweb
130
76.22
Why use non-member begin and end functions in C++11? Consider the case when you have a library that contains the class: class SpecialArray; It has 2 methods: int SpecialArray::arraySize(); int SpecialArray::valueAt(int); To iterate over its values with auto i = v.begin(); auto e = v.end(); you would need to inherit from this class and define begin() and end() methods. But if you always use auto i = begin(v); auto e = end(v); you can do this: template <> SpecialArrayIterator begin(SpecialArray & arr) { return SpecialArrayIterator(&arr, 0); } template <> SpecialArrayIterator end(SpecialArray & arr) { return SpecialArrayIterator(&arr, arr.arraySize()); } where SpecialArrayIterator is something like: class SpecialArrayIterator { SpecialArrayIterator(SpecialArray * p, int i) :index(i), parray(p) { } SpecialArrayIterator operator ++(); SpecialArrayIterator operator --(); SpecialArrayIterator operator ++(int); SpecialArrayIterator operator --(int); int operator *() { return parray->valueAt(index); } bool operator ==(SpecialArray &); // etc private: SpecialArray *parray; int index; // etc }; Now i and e can legally be used for iteration and for accessing the values of SpecialArray. Every standard container has a begin and end method for returning iterators for that container. However, C++11 has apparently introduced free functions called std::begin and std::end which call the begin and end member functions. So, instead of writing auto i = v.begin(); auto e = v.end(); you'd write using std::begin; using std::end; auto i = begin(v); auto e = end(v); In his talk, Writing Modern C++, Herb Sutter says that you should always use the free functions now when you want the begin or end iterator for a container. However, he does not go into detail as to why you would want to. Looking at the code, it saves you all of one character. So, as far as the standard containers go, the free functions seem to be completely useless. Herb Sutter indicated that there were benefits for non-standard containers, but again, he didn't go into detail. So, the question is what exactly do the free function versions of std::begin and std::end do beyond calling their corresponding member function versions, and why would you want to use them? To answer your question, the free functions begin() and end() by default do nothing more than call the container's member .begin() and .end() functions. From <iterator>, included automatically when you use any of the standard containers like <vector>, <list>, etc., you get: template< class C > auto begin( C& c ) -> decltype(c.begin()); template< class C > auto begin( const C& c ) -> decltype(c.begin()); The second part of your question is why prefer the free functions if all they do is call the member functions anyway. That really depends on what kind of object v is in your example code. If the type of v is a standard container type, like vector<T> v; then it doesn't matter if you use the free or member functions, they do the same thing. If your object v is more generic, like in the following code: template <class T> void foo(T& v) { auto i = v.begin(); auto e = v.end(); for(; i != e; i++) { /* .. do something with i .. */ } } Then using the member functions breaks your code for T = C arrays, C strings, enums, etc. By using the non-member functions, you advertise a more generic interface that people can easily extend. By using the free function interface: template <class T> void foo(T& v) { auto i = begin(v); auto e = end(v); for(; i != e; i++) { /* .. do something with i .. 
*/ } } The code now works with T = C arrays and C strings. Now writing a small amount of adapter code: enum class color { RED, GREEN, BLUE }; static color colors[] = { color::RED, color::GREEN, color::BLUE }; color* begin(const color& c) { return begin(colors); } color* end(const color& c) { return end(colors); } We can get your code to be compatible with iterable enums too. I think Herb's main point is that using the free functions is just as easy as using the member functions, and it gives your code backward compatibility with C sequence types and forward compatibility with non-stl sequence types (and future-stl types!), with low cost to other developers. One benefit of std::begin and std::end is that they serve as extension points for implementing a standard interface for external classes. If you'd like to use the CustomContainer class with a range-based for loop or a template function which expects .begin() and .end() methods, you'd obviously have to implement those methods. If the class does provide those methods, that's not a problem. When it doesn't, you'd have to modify it. This is not always feasible, for example when using an external library, especially a commercial and closed-source one. In such situations, std::begin and std::end come in handy, since one can provide an iterator API without modifying the class itself, but rather by overloading free functions. Example: suppose that you'd like to implement a count_if function that takes a container instead of a pair of iterators. Such code might look like this: template<typename ContainerType, typename PredicateType> std::size_t count_if(const ContainerType& container, PredicateType&& predicate) { using std::begin; using std::end; return std::count_if(begin(container), end(container), std::forward<PredicateType>(predicate)); } Now, for any class you'd like to use with this custom count_if, you only have to add two free functions, instead of modifying those classes. Now, C++ has a mechanism called argument-dependent lookup (ADL), which makes this approach even more flexible. In short, ADL means that when the compiler resolves an unqualified function (i.e. a function without a namespace, like begin instead of std::begin), it will also consider functions declared in the namespaces of its arguments. For example: namespace some_lib { // let's assume that CustomContainer stores elements sequentially, // and has data() and size() methods, but not begin() and end() methods: class CustomContainer { ... }; } namespace some_lib { const Element* begin(const CustomContainer& c) { return c.data(); } const Element* end(const CustomContainer& c) { return c.data() + c.size(); } } // somewhere else: CustomContainer c; std::size_t n = count_if(c, somePredicate); In this case, it doesn't matter that the qualified names are some_lib::begin and some_lib::end - since CustomContainer is in some_lib:: too, the compiler will use those overloads in count_if. That's also the reason for having using std::begin; and using std::end; in count_if. This allows us to use unqualified begin and end, therefore allowing for ADL and allowing the compiler to pick std::begin and std::end when no other alternatives are found. We can eat the cookie and have the cookie - i.e. have a way to provide a custom implementation of begin/end while the compiler can fall back to the standard ones. Some notes: For the same reason, there are other similar functions: std::rbegin/rend, std::size, and std::data. As other answers mention, the std:: versions have overloads for naked arrays. That's useful, but is simply a special case of what I've described above. Using std::begin and friends is a particularly good idea when writing template code, because this makes those templates more generic. For non-template code you might just as well use the methods, when applicable. P.S. I'm aware that this post is nearly 7 years old. I came across it because I wanted to answer a question which was marked as a duplicate and discovered that no answer here mentions ADL.
http://code.i-harness.com/en/q/73dc7e
CC-MAIN-2019-09
refinedweb
1,216
52.7
Overview. Description: This article assumes some basic understanding of multi-threading in .NET. You can refer back to the first part for more information on it. The function of the port scanner we will be seeing shortly is simple. It determines whether a given port on a given IP address is open or closed, by initiating a connection to it. If a connection to the port can be made successfully, then it's open. But if the connection to the port is rejected, unreachable, or timed out, then it's closed. With some additional logic to parse the IP/host names and the ports, the logic of the port scanner is almost identical to the search engine in Part I. The port scanner will create a thread for each port and IP/host. Then each thread will be started, and will connect to its assigned port and IP independently of all the rest. When all the threads are started, the port scanner will wait for all of them to return. The port scanner consists of only 2 files. (1) portscanner.aspx, which contains the form for entering the IP/hosts and ports, and a repeater control to display the result. Depending on the state of the page, one of the two will be hidden while the other is displayed. (2) portscanner.aspx.cs, which is the code-behind file behind portscanner.aspx. It contains the classes and methods that perform the port scanning logic. Both files were developed and tested in VS.NET (RTM edition) and the final edition of the .NET Framework SDK. Let's look at portscanner.aspx first: <%@ Page Language="C#" AutoEventWireUp="false" Inherits="MultiThreadedWebApp.PortScanner" Src="portscanner.aspx.cs" %> It's pretty simple. The C# code block contains the method Scan(), which will be called when the user clicks the scan button. There's a repeater control, which we will be binding scan result data to. Then there's a form, which we will be using to enter IP/host and port information. It will be hidden by the Scan() method when we display the scan result via the repeater control. Now comes some fancy stuff: using This code-behind file contains the namespace MultiThreadedWebApp. And there are 2 classes in this namespace: (1) PortScanner, which inherits the System.Web.UI.Page class. It's also the page portscanner.aspx derives from. (2) ScanPort, which contains information about a given port and IP, as well as performing the scanning task. The PortScanner contains 2 methods: (1) parse(), which accepts a string we passed to the class from the form. It then parses the string, extracting the IP/host and port information, and constructs the private member field _ports holding an ArrayList of ScanPort objects. _ports is accessed by the public property ScanResults. Note that in order for this method to parse the string properly, the string must be entered in a recognizable format. The format is: <ip | host>:<a port number | starting port number-ending port number[,]>. It starts with the IP address or the host, then a ":", then a list of individual port numbers or ranges of port numbers. Port numbers are separated by ",", and each IP/host entry is separated by the newline character. So, for example, host:80,100-110,443 will mean to scan the ports 80, 100 to 110, and 443 on the host. (2) Scan(), which accepts a string we obtained from the form in portscanner.aspx. It then passes the string to parse() to parse it and construct the _ports object. Once parse() has done its part, and we have the _ports object containing the IP/host and port information, we can start scanning them. 
The method loops through the _ports ArrayList and creates a thread for each ScanPort object. As soon as each thread is created, it's started. When it has started all the threads, it then waits for them to finish before it itself returns to its caller (the portscanner.aspx). The ScanPort contains just one method, Scan(). It's where the actual scanning task is performed. It first creates an object of type TcpClient from the System.Net.Sockets namespace. It then passes the IP/host and port information that we extracted in the parse() method to its Connect() method. If a connection is made, we close it immediately and set its status to Open. However, due to god knows what reasons, if the connection cannot be made, then the status is set to Closed. Note that if the scanned IP/host has the port opened behind a firewall, and only accessible from within its internal LAN, then it's still closed to our port scanner, and we still determine it's closed. Port Scanner in action: Fire up the form and type in some hosts/URLs. Sit back, relax, and wait for a few seconds. Conclusion: Imagine if the port scanner did not have the multi-threading capability, and you had to scan all the ports one by one. Depending on how fast each port can respond, whether it's open or closed, and how you set the timeout on the connection, each one might take up to 20 seconds. If you just have 15 ports to scan, it might take up to 5 minutes! Imagine if you want to do a full scan (some 60,000 ports) on an IP. However, as mentioned in the last part, starting and running too many threads (like around 60,000 threads for a full scan) is also not good. But this is where thread pooling comes in. More information on thread pooling can be found in the MSDN Library. Multi-threaded Web Applications - Case II: Port Scanner. Multithreading Part 2: Understanding the System.Threading.Thread Class. public bool Scan(string pIPs) { try { parse(pIPs); I can't seem to understand how the portscanner.aspx passes the ip and port as "pIPs". I can't find where pIPs is set to contain the information?
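For comparison outside .NET, the same connect-based check and pool-of-threads design can be sketched in a few lines of Python (the host, ports, timeout, and pool size below are placeholder choices, not a translation of the article's C# code):

# A port is "Open" if a TCP connection succeeds, "Closed" otherwise.
import socket
from concurrent.futures import ThreadPoolExecutor

def scan_port(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return port, 'Open'
    except OSError:  # refused, unreachable, or timed out
        return port, 'Closed'

with ThreadPoolExecutor(max_workers=20) as pool:
    for port, status in pool.map(lambda p: scan_port('localhost', p), [22, 80, 443]):
        print(port, status)

Using a bounded pool rather than one thread per port also sidesteps the too-many-threads problem mentioned above.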
http://www.c-sharpcorner.com/uploadfile/tinlam/multithreadportscanner11172005005326am/multithreadportscanner.aspx
crawl-003
refinedweb
1,012
72.56
Try this: np.savetxt('file.txt', arr, delimiter=' ') will save to a text file, and np.savetxt('file.csv', arr, delimiter=',') will save to a CSV file.
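If the file doesn't need to be human-readable, NumPy's binary .npy format round-trips arrays exactly, including dtype and shape. A minimal sketch (the file name arr.npy is arbitrary):

import numpy as np

arr = np.arange(6).reshape(2, 3)
np.save('arr.npy', arr)          # writes a binary .npy file
restored = np.load('arr.npy')
assert (arr == restored).all()   # dtype and shape are preserved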
https://www.edureka.co/community/47573/save-numpy-arrays-to-file
CC-MAIN-2021-39
refinedweb
151
87.82
import "github.com/stephens2424/muxchain/muxchainutil" globmux.go gzip.go log.go methodmux.go muxchainutil.go panic.go pathmux.go const ( Lpath = log.Lshortfile << iota Lmethod LremoteAddr LresponseStatus LcontentLength LstdFlags = log.Ldate | log.Ltime | Lmethod | Lpath | LresponseStatus | LcontentLength ) Default is a handler that enables panic recovery, logging to standard out, and gzip for all request paths chained after it. var DefaultPanicRecovery = PanicRecovery{http.HandlerFunc(DefaultRecoverFunc)} DefaultPanicRecovery is a handler that enables basic panic recovery for all handlers chained after it. var Gzip = muxchain.ChainedHandlerFunc(gzipHandler) Gzip is a handler that enables gzip content encoding for all handlers chained after it. It adds the Content-Encoding header to the response. func DefaultRecoverFunc(w http.ResponseWriter, req *http.Request) func NoopHandler(w http.ResponseWriter, req *http.Request) GlobMux muxes patterns with wildcard (*) components. NewGlobMux initializes a new GlobMux. Handle registers a pattern to a handler. Handler accepts a request and returns the appropriate handler for it, along with the pattern it matched. If no appropriate handler is found, the http.NotFoundHandler is returned along with an empty string. GlobMux will choose the handler that matches the most leading path components for the request. ServeHTTP handles requests with the appropriate globbed handler. var DefaultLog *LogHandler func (l LogHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) func (l LogHandler) ServeHTTPChain(w http.ResponseWriter, req *http.Request, h ...http.Handler) MethodMux allows the caller to specify handlers that are specific to a particular HTTP method NewMethodMux returns a new MethodMux Handle registers a pattern to a particular handler. The pattern may optionally begin with an HTTP method, followed by a space, e.g.: "GET /homepage". The method may be *, which matches all methods. The method may also be omitted, which is the same as *. HandleMethods registers a pattern to a handler for the given methods. Handler selects a handler for a request. It will first attempt to chose a handler that matches the particular method. If a handler is not found, pattern will be the empty string. func (p PanicRecovery) ServeHTTP(w http.ResponseWriter, req *http.Request) func (p PanicRecovery) ServeHTTPChain(w http.ResponseWriter, req *http.Request, h ...http.Handler) PathMux muxes patterns by globbing over variable components and adding those as query parameters to the request for handlers. NewPathMuxer initializes a PathMux. Handle registers a handler to a pattern. Patterns may conatain variable components specified by a leading colon. For instance, "/order/:id" would map to handlers the same as "/order/*" on a GlobMux, however, handlers will see requests as if the query were "/order/?id=". Handlers will also match if a partial query is provided. For instance, /order/x will match /order/:id/:name, and the name variable will be empty. Variables are always matched from left to right and the handler with the most matches wins (with static strings beating variables). ServeHTTP handles requests by adding path variables to the request and forwarding them to the matched handler. (See Handle) Package muxchainutil imports 10 packages (graph) and is imported by 1 packages. Updated 2016-07-16. Refresh now. Tools for package owners.
https://godoc.org/github.com/stephens2424/muxchain/muxchainutil
CC-MAIN-2018-26
refinedweb
510
53.07
I have this class using LINQ to SQL; how do I implement the same by using normal SQL in ASP.NET MVC 3 without using EF? public ActionResult Index() { var List = (from c in db.OFFICE join s in db.CAMPUS_UNIVERSITY on c.IdCampus equals s.IdCampus join u in db.UNIVERSITY on s.IdUniversity equals u.IdUniversity select u).ToList(); return View(List); } This is just a sample (tested and working). That is why I am keeping the GetUniversities method inside the controller class. I suggest you move the GetUniversities method to some service layer so that many UIs/controllers can use it. public ActionResult Index() { var items = GetUniversities(); return View(items); } private List<DataRow> GetUniversities() { List<DataRow> list = null; string srtQry = "SELECT U.* FROM Office O INNER JOIN CampusUniversity CU ON O.IdCampus = CU.IdCampus INNER JOIN UNIVERSITY U ON U.IdUniversity = CU.IdUniversity"; string connString = "Database=yourDB;Server=yourServer;UID=user;PWD=password;"; using (SqlConnection conn = new SqlConnection(connString)) { using(SqlCommand objCommand = new SqlCommand(srtQry, conn)) { objCommand.CommandType = CommandType.Text; DataTable dt = new DataTable(); SqlDataAdapter adp = new SqlDataAdapter(objCommand); conn.Open(); adp.Fill(dt); if (dt != null) { list = dt.AsEnumerable().ToList(); } } } return list; } Keep in mind that the GetUniversities method returns a List of DataRows, not your custom domain entities. Entity Framework gives you a list of domain entities. So in the custom SQL case, you need to map the DataRow to an instance of your custom object yourself. With LINQ, you can convert the List of DataRow to your custom objects like this: public ActionResult Index() { var items = GetUniversities(); var newItems = (from p in items select new { Name = p.Field<String>("Name"), CampusName = p.Field<String>("CampusName") }).ToList(); return View(newItems); } This will give you a list of an anonymous type which has 2 properties, Name and CampusName, assuming Name and CampusName are 2 columns present in the result of your query. EDIT 2: As per the comment, to list these data in a view, create a view called Index inside your controller (where we wrote these action methods) folder under the Views folder. We need to make it a strongly typed view. But wait! What type are we going to pass to the view? Our result is an anonymous type. So we will create a ViewModel in this case, and instead of the anonymous type, we will return a List of the ViewModel. public class UniversityViewModel { public string UniversityName { set;get;} public string CampusName { set;get;} } Now we will update the code in our Index action like this. var newItems = (from p in items select new UniversityViewModel { UniversityName = p.Field<String>("Name"), CampusName = p.Field<String>("CampusName") }).ToList(); The only change is that we now mentioned a type here, so the output is no longer an anonymous type, but a known type. Let us go back to our View and write code like this. @model IEnumerable<SO_MVC.Models.UniversityViewModel> @foreach (var item in Model) { <p>@item.UniversityName @item.CampusName</p> } This view is strongly typed to a collection of our ViewModel. As usual, we are looping through it and displaying. This should work fine. It is tested.
https://codedump.io/share/VsxelYe3qMjt/1/how-to-use-normal-sql-in-aspnet-mvc-without-ef
CC-MAIN-2016-50
refinedweb
518
61.43
To create one, go to Controller Services (on the management toolbar on the right), create a new Controller Service, and select type DistributedMapCacheServer. Once named and saved, you can click the edit button (pencil icon) and set up the hostname and port and such. Once the properties are saved, you can click the Enable button (lightning-bolt icon) and it will be ready for use by some NiFi Processors. The DistributedMapCacheServer is mainly used for lookups and for keeping user-defined state. The PutDistributedMapCache and FetchDistributedMapCache processors are good for the latter. Other processors make use of the server to keep track of things like which files they have already processed. DetectDuplicate, ListFile, ListFileTransfer, GetHBase, and others make use of the cache server(s) too. So it is certainly possible to have a NiFi dataflow with FetchDistributedMapCache scheduled for some period of time (say 10 seconds) connected to a LogAttribute processor or something to list the current contents of the desired keys in the cache. For this post I wanted to show how to inspect the cache from outside NiFi, for two reasons: first, to illustrate the use case of working with the cache without using NiFi components (good for automated population or monitoring of the cache from the outside), and second, to show the very straightforward protocol used to get values from the cache. Putting data in is equally simple; perhaps I'll add a follow-on post for that. The DistributedMapCacheServer opens a port at the configured value (see dialog above); it expects a TCP connection and then various commands serialized in specific ways. The first task, once connected to the server, is to negotiate the communications protocol version to be used for all future client-server operations. To do this, we need the following: - Client sends the string "NiFi" as bytes to the server - Client sends the protocol version as an integer (4 bytes) If you are using an output stream to write these values, make sure you flush the stream after these steps, to ensure they are sent to the server, so that the server can respond. The server will respond with one of three codes: - RESOURCE_OK (20): The server accepted the client's proposal of protocol name/version - DIFFERENT_RESOURCE_VERSION (21): The server accepted the client's proposal of protocol name but not the version - ABORT (255): The server aborted the connection Once we get a RESOURCE_OK, we may continue on with our communications. If instead we get DIFFERENT_RESOURCE_VERSION, then the client needs to read in an integer containing the server's preferred protocol version. If the client can proceed using this version (or another version lower than the server's preference), it should re-negotiate the version by sending the new client-preferred version as an integer (note you do not need to send the "NiFi" again, the name has already been accepted). If the client and server cannot agree on the protocol version, the client should disconnect from the server. If some error occurs on the server and it aborts the connection, the ABORT status code will be returned, and an error message can be obtained by the client (before disconnect) by reading in a string of UTF-8 bytes. So let's see all this working in a simple example written in Groovy. 
Here is the script I used: def protocolVersion = 1 def keys = ['entry', 'filename'] s = new Socket('localhost', 4557) s.withStreams { input, output -> def dos = new DataOutputStream(output) def dis = new DataInputStream(input) // Negotiate handshake/version dos.write('NiFi'.bytes) dos.writeInt(protocolVersion) dos.flush() status = dis.read() while(status == 21) { protocolVersion = dis.readInt() dos.writeInt(protocolVersion) dos.flush() status = dis.read() } // Get entries keys.each { key = it.getBytes('UTF-8') dos.writeUTF('get') def baos = new ByteArrayOutputStream() baos.write(key) dos.writeInt(baos.size()) baos.writeTo(dos) dos.flush() def length = dis.readInt() def bytes = new byte[length] dis.readFully(bytes) println "$it = ${new String(bytes)}" } // Close dos.writeUTF("close"); dos.flush(); } I have set the protocol version to 1, which at present is the only accepted version, but you can set it higher to see the protocol negotiation work. Also I have the variable "keys" with a list of the keys to look up in the cache. There is no mechanism at present for retrieving all the keys in the cache. This is probably for simplicity and to avoid denial-of-service type stuff if there are tons and tons of keys. For our example, it will fetch the value for each key and print out the key/value pair. Next you can see the creation of the socket, using the same port as was configured for the server (4557). Then Groovy has some nice additions to the Socket class, to offer an InputStream and OutputStream for that socket to your closure. Since we'll be dealing with bytes, strings, and integers, I thought a DataInputStream and DataOutputStream would be easiest (this is also how the DistributedMapCacheClient works). The next two sections perform the protocol version negotiation as described above. Then for each key we write the string "get" followed by the key name as bytes. That is the entirety of the "get" operation :) The server responds with the length of the key's value (in bytes). We read in the length as an integer, then read in a byte array containing the value. In my case I know the keys are strings, so I simply create a String from the bytes and print out the key/value pair. To my knowledge the only kinds of serialized values used by the NiFi DistributedMapCacheClient are a byte array, String, and CacheValue (used exclusively by the DetectDuplicate processor). Once we're done reading key/value pairs, we write and flush the string "close" to tell the server our transaction is complete. I did not expressly close the socket connection; that is done by withStreams(), which closes the streams when the closure is finished. That's all there is to it! This might not be a very common use case, but it was fun to learn about some of the intra-NiFi protocols, and to be able to get some information out of the system using different methods :) Cheers! Hi, thanks for this post, it's interesting and I would like to know how it would be possible to use JavaScript instead of Groovy, thank you.
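For readers who would rather not use Groovy, here is a minimal Python sketch of the same handshake-plus-"get" exchange described above (the host/port and key names match the example setup and are otherwise assumptions):

import socket
import struct

with socket.create_connection(('localhost', 4557)) as s:
    f = s.makefile('rwb')
    f.write(b'NiFi' + struct.pack('>i', 1))    # magic bytes + proposed version
    f.flush()
    status = f.read(1)[0]
    while status == 21:                        # DIFFERENT_RESOURCE_VERSION
        server_version = struct.unpack('>i', f.read(4))[0]
        f.write(struct.pack('>i', server_version))
        f.flush()
        status = f.read(1)[0]
    if status != 20:                           # anything but RESOURCE_OK
        raise IOError('server aborted the handshake')
    for key in (b'entry', b'filename'):
        f.write(struct.pack('>H', 3) + b'get') # Java writeUTF: 2-byte length + bytes
        f.write(struct.pack('>i', len(key)) + key)
        f.flush()
        length = struct.unpack('>i', f.read(4))[0]
        print(key.decode(), '=', f.read(length).decode())
    f.write(struct.pack('>H', 5) + b'close')
    f.flush()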
https://funnifi.blogspot.com/2016/04/inspecting-your-nifi.html
CC-MAIN-2018-22
refinedweb
1,064
59.64
IsTrue

Validates that a value is true. Specifically, this checks if the value is exactly true, exactly the integer 1, or exactly the string "1".

Basic Usage

This constraint can be applied to properties (e.g. a termsAccepted property on a registration model) and methods. It's most powerful in the latter case, where you can assert that a method returns a true value. For example, suppose you have the following method:

// src/Entity/Author.php
namespace App\Entity;

class Author
{
    protected $token;

    public function isTokenValid()
    {
        return $this->token == $this->generateToken();
    }
}

Then you can validate this method with IsTrue as follows (the configuration is available in Annotations, YAML, XML, and PHP formats). If isTokenValid() returns false, the validation will fail.

Options

groups

type: array | string

It defines the validation group or groups this constraint belongs to. Read more about validation groups.

message

type: string default: This value should be true.

This message is shown if the underlying data is not true.
https://symfony.com/index.php/doc/5.1/reference/constraints/IsTrue.html
CC-MAIN-2021-31
refinedweb
185
59.6
Why do we want to write our own functions? Here's a piece of code which prints the AT content (the proportion of a DNA sequence which is either A or T) of a given DNA sequence: my_dna = "ACTGATCGATTACGTATAGTATTTGCTATCATACATATATATCGATGCGTTCAT" length = len(my_dna) a_count = my_dna.count('A') t_count = my_dna.count('T') at_content = (a_count + t_count) / length print("AT content is " + str(at_content)) If we discount the first line (whose job is to store the input sequence) and the last line (whose job is to print the result), we can see that it takes four lines of code to calculate the AT content. This means that every place in our code where we want to calculate the AT content of a sequence, we need these same four lines – and we have to make sure we copy them exactly, without any mistakes. It would be much simpler if Python had a built in function (let's call it get_at_content()) for calculating AT content. If that were the case, then we could just run get_at_content() in the same way we run print(), or len(), or open(). Although, sadly, Python does not have such a built in function, it does have the next best thing – a way for us to create our own functions. Creating our own function to carry out a particular job has many benefits. It allows us to reuse the same code many times within a program without having to copy it out each time. Additionally, if we find that we have to make a change to the code, we only have to do it in one place. Splitting our code into functions also allows us to tackle larger problems, as we can work on different bits of the code independently. We can also reuse code across multiple programs. Defining a function Let's go ahead and create our get_at_content() function. Before we start typing, we need to figure out what the inputs (the number and types of the function arguments) and outputs (the type of the return value) are going to be. For this function, that seems pretty obvious – the input is going to be a single DNA sequence, and the output is going to be a decimal number. To translate these into Python terms: the function will take a single argument of type string, and will return a value of type number. Here's the code: def get_at_content(dna): length = len(dna) a_count = dna.count('A') t_count = dna.count('T') at_content = (a_count + t_count) / length return at_content Reminder: if you're using Python 2 rather than Python 3, include this line at the top of your program: from __future__ import division The first line of the function definition contains several different elements. We start with the word def, which is short for define (writing a function is called defining it). Following that we write the name of the function, followed by the names of the argument variables in parentheses. Just like we saw before with normal variables, the function name and the argument names are arbitrary – we can call them whatever we like. The first line ends with a colon, just like the first line of the loops that we were looking at in the previous chapter. And just like loops, this line is followed by a block of indented lines – the function body. The function body can have as many lines of code as we like, as long as they all have the same indentation. Within the function body, we can refer to the arguments by using the variable names from the first line. In this case, the variable dna refers to the sequence that was passed in as the argument to the function. The last line of the function causes it to return the AT content that was calculated in the function body. 
To return from a function, we simply write return followed by the value that the function should output. There are a couple of important things to be aware of when writing functions. Firstly, we need to make a clear distinction between defining a function, and running it (we refer to running a function as calling it). The code we've written above will not cause anything to happen when we run it, because we've not actually asked Python to execute the get_at_content() function – we have simply defined what it is. The code in the function will not be executed until we call the function like this: get_at_content("ATGACTGGACCA") If we simply call the function like that, however, then the AT content will vanish once it's been calculated. In order to use the function to do something useful, we must either store the result in a variable: at_content = get_at_content("ATGACTGGACCA") Or use it directly: print("AT content is " + str(get_at_content("ATGACTGGACCA"))) Secondly, it's important to understand that the argument variable dna does not hold any particular value when the function is defined. Instead, its job is to hold whatever value is given as the argument when the function is called. In this way it's analogous to the loop variables we saw in the previous section: loop variables hold a different value each time round the loop, and function argument variables hold a different value each time the function is called. Finally, be aware that any variables that we create as part of the function only exist inside the function, and cannot be accessed outside. If we try to use a variable that's created inside❶ the function from outside❷: def get_at_content(dna): length = len(dna) a_count = dna.count('A')❶ t_count = dna.count('T') at_content = (a_count + t_count) / length return at_content print(a_count)❷ We'll get an error: NameError: name 'a_count' is not defined Calling and improving our function Let's write a small program that uses our new function, to see how it works. We'll try both storing the result in a variable before printing it❶ and printing it directly❷: def get_at_content(dna): ... my_at_content = get_at_content("ATGCGCGATCGATCGAATCG") print(str(my_at_content))❶ print(get_at_content("ATGCATGCAACTGTAGC"))❷ print(get_at_content("aactgtagctagctagcagcgta")) Looking at the output, we can see that the first function call works fine – the AT content is calculated to be 0.45, is stored in the variable my_at_content, then printed. However, the output for the next two calls is not so great. The second function call produces a number with way too many figures after the decimal point, and the third function call, with the input sequence in lower case, gives a result of 0.0, which is definitely not correct: 0.45 0.5294117647058824 0.0 We'll fix these problems by making a couple of changes to the get_at_content() function. We can add a rounding step in order to limit the number of significant figures in the result. Python has a built in round() function that takes two arguments – the number we want to round, and the number of digits to keep after the decimal point. We'll call the round() function on the result before we return it❶. And we can fix the lower case problem by converting the input sequence to uppercase❷ before starting the calculation. 
Here's the new version of the function, with the same three function calls: def get_at_content(dna): length = len(dna) a_count = dna.upper().count('A')❷ t_count = dna.upper().count('T') at_content = (a_count + t_count) / length return round(at_content, 2)❶ my_at_content = get_at_content("ATGCGCGATCGATCGAATCG") print(str(my_at_content)) print(get_at_content("ATGCATGCAACTGTAGC")) print(get_at_content("aactgtagctagctagcagcgta")) and now the output is just as we want: 0.45 0.53 0.52 We can make the function even better though: why not allow it to be called with the number of significant figures as an argument? In the above code, we've picked two significant figures, but there might be situations where we want to see more. Adding the second argument is easy; we just add it to the argument variable list❶ on the first line of the function definition, and then use the new argument variable in the call to round()❷. We'll throw in a few calls to the new version of the function with different arguments to check that it works: def get_at_content(dna, sig_figs):❶ length = len(dna) a_count = dna.upper().count('A') t_count = dna.upper().count('T') at_content = (a_count + t_count) / length return round(at_content, sig_figs)❷ test_dna = "ATGCATGCAACTGTAGC" print(get_at_content(test_dna, 1)) print(get_at_content(test_dna, 2)) print(get_at_content(test_dna, 3)) The output confirms that the rounding works as intended: 0.5 0.53 0.529 Encapsulation with functions Let's pause for a moment and consider what happened in the previous section. We wrote a function, and then wrote some code that used that function. In the process of writing the code that used the function, we discovered a couple of problems with our original function definition. We were then able to go back and change the function definition, without having to make any changes to the code that used the function. I've written that last sentence in bold, because it's incredibly important. It's no exaggeration to say that understanding the implications of that sentence is the key to being able to write larger, useful programs. The reason it's so important is that it describes a programming phenomenon that we call encapsulation. Encapsulation just means dividing up a complex program into little bits which we can work on independently. In the example above, the code is divided into two parts – the part where we define the function, and the part where we use it – and we can make changes to one part without worrying about the effects on the other part. This is a very powerful idea, because without it, the size of programs we can write is limited to the number of lines of code we can hold in our brain at one time. Some of the example code in the previous chapter was starting to push at this limit already, even for relatively simple problems. By contrast, using functions allows us to build up a complex program from small building blocks, each of which individually is small enough to understand in its entirety. Because using functions is so important, future examples will use them when appropriate, even when it's not explicitly mentioned in the text. I encourage you to get into the habit of using functions in your programs too. Functions don't always have to take an argument There's nothing in the rules of Python to say that your function must take an argument. It's perfectly possible to define a function with no arguments: def get_a_number(): return 42 but such functions tend not to be very useful. 
For example, we can write a version of get_at_content() that doesn't require any arguments by setting the value of the dna variable inside the function: def get_at_content(): dna = "ACTGATGCTAGCTA" length = len(dna) a_count = dna.upper().count('A') t_count = dna.upper().count('T') at_content = (a_count + t_count) / length return round(at_content, 2) but that's obviously not very useful, as it calculates the AT content for the exact same sequence every time it's called! Occasionally you may be tempted to write a no-argument function that works like this: def get_at_content(): length = len(dna) a_count = dna.upper().count('A') t_count = dna.upper().count('T') at_content = (a_count + t_count) / length return round(at_content, 2) dna = "ACTGATCGATCG"❶ print(get_at_content()) At first this seems like a good idea – it works because the function gets the value of the dna variable that is set before the function call❶. However, this breaks the encapsulation that we worked so hard to achieve. The function now only works if there is a variable called dna set in the bit of the code where the function is called, so the two pieces of code are no longer independent. If you find yourself writing code like this, it's usually a good idea to identify which variables from outside the function are being used inside it, and turn them into arguments. Functions don't always have to return a value Consider this variation of our function – instead of returning the AT content, this function prints it to the screen: def print_at_content(dna): length = len(dna) a_count = dna.upper().count('A') t_count = dna.upper().count('T') at_content = (a_count + t_count) / length print(str(round(at_content, 2))) When you first start writing functions, it's very tempting to do this kind of thing. You think: "OK, I need to calculate and print the AT content – I'll write a function that does both." The trouble with this approach is that it results in a function that is less flexible. Right now you want to print the AT content to the screen, but what if you later discover that you want to write it to a file, or use it as part of some other calculation? You'll have to write more functions to carry out these tasks. The key to designing flexible functions is to recognize that the job calculate and print the AT content is actually two separate jobs – calculating the AT content, and printing it. Try to write your functions in such a way that they just do one job. You can then easily write code to carry out more complicated jobs by using your simple functions as building blocks. Functions can be called with named arguments What do we need to know about a function in order to be able to use it? We need to know what the return value and type is, and we need to know the number and type of the arguments. For the examples we've seen so far, we also need to know the order of the arguments. For instance, to use the open() function we need to know that the name of the file comes first, followed by the mode of the file. And to use our two-argument version of get_at_content() as described above, we need to know that the DNA sequence comes first, followed by the number of significant figures. There's a feature in Python called keyword arguments which allows us to call functions in a slightly different way. 
Instead of giving a list of arguments in parentheses: get_at_content("ATCGTGACTCG", 2) we can supply a list of argument variable names and values like this: get_at_content(dna="ATCGTGACTCG", sig_figs=2) This style of calling functions6 has several advantages. It doesn't rely on the order of arguments, so we can use whichever order we prefer. These two statements behave identically: get_at_content(dna="ATCGTGACTCG", sig_figs=2) get_at_content(sig_figs=2, dna="ATCGTGACTCG") It's also clearer to read what's happening when the argument names are given explicitly. We can even mix and match the two styles of calling – the following are all identical: get_at_content("ATCGTGACTCG", 2) get_at_content(dna="ATCGTGACTCG", sig_figs=2) get_at_content("ATCGTGACTCG", sig_figs=2) Although we're not allowed to start off with keyword arguments then switch back to normal – this will cause an error: get_at_content(dna="ATCGTGACTCG", 2) Keyword arguments can be particularly useful for functions and methods that have a lot of optional arguments, and we'll use them where appropriate in future examples. Function arguments can have defaults We've encountered function arguments with defaults before, when we were discussing opening files. Recall that the open() function takes two arguments – a filename and a mode string – but that if we call it with just a filename it uses a default value for the mode string. We can easily take advantage of this feature in our own functions – we simply specify the default value in the first line of the function definition❶. Here's a version of our get_at_content() function where the default number of significant figures is two: def get_at_content(dna, sig_figs=2):❶ length = len(dna) a_count = dna.upper().count('A') t_count = dna.upper().count('T') at_content = (a_count + t_count) / length return round(at_content, sig_figs) Now we have the best of both worlds. If the function is called with two arguments, it will use the number of significant figures specified; if it's called with one argument, it will use the default value of two significant figures. Let's see some examples: get_at_content("ATCGTGACTCG") get_at_content("ATCGTGACTCG", 3) get_at_content("ATCGTGACTCG", sig_figs=4) The function takes care of filling in the default value for sig_figs for the first function call where none is supplied: 0.45 0.455 0.4545 Function argument defaults allow us to write very flexible functions which can have varying numbers of arguments. It only makes sense to use them for arguments where a sensible default can be chosen – there's no point specifying a default for the dna argument in our example. They are particularly useful for functions where some of the optional arguments are only going to be used infrequently. Testing functions When writing code of any type, it's important to periodically check that your code does what you intend it to do. If you look back over the solutions to exercises from the first few sections of this tutorial, you can see that we generally test our code at each step by printing some output to the screen and checking that it looks OK. For example when we were first calculating AT content, we used a very short test sequence to verify that our code worked before running it on the real input. The reason we used a test sequence was that, because it was so short, we could easily work out the answer manually and compare it to the answer given by our code. 
This idea – running code on a test input and comparing the result to an answer that we know to be correct1 – is such a useful one that Python has a built in tool for expressing it: assert. An assertion consists of the word assert, followed by a call to our function, then two equals signs, then the result that we expect. For example, we know that if we run our get_at_content() function on the DNA sequence "ATGC" we should get an answer of 0.5. This assertion will test whether that's the case: assert get_at_content("ATGC") == 0.5 Notice the two equals signs – we'll learn the reason behind that in the next section. The way that assertion statements work is very simple; if an assertion turns out to be false (i.e. if Python executes our function on the input "ATGC" and the answer isn't 0.5) then the program will stop and we will get an AssertionError. Assertions are useful in a number of ways. They provide a means for us to check whether our functions are working as intended and therefore help us track down errors in our programs. If we get some unexpected output from a program that uses a particular function, and the assertion tests for that function all pass, then we can be confident that the error doesn't lie in the function but in the code that calls it. They also let us modify a function and check that we haven't introduced any errors. If we have a function that passes a series of assertion tests, and we make some changes to it, we can rerun the assertion tests and, assuming they all pass, be confident that we haven't broken the function. Assertions are also useful as a form of documentation. By including a collection of assertion tests alongside a function, we can show exactly what output is expected from a given input. Finally, we can use assertions to test the behaviour of our function for unusual inputs. For example, what is the expected behaviour of get_at_content() when given a DNA sequence that includes unknown bases (usually represented as N)? A sensible way to handle unknown bases would be to exclude them from the AT content calculation – in other words, the AT content for a given sequence shouldn't be affected by adding a bunch of unknown bases. We can write an assertion that expresses this: assert get_at_content("ATGCNNNNNNNNNN") == 0.5 This assertions fails for the current version of get_at_content(). However, we can easily modify the function to remove all N characters before carrying out the calculation❶: def get_at_content(dna, sig_figs=2): dna = dna.replace('N', '')❶ length = len(dna) a_count = dna.upper().count('A') t_count = dna.upper().count('T') at_content = (a_count + t_count) / length return round(at_content, sig_figs) and now the assertion passes. It's common to group a collection of assertions for a particular function together to test for the correct behaviour on different types of input. Here's an example for get_at_content() which shows a range of different types of behaviour: assert get_at_content("A") == 1 assert get_at_content("G") == 0 assert get_at_content("ATGC") == 0.5 assert get_at_content("AGG") == 0.33 assert get_at_content("AGG", 1) == 0.3 assert get_at_content("AGG", 5) == 0.33333 When we have a collection of tests like this, we often refer to it as a test suite. Recap In this section, we've seen how packaging code into functions helps us to manage the complexity of large programs and promote code reuse. We learned how to define and call our own functions along with various new ways to supply arguments to functions. 
We also looked at a couple of things that are possible in Python, but rarely advisable – writing functions without arguments or return values. Finally, we explored the use of assertions to test our functions, and discussed how we can use them to catch errors before they become a problem. For a more structured approach to testing functions that's invaluable for larger programming projects, see the chapter on automated testing in Effective Python development for Biologists, which you can find on the books page.

The remaining sections of this tutorial will make use of functions in both the examples and the exercise solutions, so make sure you are comfortable with the new ideas from this section before moving on. This chapter has covered the basics of writing and using functions, but there's much more we can do with them – in fact, there's a whole style of programming (functional programming) which revolves around the manipulation of functions. You'll find a discussion of this in the chapter in Advanced Python for Biologists called, unsurprisingly, functional programming.

Exercises

Both parts of the exercise for this section require you to test your answers with a collection of assert statements. Rather than typing them out, just copy and paste them from the exercise description into your program. Remember, you can always find solutions and explanations for all the exercises in the Python for Biologists books.

Percentage of amino acid residues, part one

You'll have to change the function name my_function in the assert statements to whatever you decide to call your function. Reminder: if you're using Python 2 rather than Python 3, include this line at the top of your program:

from __future__ import division

Percentage of amino acid residues, part two

SOLUTIONS

You can find solutions to all the exercises, along with explanations of how they work, in the Python for Biologists books - head over to the books page to check them out.
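To tie the section together, here is everything condensed into one runnable script: the final version of get_at_content() from the text, followed by its assertion test suite. Only the __main__ guard and the closing print are my additions; the rest follows the chapter.

def get_at_content(dna, sig_figs=2):
    # final version from the chapter: ignores unknown bases (N),
    # handles lower case input, and rounds to a caller-chosen
    # number of figures (two by default)
    dna = dna.replace('N', '')
    length = len(dna)
    a_count = dna.upper().count('A')
    t_count = dna.upper().count('T')
    at_content = (a_count + t_count) / length
    return round(at_content, sig_figs)

if __name__ == "__main__":
    # the test suite from the text
    assert get_at_content("A") == 1
    assert get_at_content("G") == 0
    assert get_at_content("ATGC") == 0.5
    assert get_at_content("ATGCNNNNNNNNNN") == 0.5
    assert get_at_content("AGG") == 0.33
    assert get_at_content("AGG", 1) == 0.3
    assert get_at_content("AGG", 5) == 0.33333
    print("all tests passed")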
https://pythonforbiologists.com/writing-our-own-functions/
CURLOPT_FTPPORT man page

CURLOPT_FTPPORT — make FTP transfer active

Synopsis

#include <curl/curl.h>

CURLcode curl_easy_setopt(CURL *handle, CURLOPT_FTPPORT, char *spec);

Description

Pass a pointer to a zero terminated string as parameter. It specifies that the FTP transfer will be made actively and the given string will be used to get the IP address to use for the FTP PORT instruction.

The PORT instruction tells the remote server to connect to our specified IP address. The string may be a plain IP address, a host name, a network interface name (under Unix) or just a '-' symbol to let the library use your system's default IP address.

Default FTP operations are passive, and thus won't use PORT.

Examples with specified ports:

  eth0:0
  192.168.1.2:32000-33000
  curl.se:32123
  [::1]:1234-4567

You disable PORT again and go back to using the passive version by setting this option to NULL. The application does not have to keep the string around after setting this option.

Default

NULL

Protocols

FTP

Example

CURL *curl = curl_easy_init();
if(curl) {
  CURLcode ret;
  curl_easy_setopt(curl, CURLOPT_URL, "");
  curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
  ret = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}

Availability

Port range support was added in 7.19.5

Return Value

Returns CURLE_OK if the option is supported, CURLE_UNKNOWN_OPTION if not, or CURLE_OUT_OF_MEMORY if there was insufficient heap space.

See Also

CURLOPT_FTP_USE_EPRT(3), CURLOPT_FTP_USE_EPSV(3)

Referenced By

curl_easy_setopt(3), CURLOPT_FTP_SKIP_PASV_IP(3), CURLOPT_FTP_USE_EPRT(3), CURLOPT_FTP_USE_EPSV(3), libcurl-errors(3), libcurl-tutorial(3).
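The option is reachable from libcurl's language bindings as well. A sketch with Python's pycurl, assuming pycurl is installed (its constants mirror the CURLOPT_* names) and using a placeholder server URL plus one of the address strings shown under Description:

import pycurl

c = pycurl.Curl()
# hypothetical server and path; any FTP URL works here
c.setopt(pycurl.URL, "ftp://ftp.example.com/file.txt")
# active FTP: advertise 192.168.1.2 in the PORT instruction and
# restrict the listening port to the 32000-33000 range (port
# ranges need libcurl >= 7.19.5, per Availability above)
c.setopt(pycurl.FTPPORT, "192.168.1.2:32000-33000")
try:
    c.perform()
finally:
    c.close()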
https://www.mankier.com/3/CURLOPT_FTPPORT
High-Performance Python – Compiled Code and Fortran Interface

f2py

Early in the development and growth of Python, the desire to integrate Fortran into Python led people to develop the necessary tools. To make the process more accessible, Pearu Peterson decided to create a tool for making porting easier. The tool, f2py, was first released in 1999 and was known as f2py.py. Eventually, the tool was incorporated into NumPy (numpy.f2py) and is still there today. F2py is probably the most popular tool for interfacing Fortran with Python and has been used in a large number of projects. It creates an extension module that is imported into Python with the import command. The module has automatically generated wrapper functions that create the interface between Fortran and Python.

The f2py users guide introduction shows three methods of use. Instead of presenting all three methods, I will focus on the "quick and smart way." The first step is to create a signature file for the Fortran (or C) code (f2py also accommodates C code) that describes the wrapper for the Fortran or C functions. F2py refers to these as "signatures" of the functions. Usually f2py can create a signature file for you just by scanning the source. Once the signatures are created, you compile the code to make it ready for Python.

f2py example

The simple Fortran code in Listing 2 illustrates how to call a subroutine with input data:

Listing 2: secret.f90

subroutine hellofortran(n)
    integer n
    write(*,*) "Hello from Fortran! The secret number is: ", n
    return
end

The next step is to create the signature of the code for f2py:

$ python3 -m numpy.f2py secret.f90 -m secretcode -h secret.pyf

For this command, f2py takes the Fortran90 code and creates the signature file secret.pyf. The Python module is named secretcode (the -m option). The resulting signature file is shown in Listing 3.

Listing 3: secret.pyf

$ more secret.pyf
!    -*- f90 -*-
! Note: the context of this file is case sensitive.

python module secretcode ! in
    interface  ! in :secretcode
        subroutine hellofortran(n) ! in :secretcode:secret.f90
            integer :: n
        end subroutine hellofortran
    end interface
end python module secretcode

! This file was auto-generated with f2py (version:2).
! See

The final step is to compile the code with f2py and create the Python module:

$ python3 -m numpy.f2py -c secret.pyf secret.f90

The output from the command (not shown) provides some good information about what it's doing and is useful in understanding what f2py does. F2py creates a shared object (.so) suitable for Python to import:

$ ls -s
total 120
112 secretcode.cpython-37m-x86_64-linux-gnu.so    4 secret.f90    4 secret.pyf

To test the code, run python3 (Listing 4). Remember, the Python module is secretcode.

Listing 4: Testing the Code

$ python3
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import secretcode
>>> secretcode.hellofortran(5)
 Hello from Fortran! The secret number is:            5

The first command (see the >>> prompts) imports the secretcode Python module. The shared object has the function hellofortran, which you use in the second command, secretcode.hellofortran(). I used the argument 5 in the test.

OpenMP and f2py

If you can compile Fortran routines to use in Python, shouldn't you be able to compile OpenMP Fortran code for Python? After all, Python only uses a single core.
If you either take existing Fortran routines or write new Fortran routines that use OpenMP to take advantage of all of the CPU cores, you would make much better use of computational resources. Previously, I wrote an article that introduced the concept of using multicore processing with Fortran and OpenMP. The article has a simple example of finding the minimum of an array. F2py was used on the command line with several options for the compiler to build OpenMP code:

$ f2py --f90flags=-fopenmp -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/ -lgomp -c -m test test.f90

In this case, the GFortran compiler option -fopenmp was used to compile the OpenMP code. It also added the appropriate gomp library with the full path. This test.f90 code also used the ompmin function and created the test module. Read the article to understand how you can use OpenMP with your Fortran code in Python.

gfort2py

During my research for this article, I ran into the interesting gfort2py project, which uses GFortran, the GNU Fortran compiler, and Fortran modules (.mod files) to translate the Fortran code's ABI (application binary interface) to Python-compatible types with Python's ctypes library. In principle, gfort2py can accommodate anything the compiler can compile (i.e., valid Fortran), as long as it is in a Fortran module.

Gfort2py is mostly written in Python and requires no changes to your Fortran code, as long as it is a module; however, you can only use GFortran. The requirement for gfort2py is a GFortran version greater than 5.3.1. It works with both Python 2.7 and Python 3. Although it is set up to be installed by pip, because I'm using Anaconda, I will install it the old-fashioned way and build it on the target system. The instructions for installation on the gfort2py website are easy to follow, and it's simple to install. Just be sure to start with the Python version you intend to use later.

The Fortran test code is very simple (Listing 5) and almost the same as the previous code. It just writes out a variable that is passed in. Notice that the code is in a module, as required.

Listing 5: test1.f90

module test1
contains
   subroutine hello(n)
      integer :: n
      write(*,*) "hello world. The secret number is ", n
      return
   end
end module test1

The next step is to compile the code and create the library (Listing 6). The resulting Python library is very easy to use: You use a function from the gfort2py Python module, and it handles the heavy lifting, so to speak (Listing 7).

Listing 6: Compiling Fortran Code

$ gfortran -fPIC -shared -c test1.f90
$ gfortran -fPIC -shared -o libtest1.so test1.f90
$ ls -s
total 28
8 libtest1.so    4 test1.f90    4 test1.fpy    4 test1.mod    4 test1.o    4 test1.py

Listing 7: Using the Python Library

import gfort2py as gf

SHARED_LIB_NAME = "./libtest1.so"
MOD_FILE_NAME = "test1.mod"

x = gf.fFort(SHARED_LIB_NAME, MOD_FILE_NAME)

y = x.hello(6)

First, variables are defined pointing to the library and the compiled .mod file. Next, an object is defined that contains the functions in the Fortran module file (x). Finally, a function from the code is used. Running the Python code shows the output:

$ python3 test1.py
 hello world. The secret number is            6

Going beyond this simple code would make the article too long, but it seems like almost anything you can put into a Fortran module would work – perhaps even OpenMP or another extension. Gfort2py seems to be under active development, so check back in from time to time to see what new features have been added.
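To show how one of these compiled modules slots into ordinary Python code, here is a usage sketch based on the test module built with f2py above. The exact wrapper signature f2py generates for ompmin depends on the Fortran declarations in test.f90 (which live in the linked article, not here), so treat the call as an assumption and check the generated docstring first:

import numpy as np
import test  # the f2py-built module from the OpenMP example above

print(test.ompmin.__doc__)  # f2py documents the wrapper it generated

# a large array gives the OpenMP threads something to chew on
a = np.random.rand(10_000_000)

# assumed signature: ompmin takes the array and returns its minimum
result = test.ompmin(a)

# sanity check against NumPy's own serial reduction
assert np.isclose(result, a.min())
print("minimum:", result)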
https://www.admin-magazine.com/HPC/Articles/High-Performance-Python-2/(offset)/2/(tagID)/885
Check if a number can be expressed as power | Set 2 (Using Log)

Given a positive integer n, find if it can be expressed as x^y where y > 1 and x > 0. x and y both are integers.

Examples:

Input: n = 8
Output: true
8 can be expressed as 2^3

Input: n = 49
Output: true
49 can be expressed as 7^2

Input: n = 48
Output: false
48 can't be expressed as x^y

We have discussed two different approaches in the post below:
Check if a number can be expressed as x^y (x raised to power y)

The idea is to compute log n in different bases from 2 to the square root of n. If log n for some base is an integer, the result is true; otherwise it is false.

# Python3 program to find if a number
# can be expressed as x raised to
# power y.
import math

def isPower(n):

    # Find log n in different
    # bases and check if the
    # value is an integer
    for x in range(2, int(math.sqrt(n)) + 1):
        f = math.log(n) / math.log(x)
        if (f - int(f)) == 0.0:
            return True

    return False

# Driver code
for i in range(2, 100):
    if isPower(i):
        print(i, end=" ")

# This code is contributed by mits

Output:

4 8 9 16 25 27 32 36 49 64 81
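One caveat with the log-based test above: comparing f - int(f) to 0.0 exactly can misfire for larger n, because math.log works in floating point. A sketch of a more defensive variant (not from the original article) estimates the exponent with log and then confirms it with exact integer arithmetic:

import math

def isPowerRobust(n):

    # Estimate the exponent with log, then confirm with exact
    # integer exponentiation so floating-point noise cannot
    # produce a wrong answer
    for x in range(2, int(math.sqrt(n)) + 1):
        y = round(math.log(n) / math.log(x))
        if y > 1 and x ** y == n:
            return True
    return False

assert isPowerRobust(8) and isPowerRobust(49) and not isPowerRobust(48)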
https://www.geeksforgeeks.org/check-number-can-expressed-power-set-2-using-log/
You can subscribe to this list here. Showing 8 results of 8 On Thursday, August 7, 2003, at 06:25 PM, Daniel Barlow wrote: > Foreign memory isn't touched by GC anyway; it's foreign. Anything you > malloc will stay allocated, at that address, until you free it. Thanks - I somehow missed the section on Foreign Dynamic Allocation in the manual. Now that I've read it, this is much clearer to me. (RTFM in action) Right now though, my initial problem is that I can't even get the alien example from the manual to load. One reply suggested that the Darwin linker/loader is flakey. Any known work-arounds? Raf Raffael Cavallaro, Ph.D. raffaelcavallaro@... Raffael Cavallaro <raffaelcavallaro@...> writes: > What would it take to enable sbcl to lock pointers to foreign (i.e., c > language) objects (data, pointers, and functions) in memory for future > reference, and then dispose of it when no longer needed? I know that Foreign memory isn't touched by GC anyway; it's foreign. Anything you malloc will stay allocated, at that address, until you free it. If you need to hold onto it for a particular dynamic extent, you want to wrap the free in an unwind-protect or something. Once upon a time you could also have added a finalizer which would free the foreign code when some Lisp object it was associated with got collected, but unfortunately finalisation is currently broken. On x86/gencgc, there's also support for pinning pages in Lisp memory to protect them from garbage collection: any pointer on the C stack will cause the page(s) where the object resides to be locked, so as long as you can keep a pointer on the C stack, it will be safe. So anything you pass to a foreign call is guaranteed unmovable until the foreign call returns. Additionally, on x86 the C stack is also the Lisp control stack, so calling a Lisp function with the pointer to be protected will cause it to be pinned until that function returns. (Hmm. That sounds really rather excessively conservative. I wonder if I'm talking rubbish) If you're using PPC that's not relevant to you, because that port still uses cheneygc, which doesn't have these niceties. It wouldn't be impossible to port the genreational collector, and I probably will do if I ever get as far as doing native threads for non-x86 platforms, but it's not a priority right now. I'm not sure if I'd want to combine the stacks even if I do: I think that precise scanning of the control stack is probably desirable where possible. >. Purify is permanent, yes. But, like GC, it doesn't touch foreign memory anyway. > A little background - I'm trying to get sbcl to work with Apple's > Carbon APIs. I'll need to: Cool. Good luck ... I don't follow the OpenMCL lists, but I have a feeling this general topic has come up there in the past. It might be worthwhile checking with them for previous work in the area. > 1. pass pointers to large data structures both ways (lisp -> c, c-> > lisp) so that I can manipulate the data in lisp, and display it using > the platform native c APIs. If you allocate them in C space, you will _probably_ have to access them from Lisp using SAPs. If the objects have a roughly Lisp-compatible layout (e.g. specialised arrays of numbers) there's some possibility you could allocate them in C space, then slap a Lisp header on the front. That's a "works in principle but I've never tried it" strategy; you might have to get quite intimate with SBCL memory layout if it starts doing odd things. > 2. 
provide c language callback routines defined in lisp (using alien
> of course) to native c APIs.

Do the callbacks have a user_data argument or similar? If so, the simplest way to do this is going to be to write a single Lisp function that dispatches on the user_data argument, run purify to lock it into a known address, then write callback functions in C that call funcall[0123] to call into the Lisp dispatch function. Did that make any sense at all?

If you don't have an argument to the callback function that can be used to identify why the callback was called, it could all be a lot messier.

-dan

--
 - Link farm for free CL-on-Unix resources

Darwin's linker/loader is a funky beast. We've had some interesting times with it at work.

On Thu, 2003-08-07 at 14:15, Raffael Cavallaro wrote:
> I'm trying to get the alien example from the manual to run under Mac OS
> X.
>
> I've tried the simple:
> cc -c test.c
> (when I tried the -G option I was informed by cc that there's no such
> option on this platform).
>
> However, when I go to load it in sbcl, I get the following error:
>
> * (load-foreign "/Users/raffaelc/Developer/test/test.o")
>
> debugger invoked on condition of type SIMPLE-ERROR:
>   /usr/bin/ld failed:
>   /usr/bin/ld: Undefined symbols:
>   dyld_stub_binding_helper
>
> Any ideas?
>
> Raf
>
> Raffael Cavallaro, Ph.D.
> raffaelcavallaro@...
>
> _______________________________________________
> Sbcl-devel mailing list
> Sbcl-devel@...

--
Miles Egan <miles@...>

I'm trying to get the alien example from the manual to run under Mac OS X.

I've tried the simple:
cc -c test.c
(when I tried the -G option I was informed by cc that there's no such option on this platform).

However, when I go to load it in sbcl, I get the following error:

* (load-foreign "/Users/raffaelc/Developer/test/test.o")

debugger invoked on condition of type SIMPLE-ERROR:
  /usr/bin/ld failed:
  /usr/bin/ld: Undefined symbols:
  dyld_stub_binding_helper

Any ideas?

Raf

Raffael Cavallaro, Ph.D.
raffaelcavallaro@...

Dear all,

What would it take to enable sbcl to lock pointers to foreign (i.e., c language) objects (data, pointers, and functions) in memory for future reference, and then dispose of it when no longer needed? I know that. In other words what is the preferred way to keep alien resources around for data transfer back and forth? Should I just wrap all the relevant code in a with-alien form? Would this help me at all with callbacks?

A little background - I'm trying to get sbcl to work with Apple's Carbon APIs. I'll need to:

1. pass pointers to large data structures both ways (lisp -> c, c -> lisp) so that I can manipulate the data in lisp, and display it using the platform native c APIs.

2. provide c language callback routines defined in lisp (using alien of course) to native c APIs.

Thanks in advance for any help,

Raf

Raffael Cavallaro, Ph.D.
raffaelcavallaro@...

[ CCed sbcl-help in the hope of reaching actual users who aren't also developers ]

Brian Mastenbrook <chandler@...> writes:

> I'm looking from some feedback from anybody who interfaces SBCL with FFI
> code.
> When you load foreign code, do you load more than one DSO, and do
> undefined symbols in one of these DSOs resolve to symbols in another of
> these DSOs? Can this situation be worked around on OS X?

For Linux, and I'm guessing for other ELF-based platforms, the convention when linking a library is to pass -l options for other libraries it depends on. For example:

:; ldd /usr/X11R6/lib/libXt.so.6
        libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x40054000)
        libSM.so.6 => /usr/X11R6/lib/libSM.so.6 (0x40110000)
        libICE.so.6 => /usr/X11R6/lib/libICE.so.6 (0x40118000)
        libc.so.6 => /lib/libc.so.6 (0x4012d000)
        libdl.so.2 => /lib/libdl.so.2 (0x4023d000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000)

so I would expect that references to, say, XOpenDisplay in libXt will get resolved to libX11 even if Xt is dlopened without RTLD_GLOBAL. In fact, I would tend to consider a library that is not built this way buggy.

If SBCL is using RTLD_GLOBAL it's at least as likely for historical nobody-quite-understood-this or it-used-to-be-buggy[*] reasons as it is for any actual purpose.

[*] I have a dim recollection that, yes, it was buggy, once upon a time. Early Linux libc 5, or so.

-dan

--
 - Link farm for free CL-on-Unix resources

Stig E Sandoe <stig@...> writes:

> WITH-TIMEOUT in src/code/unix.lisp fails when it is given a BODY
> argument with more than one expression. A simple patch to fix it
> is included as an attachment.

Thank you; I've merged this into sbcl-0.8.2.20.

> I'm not sure how much WITH-TIMEOUT is used, but the syntax is a bit
> uncommon [...]

Hmm, it is, isn't it. I'm not entirely comfortable with changing an exported interface, though :-/. It's probably possible to do something clever; such a clever thing would have to get all of

  (with-timeout 0.5 foo)
  (with-timeout (0.5) foo)
  (with-timeout (/ 1 2) foo)
  (with-timeout ((/ 1 2)) foo)

right. In any case, thanks again,

Cheers,

Christophe
--
+44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")
(pprint #36rJesusCollegeCambridge)

Hello all,

I'm looking from some feedback from anybody who interfaces SBCL with FFI code.

As some of you who have attempted to build SBCL on OS X have no doubt noticed, SBCL requires an installation of dlcompat to compile or run. This makes Darwin the only platform on which SBCL has a required external dependency. Worse, there are two versions of the dlcompat library, which differ in the way they handle symbol names. Mach-O mangles symbols with an underscore prefix, and when compiled with --enable-fink dlcompat does not prepend this prefix, while normally it does. SBCL depends on the first behavior. Needless to say this makes building difficult.

I would like to replace this with a small dlopen / dlsym wrapper; however, it seems that the only way to "fake" RTLD_GLOBAL behavior on OS X is through an undocumented function. Basically the two level namespace on OS X makes it impossible for undefined symbols in one module to be bound to symbols in another module without this call.

So, what I want to know is: When you load foreign code, do you load more than one DSO, and do undefined symbols in one of these DSOs resolve to symbols in another of these DSOs? Can this situation be worked around on OS X?

I would think myself that this situation is uncommon, to say the least, and could probably be worked around (namely by separating out these symbols into a shared library and linking all the loadable DSOs, aka bundles, against this library). If not the other option is to force a flat namespace, which I could see possibly making debugging external code difficult.

Your feedback is appreciated!

Brian Mastenbrook
chandler@...
http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200308&viewday=7
The live library will no longer compile with current versions of gcc. The first error I tracked down was an "include" in Groupsock.cpp that was looking for strstream.h. Unfortunately, strstream.h is no longer included with gcc, so that should be changed to a plain

#include <strstream>

I tried this, and it got past that error, but it now errors when trying to use classes that should have been defined in strstream (ostrstream). I think this needs to be looked at. Is it possible that there's newer code from live.com?

Just checked on live.com, this is from the changelog:

2004.02.20:
- Added "#ifdef"s to the two files that #include <strstream.h>, so that they will compile OK with GCC v3.*. (Thanks to Goetz Waschk for this.)

The Gentoo repository should probably switch to the newer version.

committed, thanks.
http://bugs.gentoo.org/44005
core threads

- threads: What are threads? What is the use in programming?
- CoreJava Project: Hi Sir, I need a simple project (using core Java, Swings, JDBC) on core Java... If you have please send to my account.
- Corejava Interview, Corejava questions, Corejava Interview Questions
- threads and events: Can you explain threads and events in Java for me? Thank you. (See: Java Event Handling, Java Thread Examples)
- corejava pass by value semantics: Example of pass by value semantics in core Java. Hi friend, Java passes parameters to methods using pass by value semantics. That is, a copy of the value of each of the specified...
- Coding for life cycle in threads: program for life cycle in threads
- Explain about threads: How to start a program in threads? Thread is a path of execution of a program... Every program has at least one thread. Threads are used...
- Examples on threads and multithreading: Are there any good examples on threads and multithreading? Hi Friend, please visit the following link: Thread Tutorial. Thanks.
- java threads: What are the two basic ways in which classes that can be run as threads may be defined?
- Java Threads: ...allows the threads to wait for resources to become available, and also notifies the thread that makes a resource available to notify other threads.
- threads in java: What is the difference between preemptive scheduling and time slicing? Hi friend, in preemptive scheduling a thread... or the priority of one of the waiting threads is increased. While in time slicing...
- Daemon Threads: In Java, any thread can be a daemon thread. Daemon threads are like service providers for other threads or objects running in the same process as the daemon threads.
- threads & autorelease pool: How to set an autorelease pool for an NSThread method in Objective-C? [NSThread detachNewThreadSelector:@selector(yourMethod) toTarget:self withObject:nil]; - (void)yourMethod
- Threads, Servlets: 1) Do two start() methods exist in one Thread class? e.g. create an object ThreadClass a = new ThreadClass; a.start(); a.start(); 2) How can you refresh a servlet when a new record is added to the database?
- multi threads: ...using three threads. I want to declare variables which will be available for all the threads to access. Is there a way to declare the variables as global variables available to all the threads? If so please send me the code.
- threads - Java Interview Questions: ...that will work even if many threads are executing it simultaneously. Writing it is a black... interactions between threads. You have to do it by logic.
- interfaces, exceptions, threads: The complete concepts of interfaces, exceptions, threads. A thread is a lightweight process which exists within a program and is executed to perform a special task. Several threads...
- Execution of Multiple Threads in Java: Can anyone tell me how multiple threads get executed in Java? I mean to say that after having called the start method, the run is also invoked, right? Now in my main method if I want...
- Threads - Java Interview Questions: If you want to create threads, please visit the following link...
- Threads: Hi, how to execute a threads program in Java? Is it using appletviewer as in applet? Hi manju: import java.awt.*; import...
- returning a value from Threads
- Threads in Java Swing MVC Application: Hello, I am currently making... threads into my program. I use the MVC paradigm and I just can't seem to implement custom threads into it, so my GUI doesn't freeze when I run certain parts.
- Threads on runnable interface: Create two threads using the Runnable interface. The first thread should print "hello" and the second should print "welcome"; using synchronization, print the words alternately.
- Java - Threads in Java: Thread is a feature of most languages, including Java. Threads... can be increased by using threads because a thread can stop or suspend a specific...
- java threads: ...for Java threads in the range of 1 to 10. Following are the constraints defined...
- how to create a reminder app using threads in Servlets?: I want... (threads will be required!), a pop-up window or a web page should automatically get redirected! I have used threads in core Java, but never in servlets.
- creating multiple threads
- Threads in Java: Threads in Java help in multitasking. They can stop or suspend a specific... returns the number of active threads in a particular group and subgroups... the threads on which it is invoked. yield() void
- pls tell me the difference between the run() and start() in threads in java
- wap showing three threads working simultaneously upon a single object
- Threads (suspend(), resume()) run-time abnormal behaviour: class A implements Runnable { Thread t; A() { t = new Thread(this); t.start(); } public void run...
- implementing an algorithm using multi threads: ...needs to be broken down into two or three threads and implemented...
- Creation of Multiple Threads: In this program, two threads are created along with the "main" thread... (one second) and switches to the other thread to execute it. At the time...
- Daemon Threads: This section describes daemon threads in Java. Any thread can be a daemon thread; daemon threads are service providers for other threads running in the same process. These threads are created by the JVM for background tasks.
- Diff between Runnable Interface and Thread class while using threads: Hi Friend, difference: 1) If you want to extend the Thread class...
- Digital watch using threads - Design concepts & design patterns
- Life Cycle of Threads: When you are programming with threads, understanding... the states implementing multiple threads are... This method returns the number of active threads in a particular thread group and all...
- autentication & authorisation - JSP-Servlet: /interviewquestions/corejava/null-marker-interfaces-in-java.shtml Thanks
- Count Active Thread in JAVA: ...activeCount() method of Thread to count the current active threads... the number of active threads in the current thread group. Example: class ThreadCount... System.out.println("Number of active threads...
- Inter Thread Communication: ...between two threads. In this process, a thread outside the critical section tries... the critical section. For this, the threads send a signal using which they can...
- Multithreading Example In Java: ...threads. In a multithreading environment the CPU can switch between two threads to execute them concurrently... different threads, and ThreadClassDemo.java is the main class to execute...
- Volatile: ...with multiple threads. A variable declared as volatile specifies that the variable is modified asynchronously by concurrently running threads... in registers. Threads that access shared variables keep private working copies...
- Java Multithreading Example: ...a different task using different threads. In Java multithreading we can create multiple threads to run different tasks. This example demonstrates step by step how to run different tasks using different threads.
- Java Multithreading: ...threads of execution running concurrently and independently. When a program contains multiple threads, the CPU can switch between them to execute them... variables are shared between threads. If one is modified by a thread...
- JList: pls tell me about the concept of JList in corejava? and give a suitable example
- java - Java Server Faces Questions: Hi friend, Thanks
- purpose of threads: A holistic counter in a servlet will count the number of times it has been accessed and the number of threads created... a Hashtable object. This object will be shared by all the threads in the container.
- Multithreading Java Tutorial for Beginners: Different threads run in simultaneous mode... A thread always is part of a process, and a process can have many threads that run... programs that make the maximum use of the CPU, since many threads run simultaneously.
- Interthread communication: ...communication among threads, which reduces the CPU's idle time... the threads that called wait() on the same object. The highest priority... consume: 1. In this program, two threads "Producer" and...
- Inter-Thread Communication: Java provides a very efficient way through which multiple threads... all the threads that called wait() on the same object. The highest... Produce: 1, consume: 1. In this program, two threads "Producer"...
- Java Method Synchronized: The Java language supports multiple threads. The synchronized... allows the threads to wait for resources to become available, and notifies the thread that makes a resource available to notify other threads... on the queues.
- Thread Priorities: ...schedule of threads. A thread gets the ready-to-run state according... that created it. At any given time, when multiple threads are ready to be executed... If at execution time a thread with a higher priority and all other threads...
http://www.roseindia.net/tutorialhelp/comment/49303
Distilled • LeetCode • Stack

- Pattern: Stack
- [20/Easy] Valid Parentheses
- [32/Hard] Longest Valid Parentheses
- [71/Medium] Simplify Path
- [232/Easy] Implement Queue using Stacks
- [227/Medium] Basic Calculator II
- [301/Hard] Remove Invalid Parentheses
- [394/Medium] Decode String
- [636/Medium] Exclusive Time of Functions
- [921/Medium] Minimum Add to Make Parentheses Valid
- [1047/Easy] Remove All Adjacent Duplicates In String
- [1209/Medium] Remove All Adjacent Duplicates in String II
- [1246/Medium] Minimum Remove to Make Valid Parentheses

Pattern: Stack

[20/Easy] Valid Parentheses

Problem

- Given a string s containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid: open brackets must be closed by the same type of brackets, and in the correct order.

Solution: Stack

class Solution:
    def isValid(self, s: str) -> bool:
        stack = []
        for paren in s:
            if paren in "({[":
                stack.append(paren)
            elif not stack:
                return False
            elif paren == ")":
                if stack.pop() != "(":
                    return False
            elif paren == "}":
                if stack.pop() != "{":
                    return False
            elif paren == "]":
                if stack.pop() != "[":
                    return False
        return not len(stack)

- Same approach; concise:

class Solution:
    def isValid(self, s: str) -> bool:
        if len(s) % 2:
            return False
        d = {'(': ')', '{': '}', '[': ']'}
        stack = []
        for char in s:
            if char in d:
                stack.append(char)
            # "not len(stack)" is used to ensure we don't pop an empty list
            elif not len(stack) or d[stack.pop()] != char:
                return False
        return not len(stack)

Complexity

- Time: \(O(n)\)
- Space: \(O(n)\) for the stack in the worst case (all opening brackets)

[32/Hard] Longest Valid Parentheses

Problem

Given a string containing just the characters '(' and ')', find the length of the longest valid (well-formed) parentheses substring.

Example 1:

Input: s = "(()"
Output: 2
Explanation: The longest valid parentheses substring is "()".

- Example 2:

Input: s = ")()())"
Output: 4
Explanation: The longest valid parentheses substring is "()()".

- Example 3:

Input: s = ""
Output: 0

Solution: Stack

One of the key things to realize about valid parentheses strings is that they're entirely self-satisfied, meaning that while you can have one parentheses substring that is entirely inside another, you can't have two parentheses substrings that only partially overlap. This means that we can use a greedy \(O(n)\) time complexity solution to this problem without the need for any kind of backtracking. In fact, we should be able to use a very standard stack-based valid parentheses string algorithm with just three very minor modifications.

In a standard valid parentheses string algorithm, we iterate through the string (s) and push the index (i) of any ( to our stack. Whenever we find a ), we match it with the last entry on the stack and pop said entry off. We know the string is not valid if we find a ) while there are no ( indexes in the stack with which to match it, and also if we have leftover ( in the stack when we reach the end of s.

For this problem, we will need to add in a step that updates our answer (max_length) when we close a parentheses pair. Since we stored the index of the ( in our stack, we can easily find the difference between the ) at i and the last entry in the stack, which should be the length of the valid substring which was just closed.

But here we run into a problem, because consecutive valid parentheses substrings can be grouped into a larger valid parentheses substring (i.e., ()() = 4). So instead of counting from the last stack entry, we should actually count from the second to last entry, to include any other valid closed parentheses substrings since the most recent ( that will still remain after we pop the just-matched last stack entry off. This, of course, brings us to the second and third changes.
Since we’re checking the second to last stack entry, what happens in the case of ()()when you close the second valid substring yet there’s only the one stack entry left at the time? To avoid this issue, we can just wrap the entire string in another imaginary set of parentheses by starting with stack = [-1], indicating that there’s an imaginary (just before the beginning of the string at i = 0. The other issue is that we will want to continue even if the string up to ibecomes invalid due to a )appearing when the stack is “empty”, or in this case has only our imaginary index left. In that case, we can just effectively restart our stack by updating our imaginary (index ( stack[0] = i) and continue on. Then, once we reach the end of s, we can just return ans. class Solution: def longestValidParentheses(self, s: str) -> int: # stack, used to record index of parenthesis # initialized to -1 as dummy head for valid parentheses length computation # for the case of consecutive valid parentheses substrings being grouped # into a larger valid parentheses substring (for e.g., `()()` = 4) stack = [-1] max_length = 0 # linear scan each index and character in input string s for cur_idx, char in enumerate(s): if char == '(': # push when current char is "(" stack.append(cur_idx) else: # pop when current char is ")" stack.pop() if not stack: # stack is empty, push current index into stack stack.append(cur_idx) else: # stack is non-empty, update max valid parentheses length max_length = max(max_length, cur_idx - stack[-1]) return max_length Complexity - Time: \(O(n)\) where \(n\) is the number of characters in the input string, i.e., n = len(s). - Space: \(O(n)\) for stack Solution: Stack - Cleaner; doesn’t need stack initialization as in the previous solution. - Algorithm: - Whenever we see a new open paren, we push the current longest streak to the stack. - Whenever we see a close paren, we pop the top value if possible, and add the value (which was the previous longest streak up to that point) to the current one (because they are now contiguous) and add 2 to count for the matching open and close parens. - If there is no matching open paren for a close paren, reset the current count. class Solution: def longestValidParentheses(self, s: str) -> int: stack, curr_longest, max_longest = [], 0, 0 for c in s: if c == '(': stack.append(curr_longest) curr_longest = 0 elif c == ')': if stack: curr_longest += stack.pop() + 2 max_longest = max(max_longest, curr_longest) else: curr_longest = 0 return max_longest Complexity - Time: \(O(n)\) where \(n\) is the number of characters in the input string, i.e., n = len(s). - Space: \(O(n)\) for stack [71/Medium] Simplify Path Problem -. - Example 1: Input: path = "/home/" Output: "/home" Explanation: Note that there is no trailing slash after the last directory name. - Example 2: Input: path = "/../" Output: "/" Explanation: Going one level up from the root directory is a no-op, as the root level is the highest level you can go. - Example 3: Input: path = "/home//foo/" Output: "/home/foo" Explanation: In the canonical path, multiple consecutive slashes are replaced by a single one. - Constraints: 1 <= path.length <= 3000 path consists of English letters, digits, period '.', slash '/' or '_'. path is a valid absolute Unix path. - See problem on LeetCode. 
Solution: Stack class Solution: def simplifyPath(self, path: str) -> str: stack = [] for token in path.split('/'): if token in ('', '.'): # if not token or token == ".": # skip current dir continue # or pass elif token == '..': if stack: # if we're not in the root directory, go back; else don't do anything stack.pop() # (since "cd .." on root, does nothing) else: stack.append(token) return '/' + '/'.join(stack) Complexity - Time: \(O(n)\) where \(n\) is the number of characters in the input string, i.e., n = len(path). - Space: \(O(n)\) [232/Easy] Implement Queue using Stacks Problem Implement a first in first out (FIFO) queue using only two stacks. The implemented queue should support all the functions of a normal queue ( push, peek, pop, and empty). - Implement the MyQueueclass: void push(int x)Pushes element x to the back of the queue. int pop()Removes the element from the front of the queue and returns it. int peek()Returns the element at the front of the queue. boolean empty()Returns true if the queue is empty, false otherwise. - calls will be made to push, pop, peek, and empty. All the calls to pop and peek are valid. Follow-up: Can you implement the queue such that each operation is amortized O(1) time complexity? In other words, performing n operations will take overall O(n) time even if one of those operations may take longer. - See problem on LeetCode. Solution: Stack class MyQueue: def __init__(self): self.stack = [] def push(self, x: int) -> None: self.stack += [x] # or self.stack.append(x) def pop(self) -> int: return self.stack.pop(0) def peek(self) -> int: return self.stack[0] def empty(self) -> bool: return len(self.stack) == 0 # Your MyQueue object will be instantiated and called as such: # obj = MyQueue() # obj.push(x) # param_2 = obj.pop() # param_3 = obj.peek() # param_4 = obj.empty() Complexity - Time: \(O(n)\) because of pop(0). - Space: \(O(1)\) Solution: Stack with Index Book-keeping class MyQueue: def __init__(self): self.stack = [] self.idx = 0 def push(self, x: int) -> None: self.stack += [x] def pop(self) -> int: self.idx += 1 return self.stack[self.idx-1] def peek(self) -> int: return self.stack[self.idx] def empty(self) -> bool: return self.idx == len(self.stack) # Your MyQueue object will be instantiated and called as such: # obj = MyQueue() # obj.push(x) # param_2 = obj.pop() # param_3 = obj.peek() # param_4 = obj.empty() Complexity - Time: \(O(1)\) - Space: \(O(1)\) [227/Medium] Basic Calculator II Problem - Given a string swhich represents an expression, evaluate this expression and return its value. - The integer division should truncate toward zero. - You may assume that the given expression is always valid. All intermediate results will be in the range of [-2^31, 2^31 - 1]. Note: You are not allowed to use any built-in function which evaluates strings as mathematical expressions, such as eval(). - Example 1: Input: s = "3+2*2" Output: 7 - Example 2: Input: s = " 3/2 " Output: 1 - Example 3: Input: s = " 3+5 / 2 " Output: 5 - Constraints: 1 <= s.length <= 3 * 105 s consists of integers and operators ('+', '-', '*', '/') separated by some number of spaces. s represents a valid expression. All the integers in the expression are non-negative integers in the range [0, 2^31 - 1]. The answer is guaranteed to fit in a 32-bit integer. - See problem on LeetCode. 
Solution: Stack class Solution: def calculate(self, s): if not s: return "0" num, stack, sign = 0, [], "+" for i in range(len(s)): if s[i].isdigit(): num = num * 10 + int(s[i]) # or num = num*10 + ord(s[i]) - ord("0") if s[i] in "+-*/" or i == len(s) - 1: if sign == "+": stack.append(num) elif sign == "-": stack.append(-num) elif sign == "*": stack.append(stack.pop()*num) else: stack.append(int(stack.pop()/num)) num = 0 sign = s[i] return sum(stack) Complexity - Time: \(O(n)\) - Space: \(O(1)\) [301/Hard] Remove Invalid Parentheses Given a string sthat contains parentheses and letters, remove the minimum number of invalid parentheses to make the input string valid. Return all the possible results. You may return the answer in any order. Example 1: Input: s = "()())()" Output: ["(())()","()()()"] - Example 2: Input: s = "(a)())()" Output: ["(a())()","(a)()()"] - Example 3: Input: s = ")(" Output: [""] - Constraints: 1 <= s.length <= 25 s consists of lowercase English letters and parentheses '(' and ')'. There will be at most 20 parentheses in s. - See problem on LeetCode. Solution: BFS - Being lazy and using evalfor checking: def removeInvalidParentheses(self, s): level = {s} while True: valid = [] for s in level: try: eval('0,' + filter('()'.count, s).replace(')', '),')) valid.append(s) except: pass if valid: return valid level = {s[:i] + s[i+1:] for s in level for i in range(len(s))} Solution: BFS - Three times as fast as the previous one: def removeInvalidParentheses(self, s): def isvalid(s): ctr = 0 for c in s: if c == '(': ctr += 1 elif c == ')': ctr -= 1 if ctr < 0: return False return ctr == 0 level = {s} while True: valid = filter(isvalid, level) if valid: return valid level = {s[:i] + s[i+1:] for s in level for i in range(len(s))} Solution: BFS - Just a mix of the above two: def removeInvalidParentheses(self, s): def isvalid(s): try: eval('0,' + filter('()'.count, s).replace(')', '),')) return True except: pass level = {s} while True: valid = filter(isvalid, level) if valid: return valid level = {s[:i] + s[i+1:] for s in level for i in range(len(s))} Solution: BFS def removeInvalidParentheses(self, s): def isvalid(s): s = filter('()'.count, s) while '()' in s: s = s.replace('()', '') return not s level = {s} while True: valid = filter(isvalid, level) if valid: return valid level = {s[:i] + s[i+1:] for s in level for i in range(len(s))} Solution: Stack + Backtracking/DFS - Use a stack to find invalid left and right braces. - If its close brace is at index i, you can remove it directly to make it valid and also you can also remove any of the close braces before that i.e in the range [0, i-1]. - Similarly for open brace, left over at index i, you can remove it or any other open brace after that, i.e., [i+1, end]. - If left over braces are more than 1 say 2 close braces here, you need to make combinations of all 2 braces before that index and find valid parentheses. - So, we count left and right invalid braces and do backtracking to remove them. class Solution: def removeInvalidParentheses(self, s: str) -> List[str]: def isValid(s): stack = [] for i in range(len(s)): if( s[i] == '(' ): stack.append( (i,'(') ) elif( s[i] == ')' ): if(stack and stack[-1][1] == '('): stack.pop() else: stack.append( (i,')') ) # pushing invalid close braces also return len(stack) == 0, stack def dfs(s, left, right): visited.add(s) if left == 0 and right == 0 and isValid(s)[0]: res.append(s) for i, ch in enumerate(s): if ch != '(' and ch != ')': continue # if it is any other char ignore. 
                if (ch == '(' and left == 0) or (ch == ')' and right == 0):
                    continue # if left == 0 then removing '(' will only cause imbalance. Hence, skip.
                if s[:i] + s[i+1:] not in visited:
                    dfs(s[:i] + s[i+1:], left - (ch == '('), right - (ch == ')'))

        stack = isValid(s)[1]
        lc = sum([1 for val in stack if val[1] == "("]) # num of left braces
        rc = len(stack) - lc
        res, visited = [], set()
        dfs(s, lc, rc)
        return res

Complexity
- Time: \(O(2^n)\) since each brace has two options: to be kept or to be removed.
- Space: \(O(n)\)

[394/Medium] Decode String

Problem

Given an encoded string, return its decoded string.

The encoding rule is: k[encoded_string], where the encoded_string inside the square brackets is being repeated exactly k times. Note that k is guaranteed to be a positive integer.

You may assume that the input string is always valid; there are no extra white spaces, square brackets are well-formed, etc. Furthermore, you may assume that the original data does not contain any digits and that digits are only for those repeat numbers, k. For example, there will not be input like 3a or 2[4].

Example 1:
    Input: s = "3[a]2[bc]"
    Output: "aaabcbc"
- Example 2:
    Input: s = "3[a2[c]]"
    Output: "accaccacc"
- Example 3:
    Input: s = "2[abc]3[cd]ef"
    Output: "abcabccdcdcdef"
- Constraints:
    - 1 <= s.length <= 30
    - s consists of lowercase English letters, digits, and square brackets '[]'.
    - s is guaranteed to be a valid input.
    - All the integers in s are in the range [1, 300].
- See problem on LeetCode.

Solution: Backtracking/DFS

class Solution:
    def decodeString(self, s: str) -> str:
        def dfs(s, p):
            res = ""
            i, num = p, 0
            while i < len(s):
                asc = (ord(s[i])-48)
                if 0 <= asc <= 9: # can also be written as if s[i].isdigit()
                    num = num*10 + asc
                elif s[i] == "[":
                    local, pos = dfs(s, i+1)
                    res += local*num
                    i = pos
                    num = 0
                elif s[i] == "]":
                    return res, i
                else:
                    res += s[i]
                i += 1
            return res, i

        return dfs(s, 0)[0]

Complexity
- Time: \(O(m)\), where \(m\) is the length of the decoded string (because of the nested repetitions, the output can be much longer than the input).
- Space: \(O(n)\) for the recursion stack, plus the output.

Solution: Stack

Use a stack to store the previously built string and the number which we have to apply as soon as a bracket (if any) gets closed. Possible inputs are: [, ], alphabet(s) or numbers. Let's talk about each one by one.
- We start a for loop traversing each element of s. If we encounter a number, it is handled by the isdigit() check. curNum*10+int(c) stores the number in curNum even when the number has more than a single digit.
- When we encounter a character, we append it to a string named curString. There can be one or more such characters; curString += c collects them.
- The easy part is over. Now, when we encounter [ it means the start of a new substring, meaning the previous substring (if there was one) has already been traversed and handled. So, we append the current curString and curNum to the stack, and reset curString to an empty string and curNum to 0 for further processing, as an open bracket means the start of a new substring.
- Finally, when we encounter a close bracket ], it means we have reached the point where our substring is complete, and now we have to work out its value. That's when we go back to the stack to find what we stored there to help in calculating the current substring. On the stack we find a number on top, which is popped, and below it the previous string, which we need to prepend to num*curString; after the calculation, everything is stored back in curString.
- The calculated curString is returned as the answer once s is exhausted; otherwise it is appended to the stack again when another open bracket is encountered, and the above process repeats as the conditions dictate.

class Solution(object):
    def decodeString(self, s: str) -> str:
        stack = []
        curNum = 0
        curString = ''
        for c in s:
            if c == '[':
                stack.append(curString)
                stack.append(curNum)
                curString = ''
                curNum = 0
            elif c == ']':
                num = stack.pop()
                prevString = stack.pop()
                curString = prevString + num*curString
            elif c.isdigit(): # curNum*10+int(c) is helpful in keeping track of numbers with more than 1 digit
                curNum = curNum*10 + int(c)
            else:
                curString += c
        return curString

Complexity
- Time: \(O(n)\)
- Space: \(O(n)\)

[636/Medium] Exclusive Time of Functions

Problem

On a single-threaded CPU, we execute a program containing n functions. Each function has a unique ID between 0 and n-1.

Function calls are stored in a call stack: when a function call starts, its ID is pushed onto the stack, and when a function call ends, its ID is popped off the stack. The function whose ID is at the top of the stack is the current function being executed. Each time a function starts or ends, we write a log with the ID, whether it started or ended, and the timestamp.

You are given a list logs, where logs[i] represents the i-th log message formatted as a string "{function_id}:{"start" | "end"}:{timestamp}". For example, "0:start:3" means a function call with function ID 0 started at the beginning of timestamp 3, and "1:end:2" means a function call with function ID 1 ended at the end of timestamp 2. Note that a function can be called multiple times, possibly recursively.

A function's exclusive time is the sum of execution times for all function calls in the program. For example, if a function is called twice, one call executing for 2 time units and another call executing for 1 time unit, the exclusive time is 2 + 1 = 3.

Return the exclusive time of each function in an array, where the value at the i-th index represents the exclusive time for the function with ID i.

Example 1:
    Input: n = 2, logs = ["0:start:0","1:start:2","1:end:5","0:end:6"]
    Output: [3,4]
    Explanation:
    Function 0 starts at the beginning of time 0, then executes for 2 units of time and reaches the end of time 1.
    Function 1 starts at the beginning of time 2, executes for 4 units of time, and ends at the end of time 5.
    Function 0 resumes execution at the beginning of time 6 and executes for 1 unit of time.
    So function 0 spends 2 + 1 = 3 units of total time executing, and function 1 spends 4 units of total time executing.
- Example 2:
    Input: n = 1, logs = ["0:start:0","0:start:2","0:end:5","0:start:6","0:end:6","0:end:7"]
    Output: [8]
    Explanation:
    Function 0 starts at the beginning of time 0, executes for 2 units of time, and recursively calls itself.
    Function 0 (recursive call) starts at the beginning of time 2 and executes for 4 units of time.
    Function 0 (initial call) resumes execution then immediately calls itself again.
    Function 0 (2nd recursive call) starts at the beginning of time 6 and executes for 1 unit of time.
    Function 0 (initial call) resumes execution at the beginning of time 7 and executes for 1 unit of time.
    So function 0 spends 2 + 4 + 1 + 1 = 8 units of total time executing.
- Example 3:
    Input: n = 2, logs = ["0:start:0","0:start:2","0:end:5","1:start:6","1:end:6","0:end:7"]
    Output: [7,1]
    Explanation:
    Function 0 starts at the beginning of time 0, executes for 2 units of time, and recursively calls itself.
    Function 0 (recursive call) starts at the beginning of time 2 and executes for 4 units of time.
    Function 0 (initial call) resumes execution then immediately calls function 1.
    Function 1 starts at the beginning of time 6, executes 1 unit of time, and ends at the end of time 6.
    Function 0 resumes execution at the beginning of time 6 and executes for 2 units of time.
    So function 0 spends 2 + 4 + 1 = 7 units of total time executing, and function 1 spends 1 unit of total time executing.
- Constraints:
    - 1 <= n <= 100
    - 1 <= logs.length <= 500
    - 0 <= function_id < n
    - 0 <= timestamp <= 10^9
    - No two start events will happen at the same timestamp.
    - No two end events will happen at the same timestamp.
    - Each function has an "end" log for each "start" log.
- See problem on LeetCode.

Solution: Stack

class Solution:
    def exclusiveTime(self, n: int, logs: List[str]) -> List[int]:
        stack = []
        result = [0] * n

        def normalizeProcessTime(processTime):
            return processTime.split(':')

        for processTime in logs:
            processId, eventType, time = normalizeProcessTime(processTime)

            if eventType == "start":
                stack.append([processId, time])
            elif eventType == "end":
                processId, startTime = stack.pop()
                timeSpent = int(time) - int(startTime) + 1 # Add 1 because both endpoints are included

                result[int(processId)] += timeSpent

                # Decrement time for the next process in the stack
                if len(stack) != 0:
                    nextProcessId, timeSpentByNextProcess = stack[-1]
                    result[int(nextProcessId)] -= timeSpent

        return result

- Same approach, rehashed:
    - Split a function call into two different types of states:
        - Before a nested call was made.
        - After a nested call was made.
    - Record the time spent in these two types of states.
    - E.g., for n = 2, logs = ["0:start:0","1:start:2","1:end:5","0:end:6"]:
        - Initially the stack is empty; then it records [[0, 0]], meaning function 0 started at timestamp 0.
        - Then a nested call happens; we record the time spent on function 0 in ans, then append to the stack.
        - Record time spent before the nested call: ans[s[-1][0]] += timestamp - s[-1][1].
        - Now the stack has [[0, 0], [1, 2]].
        - When an end is met, pop the top of the stack and record the time as timestamp - s.pop()[1] + 1.
        - Now the stack is back to [[0, 0]], but before we end this iteration, we need to update the start time of this record.
        - Because the time spent on it before the nested call is already recorded, it is now like a new start.
        - Update the start time: s[-1][1] = timestamp+1.

class Solution:
    def exclusiveTime(self, n: int, logs: List[str]) -> List[int]:
        # to convert id and time to integers
        helper = lambda log: (int(log[0]), log[1], int(log[2]))
        # convert ["0:start:0", ...] to [(0, 'start', 0), ...]
        logs = [helper(log.split(':')) for log in logs]

        # initialize answer and stack
        ans, s = [0] * n, []

        # for each record
        for (i, status, timestamp) in logs:
            # if it's the start
            if status == 'start':
                # if s is not empty, update time spent on previous id (s[-1][0])
                if s:
                    ans[s[-1][0]] += timestamp - s[-1][1]
                # then add to top of stack
                s.append([i, timestamp])
            # if it's the end
            else:
                # update time spent on `i`
                ans[i] += timestamp - s.pop()[1] + 1
                # if s is not empty, update start time of previous id
                if s:
                    s[-1][1] = timestamp+1

        return ans

Complexity
- Time: \(O(m)\) where \(m\) is the number of logs (note that the length of each log is fixed and is thus a constant)
- Space: \(O(n)\)

[921/Medium] Minimum Add to Make Parentheses Valid

Problem
- A parentheses string is valid if and only if:
    - It is the empty string,
    - It can be written as AB (A concatenated with B), where A and B are valid strings, or
    - It can be written as (A), where A is a valid string.
- You are given a parentheses string s. In one move, you can insert a parenthesis at any position of the string. Return the minimum number of moves required to make s valid.
- Constraints:
    - 1 <= s.length <= 1000
    - s[i] is either '(' or ')'.

Solution: Two counts

class Solution:
    def without_stack(self, S):
        opening = count = 0
        for i in S:
            if i == '(':
                opening += 1
            else:
                if opening:
                    opening -= 1
                else:
                    count += 1
        return count + opening

Complexity
- Time: \(O(n)\)
- Space: \(O(1)\)

Solution: Stack

class Solution:
    def using_stack(self, S):
        """
        :type S: str
        :rtype: int
        """
        stack = []
        for ch in S: # or traverse the string using "for i in range(len(S))" and access "S[i]"
            # if the character is "(", then append it to the stack
            if ch == '(':
                stack.append(ch)
            else:
                # check if the stack has a character and also if that character is "(", then pop it
                if len(stack) and stack[-1] == '(':
                    stack.pop()
                # else append ")" onto the stack
                else:
                    stack.append(ch)
        # Return the length of the stack, which is
        # the number of unmatched parentheses
        return len(stack)

Complexity
- Time: \(O(n)\)
- Space: \(O(n)\)

[1047/Easy] Remove All Adjacent Duplicates In String

Problem

You are given a string s consisting of lowercase English letters. A duplicate removal consists of choosing two adjacent and equal letters and removing them.

We repeatedly make duplicate removals on s until we no longer can. Return the final string after all such duplicate removals have been made. It can be proven that the answer is unique.

Example 1:
    Input: s = "abbaca"
    Output: "ca"
    Explanation: For example, in "abbaca" we could remove "bb" since the letters are adjacent and equal, and this is the only possible move. The result of this move is that the string is "aaca", of which only "aa" is possible, so the final string is "ca".
- Example 2:
    Input: s = "azxxzy"
    Output: "ay"

Solution: Build stack and match last

class Solution:
    def removeDuplicates(self, s: str) -> str:
        stack = []
        for c in s:
            if stack and stack[-1] == c:
                stack.pop()
            else:
                stack.append(c)
        return ''.join(stack)

Complexity
- Time: \(O(n)\)
- Space: \(O(n)\)

[1209/Medium] Remove All Adjacent Duplicates in String II

Problem

You are given a string s and an integer k. A k-duplicate removal consists of choosing k adjacent and equal letters from s and removing them, causing the left and the right side of the deleted substring to concatenate together. We repeatedly make k-duplicate removals on s until we no longer can. Return the final string after all such removals have been made. It is guaranteed that the answer is unique.
- Example:
    Input: s = "deeedbbcccbdaa", k = 3
    Output: "aa"
    Explanation: First delete "eee" and "ccc", getting "ddbbbdaa". Then delete "bbb", getting "dddaa". Finally delete "ddd", getting "aa".
- s only contains lowercase English letters.
- See problem on LeetCode.
Solution: Recursion

class Solution:
    def removeDuplicates(self, s: str, k: int) -> str:
        count, i = 1, 1
        while i < len(s):
            if s[i] == s[i-1]:
                count += 1
            else:
                count = 1
            if count == k:
                # found k adjacent duplicates: remove them and rescan
                return self.removeDuplicates(s[:i+1-k] + s[i+1:], k)
            i += 1
        return s

Complexity
- Time: \(O(n^2/k)\) in the worst case, since every removal triggers a rescan.
- Space: \(O(n)\) per recursive call for the string copies.

Solution: One-liner

class Solution:
    def removeDuplicates(self, s: str, k: int) -> str:
        for letter in s:
            s = s.replace(letter * k, "")
        return s

This works because replace removes every occurrence of letter * k, and any run created by a removal is closed off by a character that appears later in the original iteration order; it is subtle, though, and the stack solution below is the safer pattern.

Complexity
- Time: \(O(n^2)\) in the worst case (one O(n) replace per character).
- Space: \(O(n)\)

Solution: Stack
- Stack that tracks character and count, increment count in stack, pop when count reaches k.

class Solution:
    def removeDuplicates(self, s: str, k: int) -> str:
        stack = []
        res = ''
        for i in s:
            if not stack:
                # push character with length of adjacency = 1
                stack.append([i, 1])
                continue
            if stack[-1][0] == i:
                # update last character's length of adjacency
                stack[-1][1] += 1
            else:
                # push character with length of adjacency = 1
                stack.append([i, 1])
            if stack[-1][1] == k:
                # pop last character if it has repeated k times
                stack.pop()

        # generate output string
        for i in stack:
            res += i[0] * i[1]
        return res

Complexity
- Time: \(O(n)\)
- Space: \(O(n)\)

[1249/Medium] Minimum Remove to Make Valid Parentheses

Problem
- Given a string s of '(', ')' and lowercase English characters, remove the minimum number of parentheses ('(' or ')', in any positions) so that the resulting parentheses string is valid, and return any valid string.
- Constraints:
    - 1 <= s.length <= 10^5
    - s[i] is either '(', ')', or a lowercase English letter.
- See problem on LeetCode.

Solution: Stack
- Convert the input string to a list, because string is an immutable data structure in Python and it's much easier and more memory-efficient to deal with a list for this task.
- Iterate through the list.
- Keep track of the indices of open parentheses in the stack. In other words, when we come across an open parenthesis, we add its index to the stack.
- When we come across a closed parenthesis, we pop an element from the stack. If the stack is empty, we replace the current list element with an empty string.
- After the iteration, we replace all indices we still have in the stack with empty strings, because we don't have close parentheses for them.
- Convert the list to a string and return it.

def minRemoveToMakeValid(self, s: str) -> str:
    s = list(s)
    stack = []
    for i, char in enumerate(s):
        if char == '(':
            stack.append(i)
        elif char == ')':
            if stack:
                stack.pop()
            else:
                # Remove closed parentheses for which we don't have open parentheses.
                s[i] = ''
    # Remove invalid open parentheses for which we don't have close parentheses.
    while stack: # or for i in stack:
        s[stack.pop()] = '' # s[i] = ''
    return ''.join(s)

Complexity
- Time: \(O(n)\) (a constant number of linear passes)
- Space: \(O(n)\)
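Solution: Two Passes, No Index Stack

A sketch of an alternative (not from the original write-up): since only the counts of unmatched parentheses matter, one left-to-right pass can drop invalid ')' and one right-to-left pass can drop leftover '('. The helper name drop_invalid is ours.

class Solution:
    def minRemoveToMakeValid(self, s: str) -> str:
        def drop_invalid(chars, open_ch, close_ch):
            # keep a running balance; drop any close_ch with no matching open_ch
            balance, out = 0, []
            for c in chars:
                if c == open_ch:
                    balance += 1
                elif c == close_ch:
                    if balance == 0:
                        continue # unmatched close: drop it
                    balance -= 1
                out.append(c)
            return out

        forward = drop_invalid(s, '(', ')')                   # drop bad ')'
        backward = drop_invalid(reversed(forward), ')', '(')  # drop bad '('
        return ''.join(reversed(backward))

Complexity
- Time: \(O(n)\)
- Space: \(O(n)\)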
https://aman.ai/code/stack/
CC-MAIN-2022-40
refinedweb
4,929
62.48
Loops are a concept that is found in practically all programming languages. Iterating through a list, tuple, string, dictionary, or set is easier with Python loops. There are two types of loops in Python: "for" and "while." The code block inside the loop is executed repeatedly until the condition fails. The loop control statements interrupt the execution flow and terminate/skip iterations as needed.

Inside a loop, Python's break and continue are used to vary the loop's flow from its standard process. The purpose of a for-loop or while-loop is to iterate until the provided condition fails. When you employ a break or continue statement, the loop's flow is altered from its default.

In Python, using for loops and while loops allows you to automate and repeat activities quickly. Break, continue, and pass statements can be used to accomplish these tasks. However, an external event may occasionally impact how your program performs. When this happens, your software may want to quit a loop altogether, bypass a section of a loop before continuing, or disregard the external factor.

Python Break, Continue and Pass Statements

Prerequisites

It would be best to have Python 3 installed on your computer or server and a programming environment set up. If you don't already have one, consult the installation and setup recommendations for a local programming environment or a programming environment on your server that is appropriate for your operating system, for instance, Ubuntu, CentOS, or Debian, etc.

Break statement in Python

The break statement ensures that the loop in which it is used is terminated. When the break statement is used inside nested loops, the current loop is interrupted, and the flow continues with the code that comes after the loop. The steps involved in the flow are as follows.

Step one
The execution of the loop begins.

Step two
If the loop condition is true, the loop's body is executed.

Step three
If the loop's body contains a break statement, the loop will quit and go to step six.

Step four
After the loop's body has executed and completed, the loop moves to the next iteration.

Step five
If the loop condition is false, it will leave the loop and go to step six.

Step six
Step six is the end of the loop.

Execution flow of the break statement

The if-condition is checked when the for-loop begins to execute. The code inside the for-loop runs if the condition is false. If true, the break statement is executed, and the for-loop is terminated.

for var in range(len(my_list)):
    # inside for-loop
    if condition:
        break
# exit for-loop

When the while loop runs, it checks the if-condition; if it's true, the break statement is executed, and the loop ends. The code segments inside the while loop are executed if the condition is false.

while expression:
    # inside while-loop
    if condition:
        break
# while-loop exit

A break statement is effectively used inside a for-loop

The list language_list = ['Java', 'Python', 'C', 'JavaScript', 'Kotlin', 'Rails'] is iterated with a for-loop. We want to look through the language list for the name 'Kotlin'. Inside the for-loop, the if-condition compares each item in the list to 'Kotlin'. If the condition is true, the break statement is executed, and the loop is terminated.
The following is a working example of using the break statement:

language_list = ['Java', 'Python', 'C', 'JavaScript', 'Kotlin', 'Rails']

for i in range(len(language_list)):
    print(language_list[i])
    if language_list[i] == 'Kotlin':
        print('Found the name Kotlin')
        break
    print('Statement runs when break was not reached')

print('Terminating the for-loop')

Example: Demonstrating the Break statement inside the while-loop

language_list = ['Java', 'Python', 'C', 'JavaScript', 'Kotlin', 'Rails']
i = 0
while True:
    print(language_list[i])
    if (language_list[i] == 'Kotlin'):
        print('Found the name Kotlin')
        break
    print('Statement runs when break was not reached')
    i += 1

print('Statement runs after while-loop exit')

Break statement: Inside nested loops

There are two for-loops in this example, both iterating over the numbers 0 to 9. We've added a condition to the second for-loop that says it should break if the second for-loop's index is 8. Due to the break statement, the inner for-loop never runs for 8 and 9.

for i in range(10):
    for j in range(10):
        if j == 8:
            break
        print("current number is: ", i, j)

continue statement in Python

The continue statement is vital, as it allows you to skip the rest of the current iteration when an external condition is triggered, while still finishing the loop. That is, the loop's current iteration is interrupted, but the program returns to the top of the loop. After the continue statement skips the remaining code, control is transferred back to the start of the loop for the following iteration. The continue statement is usually seen after a conditional if statement in the code block under the loop statement.

We'll use a continue statement instead of a break statement in the same for loop program as in the Break Statement section above:

Syntax:

continue

The steps involved in the flow of the continue statement are listed below.

Step one
The execution of the loop begins.

Step two
The code inside the loop is executed. If the loop contains a continue statement, control is returned to step four, the beginning of the loop for the following iteration.

Step three
The rest of the code inside the loop is executed.

Step four
The next iteration is called once a continue statement is reached or the loop body has completed.

Step five
After the loop execution is complete, the loop will quit and proceed to step seven.

Step six
If the loop condition in step one fails, it will escape the loop and go to step seven.

Step seven
The loop has come to an end.

The flow of the continue statement execution

The for-loop loops through the given item_list. The if-condition is executed inside the for-loop. If the condition is true, the continue statement is executed, and control is passed to the start of the loop for the next iteration.

The following is the flow of the code:

for var in range(len(item_list)):
    # inside for-loop
    if condition:
        continue
    # inside the for-loop
# exit for-loop

When the while loop runs, it checks the if-condition and performs the continue statement if it is true. The code inside the while-loop will be performed if the condition is false. For the next iteration, the control will return to the start of the while-loop.
The following is the flow of the code:

while expression:
    # inside while-loop
    if condition:
        continue
    # inside while-loop
# while-loop exit

Continue statement inside the for-loop

for i in range(15, 20):
    if i == 18:
        continue
    print("Current number is:", i)

Continue statement inside the while-loop

i = 15
while i <= 20:
    if i == 18:
        i += 1
        continue
    print("Current number is:", i)
    i += 1

Continue statement inside the nested loop

Two for-loops are used in the example below, both iterating over the numbers 5 to 9. There is a condition in the second for-loop that says it should continue if the value of the second for-loop's index is 8. As a result of the continue statement, the second for-loop skips the iteration where j is 8 and moves on to 9.

for i in range(5, 10):
    for j in range(5, 10):
        if j == 8:
            continue
        print("Current number is ", i, j)

Example: using the Continue Statement

for choose_letter in 'Python':
    if choose_letter == 'y':
        continue
    print('The currently selected letter:', choose_letter)

num_val = 6
while num_val > 0:
    num_val = num_val - 1
    if num_val == 2:
        continue
    print('The currently selected variable value:', num_val)
print("END")

The pass statement in Python

The pass statement in Python is used as a placeholder for code that will be implemented later, for example in loops, functions, classes, and if-statements. When an external condition occurs, the pass statement allows you to acknowledge the situation without interrupting the loop: all the code keeps running as if the pass were not there. Like the other statements, the pass statement will be found within the loop statement's code block, usually after a conditional if statement. So let's replace the break statement or the continue statement with a pass statement, using the identical code block as before.

Syntax:

pass

In Python, what is the pass statement?

A null statement in Python is called a pass. The Python interpreter does nothing when it encounters the pass statement; it simply ignores it and moves on.

When should the pass statement be used?

Consider the case of a function or a class with no body, because you intend to write the code later. If the Python interpreter encounters an empty body, it will raise an error. A comment placed in the body of the class or function does not count as a body — the interpreter ignores comments — so it will still throw an error. Instead, the pass statement is used inside the body of the class or function. When the interpreter comes across the pass statement during execution, it ignores it and continues without throwing an error.

For example, here is a pass statement inside a function. The pass is added inside the function, and when the function is invoked, it executes without error.

def passFunc():
    print('pass statement is inside the function')
    pass

passFunc()

pass statement: Inside the class

We've merely constructed an empty class with a print statement and a pass statement in the example below. The pass statement denotes that the code contained within the class "passClass" will be implemented in the future.

class passClass:
    print("Inside passClass")
    pass

pass statement inside the loop

The string "Kotlin" is traversed with a for-loop in the example below. If the character 't' is found, the if-condition executes the print statement, followed by the pass.

# the pass statement in a for-loop
test = "Kotlin"
for i in test:
    if i == 't':
        print('Pass executed')
        pass
    print(i)

After the if conditional statement, the pass statement tells the program to keep running the loop, effectively ignoring that the loop variable equals 't' during one of its iterations.
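To see that pass really does nothing, unlike continue, compare the two in the same loop (a small illustrative snippet that is not from the original article):

for ch in "Kotlin":
    if ch == 't':
        pass  # placeholder only: execution falls through to the print below
    print('pass kept:', ch)

for ch in "Kotlin":
    if ch == 't':
        continue  # skips the rest of this iteration, so 't' is not printed
    print('continue kept:', ch)

The first loop prints all six letters; the second prints only five, because continue skips the iteration for 't'.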
Using the pass statement in an if statement

The if statement in this example checks a value and prints "pass executed", followed by the pass, if the condition is true.

a = 5
if a == 5:
    print('pass executed')
    pass

When is it OK to utilize a break and continue statement?

When a break statement is used inside a loop, the loop is terminated and exited. If used inside nested loops, it will break out of the current loop. When used inside a loop, the continue statement will halt the current execution and return control to the loop's beginning.

The fundamental distinction between the break and continue statements is that the loop is terminated when the break keyword is used. The current iteration is terminated, and the next iteration is started, when the continue keyword is used.

Conclusion

Inside a loop, Python's break and continue are used to vary the loop's flow from its typical routine. The purpose of a for-loop or while-loop is to iterate until the provided condition fails. When you employ a break or continue statement, the loop's flow is altered from its default. When a break statement is used inside a loop, the loop is terminated and exited; if used inside nested loops, it breaks out of the current loop. When used inside a loop, the continue statement stops the current execution and returns control to the beginning of the loop. The fundamental distinction between the break and continue statements is that the loop is terminated when the break keyword is used.

The Python pass statement is used as a placeholder for an eventual implementation inside loops, functions, classes, and if-statements. A null statement in Python is called a pass. When the interpreter encounters the pass statement during execution, it does nothing and ignores it. Overall, the break, continue, and pass statements in Python make it easier to use for loops and while loops in your code.
https://www.codeunderscored.com/python-break-continue-and-pass-statements/
CC-MAIN-2022-21
refinedweb
2,054
60.14
Dynamic languages and testing Friday 23 October 2009 19:57 The debate between static and dynamic typing has gone on for years. Static typing is frequently promoted as effectively providing free tests. For instance, consider the following Python snippet: def reverse_string(str): ... reverse_string("Palindrome is not a palindrome") # Works fine reverse_string(42) # 42 is not a string reverse_string() # Missing an argument In a static language, those last two calls would probably be flagged up as compilation errors, while in a dynamic language you'd have to wait until runtime to find the errors. So, has static typing caught errors we would have missed? Not quite. We said we'd encounter these errors at runtime -- which includes tests. If we have good test coverage, then unit tests will pick up these sorts of trivial examples almost as quickly as the compiler. Many people claim that this means you need to invest more time in unit testing with dynamic languages. I've yet to find this to be true. Whether in a dynamic language or a static language, I end up writing very similar tests, and they rarely, if ever, have anything to do with types. If I've got the typing wrong in my implementation, then the unit tests just won't pass. Far more interesting is what happens at the integration test level. Even with fantastic unit test coverage, you can't have tested every part of the system. When you wrote that unit, you made some assumptions about what types other functions and classes took as arguments and returned. You even made some assumptions about what the methods are called. If these assumptions were wrong (or, perhaps more likely, change) in a static language, then the compiler will tell you. Quite often in a static language, if you want to change an interface, you can just make the change and fix the compiler errors -- “following the red”. In a dynamic language, it's not that straightforward. Your unit tests as well as your code will now be out-of-date with respect to its dependencies. The interfaces you've mocked in the unit tests won't necessarily fail because, in a dynamic language, they don't have to be closely tied to the real interfaces you'll be using. Time for another example. Sticking with Python, let's say we have a web application that has a notion of tags. In our system, we save tags into the database, and can fetch them back out by name. So, we define a TagRepository class. class TagRepository(object): def fetch_by_name(self, name): ... return tag Then, in a unit test for something that uses the tag repository, we mock the class and the method fetch_by_name. (This example uses my Funk mocking framework. See what a subtle plug that was?) ... tag_repository = context.mock() expects(tag_repository).fetch_by_name('python').returns(python_tag) ... So, our unit test passes, and our application works just fine. But what happens when we change the tag repository? Let's say we want to rename fetch_by_name to get_by_name. Our unit tests for TagRepository are updated accordingly, while the tests that mock the tag repository are now out-of-date -- but our unit tests will still pass. One solution is to use our mocking framework differently -- for instance, you could have passed TagFetcher to Funk, so it will only allows methods defined on TagFetcher to be mocked: ... tag_repository = context.mock(TagFetcher) # The following will raise an exception if TagFetcher.fetch_by_name is not a method expects(tag_repository).fetch_by_name('python').returns(python_tag) ... 
Of course, this isn't always possible, for instance if you're dynamically generating/defining classes or their methods. The real answer is that we're missing integration tests -- if all our tests pass after renaming TagRepository's methods, but not changing its dependents, then nowhere are we testing the integration between TagRepository and its dependents. We've left a part of our system completely untested. Just like with unit tests, the difference in integration tests between dynamic and static languages is minimal. In both, you want to be testing functionality. If you've got the typing wrong, then this will fall out of the integration tests since the functionality won't work. So, does that mean the discussed advantages of static typing don't exist? Sort of. While compilation and unit tests are usually quick enough to be run very frequently, integration tests tend to take longer, since they often involve using the database or file IO. With static typing, you might be able to find errors more quickly. However, if you've got the level of unit and integration testing you should have in any project, whether its written in a static or dynamic language, then I don't think using a static language will mean you have fewer errors or fewer tests. With the same level high level of testing, a project using a dynamic language is just as robust as one written in a static language. Since integration tests are often more difficult to write than unit tests, they are often missing. Yet relying on static typing is not the answer -- static typing says nothing about how you're using values, just their type, and as such make weak assertions about the functionality of your code. Static typing is no substitute for good tests.
http://mike.zwobble.org/topic/mocking/
CC-MAIN-2018-17
refinedweb
885
62.58
Most of the time, types are inferred on their own and may then be unified with an expected type. In a few places, however, an expected type may be used to influence inference. We then speak of top-down inference.

Define: Expected Type

Expected types occur when the type of an expression is known before that expression has been typed, e.g. because the expression is an argument to a function call. They can influence typing of that expression through what is called top-down inference.

A good example is arrays of mixed types. As mentioned in Dynamic, the compiler refuses [1, "foo"] because it cannot determine an element type. Employing top-down inference, this can be overcome:

class Main {
    static public function main() {
        var a:Array<Dynamic> = [1, "foo"];
    }
}

Here, the compiler knows while typing [1, "foo"] that the expected type is Array<Dynamic>, so the element type is Dynamic. Instead of the usual unification behavior where the compiler would attempt (and fail) to determine a common base type, the individual elements are typed against and unified with Dynamic.

We have seen another interesting use of top-down inference when construction of generic type parameters was introduced:

import haxe.Constraints;

class Main {
    static public function main() {
        var s:String = make();
        var t:haxe.Template = make();
    }

    @:generic
    static function make<T:Constructible<String->Void>>():T {
        return new T("foo");
    }
}

The explicit types String and haxe.Template are used here to determine the return type of make. This works because the method is invoked as make(), so we know the return type will be assigned to the variables. Utilizing this information, it is possible to bind the unknown type T to String and haxe.Template respectively.
https://haxe.org/manual/type-system-top-down-inference.html
CC-MAIN-2018-05
refinedweb
284
54.73
Question about XPath with functions in it

This is a follow-up to my previous thread: Please help me on understanding this XPath

I have an XPath expression such as:

<xsl:value-of select="concat(position(), ' + ', count(preceding-sibling::*), ' = ', position() + count(preceding-sibling::*))"/>

Currently I can only understand parts of it, like position(). Also, I know that preceding-sibling is used to choose all siblings before the current node, but I have no idea what the statement means when they get combined like above. Could anyone give some help on understanding this XPath? Thanks in advance.

Answers

All answers, with the exception of @Alejandro's, have the same common fault: It is not true that:

preceding-sibling::*

selects all preceding-sibling nodes. It only selects all preceding-sibling elements. To select all preceding-sibling nodes use:

preceding-sibling::node()

There are these kinds of nodes in XPath:

1. The root node (denoted as /), also denoted as document-node() in XPath 2.0
2. Element nodes, such as <a/>
3. Text nodes. In <a> Hello </a> the text node is the only child of a and has a string value of " Hello "
4. Comment nodes. <!-- This is a comment-->
5. Processing instruction nodes. <?someName I am a PI ?>
6. Attribute nodes. In <a x="1"/> x is the only attribute of a.
7. Namespace nodes. In <a xmlns:my="my:namespace"/>, a has a namespace node with value "my:namespace" and name (prefix) my.

Nodes of kinds 2 to 5 can be selected using the preceding-sibling:: axis (the root node has no siblings):

preceding-sibling::node() selects all preceding-sibling nodes of kinds 2 to 5.
preceding-sibling::* selects all element preceding siblings.
preceding-sibling::someName selects all elements named "someName" that are preceding siblings.
preceding-sibling::text() selects all text node preceding siblings (useful in mixed content).
preceding-sibling::comment() selects all comment node preceding siblings.
preceding-sibling::processing-instruction() selects all preceding siblings that are PIs.
preceding-sibling::processing-instruction('someName') selects all preceding siblings that are PIs and are named "someName".

Your expression is doing some calculation using the static position (from the input source) and the dynamic position (from the current node list). Let's see some examples. Suppose this stylesheet and this input:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="list">
        <xsl:copy>
            <xsl:apply-templates/>
        </xsl:copy>
    </xsl:template>
    <xsl:template match="a|b">
        <xsl:copy>
            <xsl:value-of select="concat(position(), ' + ', count(preceding-sibling::*), ' = ', position() + count(preceding-sibling::*))"/>
        </xsl:copy>
    </xsl:template>
</xsl:stylesheet>

<list><a/><b/><a/><b/></list>

Output:

<list>
    <a>1 + 0 = 1</a>
    <b>2 + 1 = 3</b>
    <a>3 + 2 = 5</a>
    <b>4 + 3 = 7</b>
</list>

Now, changing the second rule to match="a":

<list>
    <a>1 + 0 = 1</a>
    <a>3 + 2 = 5</a>
</list>

So, patterns don't change the current node list.

What if position() is in a pattern? Let's change the rule to match="a[position()=2]":

<list>
    <a>3 + 2 = 5</a>
</list>

Strange? No. In an XPath pattern, position() works against its own context node list and the axis direction. In this case, child::a[position()=2] means the second a child. This shows that position() in patterns works with a different context than position() in the content template.

So, how do we change the current context node list? Well, with the apply-templates and for-each instructions. Add now to the apply-templates instruction some select attribute like select="a":

<list>
    <a>2 + 2 = 4</a>
</list>

position() returns the position of the current node within the node-set being iterated right now.
Assume there are four <foo> elements:

<xml>
    <foo /><foo /><foo /><foo />
</xml>

and you iterate them via <xsl:apply-templates>:

<xsl:template match="xml">
    <!-- this selects four nodes -->
    <xsl:apply-templates select="foo" />
</xsl:template>

<!-- this runs four times -->
<xsl:template match="foo">
    <xsl:value-of select="position()" />
</xsl:template>

Then this will output "1234".

count() counts the nodes in a node-set.

preceding-sibling::* selects all elements on the preceding-sibling axis, as seen from the current node (unless the current node is an attribute, as attributes technically do not have preceding siblings).

<xsl:value-of select="concat(position(), ' + ', count(preceding-sibling::*), ' = ', position() + count(preceding-sibling::*))" />

Should be pretty self-explanatory now.

The XSLT concept you are probably missing is that of the "current node". The current node is the execution context of your XSLT program. There is always a node that is the current node, and most XSLT/XPath operations implicitly work on the current node.
http://unixresources.net/faq/4344594.shtml
CC-MAIN-2019-04
refinedweb
686
54.22
We all have heard of nested classes and nested namespaces. Have we ever heard of nested functions??? Can we write a nested function in C#??? The answer is Yes, we can write a nested function in C# using delegates. This article deals with writing nested methods in C#. Knowledge of delegates is required. For creating nested methods, we write anonymous methods and associate delegates with them.

For writing a nested function in C#, we make use of delegates and anonymous functions. A delegate is nothing but a "function pointer". And an anonymous function is a function/method without a name. Below, we are declaring a delegate which will point to an anonymous function.

// Declare a delegate which will point to different anonymous methods.
public delegate long FactorialDelegate(long n);

After declaring the delegate, we write the anonymous function inside an ordinary method, as given below:

// This method defines an anonymous function and assigns it to the delegate.
public static void FunctionInFunction()
{
    FactorialDelegate factorial = delegate(long n)
    {
        long result = 1;
        for (long i = 2; i <= n; i++)
            result *= i;
        return result;
    };
    Console.WriteLine(factorial(5)); // prints 120
}

Some properties of anonymous methods are as follows:
- An anonymous method cannot contain jump statements such as goto, break, or continue that transfer control outside of the anonymous method.
- An anonymous method cannot access the ref or out parameters of the enclosing method.

More information on anonymous functions can be found here. This is a wonder that we can do with delegates.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

// The current method will always be at the 0th frame on the stack.
MethodBase method = new StackTrace().GetFrame(0).GetMethod();

return number > 1 ? number * (long)method.Invoke(null,
    new object[] { number - 1 }) : number;

FactorialDelegate factorialRecursive = null;
factorialRecursive = delegate(int n)
{
    if (n > 2)
        return n * factorialRecursive(n - 1);
    else
        return 2;
};

leppie:

delegate int Bar();

Bar Foo()
{
    int i = 0;
    return delegate { i++; return i; };
}

...
Bar b = Foo();
int i = b(); // 1 is returned
i = b(); // 2 is returned
i = b(); // 3 is returned
i = b(); // 4 is returned

void NestedMethods()
{
    Func<int,int> Increment = ( int i ) => i + 1;
    Trace.WriteLine( Increment( 41 ) );
}

J@@NS wrote: It is not true as it seems
https://www.codeproject.com/Articles/25486/Nested-Functions-in-Csharp?PageFlow=FixedWidth
CC-MAIN-2017-30
refinedweb
362
55.54
VS 2005 Properties Of Ajax AutoCompleteExtender? Jan 25, 2010

Is it possible to set the font size of the drop-down text? Also, is there a way to limit the number of drop-down lines and have a scroll control?

View 14 Replies
Here is the .aspx code <asp:TextBox</asp:TextBox> <ajaxToolkit:AutoCompleteExtender</ajaxToolkit:AutoCompleteExtender> Webservice code : using System; using System.Web; using System.Collections; using System.Web.Services; using System.Web.Services.Protocols; using System.Configuration; using System.Data.SqlClient; using System.Data; using System.Collections; using System.Collections.Generic; /// <summary> /// Summary description for WebService /// </summary> [WebService(Namespace = "[URL]/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.Web.Script.Services.ScriptService] public class WebService : System.Web.Services.WebService { public WebService () { //Uncomment the following line if using designed components //InitializeComponent(); } [WebMethod] public string HelloWorld() { return "Hello World"; } [WebMethod] public string[] FindFullName(string prefixText) { string sql = "Select FName from Investigators Where FName like @prefixText"; SqlDataAdapter da = new SqlDataAdapter(sql,"Data Source=vvvvv;Initial Catalog=vyyyd;Persist Security Info=True;User ID=PowetttrUsyyer;Password=888888); da.SelectCommand.Parameters.Add("@prefixText", SqlDbType.VarChar, 50).Value = prefixText+ "%"; DataTable dt = new DataTable(); da.Fill(dt); List<string> liste = new List<string>(); foreach (DataRow dr in dt.Rows) { liste.Add(dr.ItemArray[0].ToString()); } return liste.ToArray(); } }
http://asp.net.bigresource.com/VS-2005-Properties-of-Ajax-AutoCompleteExtender--DUlK7gZVK.html
CC-MAIN-2018-34
refinedweb
713
50.63
The following test compiles successfully with Eigen 3.2.92, but fails in 3.3.4 and 3.3.5:

#include <Eigen/Dense>

int main() {
    const Eigen::Matrix<double,1,1> a(1), b(2), c(3);
    const double prod1 = a*b;
    const double prod2 = a*b*c;
}

The error given by g++ 7.3.0-27ubuntu1~18.04 is as follows:

test.cpp:6:30: error: cannot convert ‘Eigen::internal::enable_if<true,> > >::type {aka> >}’ to ‘const double’ in initialization const double prod2 = a*b*c;

Notice that the prod1 line compiles successfully, unlike the prod2 one. The workaround is to explicitly take the (0,0) element instead of relying on automatic conversion.

I'm not sure if we actually want to fix this -- we also don't allow assigning "products with one factor" (i.e., 1x1 matrices) to a scalar. Assigning products 1xN * Nx1 directly to a scalar already is a hack, which usually is better expressed by using a `.dot()` product, IMO, i.e.: a.adjoint().dot(b*c);
https://eigen.tuxfamily.org/bz/show_bug.cgi?id=1610
CC-MAIN-2020-45
refinedweb
375
57.47
scalajs-logging

scalajs-logging is a tiny logging API for use by the Scala.js linker and JS environments. It is not a general-purpose logging library, and you should probably not use it other than to interface with the Scala.js linker API and the JSEnv API.

Setup

Normally you should receive the dependency on scalajs-logging through scalajs-linker or scalajs-js-envs. If you really want to use it somewhere else, add the following line to your settings:

libraryDependencies += "org.scala-js" %% "scalajs-logging" % "1.1.1"

Usage

import org.scalajs.logging._

// get a logger that discards all logs
val logger = NullLogger

// or get a logger that prints all logs of severity Error and above to the console
val logger = new ScalaConsoleLogger(Level.Error)

// or create your own Logger:
val logger = new Logger {
  def log(level: Level, message: => String): Unit = ???
  def trace(t: => Throwable): Unit = ???
}
https://index.scala-lang.org/scala-js/scala-js-logging/scalajs-logging/1.1.1?target=_2.13
CC-MAIN-2022-05
refinedweb
179
51.75
Consistent logging for EIQ projects.

Project description

eiq_logging

This package exists to make it easy to configure logging in a consistent way across multiple EclecticIQ Python projects.

Installation

pip install eiq-logging

Usage

In your application's entrypoint, whatever that may be:

import eiq_logging
eiq_logging.configure()

The configure function takes a few arguments:
- stream determines where logs are written. Defaults to sys.stderr.
- log_format can be either "plain" or "json". "plain" means plain text and is meant to be read by humans. "json" is newline-delimited JSON, meant for log aggregation and machine parsing.
- log_levels can be either a dict of {logger_name: log_level} or a string which will be parsed as such. The string is comma-separated, and each item in the string should be in the format of "logger_name:log_level" - for example, "root:info,example:debug" will set the root logger to the level INFO, and the logger "example" to level DEBUG.

If you leave out the log_format and log_levels arguments, you can configure these through the environment variables EIQ_LOG_FORMAT and EIQ_LOG_LEVEL.

If you're using Gunicorn, you don't need to call configure yourself, you can just start the process with the --config flag:

gunicorn --config=python:eiq_logging.gunicorn myapp
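For example, a sketch combining these arguments (the logger names here are just illustrative):

import sys
import eiq_logging

eiq_logging.configure(
    stream=sys.stderr,
    log_format="plain",
    log_levels="root:info,example:debug",
)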
https://pypi.org/project/eiq-logging/
CC-MAIN-2020-40
refinedweb
227
56.25
Hello everyone! Total beginner here, so I apologize beforehand if I ask trivial/stupid questions.

For an assignment, I need to use the MPU9250 ( ... 0-v1.6.pdf) to measure accelerations in the z direction and use SPI to transfer the data to the Pi. I have enabled SPI on my RPi. I have connected:

RPI | MPU9250
17 -> VCC
19 -> SDA/SDI
21 -> ADO/SDO
23 -> SCLK
24 -> NCS
25 -> GND

So far I have tried using the spidev library based on the example here () and wrote:

import spidev
import time

spi = spidev.SpiDev()
spi.open(0, 1)

i = 0
while i < 10:
    resp = spi.xfer([0x3F])
    print(resp)
    i += 1

As I understood (please correct me if I am wrong), this should start a transaction with the device and read me the value of the register 0x3F (z acceleration), but it returns something like this:

[15]
[15]
[31]
[15]
[31]
[31]
[31]
[15]
[15]
[31]

Every time I run the code it returns a different set of numbers. I have no idea what the values in the brackets mean. I need acceleration values measured in g's; what do I need to do to get that? And also (I haven't tried it yet because I couldn't even get any measurements), I need to set the sampling rate to around 4 kHz. How can I do that? Is my approach so far completely wrong, am I missing something obvious?

As I said earlier, I am a complete beginner and have almost no background in coding, so I apologize if I am asking silly questions. If you could help me I would appreciate it a lot. Thanks in advance
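For reference, the usual read pattern for this sensor looks roughly like the sketch below. It assumes the standard MPU9250 register map (ACCEL_ZOUT_H at 0x3F, ACCEL_ZOUT_L at 0x40, read flag 0x80) and the default +/-2g full-scale range; treat it as illustrative, not a drop-in answer.

import spidev

spi = spidev.SpiDev()
spi.open(0, 1)
spi.max_speed_hz = 1000000

READ_FLAG = 0x80        # MSB set in the first byte means "read" on the MPU9250
ACCEL_ZOUT_H = 0x3F     # high byte of the z acceleration; low byte is at 0x40

def read_register(reg):
    # send the register address with the read flag, then clock out one dummy byte
    return spi.xfer2([reg | READ_FLAG, 0x00])[1]

high = read_register(ACCEL_ZOUT_H)
low = read_register(ACCEL_ZOUT_H + 1)
raw = (high << 8) | low
if raw >= 32768:        # convert from 16-bit two's complement
    raw -= 65536
print(raw / 16384.0)    # 16384 LSB/g at the +/-2g setting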
https://www.raspberrypi.org/forums/viewtopic.php?t=221293&p=1415855
CC-MAIN-2019-13
refinedweb
278
70.84
DLSYM(3)                  Linux Programmer's Manual                  DLSYM(3)

NAME
       dlsym, dlvsym - obtain address of a symbol in a shared object or
       executable

SYNOPSIS
       #include <dlfcn.h>

       void *dlsym(void *handle, const char *symbol);

       #define _GNU_SOURCE
       #include <dlfcn.h>

       void *dlvsym(void *handle, char *symbol, char *version);

       Link with -ldl.

DESCRIPTION
       The function dlsym() takes a "handle" of a dynamically loaded shared
       object returned by dlopen(3) along with a null-terminated symbol
       name, and returns the address where that symbol is loaded into
       memory. If the symbol is not found in the given object or any of the
       objects that were automatically loaded when that object was opened,
       dlsym() returns NULL.

       The function dlvsym() does the same as dlsym() but takes a version
       string as an additional argument.

RETURN VALUE
       On success, these functions return the address associated with
       symbol. On failure, they return NULL; the cause of the error can be
       diagnosed using dlerror(3).

VERSIONS
       dlsym() is present in glibc 2.0 and later. dlvsym() first appeared
       in glibc 2.1.

ATTRIBUTES
       ┌──────────────────┬───────────────┬─────────┐
       │Interface         │ Attribute     │ Value   │
       ├──────────────────┼───────────────┼─────────┤
       │dlsym(), dlvsym() │ Thread safety │ MT-Safe │
       └──────────────────┴───────────────┴─────────┘

CONFORMING TO
       POSIX.1-2001 describes dlsym(). The dlvsym() function is a GNU
       extension.

NOTES
   History
       The dlsym() function is part of the dlopen API, derived from SunOS.
       That system does not have dlvsym(). See dlopen(3).

SEE ALSO
       dl_iterate_phdr(3), dladdr(3), dlerror(3), dlinfo(3), dlopen(3),
       ld.so(8)

COLOPHON
       This page is part of release 4.11 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at.

Linux                            2015-08-08                          DLSYM(3)

Pages that refer to this page: dladdr(3), dlerror(3), dlinfo(3), dlopen(3), rtld-audit(7)
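For reference, a minimal usage sketch of the API described above (the library and symbol chosen here are just illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void)
{
    /* Load the math library and look up cos() at run time */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        exit(EXIT_FAILURE);
    }

    dlerror(); /* clear any existing error */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");

    char *error = dlerror();
    if (error != NULL) {
        fprintf(stderr, "%s\n", error);
        exit(EXIT_FAILURE);
    }

    printf("%f\n", (*cosine)(2.0));
    dlclose(handle);
    return 0;
}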
http://man7.org/linux/man-pages/man3/dlsym.3.html
CC-MAIN-2017-26
refinedweb
206
70.7
#include <Time.h>
#include <TimeAlarms.h>

int led = 2;

void setup()
{
    Serial.begin(9600);
    setTime(9,15,0,8,11,12); // set the time to 9:15:00am on 8 Nov 2012
    Alarm.alarmRepeat(5, 45, 0, MorningAlarm);
    pinMode(led, OUTPUT);
}

void MorningAlarm()
{
    Serial.println("Wake UP");
    digitalWrite(led, HIGH);
    delay(1000);
    digitalWrite(led, LOW);
    delay(1000);
}
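Worth noting: as posted, the sketch has no loop() function, and TimeAlarms only services alarms while Alarm.delay() is running, so a minimal loop along these lines is needed for the alarm to ever fire (a sketch, not from the thread):

void loop()
{
    // Alarm.delay() must be called regularly so TimeAlarms can
    // check and trigger pending alarms; a plain delay() would not.
    Alarm.delay(1000);
}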
http://forum.arduino.cc/index.php?topic=118132.msg1011381
CC-MAIN-2017-22
refinedweb
120
69.38
Prime Numbers Program problem
Hi @hoovern88, You are really close, read what Zhuge wrote above, this example works and is fairly ...

Book for c++ beginner?
[output]Bjarne Stroustrup, Principles and Practice Using C++[/output]

My Function No Responding When Run
Yes it is; [code] //return_array_value.cpp //## #include <iostream> using namespace std; int f()...

My Function No Responding When Run
Hi @quisite, in a quick review I see that you are using the assignment operator [b]=[/b] instead of t...

problem with creating a sum of equation
Hi @aizlewood40, The "exit" condition of your [code]while[/code] loop is that [b]sval[/b] must be gr...
http://www.cplusplus.com/user/eyenrique/
CC-MAIN-2016-30
refinedweb
101
55.95
In this part of my Java Video Tutorial I introduce Java collection classes. Specifically I cover how to use Java ArrayLists. Java ArrayLists make it easy to keep track of groups of objects. A Java ArrayList differs from a Java array in that it automatically resizes itself when you add or remove values. I'll go over nearly every Java ArrayList method in the video below. The code follows the video and it is heavily commented to help you learn easily.

If you like videos like this, share it.

Code From the Video

// Collection classes were created to make it easy to keep track
// of groups of objects. An ArrayList differs from an array in
// that it resizes itself automatically. ArrayLists
// are easy to add to and delete from.

import java.util.ArrayList; // The ArrayList library
import java.util.Iterator; // The Iterator Library
import java.util.Arrays; // The Arrays Library

public class LessonEleven {

    public static void main(String[] args) {

        // You can create an ArrayList variable
        ArrayList arrayListOne;

        // Then create an ArrayList object
        // You don't have to declare the ArrayList size like you
        // do with arrays (Default Size of 10)
        arrayListOne = new ArrayList();

        // You can create the ArrayList on one line
        ArrayList arrayListTwo = new ArrayList();

        // You can also define the type of elements the ArrayList
        // will hold
        ArrayList<String> names = new ArrayList<String>();

        // This is how you add elements to an ArrayList
        names.add("John Smith");
        names.add("Mohamed Alami");
        names.add("Oliver Miller");

        // You can also add an element in a specific position
        names.add(2, "Jack Ryan");

        // You retrieve values in an ArrayList with get
        // arrayListName.size() returns the size of the ArrayList
        for( int i = 0; i < names.size(); i++) {
            System.out.println(names.get(i));
        }

        // You can replace a value using the set method
        names.set(0, "John Adams");

        // You can remove an item with remove
        names.remove(3);

        // You could also remove the first two items (indexes 0 and 1) with
        // the removeRange method, but it is protected, so it is only
        // callable from a subclass of ArrayList
        // names.removeRange(0, 2);

        // When you print out the ArrayList itself the toString
        // method is called
        System.out.println(names);

        // You can also use the enhanced for with an ArrayList
        for(String i : names) {
            System.out.println(i);
        }

        System.out.println(); // Creates a newline

        // Before the enhanced for you had to use an iterator
        // to print out values in an ArrayList
        // Creates an iterator object with methods that allow
        // you to iterate through the values in the ArrayList
        Iterator indivItems = names.iterator();

        // When hasNext is called it returns true or false
        // depending on whether there are more items in the list
        while(indivItems.hasNext()) {
            // next retrieves the next item in the ArrayList
            System.out.println(indivItems.next());
        }

        // I create an ArrayList without stating the type of values
        // it contains (Default is Object)
        ArrayList nameCopy = new ArrayList();
        ArrayList nameBackup = new ArrayList();

        // addAll adds everything in one ArrayList to another
        nameCopy.addAll(names);
        System.out.println(nameCopy);

        String paulYoung = "Paul Young";

        // You can add variable values to an ArrayList
        names.add(paulYoung);

        // contains returns a boolean value based off of whether
        // the ArrayList contains the specified object
        if(names.contains(paulYoung)) {
            System.out.println("Paul is here");
        }

        // containsAll checks if everything in one ArrayList is in
        // another ArrayList
        if(names.containsAll(nameCopy)) {
            System.out.println("Everything in nameCopy is in names");
        }

        // Clear deletes everything in the ArrayList
        names.clear();

        // isEmpty returns a boolean value based on whether the ArrayList
        // is empty
        // isEmpty returns a boolean value based on whether the ArrayList is empty
        if (names.isEmpty()) {
            System.out.println("The ArrayList is empty");
        }

        Object[] moreNames = new Object[4];

        // toArray converts the ArrayList into an array of objects
        moreNames = nameCopy.toArray();

        // Arrays.toString converts the items in the array into a String
        System.out.println(Arrays.toString(moreNames));
    }
}

HELLO AGAIN DEREK. I WROTE TO YOU AND ASKED YOU THIS QUESTION BEFORE BUT I GUESS YOU'RE A BUSY MAN. I FOLLOWED THIS VIDEO TUTORIAL ON THE COLLECTIONS. YOU IMPORTED SEVERAL CLASSES LIKE ITERATORS AND SUCH. CAN YOU IN THAT SITUATION JUST IMPORT JAVA.UTIL.* FOR ALL THE CLASSES NECESSARY TO GET THE JOB DONE? THANKS

At the very least I think this would dramatically slow down your program. I'm going to check, but I'd advise against doing that. I must have missed one of your messages. Sorry about that.

Well hello Derek. Do you have an opinion of LearnNowJava.com? Is this a good or a bad investment? Thanks

With as many free programming tutorials as there are, I wouldn't pay anyone for them.

Hey Derek, I am a keen follower of your website. I just want to say that these tutorials are awesome and I hope you will put up some programming tutorials on C++. God bless you.

You're very welcome 🙂 I'm very happy that you like it. May God bless you as well. I'll do my best to make a C++ tutorial very soon.

Derek, you have done an awesome job by developing this kind of website. God bless you!!

May God bless you and your family as well 🙂

Hi master. I have a small problem. How can I put objects from different classes into one ArrayList? Here is the code. Sorry, there was probably a whitespace character. It's working 🙂 You can delete this post.

Are they subclasses of the same class?

How many ArrayLists can you create in one file? If you can create more than one, how do you specify which ArrayList you are adding elements to? names.add() seems very general. Thanks. Never mind, I figured it out 🙂 (names was what you named your ArrayList)

Sorry I couldn't help quicker. I'm glad you figured it out.

Derek, you do a GREAT job with these lessons. Could you recommend a good text that might supplement your tutorials? Preferably a book with follow-up exercises. Thanks

Thank you very much 🙂 Everyone I talk to recommends the Head First Java book. I can't think of a book that gets more good reviews. Stay away from technical books like Thinking in Java and you'll be fine.

Thanks Derek. I've been using Head First Java. It is a good book.

Yes, it is a very good book.

Object[] moreNames = new Object[4];
moreNames = nameCopy.toArray();
System.out.println(Arrays.toString(moreNames));

Hey Derek, tell me why we need an Object array to copy the ArrayList into. I am getting the same thing by doing this:

System.out.println(Arrays.toString(nameCopy.toArray()));

Is there any specific reason for doing it that way? Also, not giving the Object array a size doesn't make any difference. Even after putting 1 in the array size, the output still shows all of the nameCopy list items. I don't understand what is going on behind the scenes.

You don't need to do it that way. Your way works as well. I just like to separate everything in the tutorials so it is more understandable.

That's what I love in your tutorials, you explain in a modular way. Great..

Thank you 🙂 I try to do my best.

Hi Derek. Thank you for the tutorials, much obliged! Would you help me with ArrayLists further?
I have a scenario where I have a Vehicle class with Vehicle objects contained in an ArrayList. I have another class, 'Showroom', to hold a Vehicle objects ArrayList (the Showroom constructor initialises the Vehicle objects ArrayList). I have managed to do this. The Vehicles have an attribute (String) that contains their Registration / ID number. I am able to search for a Vehicle object by passing the registration number attribute into a 'findVehicle' method in the Showroom class. I have to set a 'currentVehicle', again by passing in the Reg. No. attribute. I have managed to do this also. The problem I am having, though, is accessing / returning the next and previous Vehicle in the ArrayList. I want to have both a 'nextVehicle' and a 'previousVehicle' method that move forward and backwards through the ArrayList from a start point of the 'currentVehicle'. When the 'nextVehicle' is returned, it is supposed to then become the 'currentVehicle', likewise with the 'previousVehicle' method. How might I iterate forwards and backwards through the ArrayList and return the appropriate Vehicle object? I hope I have been clear. I'm quite new to Java. Any help is much appreciated. Thanks – D.

I need to clarify that the Vehicle class has the Vehicle attributes (e.g. Reg No. / Make / Model etc.), and the ArrayList of Vehicles is initialised by the Showroom class. Sorry if I'm not being clear – D.

I made a tutorial on how to set up something like you want in a Linked List in Java. It will also explain how an ArrayList works on an elementary level. I hope that helps 🙂

Hey Derek.. I just wanna say.. Wohoooooo awesome work man.. (Y)

Thank you 🙂 I appreciate your message. It made me laugh.

I didn't understand the Math.random method. Please explain it to me.

To generate a random number between 1 and 10 using Math.random you could do this:

int n = (int)(10.0 * Math.random()) + 1;

This translates into

int n = (int)(Range * Math.random()) + SmallestNumber;

where Range is (LargestNumber - SmallestNumber + 1). Math.random() returns numbers between 0.0 and just under 1.0.

You could also use

Random rand = new Random();
int value = rand.nextInt(10);

This gives you a number between 0 and 9. rand.nextInt(10) + 1; would give you numbers between 1 and 10. I hope that helps 🙂

Thanks again Derek.. I understood now!!

Great, I'm glad I could help 🙂

Hey Derek. I'm pretty new to Java, I have only been programming for 3 days. My question is, at the moment my ArrayList only accepts Strings as I have set it to accept only Strings. How do I make my ArrayList accept both doubles and Strings? The code is as follows:

// Name of class
public class Profile {

    // Variables
    String carbChoice = "rice";
    double height = 155.0;

    ArrayList arrayListOne = new ArrayList();
    ArrayList userProfile = new ArrayList();

    // Method
    public void addingToList() {
        userProfile.add(carbChoice);
        userProfile.add(height); // this is what I would like to do
    }
}

You could use something like this: List<Object> userProfile = new ArrayList<Object>(); An Object list will accept both Strings and autoboxed Doubles.

Is it possible to add a String[] to an ArrayList? I tried but it doesn't seem to work.

List yourList = Arrays.asList(yourArray)
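A note on that last answer: Arrays.asList returns a fixed-size list backed by the array, so you can read and set elements but not add to it. A minimal sketch (the variable names are illustrative, not from the video):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AsListDemo {
    public static void main(String[] args) {
        String[] yourArray = {"John", "Mohamed", "Oliver"};

        // Fixed-size view backed by the array: set() works, add() throws
        List<String> fixed = Arrays.asList(yourArray);
        // fixed.add("Paul"); // would throw UnsupportedOperationException

        // Copy the elements into a real ArrayList if you need to grow it
        List<String> growable = new ArrayList<String>(Arrays.asList(yourArray));
        growable.add("Paul Young");

        System.out.println(growable); // [John, Mohamed, Oliver, Paul Young]
    }
}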
I am using the same criteria we use for Elixir, which is the ability to implement language constructs using the language itself. For example, in many languages the short-circuiting && operator is defined as a special part of the language. In those languages, you can't reimplement the operator using the constructs provided by the language. In Elixir, however, you can implement the && operator as a macro:

defmacro left && right do
  quote do
    case unquote(left) do
      false -> false
      _ -> unquote(right)
    end
  end
end

In Swift, you can also implement operators and easily define the && operator with the help of the @auto_closure attribute:

func &&(lhs: LogicValue, rhs: @auto_closure () -> LogicValue) -> Bool {
    if lhs {
        if rhs() == true {
            return true
        }
    }
    return false
}

The @auto_closure attribute automatically wraps the tagged argument in a closure, allowing you to control when it is executed and therefore implement the short-circuiting property of the && operator.

However, one of the features I suspect will actually hurt extensibility in Swift is the Extensions feature. I have compared the protocols implementation in Swift with the ones found in Elixir and Clojure on Twitter and, as developers have asked for a more detailed explanation, I am writing this blog post as a result!

Extensions

The extension feature in Swift has many use cases. You can read about them all in more detail in their documentation. For now, we will cover the general case and discuss the protocol case, which is the bulk of this blog post. Following the example in Apple's documentation itself:

extension Double {
    var km: Double { return self * 1_000.0 }
    var m: Double { return self }
    var cm: Double { return self / 100.0 }
    var mm: Double { return self / 1_000.0 }
    var ft: Double { return self / 3.28084 }
}

let oneInch = 25.4.mm
println("One inch is \(oneInch) meters")
// prints "One inch is 0.0254 meters"

let threeFeet = 3.ft
println("Three feet is \(threeFeet) meters")
// prints "Three feet is 0.914399970739201 meters"

In the example above, we are extending the Double type, adding our own computed properties. Those extensions are global and, if you are a Ruby developer, they will remind you of monkey patching in Ruby. However, in Ruby classes are always open, while here the extension is always explicit (which I personally consider to be a benefit).

What troubles extensions is exactly the fact that they are global. While I understand some extensions would be useful to define globally, they always come with the possibility of namespace pollution and name conflicts. Two libraries can define the same extensions to the Double type that behave slightly differently, leading to bugs. This has always been a hot topic in the Ruby community, with Refinements being proposed in late 2010 as a solution to the problem. At this moment, it is unclear if extensions can be scoped in any way in Swift.

The case for protocols

Protocols are a fantastic feature in Swift. Per the documentation: "a protocol defines a blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality". Let's see their example:

protocol FullyNamed {
    var fullName: String { get }
}

struct Person: FullyNamed {
    var fullName: String
}

let john = Person(fullName: "John Appleseed")
// john.fullName is "John Appleseed"

The benefit of protocols is that the compiler can now guarantee the struct complies with the definitions specified in the protocol.
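To see that guarantee in action, here is a tiny sketch of my own (not from the original post); the Robot type is invented:

struct Robot: FullyNamed {
    var serialNumber: String
    // Compile-time error: type 'Robot' does not conform to protocol
    // 'FullyNamed', because no 'fullName' property is declared.
    // Adding 'var fullName: String' would satisfy the contract.
}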
In case the protocol changes in the future, you will know immediately by recompiling your project. I have long been advocating this feature for Ruby. For example, imagine you have the following Ruby code:

class Person
  attr_accessor :first, :last

  def full_name
    first + " " + last
  end
end

And you have a method somewhere that expects an object that implements full_name:

def print_full_name(obj)
  puts obj.full_name
end

At some point, you may want to print the title too:

def print_full_name(obj)
  if title = obj.title
    puts title + " " + obj.full_name
  else
    puts obj.full_name
  end
end

Your contract has now changed, but there is no mechanism to notify implementations of such a change. This is particularly cumbersome because sometimes such changes are done by accident, when you don't actually want to modify the contract.

This issue has happened multiple times in Rails. Before Rails 3, there was no official contract between the controller and the model or between the view and the model. This meant that, while Rails worked fine with Active Record (Rails' built-in model layer), every Rails release could possibly break integration with other models because the contract suddenly became larger due to changes in the implementation.

Since Rails 3, we actually define a contract for those interactions, but there is still no way to:

- guarantee an object complies with the contract (besides extensive use of tests)
- guarantee controllers and views obey the contract (besides extensive use of tests)

Similar to real-life contracts, unless you write it down and sign it, there is no guarantee both parties will actually maintain it.

The ideal solution is to be able to define multiple, tiny protocols. Someone using Swift would rather define multiple protocols for the controller and view layers:

protocol URL {
    func toParam() -> String
}

protocol FormErrors {
    var errors: Dict
}

The interesting aspect of Swift protocols is that you can define and implement protocols for any given type, at any time. The trouble, though, is that the implementation of a protocol is defined in the class/struct itself and, as such, changes the class/struct globally.

Protocols and Extensions

Since protocols in Swift are implemented directly in the class/struct, be it during definition or via extension, the protocol implementation ends up changing the class/struct globally. To see the issue with this, imagine that you have two different libraries relying on different JSON protocols:

protocol JSONA {
    func toJSON(precision: Integer) -> String
}

protocol JSONB {
    func toJSON(scale: Integer) -> String
}

If the protocols above have different specifications of how the precision argument must be handled, we will be able to implement only one of the two protocols above. That's because implementing either of them means adding a toJSON(Integer) method to the class/struct, and there can be only one such method per class/struct.

Furthermore, if implementing protocols means globally adding methods to classes and structs, it can actually hinder the use of protocols as a whole, as the concerns about name clashes and namespace pollution will speak louder than the protocol benefits.
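A concrete sketch of the clash, my own illustration using the post's pseudo-types rather than code from the original article:

// Suppose one library adopts JSONA for Double:
extension Double: JSONA {
    func toJSON(precision: Integer) -> String {
        // 'precision' interpreted as digits after the decimal point
        return "..."
    }
}

// A second library now cannot adopt JSONB for Double: doing so would
// redeclare toJSON(Integer) -> String with different semantics, and the
// two methods cannot coexist on the same type.
// extension Double: JSONB {
//     func toJSON(scale: Integer) -> String { ... }
// }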
Let's contrast this with protocols in Elixir:

defprotocol JSONA do
  def to_json(data, precision)
end

defprotocol JSONB do
  def to_json(data, scale)
end

defimpl JSONA, for: Integer do
  def to_json(data, _precision) do
    Integer.to_string(data)
  end
end

JSONA.to_json(1, 10)
#=> 1

Elixir protocols are heavily influenced by Clojure protocols, where the implementation of a protocol is tied to the protocol itself and not to the data type implementing it. This means you can implement both the JSONA and JSONB protocols for the same data types and they won't clash!

Protocols in Elixir work by dispatching on the first argument of the protocol function. So when you invoke JSONA.to_json(1, 10), Elixir checks the first argument, sees it is an integer and dispatches to the appropriate implementation.

What is interesting is that we can actually emulate this functionality in Swift! In Swift we can define the same method multiple times, as long as the type signatures do not clash. So if we use static methods and extensions, we can emulate the behaviour above:

// Define a class to act as protocol dispatch
class JSON {
}

// Implement it for Double
extension JSON {
    class func toJSON(double: Double) -> String {
        return String(double)
    }
}

// Someone may implement it later for Float too
extension JSON {
    class func toJSON(float: Float) -> String {
        return String(float)
    }
}

JSON.toJSON(2.3)

The example above emulates the dynamic dispatch ability found in Elixir and Clojure, which guarantees no clashes between multiple implementations. After all, if someone defines a JSONB class, all of its implementations would live in the JSONB class. Since dynamic dispatch is already available, we hope protocols in Swift are improved to support local implementations instead of changing classes/structs globally.

Summing up

Swift is a very new language and in active development. The documentation so far doesn't cover topics like exceptions, the module system and concurrency, which indicates there are many more exciting aspects to build, discuss and develop.

It is the first time I am excited to do some mobile development. Plus, the Swift playground may become a fantastic way to introduce programming. Finally, I would personally love it if Swift protocols evolved to support non-global implementations. Protocols are a very extensible mechanism to define and implement contracts, and it would be a pity to see their potential hindered due to the global side effects they may cause to a codebase.
Zumba Tech (engineering@zumba)

Zumbatech takes on #hackforchange
How Zumbatech contributed to Hack for Change utilizing open source tools.
Chris Saylor (christopher.saylor@zumba.com), 2015-06-15

On June 6, 2015, a team of engineers from Zumbatech decided to contribute to an all-day hackathon event called Hack for Change. This is a national effort for civic hacking that brings engineers and designers together to make a positive impact in our communities.

Let's not re-invent the wheel

The day before the event, our team got together to figure out what we were going to work on that would give the most impact to our community. We chose to work on generating visualizations of Florida vendor transactions. The first thing we noticed is that a couple of projects were already underway to create RESTful APIs and alternate data formats for this data. We decided that an API specific to this data set wouldn't be very reusable for other data sets, and it would still take an engineer's effort to visualize the data from those APIs.

We wanted to make something generic enough to work with any sort of data set, flexible enough for other engineers to create tools and visualizations via an API, and easy enough for non-engineers to construct visualizations that fit their needs. A daunting task, especially for it to be mostly completed in a single hackathon. After some planning and discussion, we came up with a solution ready for hacking!

Hackathon

Bright and early on that Saturday morning, we arrived at the LAB Miami offices to work on a project we called Datamnom. The idea of the project is to make a generic ingestion program that can take in multiple data sources and populate an Elasticsearch index. Once the data is in Elasticsearch, a tool called Kibana can be hooked up to the index we populated to create visualizations.

After writing a prototype Node.js program and setting up a Vagrant environment, we had Kibana up and running with data to visualize (screenshot: Kibana running FL vendor data).

Here is team Zumba demoing our work to the Florida CFO, Jeff Atwater (photo via the Miami Herald).

Presentation and Reception

After importing about 8 years worth of Florida vendor transactions, the team presented our idea and applications to the group. Other groups that were working with this data set decided to use our tools to make their own visualizations.

Tweets from the event:

- "Team @zumbatech presents FL State Payments API and visualization app. #hackforchange"
- "Using @zumbatech's Payments API, @robdotd and team visualize printing costs across FL departments. #hackforchange" (Code For Miami, June 6, 2015)
<a href="">#hackforchange</a> <a href="">pic.twitter.com/eB0BpAgpyV</a></p>— Code For Miami (@CodeForMiami) <a href="">June 6, 2015</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> </div> </div> <div class="row"> <div class="col-md-6"> <blockquote class="twitter-tweet" lang="en"><p lang="en" dir="ltr">State challenge group is visualizing Vendor Data in a bunch of different ways and automating new viz. <a href="">#hackforchange</a> <a href="">pic.twitter.com/ZGZYnUrcS6<">Great presentation from <a href="">@CodeforFTL</a> and group! <a href="">#hackforchange</a> <a href="">@knightfdn</a> <a href="">@CFJBLaw</a> <a href="">@wyncode</a> <a href="">@socrata</a> <a href="">pic.twitter.com/sEidId7pE9</a></p>— Code For Miami (@CodeForMiami) <a href="">June 6, 2015</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> </div> </div> <h1 id="conclusion">Conclusion</h1> <p>We had a fun time at LAB Miami hacking together a project we think can really help lawmakers, researchers, and reporters visualize public data in a way that allows them to ask the right questions and help our communities.</p> <img src="" height="1" width="1" alt=""/> W3C Push API Crash Course Getting started with push notifications in Google Chrome Nick Comer nicholas@zumba.com 2015-05-29T00:00:00+00:00 <p>W3C’s <a href="">Push API</a> is exciting. Almost as exciting as all the possibilities that arise from having a persistent presence in your users’ browser. At Zumba, we already have ideas for it, and I was in charge of doing the gritty work of getting notifications to show up in the browser, but also developing a way to manage who gets those notifications. I had a lot of questions along the way that I will lay out and give you straight answers. Let’s get pushing already!</p> <h2 id="the-serviceworker">The ServiceWorker</h2> <p>The <a href=""><strong>ServiceWorker</strong></a> is where the Push API lives. A <strong>ServiceWorker</strong> is a JavaScript file that defines activities that are allowed to run continuously, well after the life-cycle of a web-page. If you have heard of <a href=""><strong>WebWorkers</strong></a>, ServiceWorkers are quite similar, but they will continue to work in the background even after a user has closed the web-page, which is key to being available to show notifications.</p> <h3 id="serviceworker-registration">ServiceWorker Registration</h3> <p>Before we get started with registering, you need to set up a Google Application that can be used with Google Cloud Messaging to actually send the notifications; it is free for development and takes about 2-3 minutes.</p> <h4 id="google-application-setup">Google Application Setup</h4> <ol> <li>Go to <a href="">Google Developer Console</a> and <strong>create</strong> a new application.</li> <li>Note your new applications <strong>Project Number</strong>: <img src="/img/blog/google-api-console-app-id.png" alt="Google API Project Number" class="img-responsive" /></li> <li>Go to the new application and under <strong>APIs & auth</strong> click <strong>APIs</strong> and look for “messaging” and enable: <ul> <li><em>Google Cloud Messaging for Chrome</em></li> <li><em>Google Cloud Messaging for Android</em></li> </ul> </li> <li>Under the same sidebar menu, go to <strong>Credentials</strong> and generate new <strong>Public API access</strong> keys for <strong>server</strong>. 
Keep them handy for a little later on.

manifest.json

Google Chrome relies on a simple JSON file to detect your website's Google Cloud app. It is placed in the root of your website, and it is little more than a short JSON object whose key detail is a field set to your Google App's project number from the steps above. Moving on!

ServiceWorker Life-Cycle

ServiceWorkers have a registration process that goes through the following steps:

- install: currently being installed
- waiting: waiting on registration
- active: registered and running

Scope

ServiceWorkers also have a scope, which is set to the directory under which the script can be found. For example, if your service worker script is located at /static/js/muh/scripts/ServiceWorker.js, then its scope will be /static/js/muh/scripts/. This will not work; your users will not be doing their browsing in your static assets, so placing this script further up the directory tree is highly recommended. Service workers also will not install if not served over HTTPS.

Registering

Now that you know what to expect when setting up a suitable environment for your service worker, let's get it installed.

A good installation script should also consider that there might be a service worker already in place and running. To check for that, ask the browser for an existing registration first. Once we have the installation status, we know whether registration of the service worker is necessary.

The PushManager

The PushManager is a creatively named API that manages our push subscriptions. The entry point for it can be found on that ServiceWorkerRegistration I mentioned above. You should take the same approach with this as with service worker registrations. The functions you should know about are getSubscription and subscribe: we check for an existing push subscription, then if one is not found we attempt to get permission for one. This attempt is when the user is prompted to approve notifications.

Putting It Together

So, putting the two together, you probably end up with something like the sketch below, which auto-registers a service worker, subscribes your website to push notifications, and sends the push subscription to your server.
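The original embedded snippets did not survive, so this is my reconstruction of that flow; the /api/subscriptions endpoint name and the ServiceWorker.js path are assumptions for illustration:

// Sketch of the register-and-subscribe flow (2015-era Chrome Push API)
function subscribeForPush(registration) {
    return registration.pushManager.getSubscription().then(function (subscription) {
        // Reuse an existing subscription, otherwise prompt the user;
        // Chrome at the time required user-visible notifications
        return subscription || registration.pushManager.subscribe({ userVisibleOnly: true });
    });
}

if ('serviceWorker' in navigator) {
    navigator.serviceWorker.getRegistration().then(function (registration) {
        // Register only when no service worker is installed yet
        return registration || navigator.serviceWorker.register('/ServiceWorker.js');
    }).then(function () {
        return navigator.serviceWorker.ready; // wait until it is active
    }).then(function (registration) {
        return subscribeForPush(registration);
    }).then(function (subscription) {
        // Hand the subscription details to our server for safe keeping
        return fetch('/api/subscriptions', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(subscription)
        });
    });
}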
(<a href="">PushSubscription</a>) This object contains an <code class="highlighter-rouge">endpoint</code> attribute which is the entry point for push notifications. (The Push API is still in flux, but I would recommend keeping the <code class="highlighter-rouge">endpoint</code>; Google is the only one running a W3C compliant push server but I am sure there will be more to come and this will be what distinguishes them.)</p> <p>What distinguishes this user is the subscriptionId, which is like a device ID and will be how you reach this particular user with your push notifications.</p> <h4 id="using-the-pushsubscription">Using the PushSubscription</h4> <p>Once you have the <code class="highlighter-rouge">subscriptionId</code>, you can send it to Google Cloud Messaging endpoint like so:</p> <script src=""> </script> <p>Put your Google Cloud project API key next to <code class="highlighter-rouge">key=</code> and put the users <code class="highlighter-rouge">subscriptionId</code> in <code class="highlighter-rouge">registration_ids</code> and this will send notifications but we need to tell the service worker how to receive them, and what to do with them.</p> <h4 id="receiving-notifications-with-serviceworker">Receiving Notifications with ServiceWorker</h4> <p>ServiceWorkers run in a completely different global context than a DOMWindow. The global object in a ServiceWorker is <code class="highlighter-rouge">self</code>, and the event to look out for is <code class="highlighter-rouge">onpush</code>. A very simple notifier service worker will look something like this:</p> <script src=""> </script> <p>This will extract that push data out of the event and show it as a notification.</p> <h3 id="the-bad-news">The Bad News</h3> <p>W3C’s Push API is very new. Google Chrome is the only browser to begin working on implementation and it still isn’t finished. With that in mind, the Chromium development team has intentionally blocked arbitrary data from being sent through the Push API for now <a href="">(chromium issue #449184)</a>. The reason stated is that as of right now, the API does not mandate encryption of incoming push messages. Without mandatory encryption, anyone can easily use a <a href="">Man-in-the-middle attack</a> to put bad data into push messages.</p> <p>As stated in the linked chromium issue, push messages are really just fancy pings at this point. But those pings <em>can</em> be used to tell clients to fetch the latest data from your servers.</p> <hr /> <p>I will continue to follow the Push API and keep this article updated if any important developments occur. Drop me a line on <a href="">twitter</a> if you have any questions. Happy pushing!</p> <h4 id="other-helpful-resources">Other Helpful Resources:</h4> <ul> <li><a href="">Introduction to Service Worker</a></li> <li><a href="">MDN: ServiceWorker API</a></li> <li><a href="">Jake Archibald: The ServiceWorker is coming, look busy | JSConf EU 2014</a></li> </ul> <img src="" height="1" width="1" alt=""/> Relational Database Design Conventions and strategies to create a schema that will grow with your application and organization. Justin Oakley justin.oakley@zumba.com 2015-05-22T00:00:00+00:00 <p.</p> <p. The views expressed in this article are my own and may or may not reflect that of Zumba.</p> <h3 id="overview---the-basics">Overview - The Basics</h3> <p.</p> <p>That’s a basic description of a relational database. 
When designing a database schema, it is important to know the purpose of the application(s) in which it will be used and where the data may be transported at a later time.

Where to begin

- Who is the audience of the application?
  - Who will be using the application and how?
- Are there administrative areas?
  - If so, will there be different statuses / states of items?
  - Do changes to these need to be monitored and logged?
  - Will there be a need to see real-time statistics?
- Does admin activity need to be logged and reported?
- Does regular user activity need to be logged and reported?
- Will there be a need for real-time reporting of user activity?
- Will the application need to interact with 3rd party systems?
- How long must data be stored to be compliant with any rules that must be followed?

There are other questions that may be more specific to the type of application.

Naming Conventions

So let's just get to it then.

Tables

The name should describe what is in the table and be very obvious to someone looking at the list of tables in your database.

- For tables that will map back to an object in code, use singular object names like 'product', 'user', 'color', etc.
- Join tables that facilitate 1-to-many relationships will have the main object listed first and the attribute name second, joined together with an underscore (eg. 'product_language', 'user_product').
- Tables that store logged data will have a suffix of '_log' (eg. 'activity_log').
- Tables that store historical states of objects or object/attributes will contain a suffix of '_hist' as in "history" (eg. 'product_status_hist').
- When extending another table, typically a table in a framework that you want to be able to update without issue, you will add a meta table named the same as the table being extended with the suffix '_meta' (eg. 'user_meta').
- If this application will be packaged within an existing database, then all tables should contain the same prefix. The prefix should be small, kept to <= 3 characters plus an underscore (eg. 'zba_').

Columns

Names of your columns should be as self-explanatory as possible when someone is simply looking at a list of names without seeing the full structure of the table.

- Try to avoid reserved words or words that are used in the SQL language. For instance, don't name a description column 'desc'. However, a suffix of '_desc' is fine. More on that later.
- Most objects will have a "Name" element. It is very easy to just call the column 'name' and be done with it.
  - However, if you use the table name (or a portion of it) and add the suffix '_name', that will help ensure that your query will not be ambiguous and that you will be able to pull data from a single query without having to specify an alias for each generically named column.
  - In fact, any commonly used elements of an object should have their column name begin with the table name so that the same simple usage can be achieved. The convention for this is to use 'TABLE_' as a prefix to the generic name.
- When adding a column that is a foreign key to another table's id, it should be named as the foreign table name + '_id' (eg. in the product table, you would create the column 'color_id' to reference the color table).
  - This format should be used for any foreign key columns. The pattern is 'FOREIGN-TABLE_FOREIGN-TABLE-COLUMN'.
- When adding a column that is used as a flag (1 or 0, true or false, etc.), it should contain the prefix 'is_'. So, an active flag would be named 'is_active', and a published flag would be 'is_published'.

Constraints

Constraints are used to create restrictions on foreign keys and indexes, or just to put a restriction on the data going into the column.

- Foreign Key: Use the prefix 'fk_' and follow this format: 'fk_MAIN-TABLE_ref_FOREIGN-TABLE_FOREIGN-TABLE-COLUMN' (eg. 'fk_product_ref_color_id').
- Unique Index: Use the prefix 'udx_' and follow this format: 'udx_MAIN-TABLE_MAIN-TABLE-COLUMN' (eg. 'udx_product_product_name').
- Non-Unique Index: Use the prefix 'idx_' and follow this format: 'idx_MAIN-TABLE_MAIN-TABLE-COLUMN' (eg. 'idx_product_product_name').
- Multi-column indexes follow the same convention as single-column unique and non-unique indexes; you just keep appending the column names that are used.

These naming rules come together in the sketch that follows.
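To make the conventions concrete, here is a small MySQL-flavored sketch of my own (the column list is illustrative, not from the article):

-- Attribute table: singular name, app prefix, standard columns
CREATE TABLE zba_color (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    color_name VARCHAR(50) NOT NULL,
    is_active TINYINT(1) NOT NULL DEFAULT 1,
    created INT(10) UNSIGNED NOT NULL DEFAULT 0,
    modified INT(10) UNSIGNED NOT NULL DEFAULT 0,
    UNIQUE KEY udx_color_color_name (color_name)
);

-- Main object table: TABLE_ prefixed columns, is_ flag, FK named after
-- the foreign table, constraint and index names per the conventions
CREATE TABLE zba_product (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    product_name VARCHAR(100) NOT NULL,
    product_desc TEXT NULL,
    color_id INT UNSIGNED NOT NULL,
    is_active TINYINT(1) NOT NULL DEFAULT 1,
    created INT(10) UNSIGNED NOT NULL DEFAULT 0,
    modified INT(10) UNSIGNED NOT NULL DEFAULT 0,
    KEY idx_product_product_name (product_name),
    CONSTRAINT fk_product_ref_color_id
        FOREIGN KEY (color_id) REFERENCES zba_color (id)
);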
Defaults Conventions

Now that the naming conventions are covered, let's talk about some conventions to use when creating tables and how to put this to good use in your applications.

Standard Tables

Just about every web application will need one or all of the following tables.

- user: This table may already exist if you are using an existing framework.
  - If using a framework, you may want to extend the user data in a table called 'user_meta'.
- role: Just as you need users to use your system, you will want to differentiate those users. If using a framework in your application, this table may already exist.
- status: Something in your application will likely have a need for different statuses. This is typically a small table and should likely make use of bit masks (discussed later).

There are other standard tables based on what type of application you are building. 'product' might be another common table, but if your app does not deal with products, it might not be needed.

Standard Columns

- All tables should have the following column:
  - A primary key column named simply 'id'. This column can be a simple number or a large UUID string. Unless there is a really good reason, the ID should generally be an integer of whatever size you think the table may grow to. The UUID can be another column in the table.
    - If numeric, it should be set to autoincrement (in databases that support it; otherwise create a trigger and sequence, in DBs such as Oracle, to achieve the same thing).
    - If using a UUID string style, there is no autoincrement. You will need to handle the generated value another way.
- Nearly all tables should have the following columns. The only exception to this is when maintaining integrity is not important.
  - An 'is_active' column with a default of 1 or true. Instead of deleting from this table, you will set this to 0 or false. This requires that the queries that select data be aware of the column and its purpose.
  - A 'created' column with a default as specified below. This should be set on row creation and not altered afterwards.
  - A 'modified' column with a default as specified below. This will contain a timestamp of when the most recent change was made to the record.

Column Types

Basic Columns

- Flag columns should be stored as either boolean column types (if supported) or 1-byte integers that hold either a 0 or 1.
- Description or other columns that will hold markup or a lot of text should be some type of TEXT column, based on what is appropriate for the length of the data and what is supported by your database.
- When adding a foreign key column, be sure to set its type to be the same as the column that it references.

Date and Time Columns

For columns that store date and time data, there are two ways to approach this: perform your calculations and procedures in the database or in the code.

- database
  - When offloading calculations and functions onto the database server, you will want to use datetime column types that are supported by the database server you are using.
  - At a minimum, you should choose a date and time column type that is as granular as 1 second.
  - You should use the same column type for all of your date-time columns so it will be obvious and predictable to anyone using the data.
  - For the 'created' column, you will specify the default as whatever equates to NOW() for the column type chosen.
  - For the 'modified' column, you will want to create a trigger that sets the value to whatever value was used for the default of the 'created' column ON UPDATE.
  - The date and time column type should support time zones as well, so that the time can be meaningful and reproduced reliably.
- code
  - When writing code that can be used on multiple databases, or when you want to perform manipulations or calculations in the code or directly in the SQL, you should use a numeric type to store the data.
  - The data should be stored as a unix timestamp (number of seconds since epoch),
    so the column must be at least as large as an INTEGER(10).
  - The integer type should also be unsigned.
  - For the 'created' column, you will specify the default as whatever equates to CURRENT_TIMESTAMP in the database, or 0, and handle the value in the code.
  - For the 'modified' column, you will want centralized code in whatever base object everything extends to set this value to the current timestamp.
  - Timestamps should be set using the UTC timezone with the proper offset for the timezone desired. An additional column to store the timezone may be required as well.

Joining records

Join tables are the most common way to store many-to-many relationships. These tables typically just contain the IDs of both tables and the standard columns. Sometimes there may be some additional meta data, but that is up to how they are being used.

Another option, which works for attribute tables that have a relatively small number of records, is to use bit masks. To use a bit mask:

- Create your attribute table as normal, except for the addition of one column called 'bitmask'.
- The value starts at 2 for the first row and then goes up by powers of 2 from there.
  - An easy way to achieve this is to create a trigger that sets the column value to POWER(2, id) AFTER INSERT, if possible.
  - The data type of the bitmask column should be an unsigned integer that is large enough to account for all of your records as the power of 2 increases.
- Your main object table will then have a column called 'FOREIGN-TABLE_bitmask' (eg. 'language_bitmask').
  - To store the relationships, you simply OR the languages' bitmask values together and store that sum in the language_bitmask column.
    - Just a tip: you can simply add all of the values together and it will yield the same final number as OR-ing them all together, as long as each value appears only once.
  - To query the data (for example, to pull product and language data), you would join language lang with product prod ON (prod.language_bitmask & lang.bitmask = lang.bitmask), as in the sketch below.
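A quick SQL sketch of the bitmask approach; the table and column names follow the article's examples, while the data values are my own:

-- language rows get power-of-two bitmask values: 2, 4, 8, ...
-- product.language_bitmask stores the OR of the supported languages
SELECT prod.product_name, lang.language_name
FROM product prod
JOIN language lang
    ON (prod.language_bitmask & lang.bitmask) = lang.bitmask
WHERE prod.is_active = 1
  AND lang.is_active = 1;

-- Example: a product supporting the languages whose bitmasks are 2 and 8
-- stores language_bitmask = 2 | 8 = 10 and matches both language rows.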
Warehousing

One of the benefits of following these conventions and standards is that it makes the data very easy to report on and to warehouse for Business Intelligence reporting and insights. In order to tie data together from multiple applications and systems, a Data Warehouse is used. There are many different commercial solutions for this in existence (Cognos, Oracle BI, Tableau, etc.). However, they all basically store processed data in a database of their own.

ETL

In order for the warehouse to collect data from many disparate systems, it uses what is called an ETL process. ETL stands for Extract, Transform, Load. Essentially, what it does in a basic sense is:

1. Pull data from the application database.
2. Transform the data, or parse it, so that it will fit into the warehouse's schema and can be used with the data from the other systems.
3. Insert / update records in the warehouse database with the newly transformed data.

Why Is This Relevant

Conclusion

Thanks for reading.

How We Do CSS at Zumba
This is how we do CSS at Zumba.
Kaue Ribeiro (kaue.ribeiro@zumba.com), 2015-03-31

As you may or may not know, Zumba's website consists of a robust app that includes a fully custom ecommerce shop that supports international transactions, class and training search and registration, the Zumba Instructor Network (ZIN) admin portal, and many other apps. Our in-house tech department is responsible for managing the interactive products and developing, testing, and deploying the projects.

I was inspired by the teams at Ghost and Github to write this article to give you an idea of how we do CSS at Zumba.

Intro - Browser and Device Support

We try to keep our code as light, modular and DRY as possible. We use Modernizr, Autoprefixer and some Foundation components. We try to keep our nesting no more than 4 levels deep and we try to make sure that our naming convention lends itself to reusable components.

We support the last few versions of popular browsers (Chrome, Safari and Firefox). At the time of writing this we currently support IE 8+. About 2% of our users still use IE 8; we are not out of the woods yet.

We made the leap to a responsive application with our recent redesign of the site. Therefore, we support a wide range of desktop, tablet and mobile devices. At the time of writing this we don't currently support smart watches. We will soon, though… so, "watch" out for that… sorry.

Preprocessor

We use Sass with Compass as our preprocessor. Our team is proficient with this popular preprocessor and has had much training on it, and its large following helps.

Folder/Sandwich Structure

Here's the fun stuff: we structured our CSS based on a few important factors.

+ Modularity and Reusability

Develop components that can be reused throughout the application.

+ CSS Rules

Consider the size of the compiled stylesheets per section of the site so as not to exceed IE9's rule limit of 4,095. Our previous site was compiled into two massive stylesheets and we learned the hard way. We currently break it down per section of the site. See the examples below.

+ Light stylesheets

Keeping the stylesheet declaration low kills two birds with one stone. We break the stylesheets down into individual sheets that contain only the rules for each section of the site, which has the added benefit of avoiding unnecessary or redundant styles per section. Doing this helps us keep our stylesheets light.

Can I have my sandwich with a side of CSS? Zumba is a happy and energetic fitness lifestyle company. That doesn't mean we can't have our sandwich and eat it too. We decided to structure our CSS like a sandwich order.
(Image: sandwich breakdown)

Plate

The plate represents the foundation of the code and is reused throughout the site. Example:

plate
|— _angular.scss
|— _animations.scss
|— _fonts.scss
|— _icons.scss
|— _mixins.scss
|— _typography.scss
|— _variables.scss

Toppings

These can be added to your sandwich to make it as delicious as you want. They consist of mini components that can be reused throughout the site but are not necessary on every page. Example:

toppings
|— _button.scss
|— _input.scss
|— _shares.scss
|— _tooltip.scss

Sandwich

These nom noms can contain a combination of toppings to make a complete sandwich or component. These are usually a bit more complex and a little more specific. Sandwiches can be used throughout the entire site and can be customized per section of the site to meet that section's needs or requirements. Example:

sandwich
|— _accordion.scss
|— _featured.scss
|— _form.scss
|— _hero-slider.scss
|— _modal.scss
|— _pagination.scss
|— _rangeslider.scss

Example of our overall structure:

assets-build/css-blt
|— checkout/
|— classes/
|— consumer/
|— header-footer/
|— plate/
|— sandwiches/
|— shop/
|— toppings/
|— _settings.scss
|— classes.scss
|— consumer.scss
|— header-footer.scss
|— main.scss
|— shop.scss

main.scss consists of the basics of our pages' layout.

CSS structure

This is an example of how we expect our stylesheets to be laid out (this is not an actual file):

- Each file is named after the page or the page module it supports
- We never style IDs, only classes
- Keep nesting down to a minimum and no deeper than 4 levels, including pseudo selectors
- Group related properties wherever possible
- Media queries go after properties and the order should reflect mobile first

Naming

Variables and Mixins

Our variable and mixin naming is pretty specific in order to avoid any confusion. We have been here long enough to understand that the company likes to make simple to drastic changes to support the business in general. We have to develop keeping scalability in mind, and this variable/mixin strategy just makes more sense for us. Mixin and color variable examples appear in the sketch below.

Classes

Our preferred technique is a more descriptive naming convention over deep nesting.

Media Queries

For the most part we target based on viewport width. We have a feature or two that depend on the detection of touch. We stick to three basic breakpoints and rarely make exceptions for custom breakpoints. We don't want to compromise code quality to consider every viewport width. A sketch of these naming and breakpoint conventions follows.
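Here is a small SCSS sketch of my own illustrating the conventions described above; the specific names and pixel values are invented for illustration, not taken from Zumba's actual codebase:

// Color variables: named for their role, not their hue
$color-brand-primary: #ff6d6a;
$color-text-default: #333;

// Mixin named for what it does
@mixin font-smoothing($value: antialiased) {
    -webkit-font-smoothing: $value;
}

// Descriptive class names instead of deep nesting
.class-search-result-title {
    @include font-smoothing;
    color: $color-text-default;

    // Mobile-first: base styles above, wider viewports below
    @media only screen and (min-width: 640px) {
        font-size: 1.25rem;
    }

    @media only screen and (min-width: 1024px) {
        font-size: 1.5rem;
    }
}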
Typography

Linting

Currently our linting consists of pull request code reviews. This is what we look for:

- Levels of nesting
- Indentation
- Flag the use of IDs
- Spacing after selectors and properties
- Formatting of properties
- Code that's commented out
- Formatting of selectors: lowercase and hyphenated preferred
- Null values should be 0 or none instead of 0px
- Use of color names should use variables
- Duplicate selectors or properties

The fun never ends

This post, and our style guide that is soon to come, will never be a done deal written in stone. What we build today is cutting edge and future-proof until the future is now and we realize how crazy we were when we developed using archaic methods and tools. The tech world moves at an insane pace and that's what makes our job fun. New technologies will emerge and we will use them to make our applications and our lives as developers better. Therefore, this is relevant for the time being and I hope that it covers us for a very long time, but none of us can be surprised if we have to come back and revise this to support the new new. Until then, we will enjoy this sandwich.

Zumba® Technology Day
We are looking for a few good engineers to help us achieve our technical vision.
Julian Castaneda (julian.castaneda@zumba.com), 2014-12-30

Let me start out by saying that before I started working here at Zumba®, my perception of companies that were not "tech companies" was completely wrong. I was very naive and thought that 'those' companies did not have the correct culture to nurture and expand my career as a software engineer. Not all companies are the same, but at least there are some companies, such as Zumba®, that strongly believe in technology. Our team is constantly innovating, and we are on top of the latest technology. Don't believe me? Here is some inside knowledge about our Zumba® Technology Day.

Nowadays, a lot of the big tech companies have some sort of program that incentivizes their employees to tinker with ideas that are not part of their assigned work. Apparently this concept was started by 3M and then was made famous by Google with their "20 percent program", in which employees get 20% of their time to work on side projects or ideas. Following in their footsteps, we decided to implement a similar program in-house called "Zumba® Technology Day", where the technology department gets at least one day a month (usually the last Thursday of the month) to experiment and come up with new ideas that can benefit the company.

The way our tech day works is pretty simple. It started with a Google doc that we used as a master list of ideas that anyone could go in and update at any time. Over time, we refined the process into a Kanban project in Jira. This change made it easier to maintain, and team members can now take ownership of what they want to work on and invite other team members to contribute. Note that we do not force anyone to use the Google doc or the Jira project.
In fact, we only have 2 actual rules:

- The only interrupts that will be taken are business-critical interrupts.
- The Technology team must work on something that stabilizes our current systems or designs for the future. This includes documenting what we now have, designing a future vision, prototyping something that will help solve a business problem, documenting an improvement to the development process, automating tests, writing scripts, bashing a bunch of pet-peeve bugs, researching an alternative technology, or the like.

It's been a year and a half since we established the Zumba® Technology Day in our department, and during this time it has proven to be a success. Not only have some of the ideas/projects that started on a tech day become actual projects that are now a vital part of our company, it has also given us a day where we can interact with each other in a different setting. We have created a work setting where we can work without the pressure of a deadline and restrictions from the business. It's our time to be creative!

If you have been missing working with exciting technologies and experienced engineers on solutions to complex problems, then join the team! We have several openings for software engineers and frontend developers.

Caching CakePHP 2.x routes
Tutorial of how to cache the CakePHP 2.x routes.
Juan Basso (juan.basso@zumba.com), 2014-10-26

This week we profiled our app using the Xdebug profiler and we identified that the router was responsible for a big part of the request time. In our main app we use over 130 custom routes, which makes CakePHP generate an object for each route and, consequently, parse and generate a regex for each route. This takes significant time and many function calls.

In order to optimize the routing time, we started looking at options to optimize our routing process. After some research and deep checking in our codebase as well as CakePHP's code, we found we could cache the routes easily. The solution we found is easily applicable to many other CakePHP applications. Basically it consists of exporting the compiled routes into a file and loading that file instead of connecting the routes on every request to the Router. This strategy is similar to how FastRoute caches its routes.

First, we moved all the Router::connect() calls to another file called routes.connect.php. In routes.php we added the logic for the caching. In the end, we ended up with something like the sketch below.
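The original embedded gists did not survive, so this is my reconstruction of the routes.php logic; the cache path and variable names are assumptions for illustration:

// Sketch of app/Config/routes.php with route caching (CakePHP 2.x)
$connectFile = dirname(__FILE__) . DS . 'routes.connect.php';
$cacheFile = CACHE . 'routes_' . sha1_file($connectFile) . '.php';

if (is_file($cacheFile)) {
    // Load the previously compiled routes (opcache-friendly include)
    Router::$routes = include $cacheFile;
} else {
    include $connectFile; // runs all the Router::connect() calls

    // Compile every route up front so the regexes are part of the export
    foreach (Router::$routes as $route) {
        $route->compile();
    }

    // Write to a temporary file first, then rename: rename() is atomic,
    // so concurrent requests never read a half-written cache file
    $tmpFile = $cacheFile . '.' . uniqid('tmp', true);
    file_put_contents($tmpFile, '<?php return ' . var_export(Router::$routes, true) . ';');
    rename($tmpFile, $cacheFile);
}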
Using <code class="highlighter-rouge">serialize</code> it generate a string that needs to be parsed and executed on every request. Depending of your app and the number of routes that you have, the use of <code class="highlighter-rouge">serialize</code> is simpler and faster than <code class="highlighter-rouge">var_export</code>. Give both approaches a try and compare the performance between them.</p> <p>Using <code class="highlighter-rouge">var_export</code> brings some consequences, which is the requirement of implementing the magic method <code class="highlighter-rouge">__set_state</code>. This method is not implemented in CakePHP core until the version 2.6. I opened the PR <a href="">cakephp/cakephp#4953</a> to support it on CakePHP 2.6+. So, to solve this problem in CakePHP prior to 2.6 we created a class that extends Cake’s <code class="highlighter-rouge">CakeRoute</code> to implement the magic method and this looks like it:</p> <script src=""> </script> <p><i>PS: This code is not compatible with PHP 5.2. If you are using PHP 5.2, stop everything and upgrade your PHP version.</i></p> <p>With this class and changing the default route class in the <code class="highlighter-rouge">routes.php</code> you can cache the routes using <code class="highlighter-rouge">var_export</code>. If you have plugins with routes, you may need to do some additional changes. If you have plugins but these plugins doesn’t have any route, Cake automatically create a route to it using the <code class="highlighter-rouge">PluginShortRoute</code> (which also doesn’t implement the magic method). It means you probably will have to remove these classes from your routes before build the cache. Knowing these limitations you can also workaround this type of issue by creating another extended classes.</p> <p>If you are wondering why we create a temporary file and rename it to the final filename it is to avoid a concurrency issue between one request writing the file and another request reading the file at the same time. It could be avoided by using file lock, but it would stop all concurrent requests until the cache is finally done and stored on the disk. Using a temporary file and renaming it is an atomic operation, so it is avoided. Thanks <a href="">Jose Diaz Gonzalez</a> (<a href="">@savant</a>) for pointing it out.</p> <p>We used the <code class="highlighter-rouge">sha1_file()</code> function to clear any kind of cache when a route is changed in <code class="highlighter-rouge">routes.connect.php</code>. Everytime you change the file, the SHA1 of the file will change and consequently it will generate another cache file. This allows the cached file to regenerate for seamless deployments where routes where changed.</p> <p>I would like to present some numbers of these changes, but it is very subjective because it depends a lot from the hardware (specially the disk), the number and type of routes, etc. 
What I can say is that in our case the time to load the routes is half the time of connecting them on every request, and generating a URL using <code class="highlighter-rouge">Router::url()</code> (or via some helper/controller) is four times faster for a URL that falls through to the last route (usually the generic one from Cake).</p> <p>One interesting thing we found in the tests was that loading the cached file and matching the first route was faster than connecting all the routes via <code class="highlighter-rouge">Router::connect()</code> and matching the first route (which only compiles one route).</p> <p>In summary, the changes to cache the routes are small for most applications. There are different approaches that you have to test, and you should decide which one fits your application better. Also, some limitations could block your cache, but most likely there is a workaround. If you can’t find one, contact me and I can try to help.</p> Developing With AngularJS? Forget jQuery Exists. Understanding how jQuery's power and familiarity can unintentionally subvert AngularJS' goals. Stephen Young stephen.young@zumba.com 2014-08-02T00:00:00+00:00 <p>Before I begin, take a moment to remember how hard our lives were before <a href="">jQuery</a> ironed out the treacherous wrinkles of cross-browser development. Client-side Javascript is to jQuery as mammals are to sharks with frickin’ laser beams attached to their heads. <strong>Zumba® Tech</strong> engineers are big fans of jQuery.</p> <p>Now, if you are writing an AngularJS application, do us all a favor and <strong>forget jQuery exists</strong>.</p> <p>Here’s the rub: the power of simple jQuery scripts can sometimes be incongruent with the goals and maintainability of an <a href="">AngularJS</a> application. jQuery is a library with a lot going on under the hood, but it is primarily used for these things:</p> <ol> <li>Performing AJAX operations</li> <li>Animating HTML elements</li> <li>Handling browser events</li> <li>Querying, traversing, and manipulating the DOM</li> </ol> <p>There are caveats when using jQuery to execute <em>any</em> of the above actions within an AngularJS app, but this post is focused on the last two: events and DOM interactions.</p> <h2 id="separation-of-concerns">Separation of Concerns</h2> <p>Querying for CSS selectors is very difficult to do without implicitly tying your code to the DOM; it’s nigh impossible. This can lead to brittle code in the long run. For example, when you write <code class="highlighter-rouge">$('.soso')</code> you create an implied contract that the HTML template will contain an element with the <code class="highlighter-rouge">soso</code> class. If you later remove the <code class="highlighter-rouge">soso</code> class and add a new <code class="highlighter-rouge">awesome</code> class to the element, you have broken the contract and the Javascript may stop working.</p> <p>At a fundamental level, the principle of separation of concerns was violated. The <code class="highlighter-rouge">soso</code> class was given <em>meaning</em> outside of its implied styling. Worse still, nothing in the HTML template indicated that the <code class="highlighter-rouge">soso</code> class was more than just a boring presentational element of the page.</p> <h2 id="manually-querying-the-dom-is-like-writing-sql">Manually Querying the DOM Is Like Writing SQL</h2> <p>Here’s some pseudo code for you.
Use your imagination.</p> <div class="highlighter-rouge"><pre class="highlight"><code>SELECT DOMElement FROM document WHERE document.$('#navBar')
</code></pre> </div> <p>Querying the DOM is a low level task that is akin to writing SQL statements. You would never sprinkle SQL statements around your backend codebase, so why do the same thing inside your AngularJS directives?</p> <h2 id="indigestion">Indigestion</h2> <p>One of the awesome features of AngularJS is <a href="">two way data binding</a>:</p> <blockquote> <p>Any changes to the view are immediately reflected in the model, and any changes in the model are propagated to the view.</p> </blockquote> <p>AngularJS periodically loops over the properties of the “model” and updates the DOM if the data has changed. This is called the $digest cycle. When you use <code class="highlighter-rouge">ngClick</code> or <code class="highlighter-rouge">scope.$watch</code>, or even the <code class="highlighter-rouge">$timeout</code> service, AngularJS will automatically kick off a $digest cycle for you.</p> <p>However, when you manually bind click events via <code class="highlighter-rouge">$(element).on()</code>, AngularJS is not aware of those event handlers. Data in the model could be updated, but the view will likely remain stale. You could manually call <code class="highlighter-rouge">scope.$digest</code> or <code class="highlighter-rouge">scope.$apply</code>, but this quickly becomes messy and bug-prone. If you use lots of manual $digest calls, you are bound to start getting AngularJS errors about trying to start a digest cycle when one is already in progress. Trust me.</p> <h2 id="refactoring-a-problematic-directive">Refactoring a Problematic Directive</h2> <p>Breaking away from the jQuery methodology is hard. I figure that the best way to illustrate these concepts is by example. So, I’ve cooked up a brittle, jQuery-style directive that I’m going to refactor into a resilient, reusable “AngularJS Way” component.</p> <h3 id="the-template">The Template</h3> <script src=""></script> <p>Let’s say we want the content paragraphs to toggle visibility when the user clicks on the corresponding links in the list, and we want the <code class="highlighter-rouge">li</code> elements to indicate the active paragraph. The following directive gets the job done.</p> <h3 id="the-brittle-directive">The Brittle Directive</h3> <script src=""></script> <p>The above code is somewhat clean on the surface, but there are a lot of implied dependencies on the DOM. For example, the directive will only work if the links remain children of <code class="highlighter-rouge">li</code> tags. The directive will break if the DOM is modified to put the links inside of a <code class="highlighter-rouge">div</code> or <code class="highlighter-rouge">p</code> tag.</p> <h3 id="use-the-scope">Use the Scope</h3> <p>We need to divorce the data/state from the template’s attributes and classes. To do that we can introduce <code class="highlighter-rouge">ngClass</code>, <code class="highlighter-rouge">ngClick</code>, and <code class="highlighter-rouge">ngIf</code> to the template. We can then keep track of the links’ states inside the directive scope, and leave the DOM manipulation to Angular.</p> <h3 id="the-new-and-improved-template-amp-directive">The New and Improved Template &amp; Directive</h3> <script src=""></script> <script src=""></script> <ol> <li>There are no dependencies on the DOM at all.
It’s just a directive that wraps an object called <code class="highlighter-rouge">scope.active</code> that contains some boolean values. The booleans get flipped on and off when the <code class="highlighter-rouge">select</code> function is called. This directive is relatively immune to DOM updates. You could change all of the classes in the template; you could delete and add new HTML; you could change <code class="highlighter-rouge">a</code> tags into <code class="highlighter-rouge">div</code> tags or <code class="highlighter-rouge">p</code> tags into <code class="highlighter-rouge">span</code> tags. <strong>Nothing will break in the JS file</strong>. There are no classes being toggled in the directive. If a developer decides to change the classes on the <code class="highlighter-rouge">li</code> elements from <code class="highlighter-rouge">active</code> to <code class="highlighter-rouge">awesomesauce</code>, <em>everything still works</em>. This is awesome.</li> <li>There is no DOM traversal happening at all. This is awesome, and it’s fast.</li> <li>Looking at the HTML reveals how this widget works. It is declarative, which is awesome. (There’s a theme here.)</li> <li>Since all of the click operations are handled by <code class="highlighter-rouge">ngClick</code>, everything will remain inside the digest cycle, and I don’t have to use <code class="highlighter-rouge">scope.$apply</code> anywhere. It will just work.</li> </ol> <h3 id="jquery-under-the-hood">jQuery Under the Hood</h3> <p>AngularJS ships with <a href="">jqLite</a>:</p> <blockquote> <p>jqLite is a tiny, API-compatible subset of jQuery that allows Angular to manipulate the DOM in a cross-browser compatible way. jqLite implements only the most commonly needed functionality with the goal of having a very small footprint.</p> </blockquote> <p>The interesting thing here is that Angular uses jqLite to perform DOM manipulation, such as adding and removing classes for the <code class="highlighter-rouge">ngClass</code> directive. This allows <code class="highlighter-rouge">ngClass</code> and other built-in directives to be an abstraction that separates the queries from your code.</p> <h2 id="summary">Summary</h2> <p>jQuery is a badass tool for scripting interactions with the DOM. It’s a <strong>global</strong> hammer that you can whip out anywhere in the code and smash out your feature on the spot. This is awesome, but it leads to brittle, non-reusable, hard-to-maintain code (inside AngularJS, at least).</p> <h4 id="tldr">TL;DR</h4> <p>When trying to solve a problem or add functionality to an AngularJS app, start by forgetting about jQuery solutions. Reach for directives like <code class="highlighter-rouge">ngClick</code> and <code class="highlighter-rouge">ngClass</code>. Doing so will likely result in a more elegant solution.</p> Enforce code standards with composer, git hooks, and phpcs Reduce the number of back-and-forths in pull requests by enforcing code quality at the commit level. Chris Saylor christopher.saylor@zumba.com 2014-04-14T00:00:00+00:00 <p>Anyone who has maintained a project knows the tedium of pointing out coding standards violations in code reviews: mixed tabs and spaces, <code class="highlighter-rouge">if</code> statements with no brackets, and other such things. Luckily there are tools that can assist maintainers. In this post, I’ll be going over how to use <a href="">composer</a>, <a href="">git hooks</a>, and <a href="">phpcs</a> to enforce code quality rules.</p> <p>There are a couple of things to keep in mind. First, you want this process to be as simple as possible.
Secondly, you want it to be easy to run when necessary. Finally, you want it to be universally accepted as part of your contribution procedure.</p> <h3 id="there-is-no-catch">There Is No Catch</h3> <p>What if I told you that it could be done without the developer even knowing it’s happening?</p> <p>Most modern PHP projects use composer as their dependency manager. Before you can make anything work, you need to run <code class="highlighter-rouge">composer install</code>. This is where the magic happens.</p> <h3 id="phpcs-dependency">Phpcs Dependency</h3> <p>First, we need a development dependency specified to install phpcs. It looks something like this:</p> <div class="highlighter-rouge"><pre class="highlight"><code>{
    "require-dev": {
        "squizlabs/php_codesniffer": "~1.5"
    }
}
</code></pre> </div> <h3 id="install-scripts">Install Scripts</h3> <p>Composer has a handy schema entry called <code class="highlighter-rouge">scripts</code>. It supports a script hook <code class="highlighter-rouge">post-install-cmd</code>. We will use this to install a git pre-commit hook. Adding to our example above:</p> <div class="highlighter-rouge"><pre class="highlight"><code>{
    "require-dev": {
        "squizlabs/php_codesniffer": "~1.5"
    },
    "scripts": {
        "post-install-cmd": [
            "bash contrib/setup.sh"
        ]
    }
}
</code></pre> </div> <p>This will run a bash script called <code class="highlighter-rouge">setup.sh</code> when the command <code class="highlighter-rouge">composer install</code> is run.</p> <h3 id="setup-the-git-pre-commit-hook">Setup the Git Pre-commit Hook</h3> <p>In our <code class="highlighter-rouge">setup.sh</code>, we will need to copy a <code class="highlighter-rouge">pre-commit</code> script into the <code class="highlighter-rouge">.git/hooks</code> directory:</p> <div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
cp contrib/pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
</code></pre> </div> <p>This will copy our pre-commit script from the <code class="highlighter-rouge">contrib</code> directory to the hooks section of the special git directory and make it executable.</p> <h3 id="create-the-pre-commit-hook">Create the Pre-commit Hook</h3> <p>Whenever a contributing developer attempts to commit their code, git will run our <code class="highlighter-rouge">pre-commit</code> script.</p>
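<p>In essence, the hook just needs to lint and sniff the staged files. A bare-bones sketch (the coding standard and the phpcs path are assumptions; our full script is embedded below):</p> <div class="highlighter-rouge"><pre class="highlight"><code>#!/bin/sh
# Collect the staged PHP files for this commit
FILES=$(git diff --cached --name-only --diff-filter=ACM | grep '\.php$')
for FILE in $FILES; do
    # Syntax check first; abort the commit on failure
    php -l "$FILE" || exit 1
done
if [ -n "$FILES" ]; then
    # Apply the coding standard to the staged files only
    vendor/bin/phpcs --standard=PSR2 $FILES || exit 1
fi
</code></pre> </div>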
<p>Our full version, below, runs the code sniffer rules on the relevant files specific to this commit:</p> <script src=""> </script> <p>This script will get the staged files of the commit, run a php lint check (always nice), and apply the code sniffer rules to the staged files.</p> <p>If there is a code standards violation, the phpcs process will return a non-zero exit status, which will tell git to abort the commit.</p> <h3 id="bringing-it-all-together">Bringing it all together</h3> <p>With all of these things in place, the workflow is as follows:</p> <ul> <li>Developer runs <code class="highlighter-rouge">composer install</code>.</li> <li>PHP Code Sniffer is installed via a dev dependency.</li> <li>The post-install command automatically copies the pre-commit hook into the developer’s local git hooks.</li> <li>When the developer commits code, the pre-commit hook fires and checks the staged files for coding standards violations and lint errors.</li> </ul> <p>This is a relatively simple setup that can save pull request code reviews a significant amount of time by preventing back-and-forth on simple things such as mixed tabs/spaces, bracket placement, etc.</p> Asynchronous Control Flow with jQuery.Deferred How to adapt callback structures to callback queues using deferred objects (promises). Stephen Young stephen.young@zumba.com 2014-01-16T00:00:00+00:00 <p><strong>Zumba® Tech</strong> is a big fan of javascript promises; they free our code from <a href="">callback hell</a> by allowing us to use a more elegant and abstract structure. With promises, asynchronous control flow becomes intuitive (first <code class="highlighter-rouge">a</code> and <code class="highlighter-rouge">b</code>, then <code class="highlighter-rouge">c</code>) and fun. Today, I’ll show you how to adapt a third party library that uses callbacks to a promise interface.</p> <h2 id="well-be-using-jquerydeferred">We’ll be using jQuery.Deferred</h2> <p><a href="">Chris Saylor</a> blogged about the node.js <a href="">async module</a> last November. I’m going to focus on the client side for this post, but the principles apply to any environment.</p> <p>There are many great promise libraries:</p> <ul> <li><a href="">Q</a> — A tool for making and composing asynchronous promises</li> <li><a href="">RSVP.js</a> — tools for organizing asynchronous code</li> <li><a href="">when.js</a> — the promise library from <a href="">cujoJS</a></li> </ul> <p>There’s even a spec for <a href="">promises in ES6</a>.</p> <p>However, until the native spec is fully supported, we’ll be using jQuery.Deferred <sup><a href="#footnote1"> [1] </a></sup> on the client.</p> <h2 id="a-typical-scenario">A Typical Scenario</h2> <p>We recently built a new internal app to handle some administrative tasks. We used <a href="">Bootstrap</a> to get up and running quickly, and we stumbled upon a great library for handling our dialog boxes called <a href="">bootbox.js</a> from <a href="">Nick Payne</a>.</p> <p>Here is a typical snippet of code, straight from bootbox:</p> <script src=""></script> <p>With this code, bootbox will open a dialog containing the <code class="highlighter-rouge">"Hello world!"</code> text.</p>
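<p>That snippet amounts to a single call along these lines:</p> <div class="highlighter-rouge"><pre class="highlight"><code>bootbox.alert("Hello world!", function() {
    Example.show("Hello world callback");
});
</code></pre> </div>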
<p>The callback is fired after the user clicks on the <code class="highlighter-rouge">"ok"</code> button and the <code class="highlighter-rouge">Example.show("Hello world callback");</code> code is executed.</p> <p>This is already sweet, but soon we found ourselves writing a lot of boilerplate when we implemented the <code class="highlighter-rouge">confirm</code> dialogs:</p> <script src=""></script> <p>That’s not overly messy, but handling these conditions in every confirmation dialog with inline callbacks didn’t appeal to us.</p> <h2 id="wrapping-bootbox-with-an-adapter">Wrapping Bootbox With an Adapter</h2> <p>First we created a new object that wraps the bootbox logic. We called it <strong>Popper</strong> (we’re not always clever with lib naming).</p> <script src=""></script> <h3 id="returning-a-deferred-object">Returning a Deferred Object</h3> <p>The above snippet isn’t very impressive. However, now that we have a wrapper we can adapt the bootbox interface to use a deferred object:</p> <script src=""></script> <p>In effect, confirming the bootbox dialog will resolve the deferred object, and canceling the dialog will reject the deferred object. Our implementation code now looks like this:</p> <script src=""></script> <p>This has a couple of advantages:</p> <ul> <li>There are no more messy <code class="highlighter-rouge">if</code> blocks to parse; the code is more “flat”.</li> <li>We’ve changed the flow of the application from using a single, multipurpose callback into using separate callbacks with unique concerns.</li> </ul> <h3 id="jqueryajax-returns-a-deferred">jQuery.ajax returns a Deferred</h3> <p>There were many cases where we would fire off an ajax request after the user confirmed the dialog. Since jQuery returns a deferred object from <code class="highlighter-rouge">$.ajax</code>, we were able to envelop that logic into our adapter as well:</p> <script src=""></script> <h2 id="a-few-steps-further">A Few Steps Further</h2> <p>The above code is nice, but there are still a few outstanding issues:</p> <ul> <li><code class="highlighter-rouge">.done</code> and <code class="highlighter-rouge">.fail</code> are not very semantic method names for confirming or canceling a dialog.</li> <li>It would be nice to register callbacks that fire as soon as the buttons are clicked, and additional callbacks that fire after the ajax responds.</li> </ul> <p>To solve these issues, we created a map of semantic method names (<code class="highlighter-rouge">ok</code> maps to <code class="highlighter-rouge">done</code>, <code class="highlighter-rouge">cancel</code> maps to <code class="highlighter-rouge">fail</code>, etc). Popper’s methods return a new object that contains these semantic methods, making our implementation code look beautiful:</p> <script src=""></script> <h2 id="the-source">The Source</h2> <p>Here’s a link to the <a href="">full Popper.js source</a>.</p> <h2 id="conclusion">Conclusion</h2> <p>I hope I’ve expanded your toolbelt a little by showing you how to use a promise pattern in a real-world scenario.</p> <p>Do you like the Popper lib? Leave us a comment!</p> <p><a name="footnote1">[ 1 ]</a> <em> We know that <a href="">jQuery.Deferred</a> objects are not Promises/A+ compliant, and we sympathize with why that’s non-optimal.
However, since we are already using jQuery in most of our projects, jQuery.Deferred solves 95% of our needs without requiring us to depend on an additional library.</em></p> Introducing Json Serializer - An open source library to serialize to JSON Serialize PHP content in JSON, with support for unserialization, even for objects. Juan Basso juan.basso@zumba.com 2014-01-06T00:00:00+00:00 <p>After the release of some of our libraries, such as <a href="">Symbiosis</a>, <a href="">MongoUnit</a> and <a href="">CSV Policy</a>, we are proud to announce our newest open source project: <a href="">Json Serializer</a>. The Json Serializer, as the name suggests, serializes content in JSON format. It is a very simple concept, but very useful in some cases.</p> <h3 id="use-case">Use Case</h3> <p>For some time we just serialized the cart using the PHP <code class="highlighter-rouge">serialize()</code>.</p> <h3 id="what-about-jsonserializable-in-php-54">What about <code class="highlighter-rouge">JsonSerializable</code> in PHP 5.4?</h3> <p>The <a href=""><code class="highlighter-rouge">JsonSerializable</code> interface</a> is used with the <code class="highlighter-rouge">json_encode()</code> function to serialize an object. This means it is very useful to convert your class to JSON when you are responding with the entity to your API, for example; however, it doesn’t support unserialization.</p> <h3 id="what-about-using-jsonencode-with-the-serializable-interface">What about using <code class="highlighter-rouge">json_encode</code> with the <code class="highlighter-rouge">Serializable</code> interface?</h3> <p>Nope. If your entity implements <code class="highlighter-rouge">Serializable</code> you can define how to serialize and unserialize, but PHP wraps its own serialization format around it, which means the output is not 100% JSON.</p> <h3 id="hmmm-so-what-about-jms-serializer">Hmmm, so what about JMS Serializer?</h3> <p><a href="">JMS Serializer</a> is a great library and does serialize to JSON, but it doesn’t have as many features as this library. For example, when you de-serialize, you have to pass the class where the data was generated. Also, it doesn’t support nested encoding.</p> <h3 id="why-not-just-create-a-method-to-aggregate-all-the-data-and-respond-as-json">Why not just create a method to aggregate all the data and respond as JSON?</h3> <h3 id="you-are-convincing-me-but-i-have-so-much-crap-on-my-classes-can-i-ignore-some-of-the-properties">You are convincing me, but I have so much crap on my classes. Can I ignore some of the properties?</h3> <p>Not a problem. The library supports the magic methods <code class="highlighter-rouge">__sleep()</code> and <code class="highlighter-rouge">__wakeup()</code>. In each class you can define what will be stored in the JSON and what to execute when unserialized. It works exactly the same way as the <code class="highlighter-rouge">serialize()</code> and <code class="highlighter-rouge">unserialize()</code> functions.</p> <h3 id="ok-but-my-code-is-a-little-complex-i-have-some-double-linked-lists-some-recursing-objects-etc">Ok, but my code is a little complex. I have some doubly linked lists, some recursive objects, etc.</h3> <p>This is also not a problem. The library handles recursion.</p>
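<p>For example, a minimal sketch of a cyclic structure surviving a round trip (the <code class="highlighter-rouge">Node</code> class is made up here, and the serializer’s class name may differ between versions):</p> <div class="highlighter-rouge"><pre class="highlight"><code>class Node {
    public $next;
}

// Two objects referencing each other -- a cycle
$a = new Node();
$b = new Node();
$a->next = $b;
$b->next = $a;

$serializer = new Zumba\JsonSerializer\JsonSerializer();
$json = $serializer->serialize($a);
$restored = $serializer->unserialize($json);

var_dump($restored->next->next === $restored); // bool(true)
</code></pre> </div>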
<p>You can call the serialize method and enjoy it.</p> <h3 id="fcking-amazing-but-i-doubt-it-will-work-in-my-old-environment">F*cking amazing, but I doubt it will work in my old environment.</h3> <p>The library supports PHP 5.3.6 and above, and it has no other dependencies. If you are using any PHP version before that, stop now, read some blogs, and go upgrade your system! PHP 5.3 only has a few months of support left.</p> <h3 id="is-the-library-compatible-with-serializable-or-jsonserializable">Is the library compatible with <code class="highlighter-rouge">Serializable</code> or <code class="highlighter-rouge">JsonSerializable</code>?</h3> <p>Yep. Each one has different functionality, so everything is fine. You can still use <code class="highlighter-rouge">Serializable</code> with the <code class="highlighter-rouge">serialize()</code> and <code class="highlighter-rouge">unserialize()</code> functions. You can use <code class="highlighter-rouge">JsonSerializable</code> with <code class="highlighter-rouge">json_encode()</code> to respond to your API, and our library to serialize the object for internal storage.</p> <h3 id="you-are-talking-too-much-youre-like-a-car-salesmen-do-you-have-some-code-to-show">You are talking too much; you’re like a car salesman. Do you have some code to show?</h3> <p>Indeed. Let’s look at an example:</p> <script src=""> </script> <p>Is that clear?</p> <h3 id="wow-amazing-how-can-i-get-it">Wow, amazing! How can I get it?</h3> <p>You can install via composer (package <code class="highlighter-rouge">zumba/json-serializer</code>), or download directly from <a href="">GitHub</a>. If you find something wrong, or something that we can improve, feel free to open a <a href="">GitHub issue</a>. If you want to make a pull request, even better. :)</p> <p>You are welcome to check our other open source projects on <a href=""></a>.</p> Tame Stale Pull Requests with Drill Sergeant Get notified via email when github pull requests become stale. Chris Saylor christopher.saylor@zumba.com 2013-12-07T00:00:00+00:00 <p>Our tech team uses a process similar to the <a href="">Github Flow</a>, whereby all changes are pushed to the main branch of a repo by way of Github pull requests. We use this internally, but it is also the most common mechanism for open source projects to get community contributions on Github. We think it’s a great way to encourage code reviews and incremental improvements prior to going into a main production-pushable branch; however, it’s not without its pain points.</p> <h3 id="stale-pull-requests--death-to-open-source">Stale Pull Requests <code class="highlighter-rouge">==</code> Death to Open Source</h3> <p>How many times have you come across an open source project that does <em>almost</em> what you need? How many of these have pull requests that are one or more months old? It’s discouraging when you know that you could contribute and fix whatever problem may exist with the project, but you also know that the maintainer has abandoned it or is not responsive to merging pull requests. As a maintainer of open source projects, it’s critical to stay up to date and aware of open pull requests.</p> <h3 id="introducing-drill-sergeant">Introducing Drill Sergeant</h3> <p>As a part of our development lifecycle, stale pull requests mean that QA doesn’t have the opportunity to begin testing until they are merged. Therefore, it is vital that the dev team know about pull requests to be reviewed and merged as soon as possible.
We developed an open source tool called <a href="">Drill Sergeant</a> to send email reports of any pull request older than a specified period.</p> <h3 id="reporting-for-duty">Reporting for Duty</h3> <p>An example report from running Drill Sergeant against the CakePHP project, configured for one week of staleness, looks something like this:</p> <p><img src="" alt="Drill Sergeant Report" /></p> <p>The command for the above:</p> <div class="highlighter-rouge"><pre class="highlight"><code>GITHUB_TOKEN=atokenhere drillsergeant -e "myemail@address" -r "cakephp/cakephp" -s 168 # 24 * 7
</code></pre> </div> <h3 id="schedule-it-with-crontab">Schedule It with Crontab</h3> <p>This tool is meant to be run on a schedule, and it’s easy to set up to get daily reports:</p> <div class="highlighter-rouge"><pre class="highlight"><code>0 0 * * * GITHUB_TOKEN=atokenhere drillsergeant -e "myemail@address" -r "cakephp/cakephp" -s 168
</code></pre> </div> <h3 id="install-it">Install It</h3> <p>The full instructions can be found in the <a href="">readme of the project</a>.</p> Handling Asynchronism with Promises Using the async library in Node.js for combining multiple MongoDB aggregation queries into a single list without blocking the event queue. Chris Saylor christopher.saylor@zumba.com 2013-11-06T00:00:00+00:00 <p>While experimenting with Node.js for a coding game we are working on called <a href="">Node Defender</a> for the <a href="">Ultimate Developer Conference</a>, many asynchronous problems arose that we were unaccustomed to dealing with, since we are primarily a PHP shop (read: synchronous). The examples covered in this post are specific to MongoDB, but the essence can be applied to any asynchronous process.</p> <h3 id="intro">Intro</h3> <p>The examples below combine the results of several queries against MongoDB’s “<a href="">aggregation framework</a>.” <a href="">Martin Fowler</a> does a good writeup about what a promise is in javascript in <a href="">this post</a>.</p> <h3 id="the-aggregate">The Aggregate</h3> <p>Each game is represented in MongoDB as a document containing various values for calculating the score. In order to get the highest value in various categories, the following aggregate query needs to be run:</p> <script src=""> </script> <p>For a quick explanation:</p> <ul> <li>The <code class="highlighter-rouge">sorter</code> controls which category we’re going to sort by; descending in this case.</li> <li>The <code class="highlighter-rouge">$project</code> is the projection of the aggregate we’d like MongoDB to return. In this case: the player’s name and the value of the category.</li> <li>Finally, the <code class="highlighter-rouge">$limit</code> of 1: we just want the top value.</li> </ul> <h3 id="async-all-the-things">Async All the Things</h3> <p>Where does the promise come into play? With the <a href="">async</a> module.</p> <p>We use the async “map” method to accomplish this.</p>
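<p>In outline, the call looks something like this (a sketch: the category names are made up, and <code class="highlighter-rouge">getTopCategory</code>, <code class="highlighter-rouge">db</code>, and the deferred <code class="highlighter-rouge">promise</code> come from the surrounding code):</p> <div class="highlighter-rouge"><pre class="highlight"><code>var async = require('async');

var categories = ['kills', 'accuracy', 'damage'];

// Run the aggregate once per category, in parallel, and
// collect the results into a single array.
async.map(categories, getTopCategory.bind(db), function(err, results) {
    if (err) {
        return promise.reject(err);
    }
    promise.resolve(results);
});
</code></pre> </div>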
<p>It allows us to pass an array of categories to the above aggregate, run the getTopCategory method against each one, compile the results, and then call our promise.</p> <script src=""> </script> <p>Explanation:</p> <ul> <li>The <code class="highlighter-rouge">map</code> array contains the categories we want to run with the aggregate queries.</li> <li>We bind the <code class="highlighter-rouge">db</code> (which is the connector to MongoDB) to the <code class="highlighter-rouge">getTopCategory</code> method, which allows this method to have direct access to the <code class="highlighter-rouge">db</code> instance without having to pass it in as a parameter.</li> <li>Finally, the anonymous method attached to the <code class="highlighter-rouge">Async.map</code> call gets the results (or errors) from all of the category aggregates and compiles them into a single list to be consumed by the calling method.</li> </ul> <h3 id="conclusion">Conclusion</h3> <p>Using the “promise” pattern with the <code class="highlighter-rouge">async</code> module lets us combine multiple asynchronous queries into a single result without blocking the event queue.</p> Incorporating Mongounit into Multi-datasource Models with Traits Incorporating Mongounit into Multi-datasource Models with Traits Chris Saylor christopher.saylor@zumba.com 2013-10-30T00:00:00+00:00 <p>A while back we open sourced <a href="">Mongounit</a>, a PHPUnit extension for fixture-based testing of MongoDB-backed code.</p> <h3 id="workarounds">Workarounds</h3> <p>One workaround was to manually set and clear data in mongo in the test cases, which works of course, but causes a lot of duplicate code strewn throughout the test code base.</p> <h3 id="enter-54-and-traits">Enter 5.4 and Traits</h3> <p>PHP does not support the concept of <a href="">multiple inheritance</a>.</p> <p>One of the nice features of 5.4 was the introduction of <a href="">traits</a>.</p> <p>You can see Mongounit’s trait implementation in <a href="">Github</a>, available as of version 1.1.3.</p> <p>Most of the needed logic already existed in Mongounit’s <a href="">TestCase</a>, so in our case, it was a matter of moving the implementation into the trait method.</p> <script src=""> </script> <p>Now that our test case has implemented the trait, we have access to set up fixtures for both MySQL and Mongo and utilize both within the same test case.</p> <h3 id="conclusion">Conclusion</h3> <p>Using some of the “newer” features of PHP, we are able to make our test cases more flexible and easier to use without having to entirely reimplement what Mongounit does in a dbunit-extended class.</p> Symbiosis: Instanced Dispatcher Support Added New version of Symbiosis (v1.2) Chris Saylor christopher.saylor@zumba.com 2013-09-22T00:00:00+00:00 <p>A new version of Symbiosis, <a href="">v1.2</a>, has been released.</p> <h3 id="introducing-instanced-dispatchers">Introducing Instanced Dispatchers</h3> <p>Prior to v1.2, all events were globally registered via a <a href="">statically defined event manager</a>. This posed issues when some models needed to have events contained to themselves, instead of available in the global event space.</p> <p>To solve this issue, we’ve added an <a href="">Event Registry</a>.</p> <h3 id="composer-update">Composer update</h3> <p>We’ve also improved the composer setup to properly use the <a href="">PSR-0</a> standard. As such, the autoloader has been removed.
If you were using Symbiosis prior to v1.2 and were using the autoloader directly, you will need to update to use the composer autoloader.</p> <h3 id="conclusion">Conclusion</h3> <p>Symbiosis is now more flexible to implement. Our next steps will be to improve the plugin system to be more (and less) coupled to the event system.</p> <p>See the full changeset: <a href=""></a></p> Join the Zumba Engineering Team We are looking for a few good engineers to help us achieve our technical vision. Chris Saylor christopher.saylor@zumba.com 2013-08-06T00:00:00+00:00 <p>Our engineering team is in the process of adopting new technologies to solve complex business problems, and we’re looking for additional engineers to help us out.</p> <h2 id="introduction">Introduction</h2> <p>Zumba offers a unique opportunity: a startup feel with the security of an established company. Our business is growing, and as such our technology stack also needs to grow with it. This allows us to work on newer technologies (when it makes sense), so there’s always something to learn.</p> <h2 id="what-to-expect">What to expect</h2> <p>The Zumba system operates on AWS Linux-based instances, with a mix of MySQL and MongoDB driving our applications in a service-oriented architecture. Our deployments follow a continuous integration strategy: we use unit testing coupled with a Jenkins CI server to deploy quality code to production.</p> <p>Our projects are scheduled and structured with the scrum methodology in two-week sprints.</p> <p>If you join our engineering team, you will be up and making your first commit within a few hours, as your environment is managed by our Vagrant setup.</p> <p>Various positions offer unique opportunities to work on fascinating technologies:</p> <h3 id="frontend">Frontend</h3> <ul> <li>Use Sass to style and build rich applications.</li> <li>Work with backbone.js and require.js to make our applications more interactive.</li> <li>Drive the user experience using modern web frameworks and mixins.</li> </ul> <p>Positions:</p> <ul> <li><a href="">Frontend Developer</a></li> </ul> <h3 id="backend">Backend</h3> <ul> <li>Work with advanced PHP techniques as more of our application moves to PHP 5.4.</li> <li>Solve difficult performance issues by utilizing different datastores, such as Memcached, MongoDB, and Elasticsearch.</li> <li>Help us explore and improve our architecture with the JVM and Scala.</li> <li>Continuously improve our continuous integration strategy.</li> <li>Work with other engineers who are excited about the technologies they bring to the table.</li> </ul> <p>Positions:</p> <ul> <li><a href="">Senior Software Engineer</a></li> <li><a href="">Software Engineer</a></li> </ul> <h2 id="join-us">Join Us</h2> <p>If working with exciting technologies and experienced engineers on solutions to complex problems is what you’ve been missing, then <a href="">Join the team</a>! We have several openings for software engineers and frontend developers.</p> Introducing Zlogd - An open source universal logging agent Socket server for external apps to ship logs in a non-blocking fashion. Chris Saylor christopher.saylor@zumba.com 2013-06-20T00:00:00+00:00 <p>Logging is a critical part of an application’s life-cycle.
It helps identify problems when they occur and track trends of events.</p> <p>There are many ways and products to accomplish this depending on your goal, but today we’re introducing our open-source logging agent to help: <a href="">Zlogd</a>.</p> <h2 id="use-case">Use case</h2> <p>In many web applications, multiple servers are involved, each with one or many log files. When tracking down a problem or examining trends, this becomes very problematic.</p> <h2 id="zlogd-workflow">Zlogd Workflow</h2> <p>In our proposed stack configuration, our central logging server will contain the following:</p> <ul> <li>A Logstash agent for receiving all messages from all the Zlogd instances.</li> <li>Elasticsearch for Logstash to store the messages.</li> <li>Kibana3 to have one interface to examine the logs.</li> </ul> <p>The workflow would then lend itself to this configuration:</p> <p><img src="" alt="Zlogd Workflow" /></p> <h2 id="benefits">Benefits</h2> <p>This also facilitates the central logging paradigm, which allows the consolidation of all application logs.</p> <h2 id="reference">Reference</h2> <ul> <li><a href="">Nodejs NPM package</a></li> <li><a href="">Github repo</a></li> </ul> Middleman.js Project Open Sourced A library giving you control over the execution of third party libs. Stephen Young stephen.young@zumba.com 2013-02-11T00:00:00+00:00 <p>We’re on a roll this week, and it’s only Monday! Yesterday Chris Saylor introduced <a href="">Mongounit</a>. Today I am going to show you another open source project we’ve written called <a href="">Middleman.js</a>.</p> <p>Middleman is a javascript library that can be used in a browser or as a node.js module. It allows you to hook into the execution of any function to apply prefiltering to that function’s arguments; globally change the function’s execution context; pipe that function’s arguments to several others; or overload the function to behave differently. It does all this for you seamlessly, so you can simply call the original function as you normally would.</p> <h3 id="example">Example</h3> <p>For a practical example we need to start with some assumptions: You have a new mobile project that communicates with a RESTful server API. After a few weeks of development some project requirements change, and now every ajax url must start with <code class="highlighter-rouge">/api</code>.</p> <ol> <li>You don’t want to manually find every ajax call in the project and prepend the url strings with <code class="highlighter-rouge">/api</code>.</li> <li>You don’t want to force your team to remember to do this as you develop.</li> <li>You want to implement this new requirement as seamlessly as possible.</li> </ol> <p>jQuery’s powerful ajax methods are well known, but let’s say you’re working with a more simplified library like <a href="">Zepto.js</a>, which was built for mobile. Middleman.js can solve this problem with just a few lines:</p> <script src=""> </script> <h3 id="seamless">Seamless</h3> <p>You can call <code class="highlighter-rouge">$.ajax</code> as you normally would.</p>
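<p>To give a feel for the shape of it, the mapping might look roughly like this. This is a hypothetical sketch only; the real API is shown in the gist above and documented in the repo:</p> <div class="highlighter-rouge"><pre class="highlight"><code>// Hypothetical sketch -- the method and option names here are
// assumptions, not Middleman's documented API.
Middleman.map($, {
    ajax: {
        filter: function(args) {
            var ajaxSettings = args[0] || {};
            // Prepend the new prefix to every request url
            ajaxSettings.url = '/api' + ajaxSettings.url;
            return args;
        }
    }
});

// Elsewhere, called exactly as before:
$.ajax({ url: '/users', success: function(data) { /* ... */ } });
</code></pre> </div>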
<p>The filter method will be executed first, and the array <code class="highlighter-rouge">args</code> that is returned by the filter method will be passed to the original <code class="highlighter-rouge">$.ajax</code> method as parameters.</p> <p>Object <em>references</em> are passed by value in Javascript, which is why you don’t see me doing anything with the <code class="highlighter-rouge">ajaxSettings</code> variable after I’ve modified its <code class="highlighter-rouge">url</code> property. See <a href="">this</a> for more details.</p> <h3 id="context">Context</h3> <p>In the previous example, Middleman will use the <code class="highlighter-rouge">lib</code> object for context when executing the ajax method (in this case Zepto.js). However, Middleman can also take an optional <code class="highlighter-rouge">context</code> parameter that will be used instead, like this:</p> <script src=""> </script> <p>Notice that the context of the <code class="highlighter-rouge">stringify</code> method became <code class="highlighter-rouge">Array</code>. Had I passed <code class="highlighter-rouge">context : Object,</code> instead, the output would have been <code class="highlighter-rouge">[object Array]</code> instead of <code class="highlighter-rouge">a,b,c,d</code>. You’ve probably seen this kind of behavior in other popular libraries, like <a href="">Underscore.js</a>’s <code class="highlighter-rouge">_.bind</code> method.</p> <h3 id="final-word">Final word</h3> <p>Currently there is no way to disable Middleman’s meddling once you’ve called the map function. We’re planning to incorporate an internal registry in a future release that will allow you to inspect the Middleman object to see what methods have been overloaded, and also make it easy for Middleman to flip its filters on or off at a granular level.</p> <ul> <li>Repository - <a href="">Middleman.js</a></li> <li>License - MIT</li> </ul> Mongounit Project Open Sourced Zumba®'s second open source project, Mongounit, has been open sourced. Chris Saylor christopher.saylor@zumba.com 2013-02-10T00:00:00+00:00 <p>Introducing <a href="">Mongounit</a>.</p> <p>Mongounit is a PHPUnit extension modeled after dbunit that allows for fixture-based unit testing of MongoDB-backed code.</p> <p>See an example <a href="">Testcase</a>.</p> <h3 id="more-info">More info</h3> <ul> <li>Repository - <a href="">Mongounit</a></li> <li>License - MIT</li> <li>CI Build - <a href="">Travis CI</a></li> </ul> Mocking Singleton PHP classes with PHPUnit Convenient way to mock singleton classes for easy, reusable testing. Chris Saylor christopher.saylor@zumba.com 2012-11-26T00:00:00+00:00 <p>In many of our projects, utilities and vendor classes are implemented with a <a href="">singleton pattern</a>. If you’re not familiar, a singleton pattern looks something like:</p> <script src=""> </script> <p>In this post, we’ll cover a nice way to inject a PHPUnit mock object for use in testing methods that utilize singleton classes.</p> <h3 id="inception">Inception</h3> <p>First, we need to identify how this sort of mechanism is mocked. The key aspect of the singleton class is the protected static self attribute. This is what we’re most interested in for injecting our mock object. In our example singleton class, the <code class="highlighter-rouge">self</code> attribute is protected (which is usually the case), so we’ll need the use of the reflection class in order to “unprotect” and inject.</p>
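<p>The injection itself takes only a few lines of reflection. A sketch (the class and property names follow the example above; the helper in the gists below wraps the same idea):</p> <div class="highlighter-rouge"><pre class="highlight"><code>// $mockObject is the PHPUnit mock created for Someclass.
// Inject it into the singleton's protected static $self attribute.
$property = new ReflectionProperty('Someclass', 'self');
$property->setAccessible(true); // "unprotect" the static property
$property->setValue(null, $mockObject);
</code></pre> </div>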
<h3 id="mock-class">Mock Class</h3> <p>Now we will look at the code to do the above. First, a sample class that implements our singleton as part of a normal method:</p> <script src=""> </script> <p>In this exercise, we will mock <code class="highlighter-rouge">Someclass</code>’s <code class="highlighter-rouge">hello</code> method to make it return a different string. This is a super-simplified example; normally you would want to do this sort of thing for classes that do network-based things, such as an emailer, REST service connector, etc.</p> <p>Let’s create our mock class:</p> <script src=""> </script> <p>The <code class="highlighter-rouge">expects</code> method creates the mock object and injects that mock object into the <code class="highlighter-rouge">self</code> attribute. The method you test that utilizes the singleton will get the mock object when it calls the <code class="highlighter-rouge">getInstance</code> method.</p> <h3 id="example-use">Example Use</h3> <p>Here is our PHPUnit test case. We instantiate the class we want to test, mock the singleton object, and alter the output of the method.</p> <script src=""> </script> <h3 id="conclusion">Conclusion</h3> <p>This technique is really nice for common utility classes. We typically use it in the <code class="highlighter-rouge">setUp</code> and <code class="highlighter-rouge">tearDown</code> methods, so we can test the other parts of the method that are important for their logic, and leave a single test for the actual functionality of the singleton class we’re mocking.</p> Some CakePHP optimizations A few tips on how to optimize CakePHP applications to get better response times and save money Juan Basso juan.basso@zumba.com 2012-11-05T00:00:00+00:00 <h3 id="architecture">Architecture</h3> <p>All our infrastructure is hosted on Amazon AWS, so some tips are Amazon-only, but others can be reused with other hosting.</p> <ol> <li> <p>Install <a href="">APC</a> or any other opcode cache. It is kind of trivial, but just to make sure.</p> </li> <li>Move things out of the box. We moved all our assets to <a href="">S3</a> and <a href="">CloudFront</a>. It helps in two ways: <ul> <li>it reduces the load on the host boxes (<a href="">EC2</a>), giving the PHP applications more CPU and memory;</li> <li>it allows the browser to make more concurrent requests, since the host is usually different (ok, you can do the same with the same box).</li> </ul> </li> <li>Avoid network requests. They may be fast, but you always have the overhead of the protocol, transmission, etc. I will give an example in the code section.</li> </ol> <h3 id="code">Code</h3> <ol> <li>Store the cake caches locally. All our cache is based on <a href="">Memcached</a>, and the internal cake caches (<code class="highlighter-rouge">_cake_core_</code> and <code class="highlighter-rouge">_cake_model_</code>) weren’t different. But these caches don’t change that often, so why use the network for that? We simply changed these two caches to use APC instead of Memcached. It shaved a couple of milliseconds off each request with a ridiculously small change.</li> <li>Do you change your files often in production? Do you have an automated deployment process? Do you restart your webserver on every deploy?
If you answered yes to these questions, you may disable the <code class="highlighter-rouge">apc.stat</code> configuration.</li> <li>Check your APC configuration. Don’t let APC run out of memory; give it enough to cache all your code.</li> <li>Do you use cake query cache or cached views based on the model? If not, create a method on your <code class="highlighter-rouge">AppModel</code> called <code class="highlighter-rouge">_clearCache</code> and make it return <code class="highlighter-rouge">true</code>. Why? By default, Cake does some inflections and loops through the caches in the <code class="highlighter-rouge">app/tmp/cache/</code> folder to delete the cached files whenever you make a database change operation (INSERT/UPDATE/DELETE). We did this and got a huge improvement. One reason is that our sessions are stored in the database, causing an insert/update on every single request.</li> <li>View cache. You can use the cake view cache or another system, like <a href="">Varnish</a>. Depending on your system, the cake view cache is enough and gives more flexibility, since you can still add some PHP in the caches. Just be careful with restricted pages: sometimes the cache will make them public to anyone with the URL. See more details in the <a href="">CakePHP Documentation</a>.</li> <li><code class="highlighter-rouge">requestAction</code> and elements. Most sites have some dynamic content which needs to render several times, for example the site menu. Usually the content comes from the database, and you have to add the model request to the <code class="highlighter-rouge">AppController</code>. This causes one or more queries per request, which is probably unnecessary on every request, since the content will be the same for a few minutes or even hours. The solution is to use <a href=""><code class="highlighter-rouge">requestAction</code></a> inside the element and use the <a href="">cache element</a>. It is all built into CakePHP and very simple to use. In the end, after the first request (which could be a little slower than usual because of the <code class="highlighter-rouge">requestAction</code>), the code will get the raw html rendered from the cache, saving a good amount of time and CPU from your database.</li> </ol> <h3 id="conclusion">Conclusion</h3> <p>The changes above show a few actions you can take to improve your site’s performance and save money on hosting. There are many more tips, but I think these are the simplest to implement in a short time while still taking advantage of the results.</p> <p>Some of these tips and concepts are valid for non-CakePHP sites as well.</p> Schema.org Product Microdata and Breadcrumbs How to solve the problem of breadcrumbs inside the product page Juan Basso juan.basso@zumba.com 2012-09-19T00:00:00+00:00 <p>This week I was looking to improve our shop <abbr title="Search Engine Optimization">SEO</abbr>, and one of the things was the <a href="">microdata for HTML5</a>, which you can validate with the <a href="">Google Rich Snippets Testing Tool</a>.</p> <p>Looking at the <a href="">Google Webmaster</a> documentation, they recommend using the microdata from <a href="">schema.org</a>, which is a schema made by search providers like Bing, Google, Yahoo! and Yandex. The schema contains a lot of types useful for your website. This works very well in many cases, but sometimes it requires layout changes to fit well.</p> <p>Before I explain the problem, a little description of microdata in HTML5.
Most HTML tags can have the attributes <code class="highlighter-rouge">itemprop</code>, <code class="highlighter-rouge">itemscope</code>, and <code class="highlighter-rouge">itemtype</code>. The <code class="highlighter-rouge">itemprop</code> defines the property name, and usually the content of the tag is the value, for example a name. <code class="highlighter-rouge">itemscope</code> defines a block of properties, for example a product. <code class="highlighter-rouge">itemtype</code> contains the URL with the definition of the <code class="highlighter-rouge">itemscope</code>.</p> <p>It means that if you want to describe a product, you create an <code class="highlighter-rouge">itemscope</code> with, for example, a <code class="highlighter-rouge">div</code> and define all the properties there, like <code class="highlighter-rouge">name</code>, <code class="highlighter-rouge">price</code>, <code class="highlighter-rouge">reviews</code>, etc. It sounds great and worked for almost everything on the page, except one thing: breadcrumbs! The <a href="">Product type</a> does not have any property for categories or breadcrumbs; that is part of the <a href="">WebPage type</a>. But you cannot define a property for multiple scopes, or in a different scope, with microdata.</p> <p>Unfortunately I didn’t find any solution with schema.org alone, so I had to use schema.org together with the deprecated <a href="">data-vocabulary</a>. The code in the end was something similar to this:</p> <script type="text/javascript" src=""> </script> <p>Do you have a better solution using microdata?</p> CakeFest 2012 Experience from our team at CakeFest 2012 Juan Basso juan.basso@zumba.com 2012-09-18T00:00:00+00:00 <p>This post is a little bit late, but two of our team members participated in <a href="">CakeFest 2012</a>. This was a really nice event, with friendly people and great topics.</p> <p>If you missed the event, you can check the slides on <a href="">Joind.in</a>. If you would like to see the pictures, they are on <a href="">Facebook</a>.</p> <p>Thanks, Zumba<sup>®</sup>, for this opportunity. Thanks to everybody who helped organize CakeFest and everybody we met there.</p> Our First Presentation - Javascript the important bits Zumba®'s First Training Presentation - Javascript the important bits Chris Saylor christopher.saylor@zumba.com 2012-09-11T00:00:00+00:00 <p>Part of being a web developer is having to work with multiple languages and multiple technologies. We are in the process of moving all our front-end javascript to <a href="">backbone.js</a>, and I thought it would be a good idea to have a refresher on javascript and jQuery to get the team in the right mindset during this transition.</p> <p>Our first presentation has been put on our public <a href="">slideshare</a> account: <a href="">Javascript: the Important Bits</a>.</p> <div style="text-align: center"> <iframe src="" width="427" height="356" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px" allowfullscreen="allowfullscreen"> </iframe> </div> <p>In the near future, we’re also doing something similar for PHP, including new 5.4 features, namespaces with autoload, and other similar topics.</p> Symbiosis Project Open Sourced Zumba®'s first open source project, Symbiosis, has been open sourced.
Chris Saylor christopher.saylor@zumba.com 2012-08-21T00:00:00+00:00 <p>I’m excited to announce that we have open-sourced our first project: <a href="">Symbiosis</a>. <a href="">Symbiosis</a> is a PHP library that allows for easy integration of an event-driven plugin system into any PHP project. It also allows the event system to be used standalone from the plugin system.</p> <h3 id="plug-me-in">Plug Me In</h3> <p>In a <a href="">previous post</a>, I described how we used a rudimentary event system to modularize our cart system. Now, we’ve taken a big step in making that sort of modularization easy to implement in many of our projects, and hopefully many of yours. There are many differences between our example in the <a href="">previous post</a> and our open-sourced version:</p> <ol> <li>The event that gets passed to the listeners is now an object. Instead of just passing an array of data, we pass an event object. Not only is the data available, but several key flow-control methods as well, such as stopping propagation of the event, suggesting the client prevent further actions, and an easy way to trigger the event.</li> <li>We’ve included the plugin system. Previously, we glossed over how to include the plugin, but with an event-driven mechanism, I realized that this was an important step. The plugin manager included with <a href="">Symbiosis</a> has a couple of key attributes. While developing with a rudimentary version of the plugin manager, I quickly realized that we needed to be able to control the order of some plugins. I’ve added a priority order to the plugin framework that allows you to set the order in which the plugins are executed. I’ve also added the ability to disable plugins at startup.</li> </ol> <h3 id="example-of-symbiosis-in-action">Example of Symbiosis in Action</h3> <p>With a few lines of code, you can see how easy it is to include a plugin system in your PHP project:</p> <p>First we construct our plugin:</p> <script src=""> </script> <noscript>All example code is available in a github gist:</noscript> <p>The plugin must extend <code class="highlighter-rouge">\Zumba\Symbiosis\Framework\Plugin</code> to implement all the proper methods and have default attributes.</p> <p>Next, we register the plugins in the application bootstrap:</p> <script src=""> </script> <p>And finally, somewhere in our application, we need to trigger the event that the plugin is listening for:</p> <script src=""> </script> <p>When the event is triggered, the anonymous function we registered in our sample plugin should execute:</p> <script src=""> </script> <h3 id="conclusion">Conclusion</h3> <p>We found this project useful in our own applications, and we hope that you may find it useful as well. We’ve included installation options via a git submodule, <a href="">download</a>, or a <a href="">composer</a> install. The project has 100% code coverage in <a href="">PHPUnit</a> tests, and we feel comfortable with its stability.</p> <p>You can see more examples and full documentation on the <a href="">Symbiosis Wiki</a>.</p> <p>If you see anything that could be improved, let us know by opening an issue in the github project, or submit a patch.</p> Creating bash completion for your console application Shows how to create a file compatible with bash_completion.d and interact with your application Juan Basso juan.basso@zumba.com 2012-08-20T00:00:00+00:00 <p>The number of console applications in web application environments is growing more and more.
Many frameworks in many languages use them, saving developers and sysops a lot of time performing operations or creating cron jobs.</p> <p>Our company is no different. We use tons of jobs for many operations, and sometimes it is hard to remember the exact names of the jobs. To simplify things a little, we implemented support for <a href="">bash completion</a>.</p> <p>Our jobs have a “starter” (a common executable file which loads the configuration and basic stuff, then runs the target operation), similar to the frameworks. In our case, we call <code class="highlighter-rouge">/app_folder/job ClassName [methodName]</code>, and <code class="highlighter-rouge">job</code> is just a shell script that executes <code class="highlighter-rouge">php job.php $@</code>. The <code class="highlighter-rouge">job.php</code> does the bootstrapping and loads the class name from the first parameter in a specific namespace. For example, our base namespace is <code class="highlighter-rouge">Zumba\Job</code>; if you run <code class="highlighter-rouge">job Engineering</code> it will try to load the class <code class="highlighter-rouge">Zumba\Job\Engineering</code>. By default we call the method <code class="highlighter-rouge">execute</code>, but if you pass the second parameter we try to run that method instead, making one job class reusable for multiple actions.</p> <p>This weekend I saw the <a href="">bash completion for CakePHP</a> from <a href="">Andy Dawson</a>.</p> <p>The first thing you need to do is make your application return the available options for the first and second parameters. This means you will be able to get the available options from the console by executing a command. PS: If your application does not change often, or you don’t want to or can’t update your app, you can hard-code the parameters in the bash completion script.</p> <p>In our case, I could have added a job class to get that information, but I preferred to use a symbolic method and parse it with an <code class="highlighter-rouge">if</code> in the <code class="highlighter-rouge">job.php</code> file. I did it that way to avoid creating a job class for this purpose that would be one more in the list and share the structure of the other methods, which is a little complicated (we have some logging, execution registration, etc.). But feel free to do it the way you want. In the end, you can run my script like <code class="highlighter-rouge">job __check__</code> to get the available classes or <code class="highlighter-rouge">job __check__ SpecificJobClass</code> to get the available methods.</p> <p>The PHP portion is basically this:</p> <script type="text/javascript" src=""></script> <noscript>The code is available on <a href=""></a></noscript> <p>With that, from the console we are able to execute the check (<code class="highlighter-rouge">job __check__</code>) and get the list of available jobs: <code class="highlighter-rouge">Job1 MyCustom MyApp TopSecret StopTheCompany</code></p> <p>This was the first step of the script. The second part is getting a list of the available methods, i.e. when executing <code class="highlighter-rouge">job __check__ StopTheCompany</code> we get something like: <code class="highlighter-rouge">shutdownServers dropDatabase removeAppCode</code></p> <p>Now it is time to make bash interact with your code. You need to create a script in your <code class="highlighter-rouge">/etc/bash_completion.d/</code> folder, with permission 0644.</p>
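<p>The core of it is a completion function plus a <code class="highlighter-rouge">complete</code> registration. A trimmed sketch (the real script below handles a few more details):</p> <div class="highlighter-rouge"><pre class="highlight"><code>_my_application() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    if [ "$COMP_CWORD" -eq 1 ]; then
        # First parameter: complete with the available job classes
        COMPREPLY=( $(compgen -W "$(job __check__)" -- "$cur") )
    elif [ "$COMP_CWORD" -eq 2 ]; then
        # Second parameter: complete with the methods of the chosen class
        COMPREPLY=( $(compgen -W "$(job __check__ "${COMP_WORDS[1]}")" -- "$cur") )
    fi
}
complete -F _my_application job
</code></pre> </div>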
<p>With that, from the console we are able to execute the check (<code class="highlighter-rouge">job __check__</code>) and get the list of available jobs: <code class="highlighter-rouge">Job1 MyCustom MyApp TopSecret StopTheCompany</code></p>
<p>This was the first step of the script. The second part is getting the list of available methods, i.e. when we execute <code class="highlighter-rouge">job __check__ StopTheCompany</code> we get something like: <code class="highlighter-rouge">shutdownServers dropDatabase removeAppCode</code></p>
<p>Now it is time to make bash interact with your code. You need to create a script in your <code class="highlighter-rouge">/etc/bash_completion.d/</code> folder, with permission 0644. It will look like the code below.</p>
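<p>A sketch of that completion script; the function body is a reconstruction of the behavior explained just below, not the original file:</p>
<pre><code>_my_application()
{
    local cur
    cur="${COMP_WORDS[COMP_CWORD]}"
    if [ $COMP_CWORD -eq 1 ]; then
        # First parameter: complete with the available job classes.
        COMPREPLY=( $(compgen -W "$(/app_folder/job __check__)" -- "$cur") )
    elif [ $COMP_CWORD -eq 2 ]; then
        # Second parameter: complete with the methods of the chosen class.
        COMPREPLY=( $(compgen -W "$(/app_folder/job __check__ "${COMP_WORDS[1]}")" -- "$cur") )
    fi
}
complete -F _my_application job
</code></pre>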
<p>To explain a bit: the last line tells bash to complete the executable <code class="highlighter-rouge">job</code> using the function <code class="highlighter-rouge">_my_application</code> (defined a few lines above). The function basically tests the number of parameters and calls your application with the current parameters, transmitting the result to the bash completion. In our case we have just two parameters, as you can see.</p>
<p>After saving the file in <code class="highlighter-rouge">/etc/bash_completion.d</code>, restart your bash/console/terminal and you can try the completion. I guess I don’t need to give examples of how to use it or what it looks like; if you have never used bash completion, this is not the place to get started.</p>
Creating a testing interface for your API Tired of using curl, throwaway scripts, and remembering fields to test your API? Make a simple interface to do it easily. Juan Basso juan.basso@zumba.com 2012-08-16T00:00:00+00:00 <p>A couple of months ago we started an integration with <a href="">Ooyala</a>, using their JSON API. They provide <a href="">documentation</a> for the methods, and I saw they have a simple interface to test them: the <a href="">Ooyala API Scratchpad</a>. This interface was very useful while we integrated with them, and I thought: <em>“Why don’t we have an interface like that for our API?”</em></p>
<p>I started a page with <a href="">Twitter Bootstrap</a> to provide similar functionality. The goal was an interface that makes it easy for developers to set the parameters, see the response for multiple HTTP methods, etc. The initial page looks like this:</p>
<p><img src="" alt="API Test Page" /></p>
<p>For the request, I used <a href="">jQuery</a> AJAX. That means some HTTP methods (usually <code class="highlighter-rouge">PUT</code> and <code class="highlighter-rouge">DELETE</code>) are not supported by old browsers, but we don’t care, as it is an internal tool and everybody uses Chrome or Firefox. Making the request, you get a formatted JSON response, like this:</p>
<p><img src="" alt="API Test Page Response" /></p>
<p>Developers liked this page, but it wasn’t good enough for me. Then I took the response from <a href="">Creating Self Documentation for your API</a> (also through AJAX) and made an autocomplete integrated with the parameter generation, resulting in something like this:</p>
<p><img src="" alt="API Test Page Autocomplete" /> <img src="" alt="API Test Page Parameters" /></p>
<p>This shows how important the annotations are. We get all the API methods, method descriptions, method parameters, HTTP method, etc. from the annotations and generate nice pages like these.</p>
<p>Productivity with this page improved greatly by not having to toggle between the tester and the documentation. And as it is an internal tool, why not complement it by adding a request log to show possible errors or behaviors?! I got some help from <a href="">ChromePHP</a> (which puts log messages in the response header), integrated it with our log system, and then parsed it from the response, showing it in the page as well.</p>
<p><img src="" alt="API Test Page Logs" /></p>
<p>After we had it in our API, we started this blog, and looking at the <a href="">Disqus API</a> I saw they have a similar tool called the <a href="">Disqus Console</a>. Again, it helped me with some integration. If you have an API, make this kind of tool for your clients. They will definitely like it.</p>
<h3 id="conclusion">Conclusion</h3>
<p>I spent one day creating this tool, and people are saving time every day. Sometimes you should take a break and spend some time building simple tools for a long-term gain.</p>
<p>You can see what annotations, some open source projects, and a few lines of code are able to do. Use them! As I said in the annotations post, it is a simple thing with many benefits.</p>
<p>Do you want the code of that page? Sure, why not?! <a href=""></a>.</p>
Tech Team at Zumba Convention 2012 The tech team also has fun in the company and went to the Zumba Instructor Convention 2012 Juan Basso juan.basso@zumba.com 2012-08-13T00:00:00+00:00 <p>This weekend was the <a href="">Zumba® Instructor Convention 2012</a>, a meeting of thousands of <abbr title="Zumba Instructor Network">ZIN™</abbr> members. It happens every year, and the whole home office is invited to attend and participate. It is an experience we have not had at any other company, and it’s a great opportunity to get feedback from our users, and not only from our boss, which sometimes doesn’t reflect reality.</p>
<p>As the convention happened during business days, we closed the IT department and went there to have some fun, exercise (even if it’s just walking around; as you know, most IT people are not dancers, much less professional ones), party, interact, and indulge in some craziness.</p>
<p>With this kind of opportunity it is nice to see that you helped make it happen: people are happy and having fun together because you did your job. It is very gratifying and, my friend, no money can pay for that. Back in the office you can see the results too; just look at people’s faces and behavior. They look more friendly and relaxed, which improves productivity.</p>
<p>Below you can see a picture with a few people from our department at the convention. Yes, we have sleepy faces, because we were at the Daddy Yankee show the previous night.</p>
<p><img src="" alt="IT Team Picture" /> Left to right: Daniel Matsinger, Girish Raut, Cristiano da Silva, Chris Saylor, Ralph Valdes, Juan Basso</p>
<p>If you think it is just one more convention, look at what that convention is like…</p>
<div style="text-align: center" itemprop="video" itemscope="itemscope" itemtype=""> <meta itemprop="url" content="" /> <iframe itemprop="embedUrl" width="560" height="315" src="" frameborder="0" allowfullscreen="allowfullscreen"> </iframe> </div>
Frisby.js or: How I Learned to Stop Worrying and Love the Service Test Using Frisby.js to test API endpoints at the request level. Chris Saylor christopher.saylor@zumba.com 2012-08-11T00:00:00+00:00 <p>Writing a public-facing API can be a daunting task, and making sure your endpoints behave how your customers and partners expect is always a challenge. In this article, I will go over how to use <a href="">NodeJS</a>, <a href="">Frisby.js</a>, and <a href="">Jasmine-node</a> to test your API, and how to involve your customers and partners in the process.</p>
<h3 id="behavior-driven-development">Behavior Driven Development</h3>
<p>Frisby.js and Jasmine are very conducive to the idea of <abbr title="Behavior Driven Development">BDD</abbr>. When creating an API (especially in conjunction with a partner), <abbr title="Behavior Driven Development">BDD</abbr> is a fantastic way to drive coordination between <del>business requirements</del> user stories and code. Frisby.js gives a nice set of methods to turn a user story into a functioning test case.</p>
<h3 id="writing-frisbyjs-test-cases">Writing Frisby.js Test Cases</h3>
<p>Suppose you have a user story about getting a user’s profile by ID and returning some basic information. The test you would write would be similar to this:</p>
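<p>Something like the following, where the endpoint URL and the expected fields are assumptions for illustration:</p>
<pre><code>var frisby = require('frisby');

frisby.create('Get a user profile by ID')
  .get('http://localhost/api/users/1')
  .expectStatus(200)
  .expectHeaderContains('content-type', 'application/json')
  .expectJSON({
    id: 1,
    username: 'jsmith',
    email: 'jsmith@example.com'
  })
  .toss();
</code></pre>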
<p>To execute the above test, run the following from the command line:</p>
<p><code class="highlighter-rouge">jasmine-node --verbose</code></p>
<h3 id="integrating-with-partners-and-customers">Integrating with Partners and Customers</h3>
<p>Communicating problems, changes, and features of an API to a partner or customer can be problematic. Our approach was to separate this kind of testing framework for the API from our internal API repository. This way, we can share the API test repository, allow the tests to show some example uses of the API, and provide an avenue for partners to report an issue.</p>
<p>With the use of <a href="">Github</a>, a partner can fork our testing repo at any time, write a test case for the issue they are having, and submit a pull request. We can then verify by reproducing the issue from their pull request test case, and we truly know that we’ve resolved the partner’s issue when their own test case passes with the fix.</p>
<h3 id="caveats">Caveats</h3>
<p>There is a small drawback to service-level testing (as opposed to unit testing). Since you’ll be making actual requests to your API, you don’t have the luxury of fixtures creating clean databases after the test case is run. So, depending on the data changed in the request, you’d have to chain a cleanup method after the test case is run.</p>
<p>Also, as opposed to unit tests, the failed test will not pinpoint where in the code the error happens, so you’d need to rely on other tools, such as logging, to find the issue in the source.</p>
<h3 id="conclusion">Conclusion</h3>
<p>With a BDD approach and the use of some clever testing tools, it’s easier than ever to test an externally available service for full coverage and to provide a better channel of communication with partners to resolve bugs and implement features.</p>
Creating Self Documentation for your API An idea for how to self-document API endpoints using code Juan Basso juan.basso@zumba.com 2012-08-05T00:00:00+00:00 <p>When we started our internal web service we had a problem between the team that works on the web service application and the front end developers, because the front end developers didn’t know exactly what the expected parameters, endpoints, HTTP verbs, etc. were. This increased the delay in delivering features and in some cases caused a few bugs.</p>
<p>Looking at the problem, <a href="">Chris Saylor</a> had a great idea: use the controller annotations and generate a page from them. He started with a proof of concept, which was very well received by the front end developers, and then Chris and I improved that page’s content and layout.</p>
<p>As part of the content improvement, we started to use our own annotations, similar to what many projects do, e.g.: <a href="">PHPDoc</a>, <a href="">PHPUnit</a>, <a href="">Doctrine</a>, <a href="">Symfony</a>.</p>
<p>We created tags like <code class="highlighter-rouge">@inputParam</code>, <code class="highlighter-rouge">@optionalInputParam</code>, <code class="highlighter-rouge">@returnParam</code>, etc. This helped us create a more useful documentation page, with more relevant information for front end developers, instead of regular PHP documentation that doesn’t help front end developers at all.</p>
<p>In our web service we created a method that responds with HTML or JSON, depending on the <code class="highlighter-rouge">Accept</code> header. If the browser requests the page, it accepts <code class="highlighter-rouge">text/html</code>, and we provide an HTML page using <a href="">Twitter Bootstrap</a> with <a href="">jQuery</a> that makes an AJAX request to the same URL, gets the JSON version (which is what our web service usually responds with), and generates the HTML with all the URIs and parameters required.</p>
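<p>Below is an example of our JSON response. The controller and action names here are hypothetical, but the shape shows how the annotation tags map onto the output:</p>
<pre><code>{
  "UsersController": {
    "view": {
      "description": "Returns basic profile information for a user.",
      "httpMethod": "GET",
      "uri": "/users/view",
      "inputParams": {
        "id": "The user ID"
      },
      "optionalInputParams": {
        "fields": "Comma-separated list of fields to return"
      },
      "returnParams": {
        "username": "The user's login name",
        "email": "The user's email address"
      }
    }
  }
}
</code></pre>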
<p>With jQuery we loop through all the controllers and actions and generate the page. We also have a jQuery-powered search to filter results, making life much easier for front end developers.</p>
<p><img src="" alt="Service Documentation Sample View" /></p>
<p>Before you ask why we separate the HTML into one file and fetch the content via AJAX, the answer is simple. Our web service is designed for performance and always responds with JSON (yes, we could support different outputs, but it is our internal web service and we don’t want to add more complexity). The HTML portion is a static file, not a PHP file. Another reason is to make this method reusable, as we did for our web test interface (which I will describe in another post).</p>
Using Application Events to Hook in Plugins How to use application level events to encapsulate core code from add-on code. Chris Saylor christopher.saylor@zumba.com 2012-08-04T00:00:00+00:00 <h3 id="benefits-of-plugins-versus-adding-the-functionality-in-place">Benefits of Plugins versus Adding the Functionality in Place</h3>
<ul>
<li>It prevents gargantuan classes that handle every situation your software might need to handle.</li>
<li>It allows for easier testing of the individual plugins.</li>
<li>It provides various ways of including the plugins (including ways of filtering in specific situations).</li>
</ul>
<h3 id="event-horizon">Event Horizon</h3>
<p>Application events come in many shapes and sizes. We chose to utilize a static class for “registering” events to callback methods and for triggering them at runtime.</p>
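<p>A minimal sketch of that idea (the class name is illustrative, not our production code):</p>
<pre><code>&lt;?php
class EventRegistry {

    protected static $events = array();

    // Register one or more event tags to a callback.
    public static function register($events, $callback) {
        foreach ((array)$events as $event) {
            static::$events[$event][] = $callback;
        }
    }

    // Trigger all callbacks registered for an event tag.
    public static function trigger($event, $data = array()) {
        if (empty(static::$events[$event])) {
            return;
        }
        foreach (static::$events[$event] as $callback) {
            call_user_func($callback, $data);
        }
    }

}
</code></pre>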
<p>Registering adds a callable entry to the events array per event “tag”. Notice that <code class="highlighter-rouge">$events</code> can be an array, so assigning multiple events to the same callback is very easy to do. When triggering an event, we simply pass an array of data to the callback that was registered, so all our callbacks should have a signature similar to: <code class="highlighter-rouge">public function someCallback($data = array())</code>.</p>
<p>Now we have all the tools we need in order to start registering and triggering events.</p>
<p>Many frameworks have started to provide this kind of application runtime event (which inspired this kind of system), such as <a href="">CakePHP 2.x</a>, as well as ORMs such as <a href="">Doctrine2</a>.</p>
<h3 id="plug-and-play">Plug and Play</h3>
<h3 id="conclusion">Conclusion</h3>
CakePHP and Code Coverage How to ignore CakePHP core files from code coverage in Jenkins when testing the application. Juan Basso juan.basso@zumba.com 2012-08-03T00:00:00+00:00 <p>Recently we started a new API application in <a href="">CakePHP</a> 2.2 and wrote some code and unit tests. When we put the code in <a href="">Jenkins CI</a> for continuous integration, we set up code coverage, and in the reports we got a couple of CakePHP core files listed as not covered.</p>
<p>Our application doesn’t care about covering Cake files, and we also don’t want to include the Cake core tests in our <abbr title="Continuous Integration">CI</abbr>. Leaving these files in CI produces reports saying our code is not well covered, which is not true, and hides the genuinely uncovered code in our app.</p>
<p>Like CakePHP, we also use <a href="">PHPUnit</a> in another project, but there we use the <code class="highlighter-rouge">phpunit.xml</code> configuration file. With the configuration file it is easy to put folders/files on <a href="">black and white lists</a>, but that is not possible when using CakePHP, because it has its own runner system. Our solution was to do it in code: we changed our test suite to filter the Cake core files out of the coverage list.</p>
<p>With that in place you can run in your console: <code class="highlighter-rouge">./Console/cake test app AllApp --coverage-html=./report</code></p>
<p>You can replace <code class="highlighter-rouge">--coverage-html</code> with <code class="highlighter-rouge">--coverage-clover</code> to use in Jenkins (or use both, like in our case).</p>
Kicking Off the Blog This article just explains the goals of this blog and announces open source projects from the company Chris Saylor christopher.saylor@zumba.com 2012-07-21T00:00:00+00:00 <p>Zumba Fitness<sup>®</sup>’s technology department will begin blogging about the technology that we use, the problems that we encounter, and the cool things that are in development.</p>
<p>We’ll also use this as a platform to announce projects that our team has decided to open-source. We look forward to feedback (and hopefully some pull requests) from the community.</p>
http://feeds.feedburner.com/zumba_engineering
CC-MAIN-2017-43
refinedweb
20,667
53.41
Hi all,

I now use an OutputStream decorator like this when calling ImageIO.write():

public class ImageIOBetterOutputStream extends OutputStream {
    private OutputStream out; // the actual stream
    private volatile boolean isActive = true;

    public ImageIOBetterOutputStream(OutputStream out) {
        this.out = out;
    }

    @Override
    public void close() throws IOException {
        if (isActive) {
            isActive = false; // deactivate
            try {
                out.close();
            } finally {
                out = null;
            }
        }
    }

    @Override
    public void flush() throws IOException {
        // do nothing
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        if (isActive) {
            out.write(b, off, len);
        }
    }

    [...] (override the other methods the same way)
}

That way, I don't have to use a ByteArrayOutputStream to buffer the contents in memory, and Tomcat's OutputStream is also protected from future calls to flush() from ImageIO (I just have to make sure that the call to close() is inside a finally block). (If flush() is the only method that ImageIO calls after an IOException, it would probably be enough to override flush() only.) Since I'm using this class as the OutputStream for ImageIO, there also haven't been any more errors.

Does anyone have a clue about this? I can understand recycling objects like the Request/Response objects (e.g. if they contain lots of fields), but at the moment I don't have an idea why it would be useful to recycle OutputStream objects.

Also, is there any hint about this in the Tomcat documentation/wiki? Because I don't think it would be unusual to dynamically generate images and serve them to clients using ImageIO, and I think it could be a bit frustrating to find the actual problem if one doesn't know this. I couldn't find any hints about this in the Tomcat documentation/wiki, but I think it should be mentioned somewhere. Of course, the behavior of ImageIO is strange, but Tomcat's recycling of OutputStreams enables the errors when using ImageIO. If it is desired, maybe I could contribute something on this topic for the docs/wiki?

Regards,
Konstantin Preißer
http://mail-archives.apache.org/mod_mbox/tomcat-users/201107.mbox/%3c02e901cc4af3$a59d9ca0$f0d8d5e0$@preisser@t-online.de%3e
crawl-003
refinedweb
353
53.31
So the first thing is: you cannot use array<> inside a namespace; using [] works just fine.

namespace nsTest {
    array<int> arrgh;
};

Result: "Identifier 'array' is not a data type"

The second thing was found when trying to put one of our modules inside a namespace; either it's me failing at reading the docs, or AS failing at implementing them. Anyway, the code is something like this:

// file1
namespace nsTestTwo {
    shared interface nsIface {
        nsIface@ parent { get; }
    }
}

// file2
namespace nsTestTwo {
    class nsClass : nsIface {
        nsIface@ mommy;

        nsClass( nsIface@ parent ) {
            @this.mommy = parent;
        }

        nsIface@ get_parent() {
            return( @this.mommy );
        }
    }
}

This fails to compile as well, with "Identifier 'nsIface' is not a data type"; the same thing happens when I add any function returning nsIface@ to the namespace above. If I get rid of the namespace only, leaving everything in global scope, everything starts to work.

Checked with r1558.

EDIT: Small correction: "file1" is supposed to be used as a header file for other modules.

Edited by Wipe, 17 February 2013 - 01:36 PM.
http://www.gamedev.net/topic/638946-namespace-problems/
CC-MAIN-2014-10
refinedweb
154
61.06
I have done a simple sitemap.xml for my website:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href=""?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc></loc>
  </url>
</urlset>

and have an XSL stylesheet as follows:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
  <html>
  <body>
    <h2>Site Map</h2>
    <table border="0">
      <xsl:for-each select="urlset/url">
        <tr>
          <td><xsl:value-of select="loc"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </body>
  </html>
</xsl:template>
</xsl:stylesheet>

When I try to display this in my browser it shows the heading "Site Map" but not the link data in the <loc> elements. If I change the xml to remove xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" from the <urlset> element it works fine. I've also tried replacing my very basic xsl file with the google gss.xsl file and have the same problem.

Anybody got any ideas?

Thanks in advance, Rach

I apologize for not seeing this post earlier. I'm not sure (yet) what the issue is. I'll look at it a bit more when I get time.

The for-each loop isn't resolving into anything. My first theory would be a namespace [rpbourret.com] problem. Namespaces tend to be the big blocker in XML.

This will fix it: bind the sitemap namespace to a prefix in the stylesheet and use that prefix in the XPath expressions, because unprefixed names in XSLT 1.0 XPath only match elements that are in no namespace. You can simplify it if you want by making the sitemap namespace the default namespace. Again, apologies for not responding sooner. It's been busy in my day job.

Take care, Rachael

The XML File (Untitled1.xml) and the XSLT File (Untitled2.xsl):
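They would look something like this; the sm prefix name and the example URL are arbitrary:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="Untitled2.xsl"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
  </url>
</urlset>

<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:sm="http://www.sitemaps.org/schemas/sitemap/0.9">
<xsl:template match="/">
  <html>
  <body>
    <h2>Site Map</h2>
    <table border="0">
      <xsl:for-each select="sm:urlset/sm:url">
        <tr>
          <td><xsl:value-of select="sm:loc"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </body>
  </html>
</xsl:template>
</xsl:stylesheet>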
http://www.webmasterworld.com/xml/3423128.htm
CC-MAIN-2014-41
refinedweb
261
86.71
Advanced Namespace Tools blog

18 December 2016

Debugging rcpu out from ANTS service namespace

The Problem

While working on getting my newest node connected to the rest of my grid, I hit a snag. While working in the independent "service namespace" that ANTS creates under the usual userspace, I received errors trying to rcpu out to another node:

/usr/glenda
Write error
Write error
echo: write error: inappropriate use of fd
bind: /mnt/cpunote: '/mnt/cpunote' not found

I tried to rerootwin into the usual userspace, but received the same errors, minus the final line. If I am within the default user namespace (not entered via rerootwin) then rcpu out to other hosts works correctly. rcpu IN to the service namespace also works correctly. So, there is a bug/incompatibility in the current service namespace relative to what the new rcpu expects and needs. Finding and fixing this is the current task.

Debugging process

Discovery: rimport works, but rcpu does not. Because rimport and rcpu share much behavior (both use rconnect to connect to exportfs on the remote host), this eliminates many possible sources of error.

Discovery: when using stub to block off access to /net, the sequence of errors is mostly unchanged. I wanted to make sure the errors I was seeing were originating locally, not on the remote side.

Discovery: the ants service namespace does not place a mntgen on /mnt, using instead a premade list of necessary directories. This is the cause of the /mnt/cpunote error. However, adding the mntgen solves only that error, and makes the behavior in the service namespace the same as the behavior in the rerootwin namespace, which still fails with the same output.

Discovery: the initial line of the output appears to be from the pwd command; it changes to match whatever directory the rcpu command was executed from. In a working use of rcpu, that pwd command isn't printed. I believe it is sent to the remote host in the following line, the only place pwd appears and the final line of the rcpu script, which 'does the business' after the functions and vars have been defined:

exec $connect $host <{dir=`{pwd} pvar dir cmd; builtin whatis server; echo server} client <[10=0] >[11=1] >[12=2]

Discovery: I verified that the old cpu command works as expected from the service or rerootwin namespace.

Discovery: With some debugging statements echoed to a log file, I can see that nothing is being run on the remote side at all; the server function isn't being executed at all there.

Discovery: Even after an auth/newns from the rerootwin namespace to establish a standard namespace, the same errors appear. (Side note: how does auth/newns work with drawterm without using the savedevs/mydevs trick in the manner of rerootwin? I would expect to lose access to the i/o from /mnt/term devices.)

Discovery: rcpu works after rerootwin -f ramboot when using drawterm into the standard namespace. The combination of this fact with the last fact makes me believe that there must be a different issue involved than something just being missing or in the wrong place in the namespace.

Discovery: the actual root of the issue seems to have to do with the /fd directory, but it is still unclear to me just what is going on. I have discovered that shells within the ants service namespace have 3 more fd files visible in /fd. The rcpu script makes some assumptions about what is going on in /fd.
If I increase the fd numbers in the exec $connect command shown above, I no longer see any errors, but I still don't get a working connection; things just fail with no error messages displayed. The modified script fails silently the same way when run from the standard namespace.

Discovery: I played with the fd numbers in the redirects some more. It seems like all the seemingly sensible things I tried (offsetting either or both the left and right side by 3) produced the same kind of silent failure in both standard and nonstandard fd environments.

Discovery: here is what cat /proc/$pid/fd looks like in a working environment:

0 r M 1065 (0000000000000001 0 00) 8192 84 /dev/cons
1 w M 1065 (0000000000000001 0 00) 8192 650 /dev/cons
2 w M 1065 (0000000000000001 0 00) 8192 713 /dev/cons
3 r M 1042 (000000000000890c 1 00) 8192 719 /rc/lib/rcmain
4 rw | 0 (0000000000011241 0 00) 65536 307 #|/data
5 r M 1065 (0000000000000001 0 00) 8192 84 /dev/cons

And here is what it looks like from an environment where rcpu is failing:

0 r M 1066 (0000000000000001 0 00) 8192 154 /dev/cons
1 w M 1066 (0000000000000001 0 00) 8192 1010 /dev/cons
2 w M 1066 (0000000000000001 0 00) 8192 1073 /dev/cons
3 r c 0 (0000000000000002 0 00) 0 0 /dev/cons
4 w c 0 (0000000000000002 0 00) 0 22 /dev/cons
5 w c 0 (0000000000000002 0 00) 0 5 /dev/cons
6 r M 1016 (000000000000890c 1 00) 8192 719 /rc/lib/rcmain
7 rw | 0 (000000000000f501 0 00) 65536 307 #|/data
8 r M 1066 (0000000000000001 0 00) 8192 154 /dev/cons

Something looks quite wrong with fd 3, 4, 5 in that list. The manpage for the dup device and for the proc device don't seem to explain exactly what the 'c' means vs the 'M' - it seems to be "device type", but what is an M device vs a c device? What is creating these 3 'extra' fds which have an iounit of 0? And I still don't get how all this is messing up rcpu.

The Solution

What I had missed in my attempt to fix things by increasing the fd redirection numbers in the exec $connect command was that those file descriptor redirection numbers also appear in the server side fn. With those numbers changed to match, things worked correctly.

The reason I did not think to look at the server side fn was that earlier in debugging, it was not being run on the server at all. With the change to the redirects to use higher fd numbers, it was being run, but without the inputs/outputs matching up correctly, it did nothing.

To fix this in ANTS means I need to add the slightly modified version of the rcpu script with higher numbers to my collection of patched files. I might even ask if the 9front maintainers want to increase these numbers in the standard distribution, because perhaps there are other ways that users might end up with "extra" fds being used in their shell and would run into the same conflict. However, it's probably just me and my weird stuff that is affected, so I'm not really expecting such a change.

[Later edit - there is more to this story; the next blog post describes the rest of the journey.]

Lessons Learned

This debugging process used up many more hours of work than it probably should have, and I only found the answer with help from cinap lenrek. (Thanks cinap!) I had actually put most of the pieces together, but I was failing to check my own server-side debug logs, which would have shown that after the first change of redirection numbers in the connect command, I was now actually reaching the server, just failing to establish the connection correctly after that.
Even though my first, half-fix change caused the error messages to go away, I failed to realize that the failure mode might be different, and I needed to re-examine the knowledge I had gained from earlier in debugging. I made the incorrect assumption that even without the error messages, the server side fn still wasn't executing. The main lesson seems to be: when debugging, once you change one aspect/symptom of the problem, you need to recheck all the data you had gathered previously, because things may have changed and the exact mode of failure could be different. This is obvious when stated like this, but in practice, it is easy to forget about exactly what things you are still assuming to be true.
http://doc.9gridchan.org/blog/161218.rcpu.namespace
CC-MAIN-2017-22
refinedweb
1,372
59.16
List of Articles in C-Sharp

01) Introduction to classes and objects
A class is a blueprint or a template. One or many outcomes of the template are known as objects.

02) Passing parameters to a function by Value and by Reference
In this article we will look at how parameter passing differs when a parameter is passed by value versus by reference.

03) Overloading the casting operators
We know the standard data types supplied by languages, and operators like +, * and % operate on these data types. But what is the case if it is a user-defined type, say a 3dPoint class, which is the combination of three integers? Well, all languages that support operator overloading say: "It is your type. You say how the operator + should work."

04) Interfaces in C#
An interface is a contract. If you are defining an interface, then you are describing a set of rules. A class can follow the rules specified by the interface. When a class fulfills the contract or rules specified by the interface, we say that the class implements the interface.

05) Position based indexers vs. Value based indexers
In this article we will look at how we can implement our own indexer for our collection class. We will have a look at both position and value based indexers.

06) Support Foreach for your Collection class
"foreach" is a loop like "for", "while", etc. When you are using a foreach loop you do not need to worry about where the collection starts and where it ends; that is taken care of by the foreach loop itself. You may already know foreach. Now, let us move on to how we can implement such a foreach for our own collection class.

07) Stack and Queue
Stack and Queue are both collection classes supported by the dot net framework. Queue operates on the First In First Out (FIFO) principle. Stack operates on the Last In First Out (LIFO) principle.

08) Exception Handling Part - 1
An exception is an object of the Exception class, which is exposed by the System namespace. Exceptions are used to avoid system failure happening in an unexpected manner; exception handling deals with the failure situations that may arise. All the exceptions in the dot net framework are derived from the System.Exception class.

09) Exception Handling Part - 2
The Exception class is provided by the dot net framework to handle any exception that occurs, and it is the base class for all other exception classes provided by the framework. The exception object has some important properties. In the previous part we only used the catch block and missed all of that information. The exception object is thrown by the piece of code which raises an exception, and the handler code catches that exception object and makes use of the information packed in it.

10) Delegates Part - 1
A delegate is a reference type just like any other object. When you create an object, memory is allocated for the object on the heap and a reference to it is stored in a reference variable, which is on the stack. Consider the statement below:

Organization Org = new Organization("C# Corner", staff1, staff2, staff3, staff4);

Here, the Organization object is created on the heap memory and a reference to that memory location is stored on the stack, identified by the token Org. Like the Org reference, a delegate reference type will refer to a function address. At runtime the function is loaded into the code segment of memory, just as objects created using the new keyword live on the heap segment.
If we take the starting address of the function (the first line of translated code) in the code segment and store it in a reference variable, we call that reference variable a delegate.
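A minimal C# sketch of the idea (the type and method names are just for illustration, not from the articles themselves):

using System;

class Program
{
    // A delegate type: a reference that can hold the entry point of any
    // method with a matching signature.
    delegate int Transform(int value);

    static int Double(int value) { return value * 2; }

    static void Main()
    {
        Transform t = Double;     // t now refers to Double's address
        Console.WriteLine(t(21)); // prints 42, invoked through the delegate
    }
}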
11) Delegates Part - 2

12) Creating Custom Events
An event is a kind of "something happened". Some examples: a button got pressed; a check mark was removed from a check box. We call these kinds of actions events happening to those controls.

13) Form Fade-out
In this article I will explain how to display a form that fades to fully transparent before it gets closed. Follow the steps below; I will give some explanation where a step requires it.

14) Modal and Modeless forms
In this article I will walk you through how to create different forms and access one form from other forms. While developing, I will help you understand modal and modeless forms and some important control properties.

15) Auto Complete Textbox
The auto complete feature of the text box allows the user to enter part of the details in the text box and completes the rest automatically. Take, for example, a country text box which will fill in the entry India when the first two letters are typed. There are two important ways we can save the typing. One is auto completing by filling in the remaining text, and the other is providing a suggestion in the form of a matching list from which to pick the correct one.

16) Listboxes and how do we use it
In this article we will look at how we use listboxes. First we will start with a simple list box, then we go ahead and deal with checked list boxes. Finally we will learn how to prohibit selecting some particular listbox items.

17) Linklabel and Numericupdown control
LinkLabel is a kind of label control, but it serves the concept of a hyperlink. With this control you can mark a portion of the label text, or even the entire label text, with an underline. Clicking on the underlined text raises the event LinkClicked. By providing the handler for this event you can take an action. The NumericUpDown control is a combination of a text box and a pair of arrows pointing in opposite directions. Clicking an arrow, or holding the mouse pointer on the arrow, will increment or decrement the associated value. The current value is displayed in the text box part of the control. When we click an arrow, the event ValueChanged is raised for taking the action.

18) Picturebox and Progressbar controls
In this article we will explore the PictureBox and ProgressBar controls with an example walkthrough. The PictureBox control is mainly used to display an image. The image type can be bmp, jpg, gif, png, etc. The ProgressBar control is used to show the progress of a long running process visually. I will walk through an example and explain the controls' properties, methods and events when they are used in the application.

19) Stream Reader, Stream Writer with OpenFile and SaveFile dialogs
Here you will see how to use the file open dialog and folder select dialogs. You will also see how we use the StreamReader and StreamWriter classes. We are also going to use the Path utility class, which has static functions to parse strings that come in the form of a file path.

20) Important Project properties that one should know
Here, I will explain some important project properties with examples. I will show you the following in this article:
1) How do you set the output path for your binaries?
2) How do you set version and other file information on your output binary?
3) How do you set an icon for your project?
4) How do you conditionally compile some piece of code?
5) How do you ask the compiler to skip some warning or treat a warning as an error?
6) How do you use Pre-Build and Post-Build events?
7) How do you debug a dll and get a hit on the breakpoint when the dll project is not part of the solution in which the exe resides?

21) Introduction to debugging
Debugging is the process of finding the logical errors in the program by examining the statements during execution. The execution of a program pauses when a breakpoint is hit. Here, I will walk you through debugging the sample application supplied with this article.

22) Some useful debugging windows
In this second part we will still use the same sample that we used in the previous part. Here, we will cover some other important windows that help the process of debugging.

Debugging is not possible on the deployed environment. The dot net framework provides a facility to do tracing on the client end, and you can get the information from it in the form of a flat text file or as an event log file. In this article, we will change the existing sample used in the previous two parts to enable logging to a flat text file as well as to the windows event logger.
In this article we will explore the usage of the container controls. The container controls are treated by the topmost container, say a dialog, as a control. The container controls can hold some other controls. First we will explore the GroupBox container control, then we will move ahead with other container controls.

The container control Panel is what this article is all about. We will see how you use the panel control to group the controls, and then we will explore the important properties of this container along with the example.

The SplitContainer lets you maintain a different set of properties for the panels on both sides of the splitter.

TabControl is a collection of tab pages. Each tab page can act as a separate container, and we all know that a container can hold controls in it. The simple explanation for the tab container is: a container which can hold N containers in the form of book pages. The tab control is useful for grouping the controls based on the information they give or receive. Say, for example, in a credit card transaction screen the personal information can be kept in one tab page, and the actual card details and the amount of the transaction can be kept in another page. In most cases, people go for the tab control when the form is not big enough to fit all the controls. Here we will see how we use the tab container, then we will add some tab pages dynamically. Finally we will explore some important properties of this control.

28) Container Control: FlowLayoutPanel
The FlowLayoutPanel container arranges the controls in a specific predefined order. When the container holding the controls is resized, the controls are automatically rearranged inside the container to maintain the specified flow. In this example we will place some controls inside the flow layout container and see some important properties and methods that control the container content.

A strip is a relatively narrow piece of something. Dot net has three important strip controls, namely MenuStrip, StatusStrip and ToolStrip. In this article we will start with the MenuStrip control.

Just like the MenuStrip control, the ToolStrip control is also a relatively large placeholder where you can place icon buttons, combo boxes, labels, etc. In this article we will look at the usage of the ToolStrip controls.

StatusStrip belongs to the family of strip controls like MenuStrip and ToolStrip, and is usually used to display some quick help and information to the user of the application. The C# StatusStrip control even allows you to add combo boxes and text boxes to the status. In this example we will have a look at how we add a status bar to the application and how we will use it. Then we will look at some of the important properties and how to use them efficiently.

We will see how we can play sound files in C#. Sound can be played in two different ways. One way is using the System.Media namespace, and the other way is using the Windows Media Player ActiveX control...

The BackgroundWorker component is used to perform long running tasks in the background while the application running in the foreground still looks for user events and responds to them. Usually when an application is busy performing a long running task, users of the application cannot interact with the UI elements, because the only existing thread is engaged with that long running task. Here...

The FileSystemWatcher component will track changes in the file system. The changes include file creation, deletion, modification and renaming. In this article we will create a sample that can spy on your file system based on the folder path that you specify.

This article explains publishing the application to a network share path. It also explains how you push application updates to the installed locations. The article will walk you through publishing the application, installing the published application, updating the new version in the published location, and notifying the clients of the new update...

In this article I will walk you through generating a report using the Microsoft supplied report template (.rdlc file). We will get the data from the "Titles" table of the Microsoft supplied "Pubs" database.

What is WMI? WMI stands for Windows Management Instrumentation. WMI is a set of classes living in a separate world of the dot net framework, in the form of wrapper classes around a set of native classes that retrieve hardware information. Say, for example, I want to retrieve information like the remaining battery in my laptop when it is off the power...

This article explains how you perform drawing in GDI+ by drawing on screen with different Pen and Brush configurations. Once you learn this, you can start using pens and brushes easily in your drawing.

This article explains how we update the application configuration file at run time and retrieve the updated content without restarting the application.

This article explains starting a simple process by displaying notepad. Later the article goes on to explain running the batch file process, passing parameters to it, and running it silently. The article ends by showing how to receive an exit notification from the spawned process.

This article explains how we can create our own performance counters using C#.Net with a suitable example. Here, the button clicks are tracked with two custom performance counters.

This tutorial article shows how you use the debugger object to log information to the Output window. It also shows getting call stack trace information and displaying it in the Output window.

This article explains some of the most useful debugger attributes with examples.
Through these attributes you can decide how the debugger should display the class information in the debug windows.

This article shows how to write your own custom debugger visualizers. The article starts with what visualizers are and then walks you through creating a visualizer for the Stack collection class provided by the framework...

This article, with demo videos, explains how one can make use of the below-given method level security actions:
1) Demand Action
2) Link Demand Action
3) Inheritance Demand Action
4) Deny Action
5) PermitOnly Action
6) Assert Action

We know that a long running background processing application will make use of the system tray, as it does not want to interact with you. But in the meantime, as a user you may need to know the status of the background application, what it is doing, or it may need to get some input from you, etc. This article implements a Notify Icon with a simple sample.

The mask property controls the data format and data character validation. Like the text box control, the output from the masked edit is also read from the text property.

34) Using the MaskedTextBox control
35) Using RichTextBox Control in C#
36) Playing Wave Sounds
37) Using BackgroundWorker Component
38) Monitoring a FileSystem Using FileSystemWatcher
39) Making Use of WebBrowser Control
40) Publishing C# Applications
41) Generating RDLC report using dotnet framework
42) Retrieve List of stopped services using WMI Query
43) GDI+ Drawing using Pen and Brush
44) Creating OwnerDrawn ComboBox
45) Reading Data From App.Config File
46) Dynamically updating the App.Config file
47) Running a Batch file as Process
48) Creating Custom Performance counters
49) Using Debugger Break, Log and getting Stack Trace in C-Sharp
50) Debugger Attributes
51) Custom Debugger Visualizers
52) CAS - SecurityAction at assembly level
53) CAS - SecurityAction at Method level

Excellent site. Thanks for teaching and sharing your C# knowledge.

Nice interesting information on the various concepts of the C# language using objects and classes.

Nice interesting information.
http://www.mstecharticles.com/p/csharp-articles-and-tutorials.html
CC-MAIN-2018-13
refinedweb
2,804
62.48
nawk (1) - Linux Man Pages

nawk: pattern scanning and processing language

NAME
gawk - pattern scanning and processing language

SYNOPSIS
gawk [ POSIX or GNU style options ] -f program-file [ -- ] file ...
gawk [ POSIX or GNU style options ] [ -- ] program-text file ...

DESCRIPTION
Gawk is the GNU Project's implementation of the AWK programming language. It provides the additional features found in the current version of Brian Kernighan's awk and numerous GNU-specific extensions. The command line consists of options to gawk itself, the AWK program text (if not supplied via the -f or --include options), and values to be made available in the ARGC and ARGV pre-defined AWK variables.

Arguments to long options are either joined with the option by an = sign, with no intervening spaces, or they may be provided in the next command line argument. Long options may be abbreviated, as long as the abbreviation remains unique.

Files read with -f are treated as if they begin with an implicit @namespace "awk" statement.

- -D[file], --debug[=file] - Enable debugging of AWK programs. By default, the debugger reads commands interactively from the keyboard (standard input). The optional file argument specifies a file with a list of commands for the debugger to execute non-interactively.

- -e program-text, --source program-text - Use program-text as AWK program source code. This option allows the easy intermixing of library functions (used via the -f and --include options) with source code entered on the command line. It is intended primarily for medium to large AWK programs used in shell scripts. Each argument supplied via -e is treated as if it begins with an implicit @namespace "awk" statement.

- -i include-file, --include include-file - Load an AWK source library. Files read with --include are treated as if they begin with an implicit @namespace "awk" statement.

- -l lib, --load lib - Load a gawk extension from the shared library lib.

- -L [value], --lint[=value] - Provide warnings about constructs that are dubious or non-portable to other AWK implementations. With an optional argument of no-ext, warnings about gawk extensions are disabled.

- -M, --bignum - Force arbitrary precision arithmetic on numbers. This option has no effect if gawk is not compiled to use the GNU MPFR and GMP libraries. (In such a case, gawk issues a warning.)

- -n, --non-decimal-data - Recognize octal and hexadecimal values in input data. Use this option with great caution!

- -N, --use-lc-numeric - Force gawk to use the locale's decimal point character when parsing input data.

- -o[file], --pretty-print[=file] - Output a pretty printed version of the program to file. If no file is provided, gawk uses a file named awkprof.out in the current directory. This option implies --no-optimize.

- -O, --optimize - Enable gawk's default optimizations upon the internal representation of the program. Currently, this just includes simple constant folding. This option is on by default.

- -p[prof-file], --profile[=prof-file] - Start a profiling session, and send the profiling data to prof-file. The default is awkprof.out. The profile contains execution counts of each statement in the program in the left margin and function call counts for each user-defined function. This option implies --no-optimize.

- -P, --posix - This turns on compatibility mode, with the following additional restrictions: \x escape sequences are not recognized; you cannot continue lines after ? and :; the synonym func for the keyword function is not recognized; and the operators ** and **= cannot be used in place of ^ and ^=.

- -s, --no-optimize - Disable gawk's default optimizations upon the internal representation of the program.

- -S, --sandbox - Run gawk in sandbox mode, disabling the system() function, input redirection with getline, output redirection with print and printf, and loading dynamic extensions.

For POSIX compatibility, the -W option may be used, followed by the name of a long option.

AWK PROGRAM EXECUTION
An AWK program consists of a sequence of optional directives, pattern-action statements, and optional function definitions.
@include "filename" @load "filename" @namespace "name" pattern addition, lines beginning with @include may be used to include other source files into your program, making library use even easier. This is equivalent to using the --include option. Lines beginning with @load may be used to load extension functions into your program. This is equivalent to using the --load option. The environment variable AWKPATH specifies a search path to use when finding source files named with the -f and --include --load-1]). pattern in the AWK program. For each pattern that the record matches, gawk executes the associated action. The patterns are tested in the order they occur in the program. Finally, after all the input is exhausted, gawk executes the code in the END rule. Additionally, gawk allows variables to have regular-expression type. AWK also has one dimensional arrays; arrays with multiple dimensions may be simulated. Gawk provides true arrays of arrays; see Arrays, below. Several pre-defined variables are set as a program runs; these empty. Each field width may optionally be preceded by a colon-separated value specifying the number of characters to skip before the field starts. The value of FS is ignored. Assigning a new value to FS or FPAT overrides the use of FIELDWIDTHS., including leading and trailing whitespace. values,"). In POSIX mode, changing this array does not affect the environment seen by programs which gawk spawns via redirection or the system() function. Otherwise, gawk updates its real environment so that programs it spawns see the changes. - ERRNO - If a system error occurs either doing a redirection for getline, during a read for getline, or during a close(), then ERRNO is set to a string describing the error. The value is subject to translation in non-English locales. If the string in ERRNO corresponds to a system error in the errno(3) variable, then the numeric value can be found in PROCINFO[errno]. For non-system errors, PROCINFO[errno] will be zero. - FIELDWIDTHS - A whitespace-separated list of field widths. When set, gawk parses the input into fields of fixed width, instead of using the value of the FS variable as the field separator. Each field width may optionally be preceded by a colon-separated value specifying the number of characters to skip before the field starts. See Fields, above. - FILENAME - The name of the current input file. If no files are specified on the command line, the value of FILENAME is ``-''. However, FILENAME is undefined inside the BEGIN rule FS(),. - PREC - The working precision of arbitrary precision floating-point numbers, 53 by default. -["argv"] - The command line arguments as received by gawk at the C-language level. The subscripts start from zero. - PROCINFO["egid"] - The value of the getegid(2) system call. - PROCINFO["errno"] - The value of errno(3) when ERRNO is set to the associated error message. - PROCINFO["euid"] - The value of the geteuid(2) system call. - PROCINFO["FS"] - "FS" if field splitting with FS is in effect, "FPAT" if field splitting with FPAT is in effect, "FIELDWIDTHS" if field splitting with FIELDWIDTHS is in effect, or "API" if API input parser field splitting --load. - "scalar" - The identifier is a scalar. - "untyped" - The identifier is untyped (could be used as a scalar or array, gawk doesn't know yet). - "user" - The identifier is a user-defined function. - PROCINFO["pgrpid"] - The value of the getpgrp(2) system call. - PROCINFO["pid"] - The value of the getpid(2) system call. 
- PROCINFO["platform"] - A string indicating the platform for which gawk was compiled. It is one of: - "djgpp", "mingw" - Microsoft Windows, using either DJGPP, or MinGW, respectively. - "os2" - OS/2. - "posix" - GNU/Linux, Cygwin, Mac OS X, and legacy Unix systems. - "vms" - OpenVMS or Vax/VMS. - PROCINFO["ppid"] - The value of the getppid(2) system call. - PROCINFO["strftime"] - The default time format string for strftime(). Changing its value affects how strftime() formats time values when called with no arguments. - GMP["NONFATAL"] - If this exists, then I/O errors for all redirections become nonfatal. - PROCINFO["name", "NONFATAL"] - Make I/O errors for name be nonfatal. -["input", "RETRY"] - If an I/O error that may be retried occurs when reading data from input, and this array entry exists, then getline returns -2 instead of following the default behavior of returning -1 and configuring input to return no further data. An I/O error that may be retried is one where errno(3) has the value EAGAIN, EWOULDBLOCK, EINTR, or ETIMEDOUT. This may be useful in conjunction with PROCINFO["input", "READ_TIMEOUT"] or in situations where a file descriptor has been configured to behave in a non-blocking fashion. - (as a string). - ROUNDMODE - The rounding mode to use for arbitrary precision arithmetic on numbers, by default "N" (IEEE-754 roundTiesToEven mode). The accepted values are: - "A" or "a" - for rounding away from zero. These are only available if your version of the GNU MPFR library supports rounding away from zero. - "D" or "d" - for roundTowardNegative. - "N" or "n" - for roundTiesToEven. - "U" or "u" - for roundTowardPositive. - "Z" or "z" - for roundTowardZero. - string used to separate multiple typeof() function may be used to test if an element in SYMTAB is an array. You may not use the delete statement with the SYMTAB array, nor assign to elements with an index that is not a variable name. -. However, the (i, j) in array construct only works in tests, not in for loops.. NamespacesGawk provides a simple namespace facility to help work around the fact that all variables in AWK are global. A qualified name consists of a two simple identifiers joined by a double colon (::). The left-hand identifier represents the namespace and the right-hand identifier is the variable within it. All simple (non-qualified) names are considered to be in the ``current'' namespace; the default namespace is awk. However, simple identifiers consisting solely of uppercase letters are forced into the awk namespace, even if the current namespace is different. You change the current namespace with an @namespace "name" directive. The standard predefined builtin function names may not be used as namespace names. The names of additional functions provided by gawk may be used as namespace names or as simple identifiers in other namespaces. For more details, see GAWK: Effective AWK Programming. Variable Typing And Conversion Variables and fields may be (floating point) numbers, or strings, or both. They may also be regular expressions. How the value of a variable is interpreted depends upon its context. If used in a numeric expression, it will be treated as a number; if used as a string it will be treated as a string. To force a variable to be treated as a number, add zero to it; to force it to be treated as a string, concatenate it with the null string. Uninitialized variables have the numeric value zero and the string value "" (the null, or empty,". NOTE: When operating in POSIX mode (such as with the --posix. 
Variable Typing And Conversion
Variables and fields may be (floating point) numbers, or strings, or both. They may also be regular expressions. How the value of a variable is interpreted depends upon its context. If used in a numeric expression, it will be treated as a number; if used as a string it will be treated as a string. To force a variable to be treated as a number, add zero to it; to force it to be treated as a string, concatenate it with the null string. Uninitialized variables have the numeric value zero and the string value "" (the null, or empty, string).

Octal and Hexadecimal Constants
You may use C-style octal and hexadecimal constants in your AWK program source code. For example, the octal value 011 is equal to decimal 9, and the hexadecimal value 0x11 is equal to decimal 17.

String Constants
String constants in AWK are sequences of characters enclosed between double quotes. Within strings, certain escape sequences are recognized, as in C. Among them:

- \xhh - The character represented by the hexadecimal digits following the \x. Up to two following hexadecimal digits are considered part of the escape sequence. E.g., "\x1B" is the ASCII ESC (escape) character.
- \ddd - The character represented by the 1-, 2-, or 3-digit sequence of octal digits. E.g., "\033" is the ASCII ESC (escape) character.
- \c - The literal character c.

In compatibility mode, the characters represented by octal and hexadecimal escape sequences are treated literally when used in regular expression constants. Thus, /a\52b/ is equivalent to /a\*b/.

Regexp Constants
A regular expression constant is a sequence of characters enclosed between forward slashes (like /value/). Regular expression matching is described more fully below; see Regular Expressions. The escape sequences described earlier may also be used inside constant regular expressions (e.g., /[ \t\f\n\r\v]/ matches whitespace characters).

Gawk provides strongly typed regular expression constants. These are written with a leading @ symbol (like so: @/value/). Such constants may be assigned to scalars (variables, array elements) and passed to user-defined functions. Variables that have been so assigned have regular expression type.

PATTERNS AND ACTIONS
AWK is a line-oriented language. The pattern comes first, and then the action. Action statements are enclosed in { and }. Either the pattern may be missing, or the action may be missing, but, of course, not both. If the pattern is missing, the action executes for every single record of input. A missing action is equivalent to { print } which prints the entire record.

Comments begin with the # character, and continue until the end of the line. Empty lines may be used to separate statements. Normally, a statement ends with a newline, but lines may be continued by ending them with a ``\''. However, a ``\'' after a # is not special. Multiple statements may be put on one line by separating them with a ``;''. This applies to both the statements within the action part of a pattern-action pair (the usual case), and to the pattern-action statements themselves.

Patterns
AWK patterns may be BEGIN, END, BEGINFILE, ENDFILE, a regular expression, a relational expression, a combination of patterns using && || and !, or a pattern range (pattern1, pattern2). BEGIN and END are two special kinds of patterns which are not tested against the input; the action parts of all BEGIN rules are executed before any input is read, and the action parts of all END rules are executed after all the input is exhausted. BEGIN and END patterns cannot have missing action parts.

BEGINFILE and ENDFILE are additional special patterns whose actions are executed before reading the first record of each command-line input file and after reading the last record of each file. Inside the BEGINFILE rule, the value of ERRNO is the empty string if the file was opened successfully; otherwise there is some problem with the file, and the code should use nextfile to skip it.

Regular Expressions
Regular expressions are the extended kind found in egrep. Among their components:

- [abc...] - A character list: matches any of the characters abc.... To include a literal dash in the list, put it first or last.
- [^abc...] - A negated character list: matches any character except abc....
- \y - Matches the empty string at either the beginning or the end of a word.
- \B - Matches the empty string within a word.
- \< - Matches the empty string at the beginning of a word.
- \> - Matches the empty string at the end of a word.

The escape sequences described earlier under String Constants are also valid in regular expressions.

Character classes are a feature introduced in the POSIX standard. A character class is a special notation for describing lists of characters that have a specific attribute, where the actual characters themselves can vary from country to country and/or from character set to character set. Among the character classes:

- [:punct:] - Punctuation characters (characters that are not letters, digits, control characters, or space characters).
- [:space:] - Space characters (such as space, tab, and formfeed, to name a few).
- [:upper:] - Uppercase alphabetic characters.
- [:xdigit:] - Characters that are hexadecimal digits.

Actions
Action statements are enclosed in braces, { and }. The control statements are as follows:

if (condition) statement [ else statement ]
while (condition) statement
do statement while (condition)
for (expr1; expr2; expr3) statement
for (var in array) statement
break
continue
delete array[index]
delete array
exit [ expression ]
{ statements }
switch (expression) { case value|regex : statement ... [ default: statement ] }

I/O Statements
The input/output statements are as follows:

- close(file [, how]) - Close a file, pipe or coprocess. The optional how should only be used when closing one end of a two-way pipe to a coprocess. It must be a string value, either "to" or "from".
- getline - Set $0 from the next input record; set NF, NR, FNR, RT.
- getline <file - Set $0 from the next record of file; set NF, RT.
- getline var - Set var from the next input record; set NR, FNR, RT.
- getline var <file - Set var from the next record of file; set RT.
- command | getline [var] - Run command, piping the output either into $0 or var, as above, and RT.
- command |& getline [var] - Run command as a coprocess piping the output either into $0 or var, as above, and RT. Coprocesses are a gawk extension. (The command can also be a socket. See the subsection Special File Names, below.) - Stop processing the current input record. Read the next input record and start processing over with the first pattern in the AWK program. Upon reaching the end of the input data, execute any END rule(s). - nextfile - Stop processing the current input file. The next input record read comes from the next input file. Update FILENAME and ARGIND, reset FNR to 1, and start processing over with the first pattern in the AWK program. Upon reaching the end of the input data, execute any ENDFILE and GAWK: Effective AWK Programming - Append output to the file. - print ... | command - Write on a pipe. - print ... |& command - Send data to a coprocess or socket. (See also the subsection Special File Names, below.) The getline command returns 1 on success, zero on end of file, and -1 on an error. If the errno(3) value indicates that the I/O operation may be retried, and PROCINFO["input", "RETRY"] is set, then -2 is returned instead of -1, and further calls to getline may be attempted. Upon an error, ERRNO is set to a string describing the problem. NOTE: Failure in opening a two-way socket results in a non-fatal error being returned to the calling function. If using a pipe, coprocess, or socket to getline, or from print or printf within a loop, you must use close() to create new instances of the command or socket. AWK does not automatically close pipes, sockets, or coprocesses when they return EOF. The printf Statement The AWK versions of the printf statement and sprintf() function (see below) accept the following conversion specification formats: - %a, %A - A floating point number of the form [-]0xh.hhhhp+-dd (C99 hexadecimal floating point format). For %A, uppercase letters are used instead of lowercase ones. - system library supports it, %F is available as well. This is like %f, but uses capital letters for special ``not a number'' and ``infinity'' values. If %F is not available, gawk uses %f. - , indicating that, significant digits. For the %d, %i, %o, %u, %x, and %X formats, it specifies the minimum number of digits to print. For the %s format, it specifies the maximum number of characters from the string that should be printed. The dynamic width and prec capabilities of the ISO: - - - The standard input. - |& coprocess operator for creating TCP/IP network connections: - - /inet/tcp/lport/rhost/rport - - ). Usable only with the |& two-way I/O operator. - - zero and one, previously for as, use $0 ampersands and backslashes in the replacement text of sub(), gsub(), and gensub().) - index(s, t) - Return the index of the string t in the string s, or zero zero zero'th element of a contains the portion of s matched by the entire regular expression r. possibly null separator that appeared after a[i]. The value of seps[0] is the possibly null leading separator. If r is omitted, FPAT is used instead. The arrays a and seps are cleared first. Splitting behaves identically to field splitting with FPAT, described above. -. In particular, if r is a single-character string, that string acts as the separator, even if it happens to be a regular expression metacharacter. -. Return either zero or one. -Since one of the primary uses of AWK programs is processing log files that contain time stamp information, gawk provides the following functions for obtaining time stamps and formatting them. 
- mktime(datespec [, utc-flag]) - Turn datespec into a time stamp of the same form as returned by systime(), and return the result.. If utc-flag is present and is non-zero or non-null, the time is assumed to be in the UTC time zone; otherwise, the time is assumed to be in the local time zone. If the DST. NOTE: Passing negative operands to any of these functions causes a fatal error.sThe following functions provide type related information about their arguments. - isarray(x) - Return true if x is an array, false otherwise. This function is mainly for use with the elements of multidimensional arrays and with function parameters. - typeof(x) - Return a string indicating the type of x. The string will be one of "array", "number", "regexp", "string", "strnum", "unassigned", or "undefined". Internationalization FunctionsThe following functions may be used from within your AWK program for translating strings at run-time. For full details, see GAWK: Effective AWK Programming. - bindtextdomain(directory [, domain]) - Specify the directory where gawk looks for the .gmo execute functions written in C or C++ was run. Applications came to depend on this ``feature.'' When awk was changed to match its documentation, the -v option for assigning variables before program execution was added to accommodate applications that depended upon the old behavior. (This feature was agreed upon by both the Bell Laboratories developers and the GNU developers.). - • - The ability to continue lines after ? and :. - • - Octal and hexadecimal constants in AWK programs. - • - The ARGIND, BINMODE, ERRNO, LINT, PREC, ROUNDMODE, RT and TEXTDOMAIN variables are not special. - • - The IGNORECASE variable and its side-effects are not available. - • - The FIELDWIDTHS variable and fixed-width field splitting. - • - The FPAT variable and field splitting based on field values. - • - The FUNCTAB, SYMTAB, and PROCINFO arrays are not available. - • - The use of RS as a regular expression. - • - The special file names available for I/O redirection are not recognized. - • - The |& operator for creating coprocesses. -. - • - Non-fatal I/O. - • - Retryable I/O. coprocess VARIABLESThe AWKPATH environment variable can be used to provide a list of directories that gawk searches when looking for files named via the -f, --file, -i and --include options, and the @include directive. the interval between retries.IfThis man page documents gawk, version 5.0. AUTHORST. [at] setting. Similarly, do NOT use a web forum (such as Stack Overflow) for reporting bugs. Instead, please use the electronic mail addresses given above. Really. maintainer. BUGSThe -F option is not necessary given the command line variable assignment feature; it remains only for backwards compatibility.") }' ACKNOWLEDGEMENTSBrian Kernighan provided valuable assistance during testing and debugging. We thank him. COPYING PERMISSIONSCopyright © 1989, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2001, 2002, 2003, 2004, 2005, 2007, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019,. SEE ALSOegrep(1), sed(1), getpid(2), getppid(2), getpgrp(2), getuid(2), geteuid(2), getgid(2), getegid(2), getgroups(2), printf(3), strftime(3), usleep(3) The AWK Programming Language, Alfred V. Aho, Brian W. Kernighan, Peter J. Weinberger, Addison-Wesley, 1988. ISBN 0-201-07981-X. GAWK: Effective AWK Programming, Edition 5.0, shipped with the gawk source. The current version of this document is available online at. The GNU gettext documentation, available online at.
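To make the reference material above concrete, a few classic one-liners in the spirit of the manual's examples (reproduced from memory, so treat them as a sketch rather than the verbatim manual text):
gawk 'BEGIN { FS = ":" } { print $1 | "sort" }' /etc/passwd      # print sorted login names
gawk '{ nlines++ } END { print nlines }' somefile                # count lines in a file
gawk 'BEGIN { print strftime("%Y-%m-%d %H:%M:%S", systime()) }'  # print the current time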
https://www.systutorials.com/docs/linux/man/1-nawk/
CC-MAIN-2020-40
refinedweb
3,379
58.48
This post describes how to use the Service Bus brokered messaging features in a way to yield best performance. You can find more details in the full article on MSDN. Use the Service Bus Client Protocol The Service Bus supports the Service Bus client protocol and HTTP. The Service Bus client protocol is more efficient, because it maintains the connection to the Service Bus service as long as the message factory exists. It also implements batching and prefetching. The Service Bus client protocol is available for .NET applications using the .NET managed API. Whenever possible, connect to the Service Bus via the Service Bus client protocol. Reuse Factories and Clients Service Bus client objects, such as QueueClient or MessageSender, are created through a MessagingFactory, which also provides internal management of connections. When using the Service Bus client protocol avoid closing any messaging factories and queue, topic and subscription clients after sending a message and then recreating them when sending the next message. Instead, use the factory and client for multiple operations. Closing a messaging factory deletes the connection to the Service Bus. Establishing a connection is an expensive operation. Use Concurrent Operations Performing an operation (send, receive, delete, etc.) takes a certain amount of time. This time includes the processing of the operation by the Service Bus service as well as the latency of the request and the reply. To increase the number of operations per time, operations should be executed concurrently. This is particularly true if the latency of the data exchange between the client and the datacenter that hosts the Service Bus namespace is large. Executing multiple operations concurrently can be done in several different ways: Asynchronous operations. The client pipelines operations by performing asynchronous operations. The next request is started before the previous request completes. Multiple factories. All clients (senders as well. Use client-side batching Client-side batching allows a queue/topic client to batch multiple send operations into a single request. It also allows a queue/subscription client to batch multiple Complete requests into a single request. By default, a client uses a batch interval of 20ms. You can change the batch interval by setting MessagingFactorySettings.NetMessagingTransportSettings.BatchFlushInterval before creating the messaging factory. This setting affects all clients that are created by this factory. MessagingFactorySettings mfs = new MessagingFactorySettings(); mfs.TokenProvider = tokenProvider; mfs.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromSeconds(0.05); MessagingFactory messagingFactory = MessagingFactory.Create(namespaceUri, mfs); For low-throughput, low-latency scenarios you want to disable batching. To do so, set the batch flush interval to 0. For high-throughput scenarios, increase the batching interval to 50ms. If multiple senders are used, increase the batching interval to 100ms. Batching is only available for asynchronous Send and Complete operations. Synchronous operations are immediately sent to the Service Bus service. Batching does not occur for Peek or Receive operations, nor does batching occur across clients. Use batched store access To increase the throughput of a queue/topic/subscription, the Service Bus service batches multiple messages when writing to its internal store. If enabled on a queue or topic, writing messages into the store will be batched. 
If enabled on a queue or subscription, deleting messages from the store will be batched. Batched store access only affects Send and Complete operations; receive operations are not affected. When creating a new queue, topic or subscription, batched store access is enabled with a batch interval is 20ms. For low-throughput, low-latency scenarios you want to disable batched store access by setting QueueDescription.EnableBatchedOperations to false before creating the entity. QueueDescription qd = new QueueDescription(); qd.EnableBatchedOperations = false; Queue q = namespaceManager.CreateQueue(qd); Use prefetching Prefetching causes the queue/subscription client to load additional messages from the service when performing a receive operation. The client stores these messages in a local cache. The QueueClient.PrefetchCount and SubscriptionClient.PrefetchCount values specify the number of messages that can be prefetched. Each client that enables prefetching maintains its own cache. A cache is not shared across clients. Service Bus locks any prefetched messages so that prefetched messages cannot be received by a different receiver. If the receiver fails to complete the message before the lock expires, the message becomes available to other receivers. The prefetched copy of the message remains in the cache. The receiver will receive an exception when it tries to complete the expired cached copy of the message. To prevent the consumption of expired messages, the cache size must be smaller than the number of messages that can be consumed by a client within the lock timeout interval. When using the default lock expiration of 60 seconds, a good value for SubscriptionClient.PrefetchCount is 20 times the maximum processing rates of all receivers of the factory. If, for example, a factory creates 3 receivers and each receiver can process up to 10 messages per second, the prefetch count should not exceed 20*3*10 = 600. By default, QueueClient.PrefetchCount is set to 0, which means that no additional messages are fetched from the service. Enable prefetching if receivers consume messages at a high rate. In low-latency scenarios, enable prefetching if a single client consumes messages from the queue or subscription. If multiple clients are used, set the prefetch count to 0. By doing so, the second client can receive the second message while the first client is still processing the first message. Read the full article on MSDN.
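Pulling the send-side recommendations together in one sketch (the namespace URI, token provider, and queue name are placeholders; SendAsync requires a recent release of the .NET Service Bus client, older ones expose BeginSend/EndSend instead):
public async Task SendBatchAsync(Uri namespaceUri, TokenProvider tokenProvider)
{
    // Create the factory once and reuse it; it owns the connection.
    MessagingFactorySettings settings = new MessagingFactorySettings();
    settings.TokenProvider = tokenProvider;
    settings.NetMessagingTransportSettings.BatchFlushInterval = TimeSpan.FromMilliseconds(50);
    MessagingFactory factory = MessagingFactory.Create(namespaceUri, settings);

    QueueClient client = factory.CreateQueueClient("myqueue");
    client.PrefetchCount = 600; // receive side only; size it per the lock-timeout rule above

    // Pipeline sends instead of awaiting each one individually.
    List<Task> pending = new List<Task>();
    for (int i = 0; i < 100; i++)
    {
        pending.Add(client.SendAsync(new BrokeredMessage("message " + i)));
    }
    await Task.WhenAll(pending);
}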
https://azure.microsoft.com/tr-tr/blog/new-article-best-practices-for-performance-improvements-using-service-bus-brokered-messaging/
CC-MAIN-2017-34
refinedweb
896
50.33
Ajay Abhyankar wrote:
> I was trying cElementTree for reading and updating an xml file. I am
> using iterparse to parse and make relevant changes to the xml as required.
> Everything works very fine till I use a valid xml namespace in xml file.
> It is not giving any problems in manipulation of file content, but only
> changes the namespace prefix on its own to something like "ns0" and
> retains the original URL, when the file is written back after updates.
> Can the namespace prefix be retained after manipulation? Am I doing
> something wrong or have I missed out on something.
> Please help to understand and solve the problem.

the standard ET parser throws away the prefix, and the standard serializer
generates new prefixes on the fly. for many applications, this is not a
problem -- it's the namespace URL that matters in XML, not the prefix.

if you want to preserve namespaces under stock ET, your best bet is to use
iterparse's namespace events to collect prefix information, and either
update the _namespace_map dictionary:

    from elementtree import ElementTree

    # undocumented, guaranteed to be supported in all 1.2 releases
    ElementTree._namespace_map[url] = prefix

    ... the serializer now maps {url}foo to prefix:foo, for all url/prefix
    ... pairs in the namespace map

or use a custom serializer (or a postprocessing step).

hope this helps!

</F>
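A concrete sketch of the approach described above, using iterparse namespace events (the file names are placeholders):
from elementtree import ElementTree  # with the stdlib: from xml.etree import ElementTree

events = ElementTree.iterparse("data.xml", events=("start-ns", "start", "end"))
root = None
for event, payload in events:
    if event == "start-ns":
        prefix, url = payload
        # remember the document's own prefix for this namespace URL
        ElementTree._namespace_map[url] = prefix
    elif event == "start" and root is None:
        root = payload  # keep a handle on the root element

# ... manipulate the tree here ...
ElementTree.ElementTree(root).write("out.xml")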
https://mail.python.org/pipermail/xml-sig/2006-February/011469.html
CC-MAIN-2016-44
refinedweb
228
57.67
-- Copyright (C) 2009-2011 Petr Rockai -- -- BSD3 {-# LANGUAGE ScopedTypeVariables, MultiParamTypeClasses, FlexibleInstances, BangPatterns #-} -- | The abstract representation of a Tree and useful abstract utilities to -- handle those. module Darcs.Util.Tree ( Tree, Blob(..), TreeItem(..), ItemType(..), Hash(..) , makeTree, makeTreeWithHash, emptyTree, emptyBlob, makeBlob, makeBlobBS -- * Unfolding stubbed (lazy) Trees. -- -- | By default, Tree obtained by a read function is stubbed: it will -- contain Stub items that need to be executed in order to access the -- respective subtrees. 'expand' will produce an unstubbed Tree. , expandUpdate, expand, expandPath, checkExpand -- * Tree access and lookup. , items, list, listImmediate, treeHash , lookup, find, findFile, findTree, itemHash, itemType , zipCommonFiles, zipFiles, zipTrees, diffTrees -- * Files (Blobs). , readBlob -- * Filtering trees. , FilterTree(..), restrict -- * Manipulating trees. , modifyTree, updateTree, partiallyUpdateTree, updateSubtrees, overlay , addMissingHashes ) where import Prelude () import Darcs.Prelude hiding ( filter ) import Control.Exception( catch, IOException ) import Darcs.Util.Path import Darcs.Util.Hash import qualified Data.ByteString.Lazy as BL import qualified Data.ByteString as B import qualified Data.Map as M import Data.Maybe( catMaybes, isNothing ) import Data.Either( lefts, rights ) import Data.List( union, sort ) import Control.Monad( filterM ) -------------------------------- -- Tree, Blob and friends -- data Blob m = Blob !(m BL.ByteString) !Hash data TreeItem m = File !(Blob m) | SubTree !(Tree m) | Stub !(m (Tree m)) !Hash data ItemType = TreeType | BlobType deriving (Show, Eq, Ord) -- | m = Tree { items :: M.Map Name (TreeItem m) -- | Get hash of a Tree. This is guaranteed to uniquely -- identify the Tree (including any blob content), as far as -- cryptographic hashes are concerned. Sha256 is recommended. , treeHash :: !Hash } listImmediate :: Tree m -> [(Name, TreeItem m)] listImmediate = M.toList . items -- | Get a hash of a TreeItem. May be Nothing. itemHash :: TreeItem m -> Hash itemHash (File (Blob _ h)) = h itemHash (SubTree t) = treeHash t itemHash (Stub _ h) = h itemType :: TreeItem m -> ItemType itemType (File _) = BlobType itemType (SubTree _) = TreeType itemType (Stub _ _) = TreeType emptyTree :: Tree m emptyTree = Tree { items = M.empty , treeHash = NoHash } emptyBlob :: (Monad m) => Blob m emptyBlob = Blob (return BL.empty) NoHash makeBlob :: (Monad m) => BL.ByteString -> Blob m makeBlob str = Blob (return str) (sha256 str) makeBlobBS :: (Monad m) => B.ByteString -> Blob m makeBlobBS s' = let s = BL.fromChunks [s'] in Blob (return s) (sha256 s) makeTree :: [(Name,TreeItem m)] -> Tree m makeTree l = Tree { items = M.fromList l , treeHash = NoHash } makeTreeWithHash :: [(Name,TreeItem m)] -> Hash -> Tree m makeTreeWithHash l h = Tree { items = M.fromList l , treeHash = h } ----------------------------------- -- Tree access and lookup -- -- | Look up a 'Tree' item (an immediate subtree or blob). lookup :: Tree m -> Name -> Maybe (TreeItem m) lookup t n = M.lookup n (items t) find' :: TreeItem m -> AnchoredPath -> Maybe (TreeItem m) find' t (AnchoredPath []) = Just t find' (SubTree t) (AnchoredPath (d : rest)) = case lookup t d of Just sub -> find' sub (AnchoredPath rest) Nothing -> Nothing find' _ _ = Nothing -- | Find a 'TreeItem' by its path. Gives 'Nothing' if the path is invalid. find :: Tree m -> AnchoredPath -> Maybe (TreeItem m) find = find' . 
SubTree -- | Find a 'Blob' by its path. Gives 'Nothing' if the path is invalid, or does -- not point to a Blob. findFile :: Tree m -> AnchoredPath -> Maybe (Blob m) findFile t p = case find t p of Just (File x) -> Just x _ -> Nothing -- | Find a 'Tree' by its path. Gives 'Nothing' if the path is invalid, or does -- not point to a Tree. findTree :: Tree m -> AnchoredPath -> Maybe (Tree m) findTree t p = case find t p of Just (SubTree x) -> Just x _ -> Nothing -- | List all contents of a 'Tree'. list :: Tree m -> [(AnchoredPath, TreeItem m)] list t_ = paths t_ (AnchoredPath []) where paths t p = [ (appendPath p n, i) | (n,i) <- listImmediate t ] ++ concat [ paths subt (appendPath p subn) | (subn, SubTree subt) <- listImmediate t ] expandUpdate :: (Monad m) => (AnchoredPath -> Tree m -> m (Tree m)) -> Tree m -> m (Tree m) expandUpdate update t_ = go (AnchoredPath []) t_ where go path t = do let subtree (name, sub) = do tree <- go (path `appendPath` name) =<< unstub sub return (name, SubTree tree) expanded <- mapM subtree [ x | x@(_, item) <- listImmediate t, isSub item ] let orig_map = M.filter (not . isSub) (items t) expanded_map = M.fromList expanded tree = t { items = M.union orig_map expanded_map } update path tree -- | Expand a stubbed Tree into a one with no stubs in it. You might want to -- filter the tree before expanding to save IO. This is the basic -- implementation, which may be overriden by some Tree instances (this is -- especially true of the Index case). expand :: (Monad m) => Tree m -> m (Tree m) expand = expandUpdate $ const return -- | Unfold a path in a (stubbed) Tree, such that the leaf node of the path is -- reachable without crossing any stubs. Moreover, the leaf ought not be a Stub -- in the resulting Tree. A non-existent path is expanded as far as it can be. expandPath :: (Monad m) => Tree m -> AnchoredPath -> m (Tree m) expandPath t (AnchoredPath []) = return t expandPath t (AnchoredPath (n:rest)) = case lookup t n of (Just item) | isSub item -> amend t n rest =<< unstub item _ -> return t -- fail $ "Descent error in expandPath: " ++ show path_ where amend t' name rest' sub = do sub' <- expandPath sub (AnchoredPath rest') let tree = t' { items = M.insert name (SubTree sub') (items t') } return tree -- | Check the disk version of a Tree: expands it, and checks each -- hash. Returns either the expanded tree or a list of AnchoredPaths -- where there are problems. The first argument is the hashing function -- used to create the tree. 
checkExpand :: (TreeItem IO -> IO Hash) -> Tree IO -> IO (Either [(AnchoredPath, Hash, Maybe Hash)] (Tree IO)) checkExpand hashFunc t = go (AnchoredPath []) t where go path t_ = do let subtree (name, sub) = do let here = path `appendPath` name sub' <- (Just <$> unstub sub) `catch` \(_ :: IOException) -> return Nothing case sub' of Nothing -> return $ Left [(here, treeHash t_, Nothing)] Just sub'' -> do treeOrTrouble <- go (path `appendPath` name) sub'' return $ case treeOrTrouble of Left problems -> Left problems Right tree -> Right (name, SubTree tree) badBlob (_, f@(File (Blob _ h))) = fmap (/= h) (hashFunc f `catch` (\(_ :: IOException) -> return NoHash)) badBlob _ = return False render (name, f@(File (Blob _ h))) = do h' <- (Just <$> hashFunc f) `catch` \(_ :: IOException) -> return Nothing return (path `appendPath` name, h, h') render (name, _) = return (path `appendPath` name, NoHash, Nothing) subs <- mapM subtree [ x | x@(_, item) <- listImmediate t_, isSub item ] badBlobs <- filterM badBlob (listImmediate t) >>= mapM render let problems = badBlobs ++ concat (lefts subs) if null problems then do let orig_map = M.filter (not . isSub) (items t) expanded_map = M.fromList $ rights subs tree = t_ {items = orig_map `M.union` expanded_map} h' <- hashFunc (SubTree t_) if h' `match` treeHash t_ then return $ Right tree else return $ Left [(path, treeHash t_, Just h')] else return $ Left problems class (Monad m) => FilterTree a m where -- | Given @pred tree@, produce a 'Tree' that only has items for which -- @pred@ returns @True@. -- The tree might contain stubs. When expanded, these will be subject to -- filtering as well. filter :: (AnchoredPath -> TreeItem m -> Bool) -> a m -> a m instance (Monad m) => FilterTree Tree m where filter predicate t_ = filter' t_ (AnchoredPath []) where filter' t path = t { items = M.mapMaybeWithKey (wibble path) $ items t } wibble path name item = let npath = path `appendPath` name in if predicate npath item then Just $ filterSub npath item else Nothing filterSub npath (SubTree t) = SubTree $ filter' t npath filterSub npath (Stub stub h) = Stub (do x <- stub return $ filter' x npath) h filterSub _ x = x -- | Given two Trees, a @guide@ and a @tree@, produces a new Tree that is a -- identical to @tree@, but only has those items that are present in both -- @tree@ and @guide@. The @guide@ Tree may not contain any stubs. restrict :: (FilterTree t m) => Tree n -> t m -> t m restrict guide tree = filter accept tree where accept path item = case (find guide path, item) of (Just (SubTree _), SubTree _) -> True (Just (SubTree _), Stub _ _) -> True (Just (File _), File _) -> True (Just (Stub _ _), _) -> bug "*sulk* Go away, you, you precondition violator!" (_, _) -> False -- | Read a Blob into a Lazy ByteString. Might be backed by an mmap, use with -- care. readBlob :: Blob m -> m BL.ByteString readBlob m -> Blob m -> a) -> Tree m -> Tree m -> m) -> Maybe (Blob m) -> a) -> Tree m -> Tree m -> [a] zipFiles f a b = [ f p (findFile a p) (findFile b p) | p <- paths a `sortedUnion` paths b ] where paths t = sort [ p | (p, File _) <- list t ] zipTrees :: (AnchoredPath -> Maybe (TreeItem m) -> Maybe (TreeItem m) -> a) -> Tree m -> Tree m -> [a] zipTrees f a b = [ f p (find a p) (find b p) | p <- reverse (paths a `sortedUnion` paths b) ] where paths t = sort [ p | (p, _) <- list t ] -- | Helper function for taking the union of AnchoredPath lists that -- are already sorted. This function does not check the precondition -- so use it carefully. 
sortedUnion :: [AnchoredPath] -> [AnchoredPath] -> [AnchoredPath] sortedUnion [] ys = ys sortedUnion xs [] = xs sortedUnion a@(x:xs) b@(y:ys) = case compare x y of LT -> x : sortedUnion xs b EQ -> x : sortedUnion xs ys GT -> y : sortedUnion a ys -- |' or 'zipTrees'. diffTrees :: forall m. (Monad m) => Tree m -> Tree m -> m (Tree m, Tree m) diffTrees left right = if treeHash left `match` treeHash right then return (emptyTree, emptyTree) else diff left right where isFile (File _) = True isFile _ = False notFile = not . isFile isEmpty = null . listImmediate subtree :: TreeItem m -> m (Tree m) subtree (Stub x _) = x subtree (SubTree x) = return x subtree (File _) = bug if isEmpty x' && isEmpty y' then return (n, Nothing, Nothing) else return (n, Just $ SubTree x', Just $ SubTree y') | isFile l && isFile r -> return (n, Just l, Just r) | otherwise -> do l' <- maybeUnfold l r' <- maybeUnfold r return (n, Just l', Just r') _ -> bug "n lookups failed" | n <- immediateN left' `union` immediateN right' ] let is_l = [ (n, l) | (n, Just l, _) <- is ] is_r = [ (n, r) | (n, _, Just r) <- is ] return (makeTree is_l, makeTree is_r) -- | Modify a Tree (by replacing, or removing or adding items). modifyTree :: (Monad m) => Tree m -> AnchoredPath -> Maybe (TreeItem m) -> Tree m modifyTree t_ p_ i_ = snd $ go t_ p_ i_ where fix t unmod items' = (unmod, t { items = (countmap items':: Int) `seq` items' , treeHash = if unmod then treeHash t else NoHash }) go t (AnchoredPath []) (Just (SubTree sub)) = (treeHash t `match` treeHash sub, sub) go t (AnchoredPath [n]) (Just item) = fix t unmod items' where !items' = M.insert n item (items t) !unmod = itemHash item `match` case lookup t n of Nothing -> NoHash Just i -> itemHash i go t (AnchoredPath [n]) Nothing = fix t unmod items' where !items' = M.delete n (items t) !unmod = isNothing $ lookup t n go t path@(AnchoredPath (n:r)) item = fix t unmod items' where subtree s = go s (AnchoredPath r) item !items' = M.insert n sub (items t) !sub = snd sub' !unmod = fst sub' !sub' = case lookup t n of Just (SubTree s) -> let (mod', sub'') = subtree s in (mod', SubTree sub'') Just (Stub s _) -> (False, Stub (do x <- s return $! snd $! subtree x) NoHash) Nothing -> (False, SubTree $! snd $! subtree emptyTree) _ -> bug $ "Modify tree at " ++ show path go _ (AnchoredPath []) (Just (Stub _ _)) = bug $ "descending in modifyTree, case = (Just (Stub _ _)), path = " ++ show p_ go _ (AnchoredPath []) (Just (File _)) = bug $ "descending in modifyTree, case = (Just (File _)), path = " ++ show p_ go _ (AnchoredPath []) Nothing = bug $ "descending in modifyTree, case = Nothing, path = " ++ show p_ countmap :: forall a k. M.Map k a -> Int countmap = M.foldr (\_ i -> i + 1) 0 updateSubtrees :: (Tree m -> Tree m) -> Tree m -> Tree m updateSubtrees fun t = fun $ t { items = M.mapWithKey (curry $ snd . update) $ items t , treeHash = NoHash } where update (k, SubTree s) = (k, SubTree $ updateSubtrees fun s) update (k, File f) = (k, File f) update (_, Stub _ _) = bug "Stubs not supported in updateTreePostorder" -- | Does /not/ expand the tree. updateTree :: (Monad m) => (TreeItem m -> m (TreeItem m)) -> Tree m -> m (Tree m) updateTree fun t = partiallyUpdateTree fun (\_ _ -> True) t -- | Does /not/ expand the tree. 
partiallyUpdateTree :: (Monad m) => (TreeItem m -> m (TreeItem m)) -> (AnchoredPath -> TreeItem m -> Bool) -> Tree m -> m (Tree m) partiallyUpdateTree fun predi t' = go (AnchoredPath []) t' where go path t = do items' <- M.fromList <$> mapM (maybeupdate path) (listImmediate t) subtree <- fun . SubTree $ t { items = items' , treeHash = NoHash } case subtree of SubTree t'' -> return t'' _ -> bug "function passed to partiallyUpdateTree didn't changed SubTree to something else" maybeupdate path (k, item) = if predi (path `appendPath` k) item then update (path `appendPath` k) (k, item) else return (k, item) update path (k, SubTree tree) = (\new -> (k, SubTree new)) <$> go path tree update _ (k, item) = (\new -> (k, new)) <$> fun item -- | Lay one tree over another. The resulting Tree will look like the base (1st -- parameter) Tree, although any items also present in the overlay Tree will be -- taken from the overlay. It is not allowed to overlay a different kind of an -- object, nor it is allowed for the overlay to add new objects to base. This -- means that the overlay Tree should be a subset of the base Tree (although -- any extraneous items will be ignored by the implementation). overlay :: (Monad m) => Tree m -> Tree m -> Tree m overlay base over = Tree { items = M.fromList immediate , treeHash = NoHash } where immediate = [ (n, get n) | (n, _) <- listImmediate base ] get n = case (M.lookup n $ items base, M.lookup n $ items over) of (Just (File _), Just f@(File _)) -> f (Just (SubTree b), Just (SubTree o)) -> SubTree $ overlay b o (Just (Stub b _), Just (SubTree o)) -> Stub (flip overlay o `fmap` b) NoHash (Just (SubTree b), Just (Stub o _)) -> Stub (overlay b `fmap` o) NoHash (Just (Stub b _), Just (Stub o _)) -> Stub (do o' <- o b' <- b return $ overlay b' o') NoHash (Just x, _) -> x (_, _) -> bug $ "Unexpected case in overlay at get " ++ show n ++ "." addMissingHashes :: (Monad m) => (TreeItem m -> m Hash) -> Tree m -> m (Tree m) addMissingHashes make = updateTree update -- use partiallyUpdateTree here where update (SubTree t) = make (SubTree t) >>= \x -> return $ SubTree (t { treeHash = x }) update (File blob@(Blob con NoHash)) = do hash <- make $ File blob return $ File (Blob con hash) update (Stub s NoHash) = update . SubTree =<< s update x = return x ------ Private utilities shared among multiple functions. -------- unstub :: (Monad m) => TreeItem m -> m (Tree m) unstub (Stub s _) = s unstub (SubTree s) = return s unstub _ = return emptyTree isSub :: TreeItem m -> Bool isSub (File _) = False isSub _ = True
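As a usage sketch (not part of the module above), building and walking a small in-memory Tree could look like this; makeName is assumed to come from Darcs.Util.Path, and printing paths assumes its Show instance:
import qualified Data.ByteString.Lazy.Char8 as BLC
import Darcs.Util.Path (makeName)
import Darcs.Util.Tree

demo :: IO ()
demo = do
  let blob = File (makeBlob (BLC.pack "hello world\n"))  -- a File item with a hashed blob
      tree = makeTree [(makeName "hello.txt", blob)]     -- a one-file tree, no stubs
  expanded <- expand tree                                -- unstub (a no-op here)
  mapM_ (print . fst) (list expanded)                    -- one AnchoredPath per item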
http://hackage.haskell.org/package/darcs-2.14.3/docs/src/Darcs.Util.Tree.html
CC-MAIN-2021-49
refinedweb
2,369
67.28
Template::TAL - Process TAL templates with'; <html xmlns: <head> <title tal: </head> <body> <h1>This is the <span tal: page</h1> <ul> <li tal: <a href="?" tal:<span tal:</a> </li> </ul> </body> </html> ENDOFXML This template can be processed by passing it and the parameters to the process method: my $tt = Template::TAL->new(); $tt->process(\$template, { title => "Bert and Ernie Fansite", users => [ { url => "", name => "Henson", }, { url => "", name => "Workshop", }, { url => "", name => "Bert is Evil", }, ], }) Alternativly you can store the templates on disk, and pass the filename to process directly instead of via a reference (as shown in the synopsis above.) Template::TAL is designed to be extensible, allowing you to load templates from different places and produce more than one type of output. By default the XML template will be output as cross-browser compatible HTML (meaning, for example, that image tags won't be closed.) Other output formats, including well-formed XML, can easily be produced by changing the output class (detailed below.) For more infomation on the TAL spec itself, see Creates and initializes a new Template::TAL object. Options valid here are: If this parameter is set then it is passed to the provider, telling it where to load files from disk (if applicable for the provider.) If this parameter is set then it is passed to the output, telling it what charset to use instead of its default. The default output class will use the 'utf-8' charset unless you tell it otherwise. Pass a 'provider' option to specify a provider rather than using the default provider that reads from disk. This can either be a class name of a loaded class, or an object instance. Pass a 'output' option to specify a output class rather than using the default output class that dumps the DOM tree to as a string to create HTML. This can either be a class name of a loaded class, or an object instance. a listref of language plugins we will use when parsing. All templates get at least the Template::TAL::Language:TAL language module. adds a language to the list of those used by the template renderer. 'module' here can be a classname or an instance. Process the template with the passed data and return the resulting rendered byte sequence. $template can either be a string containing where the provider should get the template from (i.e. the filename of the template in the include path), a reference to a string containing the literal text of the template, or a Template::TAL::Template object. $data_hashref should be a reference to a hash containing the values that are to be substituted into the template. Parses a TALES string (see), looking in each of the passed contexts in order for variable values, and returns the value. These are get/set chained accessor methods that can be used to alter the object after initilisation (meaning they return their value when called without arguments, and set the value and return $self when called with.) In both cases you can set these to either class names or actual instances and they with do the right thing. The instance of the Template::TAL::Provider subclass that will be providing templates to this engine. The instance of the Template::TAL::Output subclass that will be used to render the produced template output. Petal is another Perl module that can process a templating language suspiciously similar to TAL. So why did we implement Yet Another Templating Engine? Well, we liked Petal a lot. 
However, at the time of writing our concerns with Petal were: In conclusion: You may be better off using Petal. Certainly the caching layer could be very useful to you. There's more than one way to do it. Written by Tom Insam, Copyright 2005 Fotango Ltd. All Rights Reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Template::TAL creates superfluous XML namespace attributes in the output. Please report any bugs you find via the CPAN RT system. The TAL specification: Petal, another Perl implementation of TAL on CPAN.
http://search.cpan.org/~fotango/Template-TAL-0.91/lib/Template/TAL.pm
CC-MAIN-2014-41
refinedweb
689
70.73
KILL(2) NetBSD System Calls Manual KILL(2)Powered by man-cgi (2021-06-01). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias. NAME kill -- send signal to a process LIBRARY Standard C Library (libc, -lc) SYNOPSIS #include <signal.h> int kill(pid_t pid, int sig); DESCRIPTION The kill() function sends the signal given by sig to pid, a process or a group of processes. sig process group ID of the sender, and for which the process has permission; speci- fied. SEE ALSO getpgrp(2), getpid(2), sigaction(2), killpg(3), signal(7) STANDARDS The kill() function is expected to conform to ISO/IEC 9945-1:1990 (``POSIX.1''). NetBSD 4.0 April 19, 1994 NetBSD 4.0
https://man.netbsd.org/NetBSD-4.0/kill.2
CC-MAIN-2022-40
refinedweb
123
66.84
apecs-physics
2D physics library for apecs. Uses the Chipmunk physics engine. The apecs-gloss package provides a simple optional gloss-based renderer.
Feel free to create an issue or PR for suggestions/questions/requests/critiques/spelling fixes/etc. See TODO.md for suggestions if you want to help out with the code.
The examples directory contains a number of examples, each can be run with stack build && stack <examplename>:
helloworld
makeWorld "World" [''Physics, ''BodyPicture, ''Camera]
Generate a world. The Physics component adds a physics space to the world. The BodyPicture contains a gloss Picture, which the renderer will match to the Body’s position and orientation. The Camera component tracks a camera position and zoom factor.
initialize = do
  set global ( Camera (V2 0 1) 60
             , earthGravity )
Globals can be set with any entity argument, global is just an alias for -1. earthGravity = V2 0 (-9.81), normal earth surface gravity if we assume normal MKS units. Note that the positive y-axis points upwards.
  let ballShape = cCircle 0.5
  newEntity ( DynamicBody
            , Shape ballShape
            , Position (V2 0 3)
            , Density 1
            , Elasticity 0.9
            , BodyPicture . color red . toPicture $ ballShape )
Still in the initialize function, here we see our first object being instantiated. The type of ballShape is Convex, the apecs-physics format for shapes. Convex is a convex polygon, consisting of a number of vertices and a radius. In the case of a circle, the polygon consists of a single point with a non-zero radius. Both Chipmunk and gloss only support convex polygons, Convex is used to give them a common interface.
A DynamicBody is one of three types of bodies. It is a normal body, fully affected by physical forces. The elasticity of a collision is the product of the elasticities of the colliding shapes.
The final line shows how to do rendering. BodyPicture expects a gloss Picture, in this case we derive one from ballShape :: Convex using toPicture. color red comes from gloss, and is just one of the many Picture manipulation functions. Alternatively, you can use a Bitmap to use actual sprites.
  let lineShape = hLine 6
  newEntity ( StaticBody
            , Angle (-pi/20)
            , Shape lineShape
            , Elasticity 0.9
            , BodyPicture . color white . toPicture $ lineShape )
Static bodies are not affected by physics, and generally rarely move. They are equivalent to bodies with infinite mass and moment, and zero velocity. Changing their position triggers an explicit rehash of their shapes, which is relatively expensive.
main = do
  w <- initWorld
  runSystem initialize w
  defaultSimulate w
defaultSimulate is a convenience wrapper around gloss’ simulateIO. You can find its definition in Apecs.Physics.Gloss, in case you want to change the rendering behavior.
tumbler
initialize :: System World ()
initialize = do
  set global ( Camera 0 50
             , earthGravity )
  let sides = toEdges $ cRectangle 5
  tumbler <- newEntity ( KinematicBody
                       , AngularVelocity (-1)
                       , BodyPicture . color white . foldMap toPicture $ sides )
As previously stated, both Chipmunk and gloss exclusively have convex polygon primitives. Our tumbler, however, is obviously not convex. Fortunately, composing shapes is really easy. We use toEdges to turn a rectangle into an outline of one, and use foldMap to make a composite Picture.
A KinematicBody is halfway between a DynamicBody and a StaticBody. It can have an (angular) velocity, but will not respond to forces. It can be used for e.g. moving platforms, or, in this case, our tumbler.
Note that we did not add any shapes to the tumbler yet. forM_ sides $ \line -> newEntity (ShapeExtend (cast tumbler) $ setRadius 0.05 line) The time has come to talk about the destinction between shapes and bodies. A body can have multiple shapes. Shapes belonging to the same body cannot move relative to one another, i.e. a body is a fixture for multiple shapes. When using the normal Shape data constructor to add a shape to a body, we actually create two Chipmunk structs; one for the body, and one for the shape, even though they are addressed by the same entity in apecs. When we want to add multiple shapes to a body, however, we need to make new entities for each individual shape. The reason for this is that this way, we can still easily change the properties of each individual shape. Shape actually just represents a special case of ShapeExtend, the case in which the body has the same entity as the shape. When you use a tuple of components in apecs, they are added in the order you list them in the tuple. This is important to realize, as adding a shape to an entity wihout a body is a noop. Always make sure you first add a body, and then the shapes. This also comes up when e.g. setting a shape’s properties: you can only set a shape’s Mass or Density when there is a shape in the first place. If you don’t, you will get a runtime error about simulating zero-mass DynamicBodies. replicateM_ 200 $ do x <- liftIO$ randomRIO (-2, 2) y <- liftIO$ randomRIO (-2, 2) r <- liftIO$ randomRIO (0.1, 0.2) let ballshape = cCircle r let c = (realToFrac x+2)/3 newEntity ( DynamicBody , Position (V2 x y) , Shape ballshape , BodyPicture . color (makeColor 1 c c 1) . toPicture $ ballshape , Density 1 ) return () Finally, we randomly add a bunch of balls. constraints The final example is a gallery of (some of) the available constraints. Drag shapes around with the left mouse button, create a new box with the right. This example is too large to fully include here, but if you have made it this far, I recommend looking at the source. Aside from demonstrating constraints, queries and interaction it also contains some neat tricks like: let rubber = (Friction 0.5, Elasticity 0.5, Density 1) newEntity ( DynamicBody , someShape , rubber ) Nesting tuples creates composable and reusable pieces of configuration (this is an apecs thing, not an apecs-physics thing). This can also be useful if you find yourself needing bigger tuples than the current maximum. Constraints are a lot like shapes, but instead of having one associated Body, they have two. It also comes in the varieties Constraint and ConstraintExtend. Dragging an object with the mouse is also done using a constraint. The mouse position actually controls the position of a static body without shapes, and we use a PinJoint to attach whatever we are dragging to it. Changes [0.3.2] Changed - Fixed links and added changelog to cabal file - Added version bounds for dependencies - Expanded haddocks [0.3.1] Changed - added apecsversion bound [0.3.0] Added ShapeListand ConstraintListcomponents for bodies, that contain a list of entity indices of their shapes and constraints (read-only). Changed Shapes, Constraints, and CollisionHandlers now track their original Haskell representations, and can be meaningfully read. Shapeand Constraintnow only have a single constructor, that explicitly takes an entity argument indicating what entity it belongs to. Previously, the interface suggested that shapes and constraints were properties of bodies, which was wrong. 
- Bodies now track their shapes and constraints in /mutable/ stores
Removed
- The ShapeBody component has been removed. You can find out a shape's body by reading the Shape component's ShapeExtend constructor directly.
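Looking back at the constraints gallery described above, pinning two bodies together looks roughly like this; the exact shape of the Constraint constructor is an assumption based on this README, so check the Apecs.Physics haddocks for your version:
pinDemo :: System World ()
pinDemo = do
  ballA <- newEntity (DynamicBody, Position (V2 (-1) 3), Shape (cCircle 0.2), Density 1)
  ballB <- newEntity (DynamicBody, Position (V2 1 3), Shape (cCircle 0.2), Density 1)
  -- assumed: a constraint references both bodies plus a joint description
  _ <- newEntity (Constraint (cast ballA) (cast ballB) (PinJoint (V2 0 0) (V2 0 0)))
  return ()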
https://www.stackage.org/nightly-2019-03-13/package/apecs-physics-0.3.2
CC-MAIN-2019-13
refinedweb
1,185
57.37
When I was shopping around for a microcontroller development kit, I came across the Texas Instruments Hercules™ microcontroller (MCU) LaunchPad™ development kit. It offers CPUs with clock speeds upwards of 80 MHz, which was just right for someone like me, who didn't want to spend much, but wanted ample leg room for more CPU-intensive projects. This guide works through three steps:
1. Prepare Your HALCoGen Project.
2. Prepare Your CCS Project.
3. The Final Step: Write Your Code.
A friend told me that the Hercules MCU family isn't for beginners, so I heeded that warning, but the alternatives I was interested in exceeded my budget. Considering the fact that the Hercules™ MCU family was well-suited for inverters, PWM applications, and is automotive-grade, I eventually went ahead and purchased the Hercules TMS570LS0432 (TMS57004) MCU for $20. I was a bit lost at first, but I eventually connected the dots with the help of Project 0, and the datasheet. The LaunchPad development kit turned out to have a refreshingly straightforward interface. I was able to generate a PWM signal with just a handful of clicks using HALCoGen, and three lines of code in Code Composer Studio™ (CCS) IDE. I eventually went on to write a beginners' guide for the LS04x to help new users get started quickly.
Before we continue, let's take a brief look at the tools we will be using. HALCoGen is a HAL code generator for Windows, provided by Texas Instruments, and it enables you to configure your LaunchPad development kit's pins, ports, timers, and much more using an intuitive GUI. You can use HALCoGen to switch pins on and off, generate a PWM signal and adjust its duty cycle, but you'll often need to do this programmatically instead (for example: varying the duty cycle of PWM signals to control the speed of fan motors), and that's the focus of this tutorial. HALCoGen can be downloaded for free from the Texas Instruments website.
Code Composer Studio is the IDE that we'll use to edit our code, debug it, and load it to the Hercules microcontroller via USB. In this tutorial, the TMS570LS1224x MCU (TMS57012) is used to blink an LED and vary its brightness by generating a PWM signal with an N2HET timer. HET stands for High End Timer. We will use the hetInit(), pwmStart(), and pwmSetDuty() functions in this article. Hercules LaunchPad development kit users can download CCS IDE for free from the Texas Instruments website (if you log into your T.I. account first).
hetInit() initializes the N2HET High-End Timer (HET), pwmStart will turn on the PWM signal, and pwmSetDuty() is used to adjust the duty cycle of the PWM signal to control the brightness of the LED. You can find these functions in the het.c file (found under /source in your project directory) after generating your code with HALCoGen.
First, create your HALCoGen project, and name it 'Blink'. Next, enable the HET1 driver. We will use HET port 1 for this exercise. The pin highlighted in the PWM 0 section is labelled HET 20 at the bottom of the LaunchPad development kit. Conveniently, this is right beside a ground pin. Ensure that you select pin 20 (otherwise known as bit 20), as that is what our PWM signal will be supplied to. Check the 'Enable' checkboxes for PWM 0. There are sockets under the LaunchPad that you can plug an LED into. Please use a discrete LED for this project to avoid damaging the MCU. 'Discrete' LEDs are low-wattage LEDs that are typically used to indicate when an appliance is on, when someone is calling, to indicate that your laptop is plugged in, etc.
Set the Period[us] field to 1000000 microseconds as shown below (this causes the LED to blink every second). Leave the Duty[%] field for now, as that is the duty cycle value, which we will set via our own code near the end of this tutorial. The next step is to configure the pin HET 20. Set it to the output direction as shown below. Setting it to the output direction enables you to write to it/power it on. Leave the DOUT drop down box as it is. We will turn this pin on programmatically. DOUT is the output value. 0 means off, and 1 means on. Now you can finally generate the necessary code using HALCoGen by pressing the F5 key. You can now launch Code Composer Studio IDE and create a new project, as shown below. In the workspace field, enter the exact same directory that your HALCoGen project folder is in. If your HALCoGen Location field was set to ‘C:\LaunchPad’ and your project name was Blink, then your HALCoGen project folder was placed inside ‘C:\LaunchPad’, which will be your workspace. Your CCS IDE project directory should also have the same name as your HALCoGen project directory. Your CCS DIE and HALCoGen files should be in the same directory. So your HALCoGen project directory should be C:\LaunchPad\Blink, and your CCS IDE project directory should be C:\LaunchPad\Blink. This enables you to update your HALCoGen code easily without having to copy it all over again. HALCogen won't delete the code you write between the user code comments mentioned below, in the 'Start Coding' section. Select 'Empty Project', and name your project 'Blink'. For this exercise, your project name in CCS IDE should match your HALCoGen project name, and it should be in the same directory. Select your MCU from the Target drop down box on the right. Now you can add the include directory that HALCoGen generated to your project's dependencies. Right click your project in the Project Explorer (Blink [Active - Debug]), click Properties, and them Include Options. In the 'Add dir to #include search path' section, click the white icon with the green plus sign on it, select the include directory under /Blink, and click ok as follows. You can now start coding! Open the sys_main.c file in CCS (it's under /source). Between the /* USER CODE BEGIN (1) */ and /* USER CODE END */ comments, include the HET driver as shown below: /* USER CODE BEGIN (1) */ #include "het.h" /* USER CODE END */ Between the third set of user code tags, inside the void main function, type the following: void main(void) { /* USER CODE BEGIN (3) */ hetInit(); pwmStart(hetRAM1, pwm0); pwmSetDuty(hetRAM1, pwm0, 50U); } pwm0 refers to the timer PWM 0, which is the one we just configured in HALCoGen. From there, you can identify the other PWM timers as follows. On execution, pwmStart starts the PWM signal and the LED connected to HET 20 starts blinking every second. pwmSetDuty sets the duty cycle of the PWM signal supplied by pwm0. The duty cycle is the percentage of the time that the pin is on. The pwmSetDuty statement used in this exercise sets the duty cycle to 50%. If you wanted to increase it to 80%, for example, change the 50U to 80U. This is especially helpful if you want to vary the brightness of an LED, or control the speed of a motor. Increasing the duty cycle parameter will increase a motor's speed or in this case, an LED's brightness, but only if you reduce the value of the Period[us]. Why? Reducing that value causes the LED to blink faster/increases the frequency of the PWM signal. 
If you reduce it enough, the pin will switch on and off so many times per second, that it will appear as if the LED is perfectly steady. Change Period[us] to 1000 and the LED will stop blinking. Now you'll have the opportunity to adjust its brightness. Set the duty cycle to various values and observe the brightness of the LED. If you want to dim it to 50%, change the duty cycle parameter to 50U. You can learn more about the PWM and GPIO functions in the het.c and gio.c files (located in your /source directory). If you'd like to experiment with PWM motor control, it's important to note that only certain types of motors support PWM. A three-wire variable speed laptop fan is a good start for experimenting with PWM, as they are among the cheapest. There are also variable speed air conditioner condenser fans on the market. Learn more about Hercules M.
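Building on the three calls above, a sketch that fades the LED up and down might look like this (the crude busy-wait delay is a stand-in for a proper timer-based wait):
#include "het.h"

int main(void)
{
    hetInit();
    pwmStart(hetRAM1, pwm0);

    while (1)
    {
        uint32 duty;
        for (duty = 0U; duty <= 100U; duty += 5U)
        {
            pwmSetDuty(hetRAM1, pwm0, duty);  /* duty is a percentage, as above */
            /* crude delay; replace with an RTI-based wait in real code */
            for (volatile uint32 i = 0U; i < 100000U; i++) { }
        }
    }
    return 0;
}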
https://e2e.ti.com/blogs_/archives/b/launchyourdesign/archive/2016/06/22/getting-started-with-the-launchpad-tms57012
CC-MAIN-2018-05
refinedweb
1,397
71.85
Hopefully this is my LAST QUESTION!! D:<
Anyway, I saw a code on lloydgoodall.com for an LWJGL FPCamera. It had SEVERAL errors, but I finally got to the (hopefully) last error.
"Keyboard must be created before you can query key state"
My code:

import org.lwjgl.Sys;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.GL11;
import org.lwjgl.input.*;
import org.lwjgl.util.vector.Vector3f;
import org.lwjgl.input.Keyboard;

//First Person Camera Controller
public class FPCameraController {
    //3d vector to store the camera's position in
    private Vector3f position = null;
    //the rotation around the Y axis of the camera
    private float yaw = 0.0f;
    //the rotation around the X axis of the camera
    private float pitch = 0.0f;

    //Constructor that takes the starting x, y, z location of the camera
    public FPCameraController(float x, float y, float z) {
        position = new Vector3f(x, y, z);
    }

    //increment the camera's current yaw rotation
    public void yaw(float amount) {
        yaw += amount;
    }

    //increment the camera's current pitch rotation
    public void pitch(float amount) {
        pitch += amount;
    }

    //moves the camera forward relative to its current rotation (yaw)
    public void walkForward(float distance) {
        position.x -= distance * (float)Math.sin(Math.toRadians(yaw));
        position.z += distance * (float)Math.cos(Math.toRadians(yaw));
    }

    //moves the camera backward relative to its current rotation (yaw)
    public void walkBackwards(float distance) {
        position.x += distance * (float)Math.sin(Math.toRadians(yaw));
        position.z -= distance * (float)Math.cos(Math.toRadians(yaw));
    }

    //strafes the camera left relative to its current rotation (yaw)
    public void strafeLeft(float distance) {
        position.x -= distance * (float)Math.sin(Math.toRadians(yaw - 90));
        position.z += distance * (float)Math.cos(Math.toRadians(yaw - 90));
    }

    //strafes the camera right relative to its current rotation (yaw)
    public void strafeRight(float distance) {
        position.x -= distance * (float)Math.sin(Math.toRadians(yaw + 90));
        position.z += distance * (float)Math.cos(Math.toRadians(yaw + 90));
    }

    //translates and rotates the matrix so that it looks through the camera
    //this does basically what gluLookAt() does
    public void lookThrough() {
        //rotate the pitch around the X axis
        GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
        //rotate the yaw around the Y axis
        GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
        //translate to the position vector's location
        GL11.glTranslatef(position.x, position.y, position.z);
    }

    public static void main(String[] args) {
        FPCameraController camera = new FPCameraController(0, 0, 0);
        float dx = 0.0f;
        float dy = 0.0f;
        float dt = 0.0f; //length of frame
        float lastTime = 0.0f; // when the last frame was
        float time = 0.0f;
        float mouseSensitivity = 0.05f;
        float movementSpeed = 10.0f; //move 10 units per second

        //hide the mouse
        Mouse.setGrabbed(true);

        // keep looping till the display window is closed or the ESC key is down
        /* while (!Display.isCloseRequested() && !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) { */
        time = Sys.getTime();
        dt = (time - lastTime) / 1000.0f;
        lastTime = time;

        //distance in mouse movement from the last getDX() call.
        dx = Mouse.getDX();
        //distance in mouse movement from the last getDY() call.
        dy = Mouse.getDY();

        //control camera yaw from x movement from the mouse
        camera.yaw(dx * mouseSensitivity);
        //control camera pitch from y movement from the mouse
        camera.pitch(dy * mouseSensitivity);

        //when passing in the distance to move
        //we times the movementSpeed with dt this is a time scale
        //so if its a slow frame u move more than a fast frame
        //so on a slow computer you move just as fast as on a fast computer
        if (Keyboard.isKeyDown(Keyboard.KEY_W)) //move forward
        {
            camera.walkForward(movementSpeed * dt);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_S)) //move backwards
        {
            camera.walkBackwards(movementSpeed * dt);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_A)) //strafe left
        {
            camera.strafeLeft(movementSpeed * dt);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_D)) //strafe right
        {
            camera.strafeRight(movementSpeed * dt);
        }

        //set the modelview matrix back to the identity
        GL11.glLoadIdentity();
        //look through the camera before you draw anything
        camera.lookThrough();
        //you would draw your scene here.
        //draw the buffer to the screen
        Display.update();
    }
}

Uhh... please help : D
https://www.daniweb.com/programming/software-development/threads/322166/hopefully-my-final-question-d-keyboard-must-be-created-before-you-can-query-key-s
CC-MAIN-2017-17
refinedweb
485
54.49
Chapter 24. Class Coding Details
If you did not understand all of Chapter 23, don’t worry; now that we’ve had a quick tour, we’re going to dig a bit deeper and study the concepts introduced earlier in further detail. In this chapter, we’ll take another look at classes and methods, inheritance, and operator overloading, formalizing and expanding on some of the class coding ideas introduced in Chapter 23. Because the class is our last namespace tool, we’ll summarize the concepts of namespaces in Python here as well. This chapter will also present some larger and more realistic classes than those we have seen so far, including a final example that ties together much of what we’ve learned about OOP.
The class Statement
Like a def, the class statement is executable code: the class object does not exist until Python reaches and runs the class statement (for example, when importing the module it is coded in, but not before).
General Form
class is a compound statement, with a body of indented statements ...
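A short illustration of the point that class is executable code (a sketch, not from the book):
def make_class(debug):
    if debug:
        class Worker:                 # this statement runs like any other
            def run(self):
                print("running with tracing")
    else:
        class Worker:
            def run(self):
                print("running")
    return Worker                     # classes are ordinary objects

worker = make_class(debug=False)()
worker.run()                          # prints: running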
https://www.oreilly.com/library/view/learning-python-3rd/9780596513986/ch24.html
CC-MAIN-2021-21
refinedweb
178
58.32
By default, Devise only allows e-mails to be used for authentication. For some people, this condition will lead to the question, "What if I want to use some other field besides e-mail? Does Devise allow that?" The answer is yes; Devise allows other attributes to be used to perform the sign-in process. For example, I will use username as a replacement for e-mail, and you can change it later with whatever you like, including userlogin, adminlogin, and so on.

We are going to start by modifying our user model. Create a migration file by executing the following command inside your project folder:

$ rails generate migration add_username_to_users username:string

This command will produce a file, which is depicted by the following screenshot: The generated migration file

Execute the migrate (rake db:migrate) command to alter your users table, and it will add a new column named username.

You need to open Devise's main configuration file at config/initializers/devise.rb and modify the code:

config.authentication_keys = [:username]
config.case_insensitive_keys = [:username]
config.strip_whitespace_keys = [:username]

You have done enough modification to your Devise configuration, and now you have to modify the Devise views to add a username field to your sign-in and sign-up pages. By default, Devise loads its views from its gemset code. The only way to modify the Devise views is to generate copies of its views. This action will automatically override its default views. To do this, you can execute the following command:

$ rails generate devise:views

It will generate some files, which are shown in the following screenshot: Devise views files

As I have previously mentioned, these files can be used to customize the views. But we are going to talk about that a little later in this article. Now you have the views, and you can modify some files to insert the username field. These files are listed as follows:

- app/views/devise/sessions/new.html.erb: This is the view file for the sign-in page. Basically, all you need to do is change the email field into the username field.

#app/views/devise/sessions/new.html.erb
<h2>Sign in</h2>
<%= notice %>
<%= alert %>
<%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %>
  <div><%= f.label :username %><br />
  <%= f.text_field :username, :autofocus => true %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <% if devise_mapping.rememberable? -%>
    <div><%= f.check_box :remember_me %> <%= f.label :remember_me %></div>
  <% end -%>
  <div><%= f.submit "Sign in" %></div>
<% end %>
<%= render "devise/shared/links" %>

You are now allowed to sign in with your username. The modification will be shown, as depicted in the following screenshot: The sign-in page with username

- app/views/devise/registrations/new.html.erb: This is the view file for the registration page. It is a bit different from the sign-in page; in this file, you need to add the username field, so that the user can fill in their username when they perform the registration.

#app/views/devise/registrations/new.html.erb
<h2>Sign Up</h2>
<%= form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %>
  <%= devise_error_messages! %>
  <div><%= f.label :email %><br />
  <%= f.email_field :email, :autofocus => true %></div>
  <div><%= f.label :username %><br />
  <%= f.text_field :username %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <div><%= f.label :password_confirmation %><br />
  <%= f.password_field :password_confirmation %></div>
  <div><%= f.submit "Sign up" %></div>
<% end %>
<%= render "devise/shared/links" %>

Especially for registration, you need to perform extra modifications. Mass assignment rules are written in the app/controllers/application_controller.rb file, and now we are going to modify them a little. Add username to the sanitizer for sign-in and sign-up, and you will have something as follows:

#these codes are written inside the configure_permitted_parameters function
devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:email, :username)}
devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)}

These changes will allow you to perform a sign-up along with the username data. The result of the preceding example is shown in the following screenshot: The sign-up page with username

I want to add a new case for your sign-in, which is only one field for username and e-mail. This means that you can sign in either with your e-mail ID or username, like in Twitter's sign-in form. Based on what we have done before, you already have username and email columns; now, open /app/models/user.rb and add the following line:

attr_accessor :signin

Next, you need to change the authentication keys for Devise. Open /config/initializers/devise.rb and change the value for config.authentication_keys, as shown in the following code snippet:

config.authentication_keys = [ :signin ]

Let's go back to our user model. You have to override the lookup function that Devise uses when performing a sign-in. To do this, add the following method inside your model class:

def self.find_first_by_auth_conditions(warden_conditions)
  conditions = warden_conditions.dup
  signin = conditions.delete(:signin)
  where(conditions).where(["lower(username) = :value OR lower(email) = :value", { :value => signin.downcase }]).first
end

As an addition, you can add a validation for your username, so it will be case insensitive. Add the following validation code into your user model:

validates :username, :uniqueness => {:case_sensitive => false}

Please open /app/controllers/application_controller.rb and make sure you have this code to perform parameter filtering:

before_filter :configure_permitted_parameters, if: :devise_controller?

protected

def configure_permitted_parameters
  devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:signin)}
  devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)}
end

We're almost there! Currently, I assume that you've already stored an account that contains the e-mail ID and username. So, you just need to make a simple change in your sign-in view file (/app/views/devise/sessions/new.html.erb). Make sure that the file contains this code:

<h2>Sign in</h2>
<%= notice %>
<%= alert %>
<%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %>
  <div><%= f.label "Username or Email" %><br />
  <%= f.text_field :signin, :autofocus => true %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <% if devise_mapping.rememberable? -%>
    <div><%= f.check_box :remember_me %> <%= f.label :remember_me %></div>
  <% end -%>
  <div><%= f.submit "Sign in" %></div>
<% end %>
<%= render "devise/shared/links" %>

You can see that you don't have a username or email field anymore. The field is now replaced by a single field named :signin that will accept either the e-mail ID or the username. It's efficient, isn't it?

Updating the user account

Basically, you are already allowed to access your user account when you activate the registerable module in the model. To access the page, you need to log in first and then go to /users/edit. The page is as shown in the following screenshot: The edit account page

But what if you want to edit your username or e-mail ID? How will you do that? What if you have extra information in your users table, such as addresses, birth dates, bios, and passwords as well? How will you edit these? Let me show you how to edit your user data including your password, or edit your user data without editing your password.

- Editing your data, including the password: To perform this action, the first thing that you need to do is modify your view. Your view should contain the following code:

<div><%= f.label :username %><br />
<%= f.text_field :username %></div>

Now, we are going to overwrite Devise's logic. To do this, you have to create a new controller named registrations_controller. Please use the rails command to generate the controller, as shown:

$ rails generate controller registrations update

It will produce a file located at app/controllers/. Open the file and make sure you write this code within the controller class:

class RegistrationsController < Devise::RegistrationsController
  def update
    new_params = params.require(:user).permit(:email, :username, :current_password, :password, :password_confirmation)
    @user = User.find(current_user.id)
    if @user.update_with_password(new_params)
      set_flash_message :notice, :updated
      sign_in @user, :bypass => true
      redirect_to after_update_path_for(@user)
    else
      render "edit"
    end
  end
end

Let's look at the code. Rails 4 has a new method of organizing whitelisted attributes. Therefore, before performing mass assignment, you have to prepare your data. This is done in the first line of the update method. Now, if you look at the code, there's a method defined by Devise named update_with_password. This method will use mass assignment with the provided data. Since we have prepared the data before using it, it will be fine.

Next, you have to edit your route file a bit. You should modify the rule defined by Devise, so that instead of using the original controller, Devise will use the controller you created before. The modification should look as follows:

devise_for :users, :controllers => {:registrations => "registrations"}

Now you have modified the original user edit page, and it will be a little different. You can turn on your Rails server and see it in action. The view is as depicted in the following screenshot: The modified account edit page

Now, try filling up these fields one by one. If you are filling them with different values, you will be updating all the data (e-mail, username, and password), and this sounds dangerous. You can modify the controller to have better data update security; it all depends on your application's workflows and rules.

- Editing your data, excluding the password: Actually, you already have what it takes to update data without changing your password. All you need to do is modify your registrations_controller.rb file. Your update function should be as follows:

class RegistrationsController < Devise::RegistrationsController
  def update
    new_params = params.require(:user).permit(:email, :username, :current_password, :password, :password_confirmation)
    change_password = true
    if params[:user][:password].blank?
      params[:user].delete("password")
      params[:user].delete("password_confirmation")
      new_params = params.require(:user).permit(:email, :username)
      change_password = false
    end
    @user = User.find(current_user.id)
    is_valid = false
    if change_password
      is_valid = @user.update_with_password(new_params)
    else
      is_valid = @user.update_without_password(new_params)
    end
    if is_valid
      set_flash_message :notice, :updated
      sign_in @user, :bypass => true
      redirect_to after_update_path_for(@user)
    else
      render "edit"
    end
  end
end

The main difference from the previous code is that you now have an algorithm that checks whether the user intends to update their data with a password or not. If not, the code will call the update_without_password method. Now you have code that allows you to edit with or without a password. Refresh your browser and try editing with or without a password. It won't be a problem anymore.

Summary

Now, I believe that you will be able to make your own Rails application with Devise. You should be able to make your own customizations based on your needs.

Resources for Article:

Further resources on this subject:
- Integrating typeahead.js into WordPress and Ruby on Rails [Article]
- Facebook Application Development with Ruby on Rails [Article]
- Designing and Creating Database Tables in Ruby on Rails [Article]
https://www.packtpub.com/books/content/authenticating-your-application-devise
CC-MAIN-2015-32
refinedweb
1,786
50.33
Chapter 12: Add The Sprinkles

Our real estate module now makes sense from a business perspective. We created specific views, added several action buttons and constraints. However our user interface is still a bit rough. We would like to add some colors to the list views and make some fields and buttons conditionally disappear. For example, the 'Sold' and 'Cancel' buttons should disappear when the property is sold or canceled since it is no longer allowed to change the state at this point.

This chapter covers a very small subset of what can be done in the views. Do not hesitate to read the reference documentation for a more complete overview.

Reference: the documentation related to this chapter can be found in Views.

Inline Views

Note
Goal: at the end of this section, a specific list of properties should be added to the property type view.

In the real estate module we added a list of offers for a property. We simply added the field offer_ids with:

<field name="offer_ids"/>

The field uses the specific view for estate.property.offer. In some cases we want to define a specific list view which is only used in the context of a form view. For example, we would like to display the list of properties linked to a property type. However, we only want to display 3 fields for clarity: name, expected price and state.

To do this, we can define inline list views. An inline list view is defined directly inside a form view. For example:

from odoo import fields, models

class TestModel(models.Model):
    _name = "test.model"
    _description = "Test Model"

    description = fields.Char()
    line_ids = fields.One2many("test.model.line", "model_id")

class TestModelLine(models.Model):
    _name = "test.model.line"
    _description = "Test Model Line"

    model_id = fields.Many2one("test.model")
    field_1 = fields.Char()
    field_2 = fields.Char()
    field_3 = fields.Char()

<form>
    <field name="description"/>
    <field name="line_ids">
        <tree>
            <field name="field_1"/>
            <field name="field_2"/>
        </tree>
    </field>
</form>

In the form view of the test.model, we define a specific list view for test.model.line with fields field_1 and field_2. An example can be found here.

Exercise
Add an inline list view. Add the One2many field property_ids to the estate.property.type model. Add the field in the estate.property.type form view as depicted in the Goal of this section.

Widgets

Reference: the documentation related to this section can be found in Field Widgets.

Note
Goal: at the end of this section, the state of the property should be displayed using a specific widget. Four states are displayed: New, Offer Received, Offer Accepted and Sold.

Whenever we've added fields to our models, we've (almost) never had to worry about how these fields would look in the user interface. For example, a date picker is provided for a Date field and a One2many field is automatically displayed as a list. Odoo chooses the right 'widget' depending on the field type.

However, in some cases, we want a specific representation of a field, which can be done thanks to the widget attribute. We already used it for the tag_ids field when we used the widget="many2many_tags" attribute. If we hadn't used it, then the field would have displayed as a list.

Each field type has a set of widgets which can be used to fine tune its display. Some widgets also take extra options. An exhaustive list can be found in Field Widgets.

Exercise
Use the status bar widget. Use the statusbar widget in order to display the state of the estate.property as depicted in the Goal of this section.
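A minimal sketch of what this exercise is after (the state keys are assumed from the tutorial's real estate model, not spelled out in this chapter):

<!-- sketch: state selection keys assumed from the tutorial model -->
<header>
    <field name="state" widget="statusbar"
           statusbar_
</header>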
Tip: a simple example can be found here.

Warning
Same field multiple times in a view: add a field only once to a list or a form view. Adding it multiple times is not supported.

List Order

Reference: the documentation related to this section can be found in Models.

Note
Goal: at the end of this section, all lists should display by default in a deterministic order. Property types can be ordered manually.

During the previous exercises, we created several list views. However, at no point did we specify which order the records had to be listed in by default. This is a very important thing for many business cases. For example, in our real estate module we would want to display the highest offers on top of the list.

Model

Odoo provides several ways to set a default order. The most common way is to define the _order attribute directly in the model. This way, the retrieved records will follow a deterministic order which will be consistent in all views, including when records are searched programmatically. By default there is no order specified, therefore the records will be retrieved in a non-deterministic order depending on PostgreSQL.

The _order attribute takes a string containing a list of fields which will be used for sorting. It will be converted to an order_by clause in SQL. For example:

from odoo import fields, models

class TestModel(models.Model):
    _name = "test.model"
    _description = "Test Model"
    _order = "id desc"

    description = fields.Char()

Our records are ordered by descending id, meaning the highest comes first.

Exercise
Add model ordering. Define the appropriate orders in their corresponding models.

View

Ordering is possible at the model level. This has the advantage of a consistent order everywhere a list of records is retrieved. However, it is also possible to define a specific order directly in a view thanks to the default_order attribute (example).

Manual

Both model and view ordering allow flexibility when sorting records, but there is still one case we need to cover: the manual ordering. A user may want to sort records depending on the business logic. For example, in our real estate module we would like to sort the property types manually. It is indeed useful to have the most used types appear at the top of the list. If our real estate agency mainly sells houses, it is more convenient to have 'House' appear before 'Apartment'. To do so, a sequence field is used in combination with the handle widget. Obviously the sequence field must be the first field in the _order attribute.

Attributes and options

It would be prohibitive to detail all the available features which allow fine tuning of the look of a view. Therefore, we'll stick to the most common ones.

Form

Note
Goal: at the end of this section, the property form view will have: conditional display of buttons and fields, and tag colors.

In our real estate module, we want to modify the behavior of some fields. For example, we don't want to be able to create or edit a property type from the form view. Instead we expect the types to be handled in their appropriate menu. We also want to give tags a color. In order to add these behavior customizations, we can add the options attribute to several field widgets.

Exercise
Add widget options. Add the appropriate option to the property_type_id field to prevent the creation and the editing of a property type from the property form view. Have a look at the Many2one widget documentation for more info.
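Two short sketches for the points above (the option names follow the Many2one widget documentation, and the sequence field is an assumption consistent with the Manual section):

<!-- sketch: option names per the Many2one widget docs -->
<field name="property_type_id" options="{'no_create': True, 'no_edit': True}"/>

<!-- sketch: manual ordering via an integer sequence field and the handle widget -->
<tree>
    <field name="sequence" widget="handle"/>
    <field name="name"/>
</tree>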
Add the following field: color = fields.Integer() (on estate.property.tag; the color picker widget stores its value in an integer field). Then add the appropriate option to the tag_ids field to add a color picker on the tags. Have a look at the FieldMany2ManyTags widget documentation for more info.

In Chapter 6: Finally, Some UI To Play With, we saw that reserved fields were used for specific behaviors. For example, the active field is used to automatically filter out inactive records. We added the state as a reserved field as well. It's now time to use it! A state field is used in combination with a states attribute in the view to display buttons conditionally.

Exercise
Add conditional display of buttons. Use the states attribute to display the header buttons conditionally as depicted in this section's Goal (notice how the 'Sold' and 'Cancel' buttons change when the state is modified). Tip: do not hesitate to search for states= in the Odoo XML files for some examples.

More generally, it is possible to make a field invisible, readonly or required based on the value of other fields thanks to the attrs attribute. Note that invisible can also be applied to other elements of the view such as button or group. The attrs is a dictionary with the property as a key and a domain as a value. The domain gives the condition in which the property applies. For example:

<form>
    <field name="description" attrs="{'invisible': [('is_partner', '=', False)]}"/>
    <field name="is_partner" invisible="1"/>
</form>

This means that the description field is invisible when is_partner is False. It is important to note that a field used in an attrs must be present in the view. If it should not be displayed to the user, we can use the invisible attribute to hide it.

Exercise
Use attrs.
- Make the garden area and orientation invisible in the estate.property form view when there is no garden.
- Make the 'Accept' and 'Refuse' buttons invisible once the offer state is set.
- Do not allow adding an offer when the property state is 'Offer Accepted', 'Sold' or 'Canceled'. To do this use the readonly attrs.

Warning
Using a (conditional) readonly attribute in the view can be useful to prevent data entry errors, but keep in mind that it doesn't provide any level of security! There is no check done server-side, therefore it's always possible to write on the field through a RPC call.

List

Note
Goal: at the end of this section, the property and offer list views should have color decorations. Additionally, offers and tags will be editable directly in the list, and the availability date will be hidden by default.

When the model only has a few fields, it can be useful to edit records directly through the list view and not have to open the form view. In the real estate example, there is no need to open a form view to add an offer or create a new tag. This can be achieved thanks to the editable attribute.

Exercise
Make list views editable. Make the estate.property.offer and estate.property.tag list views editable.

On the other hand, when a model has a lot of fields, it can be tempting to add too many fields in the list view and make it unclear. An alternative method is to add the fields, but make them optionally hidden. This can be achieved thanks to the optional attribute.

Exercise
Make a field optional. Make the field date_availability on the estate.property list view optional and hidden by default.

Finally, color codes are useful to visually emphasize records. For example, in the real estate module we would like to display refused offers in red and accepted offers in green.
This can be achieved thanks to the decoration-{$name} attribute (see Field Widgets for a complete list):

<tree decoration-
    <field name="name"/>
    <field name="is_partner" invisible="1"/>
</tree>

The records where is_partner is True will be displayed in green.

Exercise
Add some decorations.

On the estate.property list view:
- Properties with an offer received are green
- Properties with an offer accepted are green and bold
- Properties sold are muted

On the estate.property.offer list view:
- Refused offers are red
- Accepted offers are green
- The state should not be visible anymore

Tips:
- Keep in mind that all fields used in attributes must be in the view!
- If you want to test the color of the "Offer Received" and "Offer Accepted" states, add the field in the form view and change it manually (we'll implement the business logic for this later).

Search

Reference: the documentation related to this section can be found in Search and Search defaults.

Note
Goal: at the end of this section, the available properties will be filtered by default, and searching on the living area returns results where the area is larger than the given number.

Last but not least, there are some tweaks we would like to apply when searching. First of all, we want to have our 'Available' filter applied by default when we access the properties. To make this happen, we need to use the search_default_{$name} action context, where {$name} is the filter name. This means that we can define which filter(s) will be activated by default at the action level. Here is an example of an action with its corresponding filter.

Exercise
Add a default filter. Make the 'Available' filter selected by default in the estate.property action.

Another useful improvement in our module would be the ability to search efficiently by living area. In practice, a user will want to search for properties of 'at least' the given area. It is unrealistic to expect users would want to find a property of an exact living area. It is always possible to make a custom search, but that's inconvenient.

Search view <field> elements can have a filter_domain that overrides the domain generated for searching on the given field. In the given domain, self represents the value entered by the user. In the example below, it is used to search on both name and description fields:

<search string="Test">
    <field name="description" string="Name and description"
           filter_domain="['|', ('name', 'ilike', self), ('description', 'ilike', self)]"/>
</search>

Exercise
Change the living area search. Add a filter_domain to the living area to include properties with an area equal to or greater than the given value.
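A sketch of that final exercise, following the filter_domain example above (the field name living_area comes from the exercise text; self is the value typed by the user):

<!-- sketch: '>=' comparison against the user's input -->
<search string="Properties">
    <field name="living_area" string="Living Area"
           filter_domain="[('living_area', '>=', self)]"/>
</search>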
https://www.odoo.com/documentation/master/developer/howtos/rdtraining/12_sprinkles.html
CC-MAIN-2022-27
refinedweb
2,239
65.62
After a disasterous last attempt at programming, I tried my hand at something a little easyer! Namely... Encryption! I'm quite pleased with my current handy-work... but there's something that I don't quite understand. The following program (and it's Decrypting counterpart) work fine on .txt and .bmp files... but for some odd reason it mangles .jpg, .exe, .doc and a bunch of other file formats. I'm still sort of new to all of this and I'm not really sure where the problem is...I'm still sort of new to all of this and I'm not really sure where the problem is...Code:#include <iostream> #include <fstream> #include <string> using namespace std; int main () { /* The following variables are (in order of apearance): Object for File Input || Object for File Output || Variable to hold the Input Filename || Variable to hold the Output Filename || Variable for the Current Character || Variable to put the Encrypted character in || Integer to put the characters ASCII value || Boolean that states wheter this is the first cycle or not || Integer holding the file size */ ifstream Fileop; ofstream DestFil; char Filenam[32]; char OutFile[32]; char c; char e; string ch; int asc; bool firstrun = true; long begin,end; cout << "Encryptor\n-----------\nCreated by:Rider Rockon"; // A little bit of interface cout << "\n\n\nFile to Encrypt:> "; cin >> Filenam; // Wait for input, make Filenam whatever was typed cout << "Destination File:> "; cin >> OutFile; // Wait for input, make OutFile whatever was typed cout << "\n Opening Input: " << Filenam << " and Output: " << OutFile << "\n"; Fileop.open (Filenam); DestFil.open(OutFile,ios::trunc); // Count File size & Return the cursor to the start of the file begin = Fileop.tellg(); Fileop.seekg (0, ios::end); end = Fileop.tellg(); Fileop.seekg (0, ios::beg); cout << "Encrypting " << (end-begin) << " bytes of Data...\n"; while (Fileop.good()) // As long as there's no Error or End of File { ch = ""; if ( firstrun == false ) { // This is where the Encrypting takes place, when done, put it in ch asc = asc + 1; ch = asc; // Write ch to the file DestFil << ch; } else { firstrun = false; } c = Fileop.get(); // Get the next character in the Input file, put it in c asc = c; // asc now contains the ASCII value of c }; cout << "\nDone...\n"; Fileop.close(); DestFil.close(); return 0; } I'd really appreciate any help regarding the subject
http://cboard.cprogramming.com/cplusplus-programming/86308-file-reading-writing-question.html
CC-MAIN-2016-07
refinedweb
391
67.49
Your answer is one click away! I am finding prices of products from Amazon using their API with Bottlenose and parsing the xml response with BeautifulSoup. I have a predefined list of products that the code iterates through. This is my code: import bottlenose as BN import lxml from bs4 import BeautifulSoup i = 0 amazon = BN.Amazon('myid','mysecretkey','myassoctag',Region='UK',MaxQPS=0.9) list = open('list.txt', 'r') print "Number", "New Price:","Used Price:" for line in list: i = i + 1 listclean = line.strip() response = amazon.ItemLookup(ItemId=listclean, ResponseGroup="Large") soup = BeautifulSoup(response, "xml") usedprice=soup.LowestUsedPrice.Amount.string newprice=soup.LowestNewPrice.Amount.string print i , newprice, usedprice This works fine and will run through my list of amazon products until it gets to a product which doesn't have any value for that set of tags, like no new/used price. At which Python will throw up this response: AttributeError: 'NoneType' object has no attribute 'Amount' Which makes sense as there is no tags/string found by BS that I searched for. Having no value is perfectly fine from what I'm trying to achieve, however the code collapses at this point and will not continue. I have tried: if soup.LowestNewPrice.Amount != None: newprice=soup.LowestNewPrice.Amount.string else: continue and also tried: newprice=0 if soup.LowestNewPrice.Amount != 0: newprice=soup.LowestNewPrice.Amount.string else: continue I am at a loss for how to continue after receiving the nonetype value return. Unsure whether the problem lies fundamentally in the language or in the libraries I'm using. The correct way of comparing with None is is None, not == None or is not None, not != None. Secondly, you also need to check soup.LowestNewPrice for None, not the Amount, i.e.: if soup.LowestNewPrice is not None: ... read soup.LowestNewPrice.Amount You can use exception handling: try: # operation which causes AttributeError except AttributeError: continue The code in the try block will be executed and if an AttributeError is raised, the execution will immediately drop into the except block (which will cause the next item in the loop to be ran). If no error is raised, the code will happily skip the except block. If you just wish to set the missing values to zero and print, you can do try: newprice=soup.LowestNewPrice.Amount.string except AttributeError: newprice=0 try: usedprice=soup.LowestUsedPrice.Amount.string except AttributeError: usedprice=0 print i , newprice, usedprice
http://www.devsplanet.com/question/35266643
CC-MAIN-2017-22
refinedweb
408
56.76
What's the best way of checking if an object property in JavaScript is undefined? Sorry, I initially said variable rather than object property. I believe the same == undefined approach doesn't work there.

In JavaScript there is null and there is undefined. They have different meanings. It's better to use the strict equality operator:

if (variable === undefined) {
    alert('not defined');
}

x == undefined also checks whether x is null, while strict equality does not (if that matters). (source)

Or you can simply do this:

if (!variable) {
    alert('not defined');
}

Here you check if there's any value that can make the variable look false (undefined, null, 0, false, ...). Not a good method for integers ('0' is not false), but it might do well for object properties.

That solution is incorrect. In JavaScript, null == undefined will return true because they both are "casted" to a boolean and are false. The correct way would be to check if (something === undefined), which is the identity operator...

Use:

if (typeof something === "undefined") {
    alert("something is undefined");
}

function isUnset(inp) {
    return (typeof inp === 'undefined')
}

Returns false if the variable is set, and true if it is undefined. Then use:

if (isUnset(myVar)) {
    // initialize variable here
}

if ( typeof( something ) == "undefined") worked for me while the others didn't.

I believe there are a number of incorrect answers to this topic. Contrary to common belief, "undefined" is NOT a keyword in JavaScript and can in fact have a value assigned to it.

// Degenerate code. DO NOT USE.
var undefined = false; // Shockingly, this is completely legal!
if (myVar === undefined) {
    alert("You have been misled. Run away!");
}

Additionally, myVar === undefined will raise an error in the situation where myVar is undeclared. The most robust way to perform this test is:

if (typeof myVar === "undefined")

This will always return the correct result, and even handles the situation where myVar is not declared.

The issue boils down to three cases: the variable is declared and initialized, declared but left undefined, or never declared at all. If you do:

if (myvar == undefined) {
    alert('var does not exist or is not initialized');
}

it will fail (with a ReferenceError) when myvar was never declared; note that a global var myvar is the same as window.myvar or window['myvar']. To avoid errors when testing whether a global variable exists, you'd better use:

if (window.myvar == undefined) {
    alert('var does not exist or is not initialized');
}

Whether the var really exists is not the question that matters; what matters is its value. Otherwise, it is silly to initialize variables with undefined; better to use the value false to initialize. When you know that all variables that you declare are initialized with false, you can simply check for that in later tests. It is always better to use the instance/object of the variable to check if it got a valid value. It is more stable, a better way of programming. Cheers! Kind regards, Erwin Haantjes

Object.hasOwnProperty(o, 'propertyname');

This doesn't look up through the prototype chain, however.

Ok. What does this mean: "undefined object property"? Actually it can mean two quite different things! First, it can mean a property that has never been defined in the object and, second, it can mean a property that has an undefined value. Comparing the two checks, we can clearly see that typeof obj.prop == 'undefined' and obj.prop === undefined are equivalent and they do not distinguish those different situations. And 'prop' in obj can detect the situation when a property hasn't been defined at all and doesn't pay attention to the property value, which may be undefined.
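A quick demonstration of that distinction (a sketch; the object literal is assumed):

// illustrative object, not from the answers above
var obj = { a: undefined };

console.log(obj.a === undefined); // true  - defined, value is undefined
console.log(obj.b === undefined); // true  - never defined at all

console.log('a' in obj);          // true  - the property exists
console.log('b' in obj);          // false - it does not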
1) You want to know if the property is undefined by either the first or second meaning (the most typical situation):

typeof obj.prop == 'undefined' // IMHO, see "final fight" below

2) You want to just know if the object has some property and don't care about its value:

'prop' in obj

Note that x.a === undefined or typeof x.a == 'undefined' raises "ReferenceError: x is not defined" if x itself is not defined.

undefined is a global variable (so actually it is window.undefined in browsers). It has been supported since ECMAScript 1st Edition, and since ECMAScript 5 it is read only. So in modern browsers it can't be redefined to true, as many authors love to frighten us with, but this is still true for older browsers.

The final fight: obj.prop === undefined vs typeof obj.prop == 'undefined'. The main minus of obj.prop === undefined is that undefined can be overridden in old browsers; the main minus of typeof obj.prop == 'undefined' is that 'undefned' here is just a string constant, so the JS engine can't help you if you have misspelled it like I just did.

Node.js supports the global variable undefined as global.undefined (it can also be used without the 'global' prefix). I don't know about other implementations of server-side JS.

In this article I read that frameworks like Underscore use this function:

function isUndefined(obj) {
    return obj === void 0;
}

Compare with void 0, for terseness:

if (foo !== void 0)

It's not as verbose as if (typeof foo !== 'undefined').

Most likely you want "if (window.x)". This check is safe even if x hasn't been declared (no var x; anywhere) - the browser doesn't throw an error.

if (window.history) {
    history.call_some_function();
}

window is an object which holds all global variables as its members, and it is legal to try to access a non-existing member. If x hasn't been declared or hasn't been set, then window.x returns undefined. undefined leads to false when if() evaluates it.

I hate to add yet another answer to an old question, but many existing ones are misleading at best. Never use typeof x === "undefined". (Or == "undefined" for that matter.) As with all "never"s, there are a few exceptional cases, but the majority of the time? If you don't know whether a real variable is defined in your current scope, you are doing something wrong. The typeof check is really useful if you want to introduce a ton of potential for error by making a typo. Of course, this potential already exists in the case of object properties, which appears to be the topic of this question. Let's just ignore the typeof check, then, because it'll do more harm than good, and it's a pain to read. You're intuitively checking a value, not a type.

var hasFoo = obj.foo !== undefined;

The "default value" of a property on an object is undefined. undefined can also be set as the value on a property. This is the check you will want some of the time.

var hasFoo = 'foo' in obj;

This will check for the existence of the foo property somewhere along obj's prototype chain, regardless of value (including undefined).

var hasFoo = obj.hasOwnProperty('foo');

This will check for the existence of the foo property at the end of obj's prototype chain, i.e. for properties directly on obj.

var hasFoo = Object.prototype.hasOwnProperty.call(obj, 'foo');

This is the same as above, but will use the canonical hasOwnProperty in case obj also has a property named hasOwnProperty for some reason.
In practice, if somebody overrode hasOwnProperty, they'd probably be a jerk in a bunch of other places and redefine undefined in scope, or alter Object or Object.prototype or Object.prototype.hasOwnProperty.call.

var hasFoo = obj.foo != undefined;

This one also checks for null. To make that clearer, I'd recommend using != null instead.

var hasFoo = Boolean(obj.foo); // or !!obj.foo

This checks for the other falsy values (I hope that's obvious) – 0, NaN, false, and the empty string. Certainly practically useful for checking for function support:

if (!Array.prototype.indexOf) {
    Array.prototype.indexOf = …;
}

To sum up: don't use typeof to check for undefined values. It is prone to error. If you make a typo in the "undefined" part, you will get the wrong answer. If you make a typo in the testing variable (if you are testing a variable – which you shouldn't be, ever; use the global object to do that kind of feature test), you will get the wrong answer.

If you are paranoid about undefined being redefined, here's why you shouldn't be: undefined is read-only in modern browsers. If you're developing in strict mode as you should be, attempting to assign to it will throw an error. (Even if you don't develop in strict mode, though, it won't change.) It's also a non-configurable property. You would only have to worry if you go "safe mode" by passing undefined into your IIFE. Never do that, for the reason outlined in this bullet point, and for the fact that… anybody who is redefining undefined is either an idiot or joking or something, and either wants to or deserves to have broken code. (In the "deserves to" case, note that their code is already quite broken.) Still paranoid? Compare against void 0.

"propertyName" in obj //-> true | false

Also, the same things can be written shorter:

if (!variable) {
    // do it if variable is Undefined
}

or

if (variable) {
    // do it if variable is Defined
}

Want to check whether it is undefined or its value is null? (Just in JavaScript:)

var s; // undefined
if (typeof s == "undefined" || s === null) {
    alert('either it is undefined or value is null')
}

If you are using the jQuery library, then jQuery.isEmptyObject() will suffice for both cases.

If you are using Angular:

angular.isUndefined(obj)
angular.isUndefined(obj.prop)

Underscore.js:

_.isUndefined(obj)
_.isUndefined(obj.prop)

If simple typeof is not working, try this one; it will help:

if (jQuery.type(variable) === "undefined") {
    // do something
}

Reading through this, I'm amazed I didn't see this. I have found multiple algorithms that would work for this. If you want it to result as true for values defined with the value of undefined, or never defined, you can simply use === undefined:

if (obj.prop === undefined) console.log("The value is defined as undefined, or never defined");

Commonly, people have asked me for an algorithm to figure out if a value is either falsy, undefined, or null. The following works:

if (obj.prop == false || obj.prop === null || obj.prop === undefined) {
    console.log("The value is falsy, null, or undefined");
}

Use: To check if a property is undefined:

if (typeof something === "undefined") {
    alert("undefined");
}

To check if a property is not undefined:

if (typeof something !== "undefined") {
    alert("not undefined");
}

From lodash.js:
var undefined;
function isUndefined(value) {
    return value === undefined;
}

It creates a local variable named undefined which is initialized with the default value -- the real undefined -- then compares value with that variable.

There is a nice way to assign a defined property to a new variable if it is defined, and to assign a default value to it if it is undefined. Do you want to retrieve the value if it's defined? I'd use a small helper; taken from the website, it's a "simple function for retrieving deep object properties without getting 'Cannot read property X of undefined'". Edited: Thanks Tunaki.

I'm surprised I haven't seen this suggestion yet, but it gets even more specificity than testing with typeof. Use Object.getOwnPropertyDescriptor() if you need to know whether an object property was initialized with undefined or if it was never initialized:

// to test someObject.someProperty
var descriptor = Object.getOwnPropertyDescriptor(someObject, 'someProperty');

if (typeof descriptor === 'undefined') {
    // was never initialized
} else if (typeof descriptor.value === 'undefined') {
    if (descriptor.get || descriptor.set) {
        // is an accessor property, defined via getter and setter
    } else {
        // is initialized with `undefined`
    }
} else {
    // is initialized with some other value
}

Use:

if ("undefined" === typeof variable) {
    console.log("variable is undefined");
}

If an object variable has some properties, you can use the same thing like this:

if ("undefined" === typeof my_obj.prop) {
    console.log('the property is not available...');
}

This is probably the only explicit form of determining whether an existing property-name has an explicit and intended value of undefined; which is, nonetheless, a JS type.

"propertyName" in containerObject && "" + containerObject["propertyName"] == "undefined";
// >> true | false

This expression will only return true if the property name of the given context exists (truly) and only if its intended value is explicitly undefined. There will be no false positives, like with empty or blank strings, zeros, nulls or empty arrays and alike. This does exactly that: it checks, i.e. makes sure, that the property name exists (otherwise it would be a false positive), then it explicitly checks if its value is undefined, i.e. of the undefined JS type in its string representation form (literally "undefined"); therefore == instead of ===, because no further conversion is possible. And this expression will only return true if both conditions, that is all conditions, are met. E.g. if the property-name doesn't exist, it will return false. Which is the only correct return, since nonexistent properties can't have values, not even an undefined one.

Example:

containerObject = { propertyName: void "anything" }
// >> Object { propertyName: undefined }

// now the testing
"propertyName" in containerObject && "" + containerObject["propertyName"] == "undefined";
// >> true

/* which makes sure that a nonexistent property will not return a false positive
 * unless it is previously defined */
"foo" in containerObject && "" + containerObject["foo"] == "undefined";
// >> false
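A small helper combining the two checks discussed throughout this thread (a sketch of my own, not taken from any answer above):

// sketch: not from the answers above
function describeProp(obj, prop) {
    if (!Object.prototype.hasOwnProperty.call(obj, prop)) return 'missing';
    return obj[prop] === undefined ? 'present but undefined' : 'has a value';
}

describeProp({ x: undefined }, 'x'); // 'present but undefined'
describeProp({ x: undefined }, 'y'); // 'missing'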
http://ebanshi.cc/questions/95/detecting-an-undefined-object-property-in-javascript
CC-MAIN-2017-22
refinedweb
2,187
56.96
Why bringing back the Woolly Mammoth is bad news for humans

It was reported last week that scientists are close to bringing back the Woolly Mammoth. Undoing the extinction of a beast that vanished 4000 years ago and engineering its return (in some form) is an incredible scientific feat, but it also raises broader questions about permanence, and in turn the survival of our planet.

Many of us already live in a Command-Z culture. This is the idea that almost anything can be undone: from buying a replacement toy or bit of clothing to backing up in the cloud to avoid data loss. Even an incorrectly swiped Tinder match can be amended, for a fee. Mass production and digitisation of content have made big changes in how those with access can deal with their mistakes.

"The only real mistake is the one from which we learn nothing."

While this enables a level of simplicity (as well as the many benefits of a safety net) that we have come to expect from the world on the internet, it omits an important process. Mistakes are a form of friction that help us evaluate other potential outcomes and perspectives. They often ask important questions about ourselves and the decisions we make. Loss, never lurking far from a mistake in some form or another, helps us to understand something's value. Command-Z culture changes the way we experience these things. As Henry Ford is famously quoted as saying, "The only real mistake is the one from which we learn nothing." While often tough in the moment, friction can be a rich soil for learning.

Mistakes still exist, of course, and the internet has become the perfect stage for them. Bad selfies and embarrassing posts can be a daily occurrence, but for a few this has meant learning the hard way: mention "ebay yellow dress" or "penis beaker" to a group of friends and it's likely they'll be able to tell you the story. Events like these live in the domain of the #fail: comedy fodder often lacking in any form of empathy.

In this Command-Z mindset, few things remain sacred and out of reach: one of them is life and one of them is the planet; a fact we are all learning quickly, but nowhere near quickly enough. This is a time when such serious matters cannot be taken lightly. Yet it is also a time when Scott Pruitt, a climate change sceptic, has recently been sworn in to lead the US Environmental Protection Agency, and the Paris climate agreement looks shakier than ever.

Looking a little deeper at the everyday aspects of Command-Z culture, the realities are often very different from the cleanliness of a simple keystroke: the toys we replace mean more plastics going into the ground or sea, and cloud computing has rapidly become one of the biggest carbon producers.

The mammoth sits as a totem of extinction, second perhaps only to the Dodo, whose demise casts the ignorance and greed of humans in leading roles. These are important symbols. We need them to remind us that the bad decisions we might be making now should not be treated as reversible. If we think anything can be undone, that belief can be crudely used to undermine the case for stopping it in the first place. Mistakes and the scars that come from them are how we learn, and until we have done this, we really need them.
https://medium.com/@StringandCup/why-bringing-back-the-woolly-mammoth-is-the-bad-news-for-humans-214e8a9e835e
CC-MAIN-2018-47
refinedweb
601
66.88
Tell us what you think of the site. So in the Character Settings pane you’ve got all sorts of goodies to help you with your solve. Are any of the Reach or Pull sliders exposed in the SDK or in Python? It doesn’t look like it ... :-( You can access the Reach properties per effector this way: from pyfbsdk import * lEffector = FBFindModelByName("LeftWristEffector") IKReachTProperty = lEffector.PropertyList.Find('IK Reach Translation') IKReachTProperty.Data = 50 I am changing the value to 50 in this Python example. The Pull property is missing from Python but it is available in the OR SDK FBCharacterManipulatorCtrlSet class. This has been logged as an issue. Thanks! I guess I’m just dumb? The above code doesn’t work in 2009. I keep getting the error AttributeError: ‘NoneType’ object has no attribute ‘PropertyList’ so apparently FBFindModelByName is finding anything named “LeftHandEffector”? Hey again, A couple of things: #1. Make sure you have a Character with a Control Rig in the scene. I used the tutorials/mia_characterized sample file and created the default Control Rig for her to test this. Make sure you can select LeftWristEffector #2. You can also find the Pull settings in Python - they live in the Property List of the Character: from pyfbsdk import * lEffector = FBFindModelByName("LeftWristEffector") IKReachTProperty = lEffector.PropertyList.Find('IK Reach Translation') IKReachTProperty.Data = 50 lCharacter = FBSystem().Scene.Characters[0] HeadPullProperty = lCharacter.PropertyList.Find('Head Pull') HeadPullProperty.Data = 50 So, sorry for the incomplete/incorrect answer yesterday and hopefully you can actually run this code with the sample file. I am also testing in 2009 for this. Please let me know if you can make it work? Thanks! Still can’t make it work! I’ve plotted to a control rig and selected the LeftWristEffector. All’s well there. But if I do this: for foo in lEffector.PropertyList: print foo.Name The resulting list doesn’t contain “IK Reach Translation”, which jives with the new error message I was getting: Traceback (most recent call last): File “X:/Apps/Tools/Blade/User/SP/Scripts/Python-Dev/plot_test.py”, line 11, in <module> IKReachTProperty.Data = 50 AttributeError: ‘NoneType’ object has no attribute ‘Data’ raitch, How about IK Effect Pinning? Thanks,
http://www.the-area.com/forum/autodesk-motionbuilder/python/reach--pull-exposed-in-orsdk-or-python/page-last/
crawl-003
refinedweb
367
52.66
I need help in reading a text file and, by using a switch statement, the user can either press 1 to find the average of all the integers in the text file, or press 2 to find only the average of the real numbers in said file. This is my code thus far, and my problem/question is how do you distinguish +- numbers from +- numbers that have a decimal, i.e. real numbers from integers?

#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    int n;
    double average(0), sum(0), count(0);
    ifstream data;
    data.open("data.txt");
    if (data.fail())
    {
        cout << "Error Opening File" << endl;
        return 2;
    }
    while (!data.eof())
    {
        data >> n;
        sum = sum + n;
        count++;
    }
    int rank;
    cin >> rank;
    switch (rank) {
    case 1:
        average = (sum) / (count);
        cout << "The Average Is: " << average << endl;
        break;
    }
    data.close();
    cout << endl;
    system("PAUSE");
    return 0;
}
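One possible way to tell the two kinds of numbers apart, as a sketch (not a full solution to the assignment): read each token as a string and treat it as a real number if it contains a decimal point.

// sketch only: averages the real numbers; invert the test for integers
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream data("data.txt");
    std::string token;
    double sum = 0, count = 0;

    while (data >> token)               // reading tokens also avoids the eof() pitfall
    {
        bool isReal = token.find('.') != std::string::npos;
        if (isReal)
        {
            sum += std::atof(token.c_str());
            ++count;
        }
    }
    if (count > 0)
        std::cout << "Average: " << sum / count << std::endl;
    return 0;
}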
https://www.daniweb.com/programming/software-development/threads/318518/c-real-and-integer-numbers-in-a-text-file
CC-MAIN-2017-34
refinedweb
144
58.92
As you can see, the default pattern for dates is "MMM d, yyyy". The getCurrencyFormat factory method returns a DecimalFormat object with a predefined pattern for the occasion (see Figure 3).

It's bad enough that programming languages use English keywords, but popular computing environments have long supported only those character sets that accommodate English. One of the first efforts to solve this problem resulted in the concept of locales. Languages are represented by two-character codes in lower case, and countries by codes in upper case [2]. Historically, computing has been woefully provincial in favor of the United States; Java, on the other hand, comes with support for 145 locales out of the box (i.e., with the Sun JDK). For most locales, NumberFormat.getInstance returns a DecimalFormat object, but in FormatIntegers.java I'm just using the format method, which is declared in NumberFormat and overridden in DecimalFormat. Some people praise this design for its flexibility, but it can drive C hackers nuts while learning Java.

Since I requested a minimum of three integer digits, I get a leading zero when printing the number 10, and since I make the upper limit four digits, I lose the leading digits of longer numbers. For example, in the English locale, with grouping on, the number 1234567 might be formatted as "1,234,567". I say that the program above is a "rough equivalent" of the C version because the output isn't right-justified in a field of eight characters, as you would expect.

Objects for Formatting. As you can see above, formatting output in Java is basically a two-step process: 1) build a string with a format object, and 2) send the string to the desired output stream.

First you create a FieldPosition object, giving its constructor the flag indicating the type of quantity you want it to track. (Other types include FRACTION and various date components.) On output, getEndIndex will be set to the offset between the last character of the integer and the decimal.

Parsing. All the examples so far have dealt only with formatted output, but all Format classes also support a parse method for reading input according to the same conventions.

A few notes from the NumberFormat/DecimalFormat documentation: maximumFractionDigits must be >= minimumFractionDigits, and if the new value for maximumFractionDigits is less than the current value of minimumFractionDigits, then minimumFractionDigits will also be set to the new value. The concrete subclass may enforce an upper limit to this value appropriate to the numeric type being formatted. The number format returned by getIntegerInstance is configured to round floating point numbers to the nearest integer using half-even rounding (see java.math.RoundingMode.HALF_EVEN).

DecimalFormat cannot be resolved to a type:

import java.util.*;

public class cof2 {
    public static void main(String[] args) {
        double latte, mocca, capu;
        Scanner scannerlatte, scannermocca, scannercapu;
        ...

I have declared the decimal format with the other integers but it is still coming up with errors. Are you sure the data types of the numbers are all correct? If this works, then you can add the import: import java.text.DecimalFormat.
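For the record, the usual fix for the "DecimalFormat cannot be resolved to a type" error is simply the missing import. A minimal sketch (the class name and pattern here are illustrative):

// sketch: illustrative class and pattern
import java.text.DecimalFormat;

public class Demo {
    public static void main(String[] args) {
        DecimalFormat money = new DecimalFormat("0.00");
        System.out.println(money.format(3.14159)); // prints 3.14
    }
}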
http://systemajo.com/how-to/decimal-format-cannot-be-resolved-to-a-type.php
CC-MAIN-2018-34
refinedweb
1,042
52.8
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.

Hello,

When deleting an empty basic block, we redirect all incoming non-fallthrough edges to its successor. However, we must also take care of the incoming fallthrough edge if the preceding basic block ends with a conditional jump to the next instruction (otherwise, by deleting the empty basic block we can remove the label that the jump refers to). Fixed in maybe_tidy_empty_bb by trying to remove the jump or redirecting it to the successor of the empty block.

The patch also removes recomputing of topological order in that function. We can not invalidate it by moving an edge destination along an existing edge. I have tried to add the corresponding assert, but it would be quite verbose, since the fallthrough edge may be a back edge, in which case it will look like we are invalidating toporder, even though we are legally creating a new back edge in the region, and will delete the old back edge soon (we can create multiple back edges that way, but we'd only care when pipelining, and we wouldn't have a fallthrough back edge then).

Bootstrapped and regtested on ia64 with sel-sched enabled at -O2, also tested on x86-64 with sel-sched enabled for bootstrap. OK?

2010-11-10  Alexander Monakov  <amonakov@ispras.ru>

	PR rtl-optimization/46204
	* sel-sched-ir.c (maybe_tidy_empty_bb): Remove second argument.
	Update all callers.  Do not recompute topological order.  Adjust
	fallthrough edges following a degenerate conditional jump.

diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 141c935..e169276 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -3562,7 +3562,7 @@ sel_recompute_toporder (void)
 
 /* Tidy the possibly empty block BB.  */
 static bool
-maybe_tidy_empty_bb (basic_block bb, bool recompute_toporder_p)
+maybe_tidy_empty_bb (basic_block bb)
 {
   basic_block succ_bb, pred_bb;
   edge e;
@@ -3612,10 +3612,29 @@ maybe_tidy_empty_bb (basic_block bb, bool recompute_toporder_p)
 
 	  if (!(e->flags & EDGE_FALLTHRU))
 	    {
-	      recompute_toporder_p |= sel_redirect_edge_and_branch (e, succ_bb);
+	      /* We can not invalidate computed topological order by moving
+	         the edge destination block (E->SUCC) along a fallthru edge.  */
+	      sel_redirect_edge_and_branch (e, succ_bb);
 	      rescan_p = true;
 	      break;
 	    }
+	  /* If the edge is fallthru, but PRED_BB ends in a conditional jump
+	     to BB (so there is no non-fallthru edge from PRED_BB to BB), we
+	     still have to adjust it.  */
+	  else if (single_succ_p (pred_bb) && any_condjump_p (BB_END (pred_bb)))
+	    {
+	      /* If possible, try to remove the unneeded conditional jump.  */
+	      if (INSN_SCHED_TIMES (BB_END (pred_bb)) == 0
+		  && !IN_CURRENT_FENCE_P (BB_END (pred_bb)))
+		{
+		  if (!sel_remove_insn (BB_END (pred_bb), false, false))
+		    tidy_fallthru_edge (e);
+		}
+	      else
+		sel_redirect_edge_and_branch (e, succ_bb);
+	      rescan_p = true;
+	      break;
+	    }
 	}
     }
 
@@ -3631,9 +3650,6 @@ maybe_tidy_empty_bb (basic_block bb, bool recompute_toporder_p)
       remove_empty_bb (bb, true);
     }
 
-  if (recompute_toporder_p)
-    sel_recompute_toporder ();
-
 #ifdef ENABLE_CHECKING
   verify_backedges ();
 #endif
@@ -3651,7 +3667,7 @@ tidy_control_flow (basic_block xbb, bool full_tidying)
   insn_t first, last;
 
   /* First check whether XBB is empty.  */
-  changed = maybe_tidy_empty_bb (xbb, false);
+  changed = maybe_tidy_empty_bb (xbb);
   if (changed || !full_tidying)
     return changed;
 
@@ -3715,8 +3731,8 @@ tidy_control_flow (basic_block xbb, bool full_tidying)
      that contained that jump, becomes empty too.  In such case remove it
      too.  */
   if (sel_bb_empty_p (xbb->prev_bb))
-    changed = maybe_tidy_empty_bb (xbb->prev_bb, recompute_toporder_p);
-  else if (recompute_toporder_p)
+    changed = maybe_tidy_empty_bb (xbb->prev_bb);
+  if (recompute_toporder_p)
     sel_recompute_toporder ();
   return changed;
@@ -3733,7 +3749,7 @@ purge_empty_blocks (void)
     {
       basic_block b = BASIC_BLOCK (BB_TO_BLOCK (i));
 
-      if (maybe_tidy_empty_bb (b, false))
+      if (maybe_tidy_empty_bb (b))
	continue;

      i++;
https://gcc.gnu.org/legacy-ml/gcc-patches/2010-11/msg01138.html
CC-MAIN-2020-34
refinedweb
541
55.13
The code I posted did (with respect to needing the rest, of course).

Goal: my goal with this project is an auto shut-off blinker for a motorcycle. I will read the angle of the bike at all times.

Perhaps, if you know the bike is moving fast enough to make turn detection difficult, you could simply cancel the indicators after a set time, on the basis that at high speed you rarely need to signal for longer than a few seconds. (A sketch of that timeout approach follows at the end of this thread.)

#include <serLCD.h>
#include <SoftwareSerial.h>
#include <LiquidCrystal.h>
#include <math.h>

int oldval = 0;            // second value for the button
int state1 = 0;            // state the button is in
int val = 0;               // first value for the button
const int button = 7;      // pin for the push button (for testing)

// these constants won't change:
const int xPin = 2;        // X output of the accelerometer
const int yPin = 3;        // Y output of the accelerometer

// Button Left and Right pins (for use later)
int right = 4;
int left = 5;

// Set pin to the LCD's rxPin
int pin = 1;
serLCD lcd(pin);

int Xraw, Yraw;
double xGForce, yGForce, Xangle, Yangle;
double saveY, saveX, state;

void setup() {
  // initialize serial communications:
  Serial.begin(9600);
  // initialize the pins connected to the accelerometer as inputs:
  pinMode(xPin, INPUT);
  pinMode(yPin, INPUT);
  Serial.write(0xFE);      // command flag
  Serial.write(0x01);      // clear command
  delay(10);
  Serial.print("Lets Begin");
  delay(1000);
}

void loop() {
  // lcd.clear();
  Serial.write(0xFE);      // command flag
  Serial.write(0x01);      // clear command
  delay(10);

  // variables to read the pulse widths:
  int pulseX, pulseY;
  // variables to contain the resulting accelerations
  int accelerationX, accelerationY;

  // read pulse from x- and y-axes:
  pulseX = pulseIn(xPin, HIGH);
  pulseY = pulseIn(yPin, HIGH);
  Xraw = pulseIn(xPin, HIGH);
  Xraw = pulseIn(xPin, HIGH);
  Yraw = pulseIn(yPin, HIGH);
  Yraw = pulseIn(yPin, HIGH);

  // Calculate G-force in milli-g's.
  xGForce = ((Xraw / 10) - 500) * 8;
  yGForce = ((Yraw / 10) - 500) * 8;

  // Calculate angle (radians) for both -x and -y axis.
  Xangle = asin(xGForce / 1000.0);
  Yangle = asin(yGForce / 1000.0);

  // Convert radians to degrees.
  Xangle = Xangle * (360 / (2 * M_PI));
  Yangle = Yangle * (360 / (2 * M_PI));

  // Save the angle when the button is pushed, then compare to the live
  // reading; send a zero once the live reading is below the saved angle.
  Serial.print("Y Angle: ");
  Serial.print(Yangle);
  val = digitalRead(button);   // read input
  if ((val == HIGH) && (oldval == LOW)) {
    test(saveY);
    Serial.print(saveY);
  }
  else if (Yangle <= -10) {
    Serial.write(0xFE);        // command flag
    Serial.write(192);         // position
    Serial.print(saveY);
    delay(150);
  }
  else {
    Serial.write(0xFE);        // command flag
    Serial.write(192);         // position
    Serial.print(saveY);
    delay(150);
  }
}

int test(int saveY) {
  Serial.write(0xFE);          // command flag
  Serial.write(192);           // position
  saveY = Yangle;
  Serial.print(saveY);
  delay(150);
  return saveY;
}

Is this better?

double saveY, saveX, state;

test(saveY);

int test(int saveY) {
  Serial.write(0xFE);          // command flag
  Serial.write(192);           // position
  saveY = Yangle;

So I decided to use the accelerometer. They say it's accurate up to 45 degrees and it's easy to code. The project states I have to use angle. Yes Peter, the project states: "Student will research, design, and implement a circuit that will sense the pitch and yaw of a motorcycle and display the readings. The circuit will also automatically turn off the turn signals after a turn is completed. Student will create a PCB layout and will be soldered into a hard case.
Then student will develop a mini presentation"
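To make the time-based cancel suggested above concrete, here is a minimal sketch of the idea. The pin number and the 15-second window are assumptions for illustration, not values from the thread:

// Hypothetical illustration of time-based blinker cancel. The pin number
// and the 15-second window are assumed values, not taken from the thread.
const int blinkerPin = 6;                        // assumed output driving the indicator
unsigned long blinkerOnSince = 0;                // millis() timestamp when switched on
bool blinkerActive = false;
const unsigned long CANCEL_AFTER_MS = 15000UL;   // cancel after 15 s at speed

void startBlinker() {
  digitalWrite(blinkerPin, HIGH);
  blinkerOnSince = millis();
  blinkerActive = true;
}

void updateBlinker() {
  // call this every pass through loop(); cancels once the window elapses
  if (blinkerActive && (millis() - blinkerOnSince >= CANCEL_AFTER_MS)) {
    digitalWrite(blinkerPin, LOW);
    blinkerActive = false;
  }
}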
http://forum.arduino.cc/index.php?topic=122424.msg923050
CC-MAIN-2015-06
refinedweb
590
50.12
Pre-requisite / Introduction

Contents:
- The SALT: traditional 3-tier
- Issues with the traditional 3-tier architecture
- No uniform mechanism
- Need to write the business logic mapping code
- Aggregation and composition increases complexity
- Querying and sorting of business objects
- Inconsistent DAL component
- CRUD (Create, Update and Delete) multiplies complications
- Consolidating issues using a Fishbone
- 30,000 feet solution view to solve the non-uniform mechanism
- Going back to basics: what is data?
- Entity – the common protocol across layers
- Common query mechanism
- LINQ the Pepper – generalized query and entity transformation mechanism
- Generalized query
- Three-tier approach using LINQ
- Depends how you view LINQ
- Quantifying how much effort we can save using LINQ
- Source code

There is no pre-requisite for this article (oh yes, even if you do not know LINQ, this article will guide you); what I need from you guys is time to read it. So block your 10 minutes and rest assured you will understand LINQ in a much better way.

For the past few days I have been writing and recording videos on design patterns, UML, FPA, Enterprise blocks and more; you can watch the videos at [link]. You can download my 400 .NET FAQ EBook from [link].

In this article we will understand the core reasons why we should use LINQ. Three-tier / N-tier is now a standard in almost all projects. New architectures which are coming up, like MVC and MVP, have the fundamental base and thinking of the 3-tier methodology. So we have termed the 3-tiers as SALT; in other words, we cannot stay without it. LINQ (the Pepper), on the other hand, is a new technology which helps us create and execute queries against disparate data sources like ADO.NET, custom objects, XML, etc. So it's not a needed thing (Pepper), but it does help to remove a lot of issues related to the traditional 3-tier. So let's make a nice three-tier LINQ dish using SALT and PEPPER. We will first understand three-tier and the issues with it, and then see how LINQ helps us to improve the same.

Let's first go through the traditional 3-tier architecture. We understand that for many people this is just a revision, but the main point we want to stress is the issues with three-tier and how LINQ helps us address those issues. So let's start from the data access layer and move towards the UI layers. The below code is pretty self-explanatory: open the connection, process the data adapter and send the dataset back to the business logic layer. Ok, I can easily smell many experienced professionals saying that the below code can be optimized using an enterprise data access layer. Oh yes, we can, but for simplicity's sake let's take the below code for now; I really do not want people who have not used the Enterprise data access layer to be left behind. The below code is a simple data access layer which queries a country master to retrieve country id and country code.

namespace NameSpaceCountryDAL
{
    public class clsCountryDAL
    {
        public DataSet getCountriesByDataset()
        {
            SqlConnection objConnection = new SqlConnection(strConnectionString);
            DataSet objDataset = new DataSet();
            objConnection.Open();
            SqlDataAdapter objAdapter = new SqlDataAdapter("Select * from tbl_country", objConnection);
            objAdapter.Fill(objDataset);
            objConnection.Close();
            return objDataset;
        }
    }
}

The second layer is the business logic layer. Below is the business logic layer which is used to load country data. We have inherited the class from a collection class, and in the constructor we have loaded the country objects from the dataset received from the DAL layer.
public class CountriesNormalBO : CollectionBase
{
    public CountriesNormalBO()
    {
        clsCountryDAL objCountryDal = new clsCountryDAL();
        DataSet obj = objCountryDal.getCountriesByDataset();
        foreach (DataRow objRow in obj.Tables[0].Rows)
        {
            CountryEntity objCountry = new CountryEntity();
            objCountry.CountryId = Convert.ToInt16(objRow[0]);
            objCountry.CountryCode = Convert.ToString(objRow[1]);
            List.Add(objCountry);
        }
    }
}

Finally, the UI code, which uses the business object collection and loads the values in the UI:

CountriesNormalBO objCountryBo = new CountriesNormalBO();
lstCountry.Items.Clear();
foreach (CountryEntity obj in objCountryBo)
{
    lstCountry.Items.Add(obj.CountryCode);
}

This is how the code snippets map to the 3-tier architecture. Now that we have understood what a 3-tier architecture is, let's try to get at the issues related to it. There are six big issues:

- No uniform mechanism.
- Need to write business logic mapping from DAL data sources.
- For complex business object model transformation, there is more complexity attached.
- Querying and sorting of business objects needs to be done using custom code.
- The DAL component can be custom and inconsistent.
- When we need to do CRUD operations using business objects, it increases complications.

So what we will do is go through all of these issues, and then see how LINQ can help us solve the problems.

One of the biggest problems in 3-tier architecture is the way data travels and transforms across tiers inconsistently. Inconsistency of data representation and transformation is one of the main problems from which the other issues stem:

- When the data is in the DAL component, it's represented by ADO.NET objects. So the data has an ADO.NET context attached to it; in other words, it's in DATASET, DATAREADER or some other ADO.NET object form.
- From the DAL these objects are sent to the BO. In the BO, data is now represented by a business object collection and entity. So now the data representation is in the form of strongly typed objects.
- In the UI the same strongly typed business objects are referred to and used, so in this section there is no transformation as such.

Due to data inconsistency, we need to write custom transformation and mapping logic to convert the DAL objects like DATASET, DATAREADER, etc. into strongly typed business objects. The code above shows how the dataset object is looped over and a strongly typed list object is created in the business layer. If we look more closely, we need to perform five steps to get the DAL data format into the business layer format:

- First we create an object of the DAL.
- Get the data into a dataset.
- Loop through the data rows of the data table.
- Create objects using the row object.
- Finally, we add the strongly typed object to the list collection.
You need to write more code and logic for transforming the de-normalized data into aggregation and composition relationships in the business object model.

There are situations when you want to search within the strongly typed business objects. For instance, we may search for a particular 'country' object using the 'ID' value: we loop through the country object collection, and when we find a matching 'ID' we return the object (a sketch of this loop appears at the end of this section). This can become complicated when we have a large number of objects in the collection, and it can be worse for strongly typed aggregated and composed object collections.

The DAL layer is one of the integral parts of 3-tier architecture. Normally all Microsoft projects use enterprise data application blocks. But then there are projects that use their own DAL components, and some special projects use third-party DAL components which can connect to heterogeneous databases like SQL Server, Oracle, etc. Even within the same project, someone can use a dataset and someone else a datareader to transfer data between tiers; in other words, a non-uniform DAL implementation.

A word about the Enterprise data application block from the consistent-DAL perspective. Even though an Enterprise data block provides a very consistent mechanism of DAL implementation, it has one minor issue: it's a part of a different framework, i.e. the Enterprise application block, and not of the core framework. LINQ forms part of the framework, so we do not need to install a separate framework for DAL implementation.

70% of projects in the IT industry are CRUD projects. In 3-tier architecture, CRUD operations need to transfer data to and fro between layers. Due to inconsistent data representation between tiers, the matter gets more complicated for CRUD operations.

A fishbone diagram helps us identify the potential factor which causes the various other issues. You can read about fishbone diagrams at [link]. So what we have done is list down all the issues, and we are trying to figure out the main cause. You can see that the non-uniform data representation is the main cause of all issues; if we are able to crack that issue, all other issues can be eliminated. So let's see what we need to solve the above problem. We have two major issues: one is the inconsistent data representation, and the other is that we need a generalized query mechanism by which we can transform and filter business data. So to solve the problem, we need to figure out a consistent data representation across all tiers, and a general query mechanism by which we can filter and sort business objects.

So let's first try to solve the first problem, by defining what DATA is. If we really look at a granular, microscopic level, data is just a field with a value. Yes, you can also say data has relationships, but for now we will keep it simple. So DATA is nothing but a field and a value. If you visualize this field/value concept, data can be understood by any tier. So if we can pass data in a field-and-value representation across tiers, we can remove the inconsistent representation of data. This field and value, when represented by a class, is termed an Entity. In the next section we will discuss how we can use the entity to establish a common representation mechanism across tiers.

Entity is nothing but a class with 'set' and 'get' properties. For instance, below is a country entity class which has a country code and a country id. You can see how the table data structure is represented using the entity class.
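Two C# sketches to make the above concrete (both are illustrations built from the article's names, not code from the original):

// A plain entity: just get/set properties mirroring the table columns.
public class CountryEntity
{
    public int CountryId { get; set; }
    public string CountryCode { get; set; }
}

// The manual search described above: loop until the ID matches.
public CountryEntity FindCountryById(CountriesNormalBO countries, int id)
{
    foreach (CountryEntity country in countries)
    {
        if (country.CountryId == id)
            return country;   // return the first matching object
    }
    return null;              // no match found
}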
This Entity, a.k.a. field/value, can be used as a common representation platform across any tier. So let's shout with one voice: "WE ALL UNDERSTAND ONE REPRESENTATION: ENTITY".

The other issue we discussed was about querying and filtering the business objects. Due to inconsistent data, we need to implement custom logic for transforming the data model into the object model, due to which the code complication increases tremendously. To handle this problem, we also need a technology which can implement a generalized query mechanism on any heterogeneous data sources like ADO.NET, custom business objects, XML, etc.

To summarize, all the above mentioned issues for three tiers can be resolved by implementing a common data representation and a generalized query mechanism. So do we need to make some framework? NOPE: the answer is LINQ.

So we have discussed the three-tier and its issues. We have also looked, from a 30,000 feet angle, at the solution, i.e. a common representation using entities and a generalized query mechanism. LINQ helps us achieve both of them. Let's first try to understand the basics regarding LINQ. What we will do is create a simple 'Customer' table with 'CustomerCode' and 'CustomerName' fields, and try to load the same using LINQ.

The first step is defining the Entity, the common representation in LINQ. The 'ClsCustomer' entity class is attributed using the 'Table' attribute; this table attribute is mapped to the physical table name present in the database. The SET and GET properties are mapped using the 'Column' attribute (a sketch of this class appears at the end of this section).

Now that we have built our Entity class using the 'Table' and 'Column' attributes, it's time to load the entity objects. 'DataContext' helps us load the entity objects in LINQ. It acts as a bridge, in other words a DAL layer, between the SQL Server database and the entity objects. So first we need to create the 'DataContext' object using the connection string:

DataContext db = new DataContext(@"Data Source=.\SQLEXPRESS;AttachDbFilename=D:\SimpleLINQExample\WindowFormLINQ\Country.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True");

Now we can tag the customer entity class, i.e. 'ClsCustomer', with 'Table' using generics, and load data using the 'GetTable' function of the datacontext object:

Table<clsCustomer> Customers = db.GetTable<clsCustomer>();

We prepare a query on a customer code basis and fire it on the 'Customers' object which we created using 'Table<clsCustomer>'. LINQ queries look unusual, but they are very powerful for solving complex business object filtering and transformation:

var query = from c in Customers
            where c.CustomerCode == txtCustomerCode.Text
            select new { c.CustomerCode, c.CustomerName };

Finally, we use the enumerator to loop through the objects and display the data in the UI:

var enumerator = query.GetEnumerator();
lstCountry.Items.Clear();
if (enumerator.MoveNext())
{
    var customer = enumerator.Current;
    lstCountry.Items.Add(customer.CustomerName);
}

One of the powerful things about a LINQ query is that it's generalized: we can fire the same query on an ADO.NET data source and on a custom strongly typed business collection. You can download the code attached with this article, which shows how generalized queries are implemented using LINQ. The sample has two buttons: one loads from the database and the other from a customer object collection, and the query is the same. Ok, now that we have understood the basics of LINQ.
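A minimal sketch of the ClsCustomer entity class described above, assuming a Customer table with CustomerCode and CustomerName columns (the attribute parameters are illustrative):

using System.Data.Linq.Mapping;

// Illustrative sketch: maps the entity to the physical Customer table.
[Table(Name = "Customer")]
public class clsCustomer
{
    [Column(Name = "CustomerCode")]
    public string CustomerCode { get; set; }

    [Column(Name = "CustomerName")]
    public string CustomerName { get; set; }
}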
Let’s move ahead and implement LINQ in 3-tier architecture. As said previously to solve the inconsistent data representation across tiers we need to represent the data format in entity. Even though we use the entity class type to represent our data format the entity has always a context attached. So when the entity is in the DAL component it has the ‘Table’ context and when the entity is sent to the business layer it has the ‘IEnumerable’ interface context and in the UI it’s a plain entity class. Below is the code snippet for data layer. As this is the DAL we are creating the ‘DataContext’ object and attaching the entity with the ‘Table’ context. public Table<CountryEntity> getCountries() { DataContext db = new DataContext(strConnectionString); Table<CountryEntity> Customers = db.GetTable<CountryEntity>(); return Customers; } Once it comes to the business layer we are firing the LINQ query and attaching the ‘IEnumerable’ context. One of the important point to note is that we do not have any transformation code , all the magic is in the LINQ query. public IEnumerable<CountryEntity> getCountries() { clsCountryDAL objCountryDal = new clsCountryDAL(); return from c in objCountryDal.getCountries() select c; } The UI happily loops through the business collection and displays the same in the UI. CountryBO objCountryBO = new CountryBO(); lstCountry.Items.Clear(); foreach (CountryEntity obj in objCountryBO.getCountries()) { lstCountry.Items.Add(obj.CountryCode); } You can download the 3-tier code which is attached with thid article to understand the concepts more better. So depending from which layer you are looking at LINQ you can visualize how LINQ can be useful. In the DAL component it can act as a readymade data access , when data is moving from DAL to BO it can provide functionalities for transformation to object model and finally in the BO itself it helps us to query and filter data. Quantifying how much effort we can save using LINQ Ok, let’s quantify how much effort we can save using LINQ and also how much standardization we can achieve in project. In DAL side we can see 7 lines of code is only 3 lines. The most important thing it eliminates the non-standardized DAL components. The best productivity using LINQ is in business object layer. You can see how 7 lines of code are minimized to 2 lines of code. We have attached the simple LINQ and 3-tier LINQ implementation source code with this article. Submit Article
http://www.dotnetfunda.com/articles/article207.aspx
crawl-002
refinedweb
2,713
55.74
Inheritance

colin shuker (Ranch Hand), posted May 12, 2007 13:24

Hi, I'm doing a test, and I need to write a set of classes that represent a Circle, Triangle and Rectangle, and then allow the user to find the area of any of them. I've added a 'name' field too. My plan is to just use an abstract class Shape, and let the others extend it. Please can you tell me if the code below is well structured, and if it could be improved.

Here is the ShapeInterface:

public interface ShapeInterface {
    public abstract String getName();
    public abstract double getArea();
}

Here is the Shape class:

public abstract class Shape implements ShapeInterface {
    public String name;

    public Shape(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public abstract double getArea();
}

Here are the 3 subclasses, Circle, Triangle and Rectangle:

public class Circle extends Shape {
    double radius;

    public Circle(double radius) {
        super("Circle");
        this.radius = radius;
    }

    @Override
    public double getArea() {
        return Math.PI * radius * radius;
    }
}

public class Triangle extends Shape {
    double base;
    double height;

    public Triangle(double base, double height) {
        super("Triangle");
        this.base = base;
        this.height = height;
    }

    @Override
    public double getArea() {
        return 0.5 * base * height;
    }
}

public class Rectangle extends Shape {
    double base;
    double height;

    public Rectangle(double base, double height) {
        super("Rectangle");
        this.base = base;
        this.height = height;
    }

    @Override
    public double getArea() {
        return base * height;
    }
}

Thanks

Merrill Higginson (Ranch Hand), posted May 12, 2007 13:48

Looks good. The only change I might suggest would be to eliminate the abstract Shape class and just have Triangle, Circle, and Rectangle implement the Shape interface directly. The advantage of this is that if you start coding for these objects in a complex application, you may find that it would make more sense to make these objects part of some other class hierarchy. Since Java does not allow multiple inheritance, don't "waste" your one shot at having a class inherit from another class. By implementing the interface only, your objects are free to inherit from another class that might be more appropriate and provide more benefits for code reuse. The rule of thumb provided by Joshua Bloch in his book Effective Java is "Prefer interfaces to abstract classes".

colin shuker, posted May 12, 2007 14:10

I see your point, but what about the 'name' variable? I would have to declare this field in the 3 subclasses, wouldn't I? My code makes use of code reuse. What do you think?

Merrill Higginson, posted May 12, 2007 19:31

Yes, having an abstract class does save you from having to code a name variable and getter/setter in each class. That's a point in favor of an abstract class. The trade-off, though, is that the class becomes less flexible. You stated at the beginning of your first post that the assignment was to have each class calculate the area. The interface accomplishes that beautifully without an abstract class. The name attribute was something you added as an afterthought, and is peripheral to the real point of the class. I'd say it's not worth making the class less flexible. This is obviously a simple application, so it doesn't really matter, but in the real world, it's a good idea to keep classes free of dependencies on a static hierarchy whenever possible.
Here's an example of what I'm talking about: Suppose now you want to draw each of these shapes on a screen. You now have a whole lot of other things to take into consideration, such as coordinates, background color, translucence, etc. Since all you used was an interface to calculate the area, you're free to have these classes implement as many other interfaces as you want (e.g. drawable, expandable, movable, etc.). Now suppose you have a whole lot of default behavior you want each of these shapes to inherit that has nothing to do with area, and it would be really convenient just to have the shape classes inherit from a class that embodies all that default behavior. If the shape classes already inherit from an abstract class, you're just out of luck, because a Java class can only inherit from one superclass. I think you can see why you don't want to play the abstract class card until you really need it. You just have to weigh the benefits on either side and make your decision. If, for example, there were ten properties shared by all classes, that would probably swing my decision over to using an abstract class. With only one property, though, I think I'd prefer to keep the class more flexible and just do the extra work of coding the property in each class.

Campbell Ritchie (Sheriff), posted May 13, 2007 05:16

As for names, try getClass().getName(), then try substring(lastIndexOf(".")). BTW: what I quoted won't compile in its present state.

colin shuker, posted May 13, 2007 08:15

Thanks for your input. I didn't use the last getClass() part, since it is now unnecessary: I have removed the abstract class, so that the 3 classes now implement getArea() and getName(), as below:

public interface Shape {
    public abstract String getName();
    public abstract double getArea();
}

public class Circle implements Shape {
    double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    public double getArea() {
        return Math.PI * radius * radius;
    }

    public String getName() {
        return "Circle";
    }
}

public class Triangle implements Shape {
    double base;
    double height;

    public Triangle(double base, double height) {
        this.base = base;
        this.height = height;
    }

    public double getArea() {
        return 0.5 * base * height;
    }

    public String getName() {
        return "Triangle";
    }
}

public class Rectangle implements Shape {
    double base;
    double height;

    public Rectangle(double base, double height) {
        this.base = base;
        this.height = height;
    }

    public double getArea() {
        return base * height;
    }

    public String getName() {
        return "Rectangle";
    }
}

Does that look better? Thanks

Campbell Ritchie, posted May 13, 2007 11:24

Miss out "abstract" in your interface method declarations. Because it is an interface, the "abstract" bit is taken for granted and can be omitted. No problem about getClass(). I agree.
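For completeness, a sketch of Campbell's getClass() suggestion applied to one of the classes (the +1 offset is an assumption to drop the dot itself; his one-liner as written would keep it):

public class Circle implements Shape {
    double radius;

    public Circle(double radius) { this.radius = radius; }

    public double getArea() { return Math.PI * radius * radius; }

    public String getName() {
        // "shapes.Circle" -> "Circle"; the +1 drops the dot, and for a class
        // in the default package lastIndexOf returns -1, so the full name is kept.
        String full = getClass().getName();
        return full.substring(full.lastIndexOf('.') + 1);
    }
}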
http://www.coderanch.com/t/407042/java/java/Inheritance
CC-MAIN-2015-27
refinedweb
1,128
58.01
Requiring Login (9:59) with Jason Seifer

Our application now allows people to sign in and out. Now it's time to make sure a user is logged in before they can create and modify todo lists.

Code Snippets

Add "my" and "types" as flash types:

add_flash_types :my, :types

Requiring the user:

def require_user
  if current_user
    true
  else
    redirect_to new_user_session_path, notice: "You must be logged in to access that page."
  end
end

Adding a callback to our controllers requiring login:

before_action :require_user

Faking login in a controller test:

controller.stub(:require_user).and_return(true)

Transcript

0:00 [MUSIC]
0:05 Okay, now that we've got logging in working, let's go ahead and make sure our todo list controller actually requires people to log in before they can view, create, edit or delete their todo lists.
0:20 Now, we're gonna do this a certain way, so first, let's go ahead and open up our authentication feature.
0:29 So okay, this says exactly what we want: user logs in and goes to the todo lists, displays the email address in the event of a failed login.
0:37 Okay, let's go ahead and look at our todo lists.
0:40 Well, it looks like we never wrote one for listing all of the todo lists.
0:44 So let's go ahead and do that now.
0:48 Let's go ahead and create a new file here; we will save that as index_spec.rb.
0:58 And we're gonna describe listing todo lists.
1:02 And the first thing that we're gonna do is say it requires login.
1:10 Now we can say visit "/todo_lists", and let's go back, and now we'll say expect(page).to have_content("You must be logged in").
1:28 So let's go ahead and run that and see what happens.
1:36 Okay, and this is the failure message that we expected.
1:40 We expected to find text, you must be logged in, in Odot todo lists sign up, blah, blah, blah.
1:46 So, let's go ahead and write this code to make this all work.
1:52 Close that here.
1:56 Now, what we're gonna do first is create a method that we can call later.
2:04 We're gonna call it require_user.
2:07 We're gonna use something called a before action to make sure that a user is logged in.
2:16 Now, before we have a user logged in, we need to have access to a user method.
2:22 We're gonna call the method that we write current_user.
2:25 The current_user is going to be the user object of the currently signed in user.
2:33 So, let's go ahead and write this using conditional assignment.
2:37 We're gonna say, current user conditionally equals a user object.
2:43 Now how do we find the user?
2:46 Well, if a user is logged in, remember we set that over here in the user sessions controller by setting the session user ID variable.
2:57 So we can do that here by saying User.find(session[:user_id]).
3:04 We'll do that if there is a session user ID key.
3:13 Now we can take that method and say, if we have a current_user, this require_user method returns true.
3:21 And if not, we're gonna redirect them to the login path.
3:31 With the message, you must be logged in to access that page.
3:41 Okay.
3:43 So we've got that done, now let's go over to our todo lists controller.
3:48 Now remember, in the todo list index spec, we're expecting the page to have the content, you must be logged in.
3:57 So if we run this again, it's still not gonna work, because we haven't called that require_user yet.
4:05 Okay.
4:07 So now what we can do, since we've written the require_user method, is go over here, and in our todo list controller, before we do anything, we can say before_action :require_user.
4:23 Now the reason that we can access require_user here is because our todo list controller inherits from the application controller.
4:32 Now, we go back to the application controller; all of the methods in here will be available to our todo list controller and any other controller we've subclassed from application controller.
4:44 Now we get all these methods up here from action controller base and a bunch of other different modules, but we don't need to worry about that right now.
4:52 So let's go ahead and run that and see what happens.
4:56 Okay, that looks good, so that example passes. Well, that's perfect.
5:02 Now, what we can do is go ahead and run all of our tests, and make sure everything else is passing.
5:10 So I'm just gonna run rake here, which is gonna call rake spec, and, spoiler alert, all of our tests are not going to be passing.
5:21 We'll look at all this.
5:24 All of our todo lists are not working.
5:28 Look at all this.
5:29 All these features have now broken, and our controller features have broken as well.
5:36 So, how do we fix all this?
5:38 Well first, let's run the controller specs again.
5:45 Okay.
5:46 This all fails, and the reason is, let me close all this here.
5:53 The reason is, all of these different gets are not gonna work anymore.
5:59 So, when we go to the index page and try to get the index, we're not gonna get anything, because we can't load the user.
6:07 So, what we could do is, since we're looking in this TodoListsController for require_user, we need to make that method pass somehow.
6:22 So, we could do it one of two ways.
6:25 We could say that this context is all logged in.
6:30 We can do that by saying controller.stub(:require_user).and_return(true).
6:38 And that will always make sure that the require_user method passes.
6:44 Now let's go ahead and run that, and see what happens.
6:46 That's on line 33, and actually, if we just wanna run that test, we can put a colon and the line number for the test that we wanna run.
6:55 Okay, that passed.
6:57 Let's run all of it again.
7:00 And all of these things failed as well.
7:03 So what we can do here in this case is a couple different things.
7:08 We could either add a context for logged out, and another context for logged in, and in the logged in context we could just do that right there.
7:33 Let's run that, make sure it works again; this is on line 33 we start.
7:40 Okay, and we should also test the logged out context as well.
7:45 So let's say it requires login.
7:53 So we can just copy this line here, get index.
8:01 And then we can expect the response to be_redirect, and we expect it redirects to the new user session path.
8:21 So let's run that again and see what happens.
8:23 Okay, we had two examples passed here and zero failures, which means that this is working pretty much how we want it to.
8:40 So what we're going to do now, and I'm going to do this off-screen (you can go ahead and download it), is write a logged-out context for all of our different actions.
8:50 The other thing that we could do is just say that this whole thing requires login and move the before block up here.
9:03 And that would test the behavior of our entire controller, making sure that it requires login.
9:10 It doesn't really matter what we do.
9:12 We're just gonna keep that one here for right now.
9:16 The other thing that we could do is, if we wanted to make sure that a user object was returned, we could stub require_user and return true, or, since require_user looks for current_user, we could stub the current_user and return the user object.
9:37 And, by doing that, we actually made this fail, because the logged out context no longer applies.
9:45 So, we would have to do that block somewhere else.
9:49 Now, let's actually run all these and see what happens.
9:53 So, there we go; that assures us that our current user works and fixes all the tests.
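Pieced together from the transcript, the current_user helper might look like this (a sketch; it assumes the session[:user_id] key the video describes):

# app/controllers/application_controller.rb - sketch based on the transcript
def current_user
  # conditional assignment: look the user up once and memoize it,
  # but only if a user id was stored in the session at login
  @current_user ||= User.find(session[:user_id]) if session[:user_id]
end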
https://teamtreehouse.com/library/user-authentication-with-rails/adding-user-support-to-our-application/requiring-login
CC-MAIN-2016-50
refinedweb
1,627
80.82
I am having some trouble getting started with using JavaScript files on my website (a Flask application). I start the website by running run.py, which looks like this:

#!flask/bin/python
from app import app
app.run(debug=True)

Ah yes, luckily I am currently developing a Flask application at the moment. You are currently missing the static folder, which Flask looks into by default. The folder structure is something like this:

|FlaskApp
----|FlaskApp
--------|templates - html files are here
--------|static - css and javascript files are here

There are two important default folders that Flask will look into: templates and static. Once you have that sorted, you use this to link up your JavaScript files from your HTML page:

<script src="{{url_for('static', filename='somejavascriptfile.js')}}"></script>

Hope that helps; any questions, just ask. Plus, a good article to read (not super related, but it talks about the folder structure of Flask) is this: [link]
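For context, a minimal sketch of what the inner app package might contain so the template can use that script tag (file names here are assumptions, not taken from the question):

# FlaskApp/app/__init__.py - minimal sketch; Flask serves anything under
# static/ at the /static URL prefix by default.
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    # templates/index.html can include:
    # <script src="{{ url_for('static', filename='somejavascriptfile.js') }}"></script>
    return render_template('index.html')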
https://codedump.io/share/s8fYwpG1l52r/1/flask-application---how-to-link-a-javascript-file-to-website
CC-MAIN-2017-39
refinedweb
153
63.9
The Software Servo Library can drive servos on all of your pins simultaneously. The API is patterned after the wiring.org servo library, but the code is different. You are not limited to 8 servos, but you must call the SoftwareServo::refresh() method at least once every 50ms or so to keep your servos updating.

Note that as of Arduino 0017, the Arduino Servo library supports up to 12 motors on most Arduino boards and 48 on the Arduino Mega. The Arduino library does not need explicit calls to refresh, so it is easier to use than the software servo code that follows. If you have an Arduino version earlier than 0017, you can download the new hardware servo code from here.

Even though you attach a servo, it won't receive any control signals until you send its first position with the write() method, to keep it from jumping to some odd arbitrary value. The library takes about 850 bytes of flash and 6+(8*servos) bytes of SRAM. This library does not stop your interrupts, so millis() will still work and you won't lose incoming serial data, but a pulse end can be extended by the maximum length of your interrupt handlers, which can cause a small glitch in the servo position. If you have a large number of servos, there will be a slight (1-3 degree) position distortion in the ones with the lowest angular values.

The following code lets you control a servo on pin 2 with a potentiometer on analog 0:

#include <SoftwareServo.h>

SoftwareServo myservo;  // create servo object to control a servo

int potpin = 0;  // analog pin used to connect the potentiometer
int val;         // variable to read the value from the analog pin

void setup() {
  myservo.attach(2);  // attaches the servo on pin 2
  SoftwareServo::refresh();
}

void loop() {
  val = analogRead(potpin);         // read the potentiometer (0-1023)
  val = map(val, 0, 1023, 0, 179);  // scale it to a servo angle (0-179)
  myservo.write(val);               // set the servo position
  SoftwareServo::refresh();         // must be called every 50ms or so
}

The following code is a ping/pong rotation on pin A0 with a speed variable:

#include <SoftwareServo.h>

SoftwareServo myservo;  // create servo object to control a servo

#define pinServo A0

int speed = 1;
int limits[2] = {30, 150};  // set limits (min/max: 0->180)
boolean refresh = false;    // toggle refresh on/off

void setup() {
  Serial.begin(9600);
  // attaches the servo on pin to the servo object
  myservo.attach(pinServo);
  // init angle of servo in between the two limits
  myservo.write((limits[1] - limits[0]) / 2);
}

void loop() {
  // refresh angle
  int angle = myservo.read();
  // change direction at the limits
  if (angle >= limits[1] || angle <= limits[0]) speed = -speed;
  myservo.write(angle + speed);
  // refresh only one time out of 2
  refresh = refresh ? false : true;
  if (refresh) SoftwareServo::refresh();
  Serial.print("Angle: ");
  Serial.println(angle);
}
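Since refresh() must run at least every 50ms, one common pattern (a sketch, not part of the library docs; the 20ms interval is an assumed safety margin) is to drive it from millis() so other work in loop() cannot starve it:

// Sketch: guarantee a refresh() call at least every 20ms regardless of
// what else loop() is doing.
#include <SoftwareServo.h>

SoftwareServo myservo;
unsigned long lastRefresh = 0;

void setup() {
  myservo.attach(2);
  myservo.write(90);  // send a first position so the servo gets a signal
}

void loop() {
  // ... other (non-blocking) work can go here ...
  if (millis() - lastRefresh >= 20) {
    SoftwareServo::refresh();
    lastRefresh = millis();
  }
}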
http://playground.arduino.cc/ComponentLib/Servo
CC-MAIN-2014-42
refinedweb
428
52.6
The "Law of Demeter", also known as the "principle of least knowledge". Apart from knowing basic object-oriented programming concepts, e.g. abstraction, polymorphism, inheritance and the SOLID design principles, it's also worth knowing useful principles like this, which have found their way in via experience. In the following example, we will see how a method can violate the above rules and thereby violate the Law of Demeter.

public class LawOfDemeterDemo {

    /**
     * This method shows two violations of the "Law of Demeter",
     * or "principle of least knowledge".
     */
    public void process(Order o) {
        // as per rule 1, this method invocation is fine, because o is an
        // argument of the process() method
        Message msg = o.getMessage();

        // this method call is a violation, as we are using msg, which we got
        // from Order. We should ask Order to normalize the message,
        // e.g. "o.normalizeMessage();"
        msg.normalize();

        // this is also a violation: instead of using a temporary variable,
        // it uses a method chain.
        o.getMessage().normalize();

        // this is OK, a constructor call, not a method call.
        Instrument symbol = new Instrument();

        // as per rule 4, this method call is OK, because the instance of
        // Instrument is created locally.
        symbol.populate();
    }
}

You can see that when we get an internal of the Order class and call a method on that object, we violate the Law of Demeter, because now this method knows about the Message class. On the other hand, calling a method on the Order object is fine, because it's passed to the method as a parameter. [Image: diagram of what you need to do to follow the Law of Demeter.]

Let's see another example of code which violates the Law of Demeter, and how it affects code quality.

public class XMLUtils {
    public BookCategory getFirstBookCategoryFromXML(XMLMessage xml) {
        return xml.getXML().getBooks().getBookArray(0).getBookHeader().getBookCategory();
    }
}

This code is now dependent upon a lot of classes, e.g.:

XMLMessage
XML
Book
BookHeader
BookCategory

This means the function knows about XMLMessage, XML, Book, BookHeader and BookCategory. It knows that XML has a list of Book, which in turn has a BookHeader, which internally has a BookCategory; that's a lot of information. If any of the intermediate classes, or any accessor method in this chained method call, changes, then this code will break. This code is highly coupled and brittle.

It's much better to put the responsibility of finding internal data into the object which owns it. If we look closely, we should only call the getXML() method, because it's a method of the XMLMessage class, which is passed to the method as an argument. Instead of putting all this code in XMLUtils, it should go in BookUtils or something similar, which can still follow the Law of Demeter and return the required information.

Comment: you probably have a spell checker with that "law of delimiter".
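A sketch of the refactoring the article suggests, pushing the navigation into the objects that own the data. The method names below are assumptions built from the example, not code from the original post:

import java.util.List;

// Hypothetical refactoring: each class only talks to its immediate parts.
public class XMLUtils {
    public BookCategory getFirstBookCategoryFromXML(XMLMessage xml) {
        // xml is a parameter, so calling a method on it is allowed
        return xml.getFirstBookCategory();
    }
}

class XMLMessage {
    private XML xml;

    public BookCategory getFirstBookCategory() {
        return xml.getFirstBookCategory();  // delegate to the object that owns the books
    }
}

class XML {
    private List<Book> books;

    public BookCategory getFirstBookCategory() {
        return books.get(0).getCategory();  // Book hides its header internally
    }
}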
http://www.javacodegeeks.com/2014/06/law-of-demeter-in-java-principle-of-least-knowledge-real-life-example.html
CC-MAIN-2014-41
refinedweb
436
54.73
Unofficial Deep Learning Lecture 2 Notes

Hi All, here are my lecture 2 notes. Hope they're useful.

Part 1: Overview of Dogs vs. Cats Image Recognition

Resources mainly from lesson 1 of the GitHub repository.

Opening recap: we made a simple classifier last week with dogs and cats. How do we tune these neural networks? Learning rate. Practice. Epoch number.

Sample code:

arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)

Output:

[ 0.       0.04726  0.02807  0.99121]
[ 1.       0.04413  0.02372  0.99072]
[ 2.       0.03454  0.02609  0.9917 ]

PATH = "/home/paperspace/Desktop/data/dogscats/"
sz = 224
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)

[ 0.       0.04247  0.02314  0.9917 ]
[ 1.       0.03443  0.02482  0.98877]
[ 2.       0.03072  0.02676  0.98975]

Choosing a learning rate

The learning rate is the thing that most determines how we are going to zoom in, or hone in, on the solution. Where is the "minimum point"? How do you find the minimum point? If I were a computer algorithm, how would I find the minimum? The learning rate is how big of a jump we advance by (the size of the arrow in the lecture's image). If your learning rate is too high, you overshoot.

Learning rate finder:

learn = ConvLearner.pretrained(arch, data, precompute=True)

This is a custom function. The ConvLearner class:

class ConvLearner(Learner):
    def __init__(self, data, models, precompute=False, **kwargs):
        self.precompute = False
        super().__init__(data, models, **kwargs)
        self.crit = F.binary_cross_entropy if data.is_multi else F.nll_loss
        if data.is_reg: self.crit = F.l1_loss
        elif self.metrics is None:
            self.metrics = [accuracy_multi] if self.data.is_multi else [accuracy]
        if precompute: self.save_fc1()
        self.freeze()
        self.precompute = precompute

    @classmethod
    def pretrained(self, f, data, ps=None, xtra_fc=None, xtra_cut=0, **kwargs):
        models = ConvnetBuilder(f, data.c, data.is_multi, data.is_reg,
                                ps=ps, xtra_fc=xtra_fc, xtra_cut=xtra_cut)
        return self(data, models, **kwargs)

    @property
    def model(self): return self.models.fc_model if self.precompute else self.models.model

    @property
    def data(self): return self.fc_data if self.precompute else self.data_

    def create_empty_bcolz(self, n, name):
        return bcolz.carray(np.zeros((0,n), np.float32), chunklen=1, mode='w', rootdir=name)

    def set_data(self, data):
        super().set_data(data)
        self.save_fc1()
        self.freeze()

    def get_layer_groups(self):
        return self.models.get_layer_groups(self.precompute)

    def get_activations(self, force=False):
        tmpl = f'_{self.models.name}_{self.data.sz}.bc'
        # TODO: Somehow check that directory names haven't changed (e.g. added test set)
        names = [os.path.join(self.tmp_path, p+tmpl) for p in ('x_act', 'x_act_val', 'x_act_test')]
        if os.path.exists(names[0]) and not force:
            self.activations = [bcolz.open(p) for p in names]
        else:
            self.activations = [self.create_empty_bcolz(self.models.nf,n) for n in names]

    def save_fc1(self):
        self.get_activations()
        act, val_act, test_act = self.activations
        if len(self.activations[0])==0:
            m=self.models.top_model
            predict_to_bcolz(m, self.data.fix_dl, act)
            predict_to_bcolz(m, self.data.val_dl, val_act)
            if self.data.test_dl: predict_to_bcolz(m, self.data.test_dl, test_act)
        self.fc_data = ImageClassifierData.from_arrays(self.data.path,
                (act, self.data.trn_y), (val_act, self.data.val_y), self.data.bs,
                classes=self.data.classes,
                test = test_act if self.data.test_dl else None, num_workers=8)

    def freeze(self): self.freeze_to(-self.models.n_fc)

What the fastai library does:
- uses the Adam optimizer
- tries to find the fastest way to converge to a solution

The best thing to do for your model is to get more data. Problem: models will eventually start memorizing answers; this is called overfitting. Ideally more data will prevent this occurrence. There are other techniques to assist with "gathering" more data.

Data augmentation (from lesson 1)

If you try training for more epochs, you'll notice that we start to overfit, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through data augmentation. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.

We can do this by passing aug_tfms (augmentation transforms) to tfms_from_model, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions transforms_side_on. We can also specify random zooming of images up to a specified scale by adding the max_zoom parameter.

Transformations library: we can use the available options to change the zoom, rotation and shift variations.

tfms = tfms_from_model(resnet34, sz, aug_tfms=transforms_side_on, max_zoom=1.1)

def get_augs():
    data = ImageClassifierData.from_paths(PATH, bs=2, tfms=tfms, num_workers=1)
    x,_ = next(iter(data.aug_dl))
    return data.trn_ds.denorm(x)[1]

ims = np.stack([get_augs() for i in range(6)])
plots(ims, rows=2)

Other options: transforms_side_on, transforms_top_down

Why don't we use the learning rate at the lowest point of the loss curve? Each time we iterate, we will double the learning rate. The purpose of this is to find what learning rate is helping us decrease the loss most quickly; at the lowest point of the curve, the learning rate is already going too high.

Comment: this augmentation won't do anything while precompute is on.

Note, we are using a pretrained network. We can take the second-to-last layer and save those activations. There is this level of "dog space", "eyeballs", etc. We save these and call them pre-computed activations. An activation is a number: this feature is in this location with this level of confidence (probability).

Making a new classifier from precompute

We can quickly train a simple linear model based on these saved precomputed numbers. So the first time you run a model, it will take some time to calculate and compile. Afterwards, it will train much faster.
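As a rough illustration of that precompute idea (placeholder functions, not the fastai internals shown above):

import numpy as np

# Sketch of the precompute idea, not fastai internals: run the frozen
# body once to cache its features, then train only a small head on them.
# body_fn and head are placeholders for illustration.
def precompute_features(body_fn, images):
    return np.stack([body_fn(img) for img in images])  # cached activations

# feats = precompute_features(body_fn, train_images)
# head.fit(feats, labels)   # every epoch now skips the expensive body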
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 1)

[ 0.       0.04783  0.02601  0.99023]

Since we have precompute on, the different (augmented) cat pictures don't help, so we will turn it off. By default, when we create a learner, it sets all but the last layer to frozen. That means that it's still only updating the weights in the last layer when we call fit.

learn.precompute = False
learn.fit(1e-2, 3, cycle_len=1)

[ 0.       0.0472   0.0243   0.99121]
[ 1.       0.04335  0.02358  0.99072]
[ 2.       0.04403  0.0229   0.99219]

Cycle length = 1

As we get closer, we may want to decrease the learning rate to get more precise. This is also known as annealing. The most common annealing: pick a rate, then drop it 10x, then drop it again; stepwise, very manual. A simpler approach is to choose a functional form, such as a line. It turns out that half a cosine curve works well.

What do you do when you have more than one minimum? Sometimes one minimum will be better than others (based on how well it generalizes). Sharply changing the learning rate embodies the idea that if we suddenly jump the learning rate up, we will get out of a "narrow" minimum and find the most "generalized" minimum. Note that annealing is not necessarily the same as restarts: we are not starting from scratch each time, but we are "jumping" a bit to ensure we are in the best minimum.

From the lesson 2 notebook: What is that cycle_len parameter? What we've done here is used a technique called stochastic gradient descent with restarts (SGDR), a variant of learning rate annealing, which gradually decreases the learning rate as training progresses. This is helpful because as we get closer to the optimal weights, we want to take smaller steps. However, we may find ourselves in a part of the weight space that isn't very resilient; that is, small changes to the weights may result in big changes to the loss. We want to encourage our model to find parts of the weight space that are both accurate and stable. Therefore, from time to time we increase the learning rate (this is the 'restarts' in 'SGDR'), which will force the model to jump to a different part of the weight space if the current area is "spiky". Here's a picture of how that might look if we reset the learning rates 3 times (in this paper they call it a "cyclic LR schedule"). (From the paper Snapshot Ensembles.)

The number of epochs between resetting the learning rate is set by cycle_len, and the number of times this happens is referred to as the number of cycles, and is what we're actually passing as the 2nd parameter to fit(). So here's what our actual learning rates looked like:

learn.sched.plot_lr()
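As a side note, the cosine schedule with restarts can be sketched independently of the library. This is a small standalone illustration, not fastai code; base_lr, iters_per_epoch, cycle_len and cycle_mult are arbitrary values chosen for the plot:

import numpy as np

# Standalone illustration of cosine annealing with restarts (SGDR-style).
def sgdr_schedule(base_lr=0.01, iters_per_epoch=100, cycle_len=1, cycle_mult=2, n_cycles=3):
    lrs = []
    epochs = cycle_len
    for _ in range(n_cycles):
        n = epochs * iters_per_epoch
        t = np.arange(n) / n                               # 0 -> 1 within the cycle
        lrs.extend(base_lr * (1 + np.cos(np.pi * t)) / 2)  # cosine from base_lr down to 0
        epochs *= cycle_mult                               # next cycle is longer (cycle_mult)
    return np.array(lrs)

lrs = sgdr_schedule()
# plt.plot(lrs) reproduces the saw-tooth cosine shape shown by learn.sched.plot_lr()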
Good Tip: Save your weights as you go!

learn.save('224_lastlayer')

Fine-tuning and differential learning rate annealing

Now that we have a good final layer trained, we can try fine-tuning the other layers. To tell the learner that we want to unfreeze the remaining layers, just call (surprise surprise!) unfreeze().

learn.unfreeze()

In general you can only freeze layers from 'n' onwards.

Note that the other layers have already been trained to recognize ImageNet photos (whereas our final layers were randomly initialized), so we want to be careful of not destroying the carefully tuned weights that are already there. Generally speaking, the earlier layers (as we've seen) have more general-purpose features. Therefore we would expect them to need less fine-tuning for new datasets. For this reason we will use different learning rates for different layers: the first few layers will be at 1e-4, the middle layers at 1e-3, and our fully connected (FC) layers we'll leave at 1e-2 as before. We refer to this as differential learning rates, although there's no standard name for this technique in the literature that we're aware of.

Specifying learning rates

We are going to specify 'differential learning rates' for different layers. We are grouping the blocks (ResNet blocks) in different areas and assigning different learning rates. Reminder: we unfroze the layers and now we are retraining the whole set. The learning rate is smaller for early layers, and larger for the ones farther along.

lr = np.array([1e-4, 1e-3, 1e-2])

# 3 is the number of cycles: 3 cycles of 2 epochs
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)

[ 0.       0.04913  0.02252  0.99268]
[ 1.       0.04842  0.02123  0.99219]
[ 2.       0.03309  0.02412  0.99121]
[ 3.       0.03528  0.02148  0.99072]
[ 4.       0.02364  0.02106  0.99023]
[ 5.       0.01987  0.01931  0.9917 ]
[ 6.       0.01994  0.02058  0.99121]

cycle_mult parameter

It doubles the length of the cycle after each cycle. Another trick we've used here is adding the cycle_mult parameter. Take a look at the following chart, and see if you can figure out what the parameter is doing:

learn.sched.plot_lr()

At this point, we are going to look back at the incorrectly classified pictures, using test time augmentation: we take 4 random data augmentations, move them around, flip them, and mix them with the prediction. We average all the predictions of the original plus the permutations. Ideally the rotating and zooming will get the image into the right orientation.

Test-Time Augmentation (TTA)

TTA makes predictions not only on the originals but also on the randomly augmented versions.

log_preds, y = learn.TTA()
accuracy(log_preds, y)

0.99199999999999999

Part 2: Dog Breeds Walkthrough

Overview of the steps:
1. Enable data augmentation, and precompute=True
2. Use lr_find() to find the highest learning rate where loss is still clearly improving
3. Train the last layer from precomputed activations for 1-2 epochs
4. Train the last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1
5. Unfreeze all layers
6. Set earlier layers to a 3x-10x lower learning rate than the next higher layer
7. Use lr_find() again
8. Train the full network with cycle_mult=2 until over-fitting

Dog Breeds

PATH = '/home/paperspace/Desktop/data/dogbreeds/'
sz = 224
arch = resnet34
bs = 24

label_csv = f'{PATH}labels.csv'
# number of rows in the CSV minus 1 (the header), i.e. the number of images
n = len(list(open(label_csv))) - 1
# get cross-validation indexes (custom fastai helper)
val_idxs = get_cv_idxs(n)

n
10222

val_idxs
array([3694, 1573, 6281, ..., 5734, 5191, 5390])

This will put 20% of the data in the validation set.

??get_cv_idxs

def get_cv_idxs(n, cv_idx=4, val_pct=0.2, seed=42):
    np.random.seed(seed)
    n_val = int(val_pct*n)
    idx_start = cv_idx*n_val
    idxs = np.random.permutation(n)
    return idxs[idx_start:idx_start+n_val]

The data can be downloaded via the Kaggle CLI.

Initial Exploration

!ls {PATH}
labels.csv  labels.csv.zip  sample_submission.csv.zip  test  test.zip  tmp  train  train.zip

label_df = pd.read_csv(label_csv)
label_df.head()
label_df.pivot_table(index='breed', aggfunc=len).sort_values('id', ascending=False)[:10]

fn = PATH + data.trn_ds.fnames[0]; fn
'/home/paperspace/Desktop/data/dogbreeds/train/000bec180eb18c7604dcecc8fe0dba07.jpg'

img = PIL.Image.open(fn); img
img.size
(500, 375)

How big are the images? Most ImageNet models are trained on 224 x 224 or 299 x 299. Let's make a dictionary comprehension mapping each file name to the size of the file. This will be important for memory and size considerations.
size_d = {k: PIL.Image.open(PATH+k).size for k in data.trn_ds.fnames}
row_sz, col_sz = list(zip(*size_d.values()))
row_sz = np.array(row_sz); col_sz = np.array(col_sz)

row_sz[:5]
array([500, 500, 400, 500, 231])

Let's look at the distribution of the image sizes (rows first). Most of them are under 1000, so we will use NumPy to filter.

plt.hist(row_sz)
(array([ 3014.,  5029.,    91.,    12.,     8.,     3.,    17.,     1.,     1.,     2.]),
 array([   97. ,   413.7,   730.4,  1047.1,  1363.8,  1680.5,  1997.2,  2313.9,  2630.6,  2947.3,  3264. ]),
 <a list of 10 Patch objects>)

plt.hist(row_sz[row_sz < 1000])
(array([  148.,   600.,  1307.,  1205.,  4581.,   122.,    78.,    62.,    15.,     7.]),
 array([  97. ,  186.3,  275.6,  364.9,  454.2,  543.5,  632.8,  722.1,  811.4,  900.7,  990. ]),
 <a list of 10 Patch objects>)

Let's look at the distribution of the image sizes (cols):

plt.hist(col_sz)
(array([ 2713.,  5267.,   131.,    21.,    15.,     8.,    17.,     4.,     0.,     2.]),
 array([  102. ,   336.6,   571.2,   805.8,  1040.4,  1275. ,  1509.6,  1744.2,  1978.8,  2213.4,  2448. ]),
 <a list of 10 Patch objects>)

plt.hist(col_sz[col_sz < 1000])
(array([  243.,   721.,  2218.,  2940.,  1837.,    95.,    29.,    29.,     8.,     8.]),
 array([ 102. ,  190.2,  278.4,  366.6,  454.8,  543. ,  631.2,  719.4,  807.6,  895.8,  984. ]),
 <a list of 10 Patch objects>)

Let's look at the classes:

len(data.trn_ds), len(data.test_ds)
(8178, 10357)

len(data.classes), data.classes[:5]
(120, ['affenpinscher', 'afghan_hound', 'african_hunting_dog', 'airedale', 'american_staffordshire_terrier'])

Initial Model

def get_data(sz, bs):
    tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
    data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv', test_name='test',
                                        val_idxs=val_idxs, suffix='.jpg', tfms=tfms, bs=bs)
    return data if sz > 300 else data.resize(340, 'tmp')

Precompute

data = get_data(sz, bs)

Used ResNet34, since ResNeXt didn't load due to some errors.

learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)
learn.fit(1e-2, 2)

Do a few more cycles, more epochs. Epoch: one pass through the data. Cycle: how many epochs are in a full cycle. (Offline, I tried to find the learning rate.)

learn.precompute = False
learn.fit(1e-2, 5, cycle_len=1)

We can continue training on larger images after starting on smaller images: we started with 224 x 224 and continue with 299 x 299. Starting small and then moving to larger, more general images limits the overfitting. Some additional trial and error training:
https://forums.fast.ai/t/deeplearning-lecnotes2/7515
CC-MAIN-2018-51
refinedweb
2,637
59.7
On 07/28/2010 03:07 PM, Chris Lalancette wrote:
> However, the code to find the parent of the device had a
> much too relaxed check. It would iterate through all PCI
> devices on the system, looking for a device that had a range
> of busses that included the current device's bus.

Stupid English. I'm more used to seeing buses than busses, and was about to call you on it, but a quick search shows both forms as roughly neck-and-neck, and dictionary.com lists both spellings as acceptable plurals.

> This patch is simple in that it looks for the PCI device
> whose secondary device *exactly* equals the bus of the
> device we are looking for. That means that one, and only one
> bridge will be found, and it will be the correct device.

Makes sense.

> > Note that this also caused a fairly serious bug in the

s/caused/solved/ - the bug was caused by the condition before this commit, but the commit message is documenting what this particular commit does for the code base.

> > /* No, it's superman! */
> - return (dev->bus >= secondary && dev->bus <= subordinate);
> + return (dev->bus == secondary);

Do we want a comment here in the code to summarize your commit message? But that's a minor nit, so: ACK.

--
Eric Blake eblake redhat com +1-801-349-2682
Libvirt virtualization library
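For readers outside the thread, here is a standalone sketch of the check being reviewed, with simplified types of my own rather than libvirt's actual structures. A PCI bridge owns exactly one secondary bus, so an exact match identifies a unique parent, whereas the old range check matched every bridge whose secondary..subordinate window contained the bus:

struct pci_device {
    unsigned int bus;         /* bus this device sits on */
    unsigned int secondary;   /* bridge only: bus directly behind it */
    unsigned int subordinate; /* bridge only: deepest bus behind it */
};

/* New behavior: true only for the one bridge directly above 'child'. */
static int is_parent(const struct pci_device *bridge,
                     const struct pci_device *child)
{
    return bridge->secondary == child->bus;
}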
https://www.redhat.com/archives/libvir-list/2010-July/msg00668.html
CC-MAIN-2014-10
refinedweb
230
70.53
It's really easy to share packages on services like GitHub, but packages are also great for private personal development, sharing code within a team, or at any other granularity.

A package consists of Swift source files and a manifest file. The manifest file, called Package.swift, defines the package's name and its contents using the PackageDescription module. A package has one or more targets. Each target specifies a product and may declare one or more dependencies.

Swift organizes code into modules. Each module specifies a namespace and enforces access controls on which parts of that code can be used outside of that module. A program may have all of its code in a single module, or it may import other modules as dependencies. Aside from the handful of system-provided modules, such as Darwin on OS X or GLibc on Linux, most dependencies require code to be downloaded and built in order to be used.

Extracting code that solves a particular problem into a separate module allows for that code to be reused in other situations. For example, a module that provides functionality for making network requests could be shared between a photo sharing app and a program that displays the weather forecast. And if a new module comes along that does a better job, it can be swapped in easily, with minimal change. By embracing modularity, you can focus on the interesting aspects of the problem at hand, rather than getting bogged down solving problems you encounter along the way. As a rule of thumb: more modules is probably better than fewer modules. The package manager is designed to make creating both packages and apps with multiple modules as easy as possible.

The Swift Package Manager and its build system need to understand how to compile your source code. To do this, it uses a convention-based approach which uses the organization of your source code in the file system to determine what you mean, but allows you to fully override and customize these details. A simple example could be:

foo/Package.swift
foo/Sources/main.swift

Package.swift is the manifest file that contains metadata about your package. Package.swift is documented in a later section. If you then run the following command in the directory foo:

swift build

Swift will build a single executable called foo.

To the package manager, everything is a package, hence Package.swift. However, this does not mean you have to release your software to the wider world: you can develop your app without ever publishing it in a place where others can see or use it. On the other hand, if one day you decide that your project should be available to a wider audience your sources are already in a form ready to be published. The package manager is also independent of specific forms of distribution, so you can use it to share code within your personal projects, within your workgroup, team or company, or with the world. Of course, the package manager is used to build itself, so its own source files are laid out following these conventions as well.

A target may build either a library or an executable as its product. A library contains a module that can be imported by other Swift code. An executable is a program that can be run by the operating system.

Modern development is accelerated by the exponential use of external dependencies (for better and worse). This is great for allowing you to get more done with less time, but adding dependencies to a project has an associated coordination cost.
In addition to downloading and building the source code for a dependency, that dependency's own dependencies must be downloaded and built as well, and so on, until the entire dependency graph is satisfied. To complicate matters further, a dependency may specify version requirements, which may have to be reconciled with the version requirements of other modules with the same dependency. The role of the package manager is to automate the process of downloading and building all of the dependencies for a project, and minimize the coordination costs associated with code reuse. Dependencies are specified in your Package.swift manifest file. “Dependency Hell” is the colloquialism for a situation where the graph of dependencies required by a project cannot be met. The end-user is then required to solve the scenario; usually a difficult task: A good package manager should be designed from the start to minimize the risk of dependency hell and where this is not possible, to mitigate it and provide tooling so that the end-user can solve the scenario with a minimum of trouble. The Package Manager Community Proposal contains our thoughts on how we intend to iterate with these hells in mind. The following are some of the most common “dependency hell” scenarios: Inappropriate Versioning - A package may specify an inappropriate version for a release. For example, a version is tagged 1.2.3, but introduces extensive, breaking API changes that should be reflected by a major version bump to 2.0.0. Incompatible Major Version Requirements - A package may have dependencies with incompatible version requirements for the same package. For example, if Foo depends on Baz at version ~>1.0 and Bar depends on Baz at version ~>2.0, then there is no one version of Baz that can satisfy both requirements. This situation often arises when a dependency shared by many packages updates to a new major version, and it takes a long time for all of those packages to update their dependency. Incompatible Minor or Update Version Requirements - A package may have dependencies that are specified too strictly, such that version requirements are incompatible for different minor or update versions. For example, if Foo depends on Baz at version ==2.0.1 and Bar depends on Baz at version ==2.0.2, once again, there is no one version of Baz that can satisfy both requirements. This is often the result of a regression introduced in a patch release of a dependency, which causes a package to lock that dependency to a particular version. Namespace Collision - A package may have two or more dependencies that have the same name. For example, a Person package depends on an Addressable package that defines a protocol for assigning a mailing address to a person, as well as an Addressable package that defines a protocol for speaking formally to another person. Broken Software - A package may have a dependency with an outstanding bug that is impacting usability, security, or performance. This may simply be a matter of timeliness on the part of the package maintainers, or a disagreement about their expectations for the package. Global State Conflict - A package may have two or more dependencies that presume to have exclusive access to the same global state. For example, one package may not be able to accommodate another package writing to a particular file path while reading from that same file path. Package Becomes Unavailable - A package may have a dependency on a package that becomes unavailable. 
This may be caused by the source URL becoming inaccessible, or maintainers deleting a published version.
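To make the manifest format concrete, here is a minimal hypothetical Package.swift in the Swift 4 manifest format; the package and dependency names are illustrative, not taken from this document:

// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "foo",
    dependencies: [
        // a version requirement of the kind discussed above
        .package(url: "https://github.com/example/Bar.git", from: "1.0.0"),
    ],
    targets: [
        .target(name: "foo", dependencies: ["Bar"]),
    ]
)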
https://fuchsia.googlesource.com/third_party/swift-package-manager/+/refs/tags/swift-4.1-DEVELOPMENT-SNAPSHOT-2017-10-31-a/Documentation/
CC-MAIN-2020-05
refinedweb
1,192
52.09
What I want to do: launch separate python programs from one main program (multi-threading will not achieve this because the mechanize library uses one shared global opener object, which will not suit my needs). I want the launched scripts to be in separate windows whose output I can see on screen: separate processes. I can accomplish this in win32 by:

import subprocess
args = ["cmd", "/c", "START", "python", "myScript.py"]
process1 = subprocess.Popen(args, shell=False)

however, doing so will open a new window, but will lose the process id: e.g. process1.poll() will always return 0 no matter if the process is open or not, meaning that python always thinks it's closed. It should return None if the process is still running. I can do it without using cmd /c start, but then the newly launched python script is trapped in my original cmd terminal where I launched the script from in the first place. I can keep track of the process then, but the one open cmd window is insufficient to keep track of the output that a multitude of running programs will produce. Doing it without the /c argument will still keep the new script in the same console. Yes, I have read that I can do this in linux with Konsole or whatever, like:

child = subprocess.Popen("konsole -e python foo.py", shell=True)

however, I need this to run in windows. Any help or solution is appreciated,

--
Zak Kinion
zkinion at gmail.com
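One commonly suggested approach (my addition, not from the original thread): skip "cmd /c START" entirely and ask Windows for a new console via Popen's creationflags. The original symptom occurs because Popen is tracking cmd.exe, which exits as soon as START has launched the new window; with a new-console flag the handle points at the real python child, so poll() behaves.

import subprocess

# CREATE_NEW_CONSOLE gives the child its own console window
process1 = subprocess.Popen(
    ["python", "myScript.py"],
    creationflags=subprocess.CREATE_NEW_CONSOLE,
)

print(process1.poll())  # None while the child is still running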
https://grokbase.com/t/python/python-list/10b2bba5vx/problem-with-opening-a-new-python-program-in-a-new-window-and-keeping-track-of-the-process
CC-MAIN-2022-40
refinedweb
249
69.92
A customer reported the "No snap-ins have been registered for Windows PowerShell version 2." error message when running their code on a Windows 2008 R2 server box. They were running code inside a C# application which used the Microsoft.Exchange.Management.Powershell.Admin namespace to call the PowerShell enable-mailbox cmdlet.

One cause of this error message is worth checking: how are you registering the snap-in on W2K8 R2? We have seen instances where customers used the wrong version of installutil.exe and the snap-in got registered under the WOW hives of a 64-bit machine. The Installer tool (Installutil) allows you to install and uninstall server resources by executing installer components in specified assemblies. Check that you are using the correct version of installutil: use the Framework64 version of installutil.exe to register snap-ins when you will be loading the snap-in from a 64-bit process.

Also review the shortcut to PowerShell: are you loading up the 32-bit version or the 64-bit one? When looking at the properties of the shortcuts to Powershell and Powershell(x86) on the start menu, they may point to "%SystemRoot%\System32\WindowsPowerShell\V1.0" and "%SystemRoot%\SysWOW64\WindowsPowerShell\V1.0" respectively. Check whether you intend to run the 32-bit version or not.

Your articles are very impressive. Thank you so much.

Hi, I am also getting the same error. Can you please tell me how to overcome this problem?

Prethesh - I have the same problem. Did you get any solution?

Does anybody have a solution to this problem?

Snap-ins are the old mechanism, modules are the new mechanism. Refer to this page: blog.codeinside.eu/…/powershell-snapins-vs-modules

For Server 2008 (not R2) you might need to install the provider, …/confirmation.aspx

The issue is that the PowerShell registry key is corrupted. Copy the PowerShell registry key from a working Windows Server 2008 R2 server to the non-working one: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell. That worked for me. Uninstalling the PowerShell ISV and reinstalling it did not work.
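Putting the advice above into concrete commands, here is a hedged sketch (the framework path assumes a .NET 2.0-era install, and the snap-in file and name are placeholders, so adjust them to your environment):

REM from an elevated cmd prompt: register with the 64-bit InstallUtil
REM so that 64-bit PowerShell can see the snap-in
%windir%\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe MySnapin.dll

# then, inside the 64-bit PowerShell:
Get-PSSnapin -Registered   # the snap-in should now be listed
Add-PSSnapin MySnapinName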
https://blogs.msdn.microsoft.com/pareshj/2010/07/30/error-msg-no-snap-ins-have-been-registered-for-windows-powershell-version-2/
CC-MAIN-2016-44
refinedweb
328
60.51
A kickass library to manage your poppers

Filed under user interface

Popper.js

A library used to position poppers in web applications.

Wut? Poppers?

A popper is an element on the screen which "pops out" from the natural flow of your application. Common examples of poppers are tooltips, popovers, and drop-downs.

So, yet another tooltip library?

Popper.js

This is the engine, the library that computes and, optionally, applies the styles to the poppers. Some of the key points are:

- Positions elements keeping them in their original DOM context (doesn't mess with your DOM!);
- Allows to export the computed information to integrate with React and other view libraries;
- Supports Shadow DOM elements;
- Completely customizable thanks to the modifiers based structure;

Visit our project page to see a lot of examples of what you can do with Popper.js! Find the documentation here.

Tooltip.js

A tooltip library built on top of Popper.js, making it easy to integrate tooltips into your projects. The tooltips generated by Tooltip.js are accessible thanks to the aria tags. Find the documentation here.

Installation

Popper.js is available on the following package managers and CDNs; Tooltip.js as well:

*: Bower isn't officially supported, it can be used to install Tooltip.js only through the unpkg.com CDN. This method has the limitation of not being able to define a specific version of the library. Bower and Popper.js suggest using npm or Yarn for your projects. For more info, read the related issue.

Dist targets

Popper.js is currently shipped with 3 targets in mind: UMD, ESM, and ESNext. Have no idea what I'm talking about? You are probably looking for UMD.

- UMD - Universal Module Definition: AMD, RequireJS, and globals;
- ESM - ES Modules: for webpack/Rollup or browsers supporting the spec;
- ESNext: available in /dist, can be used with webpack and babel-preset-env;

Make sure to use the right one for your needs. If you want to import it with a <script> tag, use UMD. If you can't find the /dist folder in the Git repository, this is because the distribution files are shipped only to Bower, npm or our CDNs. You can still find them on the unpkg.com CDN.

Usage

Given an existing popper DOM node, ask Popper.js to position it near its button.

<div class="my-button">reference</div>
<div class="my-popper">popper</div>

var reference = document.querySelector('.my-button');
var popper = document.querySelector('.my-popper');
var popperInstance = new Popper(reference, popper, {
  // popper options here
});

Take a look at this CodePen example to see a full fledged usage example, consisting of all the HTML, JavaScript, and CSS needed to style a popper.

Callbacks

Popper.js supports two kinds of callbacks: the onCreate callback is called after the popper has been initialized, and the onUpdate one is called on every subsequent update.

Writing modifiers on your own

React, Vue.js, Angular, AngularJS, Ember.js (etc...) integration

Integrating 3rd party libraries with React or other libraries can be a pain because they usually alter the DOM and drive the libraries crazy. Popper.js limits all its DOM modifications to the applyStyle modifier; you can simply disable it and manually apply the popper coordinates using your library of choice. For a comprehensive list of libraries that let you use Popper.js with existing frameworks, visit the MENTIONS page. Alternatively, you may even override applyStyle with your own modifier.

How to use Popper.js in Jest

It is recommended that users mock Popper.js for use in Jest tests due to some limitations of JSDOM.
The simplest way to mock Popper.js is to place the following code in __mocks__/popper.js.js adjacent to your node_modules directory. Jest will pick it up automatically.

import PopperJs from 'popper.js';

export default class Popper {
  static placements = PopperJs.placements;

  constructor() {
    return {
      destroy: () => {},
      scheduleUpdate: () => {},
    };
  }
}

Alternatively, you can manually mock Popper.js for a particular test.

jest.mock('popper.js', () => {
  const PopperJS = jest.requireActual('popper.js');

  return class Popper {
    static placements = PopperJS.placements;

    constructor() {
      return {
        destroy: () => {},
        scheduleUpdate: () => {},
      };
    }
  };
});

Migration from Popper.js v0

Since the API changed, we prepared some migration instructions to make it easy to upgrade to Popper.js v1. Feel free to comment inside the issue if you have any questions.

Performances

Popper.js is very performant. It usually takes 0.5ms to compute a popper's position (on an iMac with a 3.5 GHz Intel Core i5). This means that it will not cause any jank, leading to a smooth user experience.

Notes

Libraries using Popper.js

The aim of Popper.js is to provide a stable and powerful positioning engine ready to be used in 3rd party libraries. Visit the MENTIONS page for an updated list of projects.

Credits

I want to thank some friends and projects for the work they did:

- @AndreaScn for his work on the GitHub Page and the manual testing he did during the development;
- @vampolo for the original idea and for the name of the library;
- Sysdig for all the awesome things I learned during these years that made it possible for me to write this library;
- Tether.js for having inspired me in writing a positioning library ready for the real world;
- The Contributors for their much appreciated Pull Requests and bug reports;
- you for the star you'll give to this project and for being so awesome to give this project a try 🙂

Code and documentation copyright 2016 Federico Zivolo. Code released under the MIT license. Docs released under Creative Commons.
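Since the callbacks example in this copy was mangled, here is a hedged reconstruction of what those options typically look like in Popper.js v1 (the field read off the data argument follows the v1 Data object, but treat this as a sketch rather than the project's own sample):

var popperInstance = new Popper(reference, popper, {
  placement: 'top',
  onCreate: function (data) {
    console.log('popper created, placement is', data.placement);
  },
  onUpdate: function (data) {
    console.log('popper repositioned, placement is', data.placement);
  },
});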
https://www.javascripting.com/view/popper-js
CC-MAIN-2019-47
refinedweb
888
50.73
Comment on Tutorial - this keyword sample in Javascript By Nicholas C. Zakas

Comment Added by: anil
Comment Added at: 2010-11-04 00:36:35
Comment on Tutorial: this keyword sample in Javascript By Nicholas C. Zakas
Very Very nice.

Other comments:
1. *; public class de (truncated) - View Tutorial - By: Virudada at 2012-05-05 06:27:22
2. Nice n simple ex!!!!!!!!!!!!!!!!!!! - View Tutorial - By: Siya at 2010-11-17 07:08:34
3. Very Very nice explanation... - View Tutorial - By: anil at 2010-11-04 00:22:58
4. its not solve my problem.i.e the concept is - View Tutorial - By: murli at 2010-10-02 23:23:11
5. Hi there is two ways you can get information. - View Tutorial - By: Sathyam at 2008-12-15 02:01:22
6. i like this a lot - View Tutorial - By: raj at 2011-05-06 06:15:36
7. awesome!!!explanation - View Tutorial - By: karthik at 2013-02-02 08:06:40
8. I am not able to open the rar file, can you please - View Tutorial - By: Chandra at 2008-08-07 10:35:59
9. good info to create package - View Tutorial - By: Nirdesh at 2012-10-01 03:45:02
10. thanks a lot..please tell me how to implement this - View Tutorial - By: agnas at 2008-04-26 12:26:52
https://www.java-samples.com/showcomment.php?commentid=35538
CC-MAIN-2020-05
refinedweb
223
78.55
Using ‘yield’ to simulate a Markov chain

One thing that I really like about Sage is that it uses Python as its underlying language. This means that we get “for free” many nice features of Python. One of these features that I particularly like is the yield keyword. Here is a small example:

def foo():
    i = 0
    while True:
        yield i
        i = i + 1

We can use the foo function as a generator:

sage: g = foo()
sage: print g.next()
0
sage: print g.next()
1
sage: print g.next()
2

In other words, the yield keyword acts as a way to turn a Python function into a generator. The execution of foo is paused until the next call to g.next(). If we reach the end of the function, the StopIteration exception is raised.

The yield keyword makes it pretty easy to write the skeleton for a Markov chain simulator, using the following basic form:

def markov_chain():
    state = initial_state()
    while True:
        yield state
        state = new_state(state)
        if some_condition:
            return

For a real example, see the latin_square_generator function in latin.sage, which is part of a small library for latin square manipulations in Sage. The Markov chain itself was given by Jacobson and Matthews in “Generating uniformly distributed random Latin squares,” Journal of Combinatorial Designs, vol 4, 1996, no 6, pp 405–437.
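To make the skeleton concrete, here is a small instance of it, added for illustration (it is not from the original post): a lazy random walk on the integers. Note that g.next() above is Python 2 syntax; in Python 3 you would write next(g).

import random

def random_walk(start=0):
    state = start
    while True:
        yield state
        state = state + random.choice([-1, 1])  # new_state(state)

walk = random_walk()
first_five = [next(walk) for _ in range(5)]  # e.g. [0, 1, 0, -1, -2]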
https://carlo-hamalainen.net/2007/12/16/785/
CC-MAIN-2017-51
refinedweb
222
70.23
One Trick A Day (35 Part Series)

I challenged myself to share a blog post every day until the end of the COVID-19 quarantine in Switzerland, the 19th April 2020. Thirty days left until hopefully better days.

We are starting a new project with two friends. I can't tell much about it at this point, but let's just say for the moment that it aligns with our values. For its purpose we need a website which, obviously, is going to be open source and which I'm going to develop with Gatsby. Even though it is not my first Gatsby site (my personal website is developed with the same stack), this is the first time I have to internationalize one. I expected such an implementation to be fairly straightforward, but between light documentation, outdated blog posts and sample projects, it turned out that I actually had to invest two hours this morning to finally achieve my goal. That's why I thought that sharing the outcome in this new tutorial might be a good idea.

SEO Friendly Plugin

Your good old friend, the search engine crawler, needs different URLs (routes) to crawl and render your pages for each language. For example, if your website supports English and French, Google is going to be happy if you provide routes such as /en/ and /fr/.

To achieve this with Gatsby, the first important thing to keep in mind is that all your pages have to be duplicated. To follow the above example, that would mean that the website would contain both an index.en.js page and an index.fr.js one.

To help our website understand such routing, we can use the plugin gatsby-plugin-i18n.

npm install gatsby-plugin-i18n --save

Once installed, we add its required configuration in gatsby-config.js and also add some meta information about the list of supported languages and the default one. Note that I set prefixDefault to true so that there is no unprefixed root routing: even URLs for the default language, English, will have to be prefixed with /en/. To be perfectly honest with you, one of the reasons behind this is also the fact that I was unable to make it happen otherwise 😅.

siteMetadata: {
  languages: {
    langs: ['en', 'fr'],
    defaultLangKey: 'en'
  }
},
plugins: [
  {
    resolve: 'gatsby-plugin-i18n',
    options: {
      langKeyDefault: 'en',
      useLangKeyLayout: true,
      prefixDefault: true
    }
  }
]

Because we are using a prefix in any case, without any other change, accessing the root of our website will display nothing. That's why we edit gatsby-browser.js to redirect root requests to the default home page.

exports.onClientEntry = () => {
  if (window.location.pathname === '/') {
    window.location.pathname = `/en`
  }
}

Internationalization Library

Gatsby and the above plugin are compatible with either react-i18next or react-intl. I use i18next in Tie Tracker, therefore I went with the other solution because I like to learn new things. React Intl relies on the Intl APIs, which is why we are also installing the polyfill intl-pluralrules.

npm install react-intl @formatjs/intl-pluralrules --save

Hands-on Coding

Enough installation and configuration, let's code. The major modification we have to apply occurs in layout.js, which, by the way, I moved into a subfolder src/components/layout/ for no other particular reason than the fact that I like a clean structure. What happens here, you may ask? Summarized: we are adding two new required properties, location and messages. The first one is used to guess the locale which should be applied and the second one contains the list of translations.
As you can notice, we import React Intl and we also import the function getCurrentLangKey from ptz-i18n, which is actually a utility of the above plugin. I'm also using the <FormattedMessage/> component to print out a Hello World to ensure that our implementation works.

import React from "react"
import PropTypes from "prop-types"
import { useStaticQuery, graphql } from "gatsby"

import Header from "../header"
import "./layout.css"

import { FormattedMessage, IntlProvider } from "react-intl"
import "@formatjs/intl-pluralrules/polyfill"

import { getCurrentLangKey } from 'ptz-i18n';

const Layout = ({ children, location, messages }) => {
  const data = useStaticQuery(graphql`
    query SiteTitleQuery {
      site {
        siteMetadata {
          title
          languages {
            defaultLangKey
            langs
          }
        }
      }
    }
  `)

  const { langs, defaultLangKey } = data.site.siteMetadata.languages;
  const langKey = getCurrentLangKey(langs, defaultLangKey, location.pathname);

  return (
    <IntlProvider locale={langKey} messages={messages}>
      <Header siteTitle={data.site.siteMetadata.title} />
      <p>
        <FormattedMessage id="hello" />
      </p>
    </IntlProvider>
  )
}

Layout.propTypes = {
  children: PropTypes.node.isRequired,
  location: PropTypes.any.isRequired,
  messages: PropTypes.any.isRequired,
}

export default Layout

To "extend" the layout for each language and locale, we create a new file per supported language. For example, in English, we create layout/en.js in which we import both our custom messages and the specific polyfill.

import React from 'react';

import Layout from "./layout"

import messages from '../../i18n/en';
import "@formatjs/intl-pluralrules/dist/locale-data/en"

export default (props) => (
  <Layout {...props} messages={messages} />
);

At this point, our code won't compile because these languages, these messages, are missing. That's why we also create the files for them, for example i18n/en.js.

module.exports = {
  hello: "Hello world",
}

As I briefly stated in my introduction, each page is going to be duplicated. That's why we create the corresponding index page. In the case of the default language, English, we rename index.js to index.en.js. Moreover, because the layout now expects a location property, we pass it from every page too. Note also that, because I have decided to prefix all routes, I also modified the link routing from /page-2/ to /en/page-2.

import React from "react"
import { Link } from "gatsby"

import Layout from "../components/layout/en"
import SEO from "../components/seo/seo"

const IndexPage = (props) => (
  <Layout location={props.location}>
    <SEO />
    <h1>Hi people</h1>
    <Link to="/en/page-2/">Go to page 2</Link>
  </Layout>
)

export default IndexPage

The same modifications we have implemented for index should be propagated to every page. In this example, I also rename page-2.js to page-2.en.js and apply the same modifications as above.

import React from "react"
import { Link } from "gatsby"

import Layout from "../components/layout/en"
import SEO from "../components/seo/seo"

const SecondPage = (props) => (
  <Layout location={props.location}>
    <SEO title="Page two" />
    <p>Welcome to page 2</p>
    <Link to="/en/">Go back to the homepage</Link>
  </Layout>
)

export default SecondPage

Identically, the usage of the <Layout/> component has to be enhanced with the location object in our 404.js page.
import React from "react"

import Layout from "../components/layout/layout"
import SEO from "../components/seo/seo"

const NotFoundPage = (props) => (
  <Layout location={props.location}>
    <SEO />
    <h1>NOT FOUND</h1>
  </Layout>
)

export default NotFoundPage

And voilà, that's it: our Gatsby site is internationalized 🎉. Of course you might want to add some other languages; to do so, repeat the above English steps and, again, duplicate the pages.

Summary

Well, it was really unexpected to me to have had to spend so much time to unleash internationalization in a new project. That's why I hope that this small "how to" might help anyone in the future. And as always, if you notice anything which can be improved, don't hesitate to ping me with a comment or a tweet.

Stay home, stay safe!

David

Cover photo by Nicola Nuttall on Unsplash
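For completeness, here is what the French side of this setup could look like, mirroring i18n/en.js and layout/en.js; the translation string is my own, not from the article:

// i18n/fr.js
module.exports = {
  hello: "Bonjour le monde",
}

// components/layout/fr.js
import React from 'react';

import Layout from './layout';

import messages from '../../i18n/fr';
import '@formatjs/intl-pluralrules/dist/locale-data/fr';

export default (props) => (
  <Layout {...props} messages={messages} />
);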
https://dev.to/daviddalbusco/internationalization-with-gatsby-38p2
CC-MAIN-2020-34
refinedweb
1,224
56.05
Hi, I'm very new to java and trying to make a simple java program which will work out the chances of two people sharing the same birthday in, say, a class of 40 people. Using the Monte Carlo method the above scenario will be simulated 10,000 times. Below is the code I have already written:

public class birthday {

    public static void main(String[] args) {
        long experiments = 10000;
        long bdaycount = 0;
        int bday1, bday2;
        for(long i = 0; i < experiments; i++) {
            for(int j = 0; j < 20; j++) {
                bday1 = probability();
                bday2 = probability();
                if(bday1 == bday2) {
                    bdaycount++;
                    break;
                }
            }
        }
        System.out.println("prob is: " + ((double)bdaycount / (double)experiments));
    }

    public static int probability() {
        double x = Math.random();
        x = 1.0 + (x * 365);
        int outcome = (int)Math.floor(x);
        return outcome;
    }
}

But I quickly figured out that all this is doing is getting two birthdays at a time and comparing those, instead of the entire class of 40. So my question is: how can I compare the whole class and see if there are two or more birthdays which are the same? I thought that it may be possible to store the values in an array but I'm not sure. Any help will be appreciated. Thanks
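One way to repair the simulation (a sketch of my own, not a reply from the thread): draw all 40 birthdays per experiment and use a boolean "seen" table to detect any duplicate, instead of comparing only two random birthdays at a time.

import java.util.Random;

public class Birthday {
    public static void main(String[] args) {
        Random rng = new Random();
        int experiments = 10000, classSize = 40, hits = 0;
        for (int i = 0; i < experiments; i++) {
            boolean[] seen = new boolean[365];      // one flag per day of the year
            for (int j = 0; j < classSize; j++) {
                int day = rng.nextInt(365);         // 0..364
                if (seen[day]) { hits++; break; }   // found a shared birthday
                seen[day] = true;
            }
        }
        System.out.println("prob is: " + (double) hits / experiments);
    }
}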
http://www.javaprogrammingforums.com/whats-wrong-my-code/12797-looking-duplicate-birthdays-program.html
CC-MAIN-2018-51
refinedweb
203
60.85
Re: Initialization of static anonymous-namespace members from a dynamically loaded lib
Discussion in 'C++' started by Alf P. Steinbach /Usenet, Jan 9,

Related threads:
- initializing embedded anonymous struct static members? (mark fine, Nov 8, 2004, in forum: C++; Replies: 1, Views: 469; last reply: Michael Jørgensen, Nov 9, 2004)
- any python wrapper to call .lib static library(ufmod.lib)? (est, Feb 16, 2008, in forum: Python; Replies: 1, Views: 617; last reply: Diez B. Roggisch, Feb 16, 2008)
- Why can nsmc, local classes or anonymous classes have static members? (Rit, Dec 12, 2009, in forum: Java; Replies: 23, Views: 943; last reply: Mike Schilling, Jan 3, 2010)
- [ANN] Anonymous CVS repository of lib/csv and lib/soap4r suspended again (NAKAMURA, Hiroshi, Jul 6, 2004, in forum: Ruby; Replies: 0, Views: 136; last reply: NAKAMURA, Hiroshi, Jul 6, 2004)
http://www.thecodingforums.com/threads/re-initialization-of-static-anonymous-namespace-members-from-a-dynamicallyloaded-lib.741768/
CC-MAIN-2014-41
refinedweb
126
62.78
How to print a debug message to the log file?

I want to print a debug message in a method. I overrode the project.task.work create method and I don't know whether the program uses my method or the original. In Java I would print a simple message to see it. How do I do the same with the OpenERP API? I use OpenERP 7 on Debian Linux.

To display a debug log, you can use the standard python logging module. Basically, you need to get a logger and you can use it in your methods to output log messages. Usually, in OpenERP, a _logger is obtained at the top of the python module, its name being the name of the python module.

import logging

from openerp.osv import orm

_logger = logging.getLogger(__name__)


class project_task_work(orm.Model):
    _inherit = 'project.task.work'

    def create(self, cr, uid, vals, context=None):
        _logger.debug('Create a %s with vals %s', self._name, vals)
        return super(project_task_work, self).create(cr, uid, vals, context=context)

Note that you will need to start your server using the --debug option to show debug logs. Other options for logging at a higher level (which doesn't require the --debug option) are:

_logger.info('FYI: This is happening')
_logger.warning("WARNING: I don't think you want this to happen!")
_logger.error('ERROR: Something really bad happened!')

It doesn't work (I see nothing in the log file, although it has a lot of debug info in it). I don't know how to debug this personal add-on. I just want to override this method. Should I create a new question for this problem?

If you don't see your logs, that's probably because your module is not loaded correctly, or the module's dependencies are wrong. You can create a question focused on the loading of modules, with details on your module (people will have to find out why it is not loaded, so give them useful information).

Thanks a lot. If it didn't work, that's because I hadn't restarted OpenERP. I just restarted and it's OK. Thanks again.

Hi, when I call _logging.info after _columns in the .py file it prints to the log file, but when I call it in the method it doesn't work. Please help me out.
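As an aside of my own (option names vary between OpenERP releases, so double-check your version's --help): you can also raise verbosity from the command line instead of, or in addition to, the --debug flag mentioned above.

./openerp-server --log-level=debug
# or more selectively, one logger prefix at a chosen level:
./openerp-server --log-handler=openerp.addons.project:DEBUG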
https://www.odoo.com/forum/help-1/question/how-print-a-debug-message-on-log-file-1037
CC-MAIN-2018-09
refinedweb
440
69.48
Hello, my name is Rashid Sarwar and I am on the Visual C++ Compiler Front End team. I have been on this team for several months and have learned many things about how to test a compiler. In this post I will be sharing with you how we test our Visual C++ compiler and its Intellisense.

Testing features in a compiler is not easy, as each feature/construct can interact with the already existing features/constructs in many different ways. Coming up with all the possible cases in which a new construct can be used is a big task, as the Visual C++ compiler has many features, from standard C++ language features to our own Microsoft extensions to the compiler. Every time we have to add a new feature to our compiler, like the new C++0x features coming in Visual Studio 2010, we try to come up with a good number of possible cases in which the feature can be used as a standalone feature. We first try to exhaust the new grammar of the newly added language constructs and make sure that we are able to compile them as standalone constructs.

For example: lambda expressions in the new C++0x can be used in many different ways. A lambda can have an empty capture clause. A simple compiler test for the empty capture clause would be something like the source below, in which we come up with source code exercising the empty lambda capture clause. The automated test will take these sources and compile them with the Visual C++ compiler. If any compilation of a valid usage of lambdas with an empty capture clause fails, then it is an indication that something is there which the compiler either does not currently support or has broken.

//Source1: testing empty lambda capture clause
int main() {
    bool even = [](int k){ return k%2 == 0; }(2);
    return 0;
}

The above test verifies that we are able to correctly compile the empty capture clause of lambda expressions. But how can we verify that we are generating the correct compiled output? How can we verify that the code generated by the compiler is correct? We take a further step in the test by looking at the output of the lambda function object. We do run-time verification by changing the test in the following manner:

//Source2 - testing empty lambda capture clause
int main() {
    bool even = [](int k){ return k%2 == 0; }(2);
    if(even) {
        return 0;
    } else {
        return 1;
    }
}

Our testing automation framework will look for the return value of the main method. If the return value from main is 0 then the test is assumed to pass and we assume that we are generating correct code from compilation. If the return value is non-zero then the testing framework assumes that the test failed, as the result of the main method can only be non-zero if, let's say, we generated wrong machine code. This is very simple source code to test a big feature like lambda expressions, which can be used in many different ways. So we then try to test each feature from the perspective of how it can be used in combination with other features. An example can be testing lambda expressions in class member functions.

Now, I will discuss testing Intellisense for Visual C++. For Intellisense testing we divide our automated tests into two categories: Engine/Component level tests and End to End User scenario tests. In an End to End User scenario test our automated test will try to simulate user actions in the IDE. An example test can be like:

1. Creating a new Visual Studio instance.
2. Creating a new Visual C++ project.
3. Adding a new cpp file in the project and setting the cursor focus in the new file.
4. Writing some C++ code in the opened file in the editor.
Simple code can be like:

class person {
public:
    char * name();
};

int main() {
    person p;
    return 0;
}

5. Trying to invoke some intellisense operations, like QuickInfo, on the code in the file by hovering the mouse cursor over the newly created instance of class person in the main method.
6. Finally, verifying the tooltip output against the output the test expects.
7. If the tooltip output differs from the expected output, then the QuickInfo operation is assumed to not be working, making the test fail. The test passes if the expected and actual outputs from hovering the mouse cursor over the construct match.

The Engine/Component level tests directly talk to Visual Studio to invoke the intellisense operations on the code constructs. We let the test know at which positions in the code it has to invoke the intellisense operations. For example, on the sample code above, a simple test scenario can be testing QuickInfo (QI) functionality on variables of user-defined types. We will let the test know to invoke the QI operation at the position where the person instance p is declared in the main function. We will get the resulting tooltip that appears and match it with the expected output.

Our Engine level tests are easy to write and they have less execution time, as the test itself does not need to do all the extra work. But they don't exactly mimic user scenarios. We write IDE tests to get better coverage of user end-to-end scenarios. The drawback of IDE tests is that they take a long time to execute. So we try to strike a balance between engine level and end-to-end scenario testing when we have to test a new or existing feature in our Visual C++ intellisense.

Hopefully this post will give you an idea of how we test our Visual C++ compiler and also its intellisense.

Thanks a lot to everybody for putting down some interesting questions and also some interesting pieces of code to look at. I will try to answer all of them.

Jk & yuguang: Intellisense performance is really a good question. We have a lot of tests specifically for Intellisense performance testing. In our performance tests we try to simulate user actions as I described earlier in the post. We try to perform the operation that is to be measured a defined number of times continuously in an iteration and get the average amount of time for that operation. That is the simplest explanation. The performance tests try to take into account that they are measuring the time of only the operation that is to be measured. Some tests measure time in isolation from all other scenarios, and some measure it the way a normal user would have invoked the operation. For testing memory leaks in VS we normally test both retail and debug builds of our products. If there are any memory leaks they appear in the form of assertions when running our tests on a debug build.

Joel: Thanks for pointing out Boost. We do provide intellisense for the Boost library and we have tests that use Boost. We have done intellisense testing against Boost for our Intellisense parser and I hope our customers will have a good experience with Intellisense in the Boost library. Thanks a lot for your interest in Boost.

Majkara: Nice idea of testing the Boost Library. Thanks for it.

Asif: Thanks for putting some interesting stuff here. I will try to explain the behaviour of the second question you had.
Why do I get separate results from the constructor and 'func'? (Without /clr, I get the same values.)

When the code is compiled as native, the compiler does not create a copy instance first. It constructs an A directly as a parameter to 'func', and this way just one instance gets created, so you see both addresses the same in the native case. When compiled with /clr, it first creates a temporary A instance (at one address), invokes the trivial copy constructor and then copies it as an argument to func. This way func prints the address of this second object. The compiler can choose to construct an A instance directly as an argument to func, or it can construct a temporary, then copy it as an argument.

I am looking forward to some more interesting questions. Do let me know if you guys have any regarding the Intellisense and our VC++ compiler.

go through this discussion. Hi, I am stuck because of this bug in the VC++ compiler (or so it seems). This code is simplified for the sake of this discussion, and I cannot implement copy constructors etc. on classes/structs that do not belong to me...

What happened with the "VC9 SP1 Hotfix For The vector<function<FT>> Crash" Hotfix?

Hi Anders, We are currently working on the Hotfix for the "VC9 SP1 Hotfix For The vector<function<FT>> Crash" and it should be posted shortly. We will also update the original blog with the new link.

Can VS add a menu or button labelled "Rebuild intellisense" so that I can manually do a full rebuild? Even if it needs 600 seconds it would be helpful. Another question: why, when I open multiple VC IDEs (3 or more) and use them for a long time (more than 1 day), does the CTRL+F dialog show very slowly? If I close all VC IDEs and reopen, everything seems OK. I really suspect VS has a GDI leak or handle leak.

This is a bug I found (and reported) in VC 6.0 and it is still in 2003. I was hoping it would be gone in my next generation of compiler:

#include <ctype.h>
#include <math.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <stddef.h>

static double dmx1,dmx2;
#define NRC_DMAX(a,b) (dmx1=(a),dmx2=(b),((dmx1)>(dmx2))?(dmx1):(dmx2))

/* --------------------------------------------------------------------------*/
int main( void );

int main( void )
{
    double test1;
    double test2;
    double test3;

    test1 = NRC_DMAX(7.,NRC_DMAX(4.,3.));
    test2 = NRC_DMAX(NRC_DMAX(4.,3.),7.);
    test3 = NRC_DMAX(NRC_DMAX(4.,7.),3.);

    printf("\n NRC_DMAX(7.,NRC_DMAX(4.,3.)) = %le \n", test1 );
    printf("\n NRC_DMAX(NRC_DMAX(4.,3.),7.) = %le \n", test2 );
    printf("\n NRC_DMAX(NRC_DMAX(4.,7.),3.) = %le \n", test3 );

    return 0;
}

test1, test2 and test3 should be the same but are not in the versions I have tested.

As someone said: please give users some control over intellisense, e.g. some settings like "Run Intellisense in background", "Don't run Intellisense in background, just do it when I tell you - even if it takes 5 mins", "Kill Intellisense, it's eating my memory". Intellisense certainly shouldn't be run during a compile! We've managed to mitigate some of the problems by not letting it write to disk.

I am curious about the code style in Listing 2, where there is so much verbosity! I hope this is only seen here and is not in compiler code :) if (even) return 0; else return 1; is just return !even;

Unfortunately this intellisense bug still happens in VS 2008 and is a show stopper for us. :( I hate it that we can't reopen bugs in connect
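A side note on the NRC_DMAX report above (my reading, not part of the blog): this is arguably not a compiler bug. Every expansion of the macro shares the same two file-scope statics, so the inner NRC_DMAX clobbers dmx1 and dmx2 while the outer expansion is still using them; the nesting order therefore changes the result on any conforming compiler. An inline helper has none of this aliasing:

/* each call gets its own a and b, so nested calls are safe */
static double nrc_dmax(double a, double b)
{
    return a > b ? a : b;
}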
http://blogs.msdn.com/b/vcblog/archive/2009/03/10/testing-vc-compiler-and-intellisense.aspx?PageIndex=2
CC-MAIN-2015-06
refinedweb
1,848
62.88
Create Your Own Two-Factor Authentication System: Creating the Database

Let's figure out what we need to make this magical wonderland happen. We need a user's phone number, a field to state if the phone is usable or not (i.e. whether we should concern ourselves with requests from it) and storage for the active token. We can make it more advanced later, but for now this is good. So we will use two tables: phone and tokens. The token will be based on the phone number and current timestamp as sort of a salt (not the safest, but again, a template for the future 😉).

Phone Table
id - primary key
digits - small integer (15-digit number)

Tokens Table
id - primary key
token - variable char (length of 10)
phone_id - foreign key to phone->id

Simple, eh? Now let's create it in Python!

from peewee import *

database = SqliteDatabase("2fa.db")

We import everything from the Peewee library to make things easier on us. We also specify we want a SQLite database.

class ABC(Model):
    class Meta:
        database = database

We create an abstract base class (ABC) so everything can share the database instance. This is mostly done to save typing.

class Phone(ABC):
    id = PrimaryKeyField()
    digits = IntegerField()

class Tokens(ABC):
    id = PrimaryKeyField()
    token = CharField(max_length=10)
    phone = ForeignKeyField(Phone)

Specify the table structure of the phone and tokens tables, subclassing the mentioned ABC class.

phone = Phone
tokens = Tokens

Easy references for the tables (which will be needed quite a bit).

database.connect()

Connect/load the SQLite database.

try:
    phone.create_table()
    tokens.create_table()
except:
    pass

Create the tables in the SQLite file. Save the above code fragments to a file called "db.py" (otherwise make the adjustments in later parts). Now we have the database laid out properly and easy to work with! 🙂

Dotless Domains?

A recent article on Slashdot discussed Google requesting to start using dotless TLDs. While ultimately ICANN denied this request, it's interesting to see where the future of the Internet very well could go. Most filter systems use wildcards, but not to a scalable extent (in such a way that future concerns are taken care of as well). It does make sense in some respects, though. How is anyone to know that dotless domains will ever happen, for example? Another issue is that filtering through a long list of futuristic ideas still adds more overhead to each request, which even when cached can pose some annoyances. Should filtering be done on a whim, when new techniques/resources are made available, or when it's best suited for the network? It's tough to say. When building a LAN you could easily tell the filtering system to deny any requests to *.onion, but then you have to consider how likely it'd be to even set up a Tor node within the network, have it connect successfully and allow applications like web servers. If all of these are also possible then you have more concerns than just filtering which domains are accessible.

Compliance, What's That?

"Being compliant" is a big buzz word as of late that really adds nothing to the company needing it. Chances are people will be able to tell you how they can make you compliant, but not be able to tell you why you should be. Granted, the flip side is that if you're looking into compliance you should know why you want it done anyways, but still. PCI and HIPAA compliance are probably the most common ones, serving credit card processing and medical records respectively.
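Since the post says the token is derived from the phone number plus the current timestamp, here is one hedged sketch of that generation step (the hash choice and truncation are my assumptions, sized to fit the 10-character token column):

import hashlib
import time

def make_token(digits):
    # phone number + timestamp acts as the "sort of a salt" described above
    raw = "%s:%s" % (digits, time.time())
    return hashlib.sha256(raw.encode()).hexdigest()[:10]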
The main case for these is that more and more people are using plastic instead of paper to pay for things, and if you're doing business online it's virtually a necessity somewhere down the line. HIPAA, while part of me feels it has seen its day as fewer and fewer people can afford to go to doctors/medical professionals, still holds a strong place in government regulations (PCI isn't governmentally regulated). I don't know the fundamentals of HIPAA regulations (I never really was concerned with them), but PCI is a tricky little fella. It has 4 classes/levels: A, B, C, D, which range from strictest to laziest. Most online merchants will fall between C & D and physical merchants will be A & B (simply due to the vast differences in how cards are handled). D, which covers stores that are on shared hosting plans and do not actually store CC information, is also the most common. A has the hardest checklist of items to pass, however: it goes not only into virtual security but physical security as well.

Security for SMBs: Criteria

While this won't fit the mold for every SMB (small and medium business) out there, it will still give others an idea of what should be considered. This will assume the SMB wants to expand in the future.

1. Scalable

Most SMBs do not want to stay in that classification forever. If the company knows its end goal in this regard early, it should also be able to plan into the future in terms of software use, like security applications (AV, IDS/IPS, etc.). Should you be expecting your company to go beyond the 5-50 employee mark (or however you deem an SMB), then knowing that the software is able to handle both a small as well as an enterprise business should be part of the concern.

2. Ease of Use

Software should be easy to use from the time you look at the pretty packaging or install file until it's time to uninstall it for good. This is one area where a lot of vendors make their critical mistake, however. If you need to write documentation on how to do everything then you're doing something wrong. Documentation should be there for clarification, not for how to use the software itself.

3. Easy Authentication

Typically there's going to be some level of authentication, whether directly (the program requiring a username and password) or indirectly (logging into the system to use it). If the program does require the use of its own login system, it usually makes more sense to tie it into the system itself as well. SSH is a prime example of this. It prompts the user for the username and password (or whatnot), but authenticates the user based on the system itself.

4. Automated Updates

No one likes having to remember to update anything, especially when it's supposed to be set it and forget it. Automatically updating a program can pose issues in itself; however, most people don't consider it until it's too late. The convenience of not having to remember to hit the update button or run the script makes their life much easier, which is what you want the end result to be.

Dropbox Client Reverse Engineered

At this year's USENIX talks, an interesting presentation was given describing how two people reverse engineered Dropbox's client. This project, performed by Dhiru Kholia of Openwall and Przemyslaw Wegrzyn of CodePainters, showed how to both intercept SSL traffic (thus being able to manipulate the API calls) as well as bypass two-factor authentication.
The authors also note, however, that for this attack to be efficient you need to already have compromised the machine:

Kholia concurred that hijacking a Dropbox client first requires hacking an existing vulnerability on the target user's machine, which can be executed remotely.

So if you're wanting to peek at your friend's Dropbox account, you'll have to dig deeper into the architecture to even attempt it. In the end they still proclaim Dropbox is a viable and efficient tool for its purpose, and were looking to open the eyes of the IT security community, not to devalue the usefulness of Dropbox. From what I'm able to gather, being able to intercept the SSL traffic opens up the floodgates of possibilities. You'll be able to see the data both before encryption and after decryption, and snoop out the details you want/need.
http://itknowledgeexchange.techtarget.com/security-admin/page/7/
CC-MAIN-2016-30
refinedweb
1,354
59.74
Learn to implement an important new feature for the WinForms DataGrid, automatic row height sizing! Also shown is column hiding and column auto sizing.

Latest C# Articles - Page 100

Creating Simple Charts and Graphs
Learn to create simple charts and graphs using the Microsoft .NET Framework's System.Drawing namespace.

Designing a Winform in C# and Linking It to a SQL Server Database
Discover how to design an interface (Windows Form) in Visual Studio .NET using the C# language and then create and link it to a database on SQL Server 2000. (The article was updated.)

Serialization/Deserialization in .NET
Discover how to easily store and retrieve objects into a file, a database, or an ASP session state.

TraceView: A Debug View Utility
Capture debug messages from the shared memory DBWIN_BUFFER, produced by the OutputDebugString() function.

Working with Delegates Made Easier with C# 2.0
See how the new delegate syntax conveniences in C# 2.0 take a lot of the tedium and drudgery out of working with delegates.
http://www.codeguru.com/csharp/csharp/594/
CC-MAIN-2015-06
refinedweb
169
57.67
std::numeric_limits::round_style

The value of std::numeric_limits<T>::round_style identifies the rounding style used by the floating-point type T whenever a value that is not one of the exactly representable values of T is stored in an object of that type.

Standard specializations

Notes

These values are constants, and do not reflect the changes to the rounding made by std::fesetround. The changed values may be obtained from FLT_ROUNDS or std::fegetround.

Example

The decimal value 0.1 cannot be represented by a binary floating-point type. When stored in an IEEE-754 double, it falls between 0x1.9999999999999×2^-4 and 0x1.999999999999a×2^-4. Rounding to the nearest representable value results in 0x1.999999999999a×2^-4. Similarly, the decimal value 0.3, which is between 0x1.3333333333333×2^-2 and 0x1.3333333333334×2^-2, is rounded to nearest and is stored as 0x1.3333333333333×2^-2.

#include <iostream>
#include <limits>

int main()
{
    std::cout << std::hexfloat
              << "The decimal 0.1 is stored in a double as " << 0.1 << '\n'
              << "The decimal 0.3 is stored in a double as " << 0.3 << '\n'
              << "The rounding style is "
              << std::numeric_limits<double>::round_style << '\n';
}

Output:

The decimal 0.1 is stored in a double as 0x1.999999999999ap-4
The decimal 0.3 is stored in a double as 0x1.3333333333333p-2
The rounding style is 1
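As a companion to the note above (my addition, not part of the original page): the rounding mode actually in effect at run time is queried through the floating-point environment, not through numeric_limits.

#include <cfenv>
#include <iostream>

int main()
{
    // typically FE_TONEAREST unless std::fesetround() has changed it
    std::cout << (std::fegetround() == FE_TONEAREST) << '\n';
}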
http://en.cppreference.com/w/cpp/types/numeric_limits/round_style
CC-MAIN-2014-41
refinedweb
229
71
An enumeration is a special set of named numeric constants. It is a special kind of value type limited to a restricted and unchangeable set of numerical values. By default, these numerical values are integers, but they can also be longs, bytes, etc. (any numerical value type except char) as will be illustrated below.

A string is an empty space, a character, a word, or a group of words...

In the body of a method, you can also declare variables that would be used internally. A variable declared in the body is referred to as a local variable. It cannot be accessed outside of the method it belongs to. After declaring a local variable, it is made available to the method and you can use it.

A function defined within a class is called a method. It is code designed to work on the member data of the class. Methods are operations associated with types. To provide a type with methods is to give it some useful functionality. Often this functionality is made generally available, so that it can be utilized by other types.

C# has a strong feature in which we can define more than one class with a Main method. Since Main is the entry point for program execution, there would then be more than one entry point. In fact, there should be only one entry point, which is resolved by telling the compiler, at compilation time, which Main is to be used, as shown below.

Command line arguments are parameters supplied to the Main method at the time of invoking it for execution. To understand the concept, see the example given below: a program using command line arguments to accept a name from the command line and write it to the console (Sample.cs).

Like C++ it contains three types of statements:
1. Simple Statements
2. Compound Statements
3. Control Statements

Let's begin in the traditional way, by looking at the code of a Hello World program.

using System;
public class HelloWorld
{
    public static void Main()
    {
        // This is a single line comment
        /* This is a
           multiple
           line comment */
        Console.WriteLine("Hello World! ");
    }
}

C# is a type-safe language. Variables are declared as being of a particular type, and each variable is constrained to hold only values of its declared type. Variables can hold either value types or reference types, or they can be pointers. Here's a quick recap of the difference between value types and reference types.

C# uses a series of words, called keywords, for its internal use. This means that you must avoid naming your objects using one of these keywords.

Identifiers refer to the names of variables, functions, arrays, classes, etc. created by the programmer. They are a fundamental requirement of any language. Each language has its own rules for naming these identifiers.

Microsoft Corporation developed the new computer programming language C#, pronounced 'C-Sharp'. C# is a simple, modern, object oriented, and type safe programming language derived from C and C++. C# is a purely object-oriented language like Java. It has been designed to support the key features of the .NET framework. Like Java, C# is a descendant language of C++, which is a descendant of the C language.
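The Sample.cs listing mentioned above (accepting a name from the command line and writing it to the console) was not preserved in this copy, so the following is only a sketch of that kind of program, with details of my own:

using System;

public class Sample
{
    public static void Main(string[] args)
    {
        if (args.Length > 0)
            Console.WriteLine("Hello, " + args[0]);
        else
            Console.WriteLine("No name supplied.");
    }
}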
http://ecomputernotes.com/csharp/cs
CC-MAIN-2017-47
refinedweb
562
67.45
Adding some function in EL? Jaber C. Mourad Jun 10, 2010 5:10 AM

Hi, As jBPM 4.3 is using juel to resolve its EL, I would be interested in adding some functions to the context — basically to access some i18n resources in a mail activity, to send messages according to locale. How can I add some functions to the EL context? Regards

1. Re: Adding some function in EL? HuiSheng Xu Jun 12, 2010 10:58 PM (in response to Jaber C. Mourad)

Hi Jaber, At this time, I really don't know how to register a custom FunctionMapper for juel. In the jBPM 4.4 trunk, there is an org.jbpm.pvm.internal.el package for the expression operations. Maybe you could implement a custom FunctionMapper and register it to the JbpmElFactory. I think opening a new issue for this feature would be a good idea. Could you do it?

2. Re: Adding some function in EL? Jaber C. Mourad Jun 14, 2010 10:33 AM (in response to HuiSheng Xu)

I created a new feature request (). In the juel config:

<script-manager <script-language </script-manager>

What are the definitions of read-contexts and write-contexts? Maybe there is something to do with that, but I can't find any documentation about customising the script context! Regards

3. Re: Adding some function in EL? Michael Wohlfart Jun 14, 2010 10:53 AM (in response to Jaber C. Mourad)

it's deprecated, see: I ended up extending the ScriptManager to have a writeable context at least:

package net.aux.jbpm4;

import java.util.HashMap;
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import org.jboss.seam.contexts.Contexts;
import org.jbpm.api.JbpmException;
import org.jbpm.pvm.internal.script.EnvironmentBindings;
import org.jbpm.pvm.internal.script.ScriptManager;

public class CustomScriptManager extends ScriptManager {

    protected class CustomEnvironmentBindings extends EnvironmentBindings {
        private HashMap<String, Object> map = new HashMap<String, Object>();

        public CustomEnvironmentBindings() {
            super(null, null);
        }

        @Override
        public Object put(String key, Object value) {
            return map.put(key, value);
        }

        @Override
        public Object get(Object key) {
            Object result = super.get(key);
            if (result == null) {
                result = map.get(key);
            }
            return result;
        }
    }

    /**
     * <script-language
     * <script-language
     * <script-language
     */
    public CustomScriptManager() {
        scriptEngineManager = new ScriptEngineManager();
        scriptEngineManager.registerEngineName("juel",
            new org.jbpm.pvm.internal.script.JuelScriptEngineFactory());
        scriptEngineManager.registerEngineName("bsh",
            new org.jbpm.pvm.internal.script.BshScriptEngineFactory());
        scriptEngineManager.registerEngineName("groovy",
            new org.jbpm.pvm.internal.script.GroovyScriptEngineFactory());
    }

    @Override
    protected Object evaluate(ScriptEngine scriptEngine, String script) {
        Bindings bindings = new CustomEnvironmentBindings();
        // add any custom bindings here:
        bindings.put("pi", "3.14");
        bindings.put("authenticatedUser", Contexts.getSessionContext().get("authenticatedUser"));
        scriptEngine.setBindings(bindings, ScriptContext.ENGINE_SCOPE);
        try {
            Object result = scriptEngine.eval(script);
            return result;
        } catch (ScriptException e) {
            throw new JbpmException("script evaluation error: " + e.getMessage(), e);
        }
    }
}

instead of <script-manager /> in jbpm.cfg.xml:

<!-- custom script manager implementation -->
<object class='net.aux.jbpm4.CustomScriptManager'>
    <field name='defaultExpressionLanguage'><string value='juel' /></field>
    <field name='defaultScriptLanguage'><string value='juel' /></field>
</object>

4. Re: Adding some function in EL? HuiSheng Xu Jun 14, 2010 4:32 PM (in response to Jaber C. Mourad)

Hi Jaber, If you want to implement your own function, please refer to org.jbpm.pvm.internal.el.JstlFunction. And I think Michael's code is very helpful. BTW, I think a custom EL function feature would be very cool. But there is lots of work on EL which Tom has not completed yet; I am afraid we may not finish it in jBPM 4.4. Maybe in the next release we can make a good plan to finish these unresolved issues. Thank you very much.

5. Re: Adding some function in EL? HuiSheng Xu Jun 14, 2010 4:37 PM (in response to Jaber C. Mourad)

And if you only want to use i18n in EL, I could export ResourceBundleELResolver in JbpmElFactoryImpl. What do you think about this?

6. Re: Adding some function in EL? Jaber C. Mourad Jun 15, 2010 11:46 AM (in response to HuiSheng Xu)

It may be a good idea! Adding a parameter in configuration to define the resource bundle... Regards

7. Re: Adding some function in EL? HuiSheng Xu Jun 16, 2010 3:42 AM (in response to Jaber C. Mourad)

Hi Jaber, I opened an issue for this feature. At this moment, if we want to use i18n in EL, we have to create a ResourceBundle instance and set it as a process variable.

8. Re: Adding some function in EL? Maciej Swiderski Jun 16, 2010 5:53 AM (in response to HuiSheng Xu)

A simple way of providing your own functions to be evaluated by jBPM could be to enclose such functions in a class that will be set as a process variable. Not perfect but should do it...
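For reference, here is a standalone hedged sketch of registering a custom EL function with juel itself (outside jBPM's ScriptManager), using juel's SimpleContext.setFunction. The bundle name "messages" and the key "greeting" are made-up placeholders:

import java.util.ResourceBundle;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import de.odysseus.el.ExpressionFactoryImpl;
import de.odysseus.el.util.SimpleContext;

public class JuelFunctionDemo {

    // Static method to expose in EL, e.g. for i18n lookups.
    public static String msg(String key) {
        return ResourceBundle.getBundle("messages").getString(key);
    }

    public static void main(String[] args) throws Exception {
        ExpressionFactory factory = new ExpressionFactoryImpl();
        SimpleContext context = new SimpleContext();
        // Register the static method under the prefix "i18n".
        context.setFunction("i18n", "msg",
                JuelFunctionDemo.class.getMethod("msg", String.class));
        ValueExpression expr = factory.createValueExpression(
                context, "${i18n:msg('greeting')}", String.class);
        System.out.println(expr.getValue(context));
    }
}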
https://developer.jboss.org/message/548204
CC-MAIN-2018-43
refinedweb
786
53.17
Hello All, The question that I am attempting to work out is as follows:

"A government research lab has concluded that an artificial sweetener commonly used in diet soda pop will cause death in lab mice. A friend of yours is desperate to lose weight but can't give up soda pop. Your friend wants to know how much soda pop it is possible to drink without dying as a result. Write a program to supply the answer. The input to the program is the amount of artificial sweetener needed to kill a mouse, the weight of the mouse, and the desired weight of the dieter. Assume that diet soda contains 1/10th of 1% artificial sweetener. Use a variable declaration with the modifier 'const' to give a name to this fraction. You may want to express the percent as the double value of 0.001. Your program should allow the calculation to be repeated as often as the user wishes."

I have completed the first draft of the code, which is attached. My questions are:

1) Is the logic of the program correct? For example, the cin values
SweetnerMouse = 0.080
WeightMouse (grams) = 50 grams
WeightDieter (grams) = 65000 grams
yield the value '104000' for the variable DietSodaPopCans, meaning that it takes 104000 cans to kill the dieter at his/her given dieting weight. Does that sound sensible? I feel that the two equations written in the C++ code are logical, but I would need some checking on that, please.

2) The question above requires that the program be repeated as many times as the user wishes. For this, what type of looping mechanism could I use? Would a while loop suffice, or should I go for do..while?

Thanks

The source code of the program is shown below:

//Author: Andy8
//Soda Pop Death
//Date: 23 August 2011
#include <iostream>
using namespace std;

int main()
{
    const double DIET_SODA_SWEETNER = 0.001;
    int DietSodaPopCans;
    double SweetnerMouse;
    double WeightMouse;
    double SweetnerDieter;
    double WeightDieter;

    cout << "This program calculates how many cans of soda it will take to kill you !\n";
    cout << "Each can contains 0.001 (0.1%) of artificial sweetener\n" << endl;
    cout << "Enter the amount of Artificial Sweetner needed to kill a mouse: \n";
    cin >> SweetnerMouse;
    cout << "Enter the weight of the mouse in grams: \n";
    cin >> WeightMouse;
    cout << "Enter the weight of the dieter in grams at which dieting activity will be stopped: \n";
    cin >> WeightDieter;

    SweetnerDieter = (SweetnerMouse/WeightMouse) * WeightDieter;
    DietSodaPopCans = (SweetnerDieter/DIET_SODA_SWEETNER);

    cout << "The amount of Diet Soda Pop Can's that would kill the dieter is: " << DietSodaPopCans;

    return 0;
}
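One hedged sketch of the repetition the exercise asks for — a do-while fits naturally here, because the calculation should run at least once before asking whether to repeat (names reuse the draft above):

char again = 'n';
do
{
    // ... prompt for SweetnerMouse, WeightMouse, WeightDieter,
    //     then compute and print DietSodaPopCans as in the draft ...
    cout << "Run another calculation? (y/n): ";
    cin >> again;
} while (again == 'y' || again == 'Y');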
http://cboard.cprogramming.com/cplusplus-programming/140502-please-help-me-logic-program-printable-thread.html
CC-MAIN-2014-49
refinedweb
375
61.56
RTEMS 4.11 AMBA Bus in PolyORB-HI-C

Hi all, I have been building a CAN bus driver for RTEMS 4.11, integrated in TASTE. I have reached the point where I am able to define in TASTE the interface view and deployment view, setting the CAN bus nodes connected to a bus, and am able to compile and generate the binary. Unfortunately it is not tested yet; I hope to do this on Monday, and for sure plenty of issues will arrive. Anyway, one issue that is certain is the usage of the AMBA bus to register the CAN driver. In RTEMS 4.11, the AMBA bus is made available through a global variable named ambapp_plb of type struct ambapp_bus. When writing the CAN driver inside po-hi-c, I followed the examples given by eth-leon and the RASTA ones, which also use AMBA. Unfortunately, while compiling, I get the error that ambapp_plb is not recognized. I have looked into the RASTA examples, and they don't make much sense to me. They do the following (copied from "…driver_rasta_common.c"):

#ifdef RTEMS48
int apbuart_rasta_register(amba_confarea_type *bus);
static amba_confarea_type abus;
extern LEON3_IrqCtrl_Regs_Map *irq;
extern LEON_Register_Map *regs;

amba_confarea_type* __po_hi_driver_rasta_common_get_bus ()
{
    return &abus;
}
#elif RTEMS411
extern int apbuart_rasta_register(struct ambapp_bus *bus);
static struct ambapp_bus abus;
/* Why do we have LEON3 specifics here ???
extern LEON3_IrqCtrl_Regs_Map *irq;
extern LEON_Register_Map *regs;
*/
struct ambapp_bus * __po_hi_driver_rasta_common_get_bus ()
{
    return &abus;
}
#else
#error "o<"
#endif

Strangely enough, the function seems to simply return a local variable. In the non-common RASTA files, there is only an extern declaration of __po_hi_driver_rasta_common_get_bus(). Does this work? How do I get the AMBA bus structure? For now I did the following: I made the same extern declarations, and make an explicit call to __po_hi_driver_rasta_common_get_bus() inside my init function to get the AMBA bus structure. But I think it is only returning a local variable with no real pointer to the AMBA bus. Could you please help and confirm the proper solution? Thank you very much,
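For comparison, a minimal hedged sketch of the alternative the poster describes — referencing the BSP's global directly rather than a locally defined structure. The extern declaration mirrors how RTEMS 4.11's LEON/GRLIB headers expose the root plug & play bus (my assumption; check ambapp.h for the exact declaration), and my_can_driver_init is a made-up name:

#include <ambapp.h>

/* Root AMBA plug & play bus, populated by the RTEMS 4.11 LEON BSP. */
extern struct ambapp_bus ambapp_plb;

void my_can_driver_init(void)
{
    /* Use the BSP's bus instead of a static local copy. */
    struct ambapp_bus *bus = &ambapp_plb;
    /* ... scan `bus` for the CAN core and register the driver ... */
    (void)bus;
}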
https://gitrepos.estec.esa.int/taste/polyorb-hi-c/-/issues/1
CC-MAIN-2020-45
refinedweb
335
54.22
Main class of the Historical Supernovae plugin. More...

#include <Supernovae.hpp>

Main class of the Historical Supernovae plugin.

Definition at line 68 of file Supernovae.hpp.

Used for keeping track of the download/update status.

Definition at line 74 of file Supernovae.hpp.

This is used for plugin-specific warnings and such.

Execute all the drawing functions for this module. Reimplemented from StelModule.

Get a supernova object by identifier.

Return the value defining the order of call for the given action. For example, if stars.callOrder[ActionDraw] == 10 and constellation.callOrder[ActionDraw] == 11, the stars module will be drawn before the constellations. Reimplemented from StelModule.

Get count of supernovae from catalog.

Definition at line 167 of file Supernovae.hpp.

Get the date and time the supernovae were updated.

Definition at line 148 of file Supernovae.hpp.

Get the lower limit of brightness for displayed supernovae.

Gets a user-displayable name of the object category. Implements StelObjectModule.

Definition at line 118 of file Supernovae.hpp.

Get the number of seconds till the next update.

Returns the name that will be returned by StelObject::getType() for the objects this module manages. Implements StelObjectModule.

Definition at line 119 of file Supernovae.hpp.

Get list of supernovae.

Get the update frequency in days.

Definition at line 151 of file Supernovae.hpp.

Get whether or not the plugin will try to update catalog data from the internet.

Definition at line 142 of file Supernovae.hpp.

Get the current updateState.

Definition at line 158 of file Supernovae.hpp.

Initialize itself. If the initialization takes significant time, the progress should be displayed on the loading bar. Implements StelModule.

Emitted after a JSON update has run.

List all StelObjects. Implements StelObjectModule.

Return the StelObject with the given ID if it exists, or the empty StelObject if not found. Implements StelObjectModule.

Definition at line 111 of file Supernovae.hpp.

Return the matching supernova if it exists, or Q_NULLPTR. Implements StelObjectModule.

Return the matching supernova object's pointer if it exists, or Q_NULLPTR. Implements StelObjectModule.

Set whether or not the plugin will try to update catalog data from the internet.

Definition at line 145 of file Supernovae.hpp.

Update the module with respect to the time. Implements StelModule.

Definition at line 89 of file Supernovae.hpp.

Download JSON from web resources described in the module section of the module.ini file and update the local JSON file.
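A hedged usage sketch of querying this plugin from other Stellarium code. GETSTELMODULE is Stellarium's usual module-lookup helper; the getter names follow the member descriptions above but are assumptions — check Supernovae.hpp for the actual signatures:

#include "StelModuleMgr.hpp"
#include "Supernovae.hpp"

void reportSupernovaeStatus()
{
    // Look up the loaded plugin instance by type.
    Supernovae* sn = GETSTELMODULE(Supernovae);
    if (sn && sn->getUpdatesEnabled())  // name assumed from "Get whether ... update"
    {
        // name assumed from "Get count of supernovae from catalog"
        qDebug() << "Catalog size:" << sn->getCountSupernovae();
    }
}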
http://www.stellarium.org/doc/0.16.1/classSupernovae.html
CC-MAIN-2017-47
refinedweb
390
54.08
IRC log of xproc on 2011-10-31

Timestamps are in UTC.

16:04:50 [RRSAgent] RRSAgent has joined #xproc
16:04:50 [RRSAgent] logging to
16:05:19 [ht]
16:05:53 [MoZ] Scribe: alexmilowski
16:10:49 [alexmilowski] Present: Henry Thompson, Paul Grosso, James Fuller, Vojtech Toman, Mohamed Zergaoui, Cornelia Davis, Murray Maloney, Alex Milowski
16:11:24 [alexmilowski] Issues list:
16:14:27 [alexmilowski] Issue 3: We think this is closed but need to check with Liam to verify.
16:14:59 [Cdavis_] Cdavis_ has joined #xproc
16:16:27 [ht]
16:16:47 [Cornelia] Cornelia has joined #xproc
16:16:51 [jfuller] jfuller has joined #xproc
16:16:55 [ht]
16:19:01 [alexmilowski] Action: Editors to remove "particularly" clause in section 5 as this may lead to inferences that we do not want. See Henry's brain.
16:24:56 [alexmilowski] Discussion of Issue 7: Henry was recalling his memory of the lead up to taking this to XML Core and how XHTML treats entity definitions.
16:25:57 [alexmilowski] Henry: standalone=yes does not cause an error ... no difference for a well-formed parser.
16:26:30 [alexmilowski] Henry: No need to change the default because it won't change the behavior of the parsers in use for XHTML in browsers.
16:29:53 [alexmilowski] Henry: XHTML5 maps any public identifier to a pre-defined external subset.
16:32:16 [alexmilowski] Paul: What are the datatypes available?
16:32:24 [alexmilowski] Henry: Only DTD datatypes.
16:32:47 [alexmilowski] Some discussion of attribute definitions, NMTokens, and tokenization available when parsers encounter definitions.
16:36:28 [alexmilowski] Some more digging into what HTML5 says about doctypes and entity definitions.
16:40:43 [alexmilowski] Henry: Core says "there is nothing here" ... we're done.
16:40:47 [alexmilowski] Alex: I agree
16:40:58 [alexmilowski] Action: Close issue 7
16:42:29 [alexmilowski] Action: Henry to try to get agreement from Henri Sivonen on issue 9.
16:47:04 [alexmilowski] Discussion on Issue 19
16:47:26 [alexmilowski] Paul: Look at paragraph 3 ...
16:48:13 [alexmilowski] Action: Recommend the editor use Paul's version of paragraph 3.
16:48:59 [alexmilowski] Action: Change "is an attempt to give" to "gives"
16:49:26 [alexmilowski] Action: Norm needs to suggest how this will be integrated into the document. Where does this fit in?
16:50:30 [ht] ht has joined #xproc
16:51:44 [MoZ]
16:52:16 [ht] HST proposes wrt issue 19 that we relabel section 1 as 'Introduction', push the existing prose down to subsection 1.1 Background, and make Norm's new prose the body text of section 1
16:54:22 [alexmilowski] With the actions taken today, we will have closed all the issues today, hopefully to the satisfaction of the commentators.
16:54:35 [alexmilowski] Last issue is issue 8
16:56:36 [jfuller]
16:58:30 [alexmilowski] Issue 8.2: Address by adding different profiles.
16:59:20 [alexmilowski] Issue 8.2: We kept basic and added external declaration profile.
16:59:23 [alexmilowski] Henry: Agreed.
17:04:34 [alexmilowski] Henry: This should be added to 4.2 17:05:06 [alexmilowski] Vojtech: Maybe we want to rename the profile classes so that they make more sense in the diagram? 17:07:27 [alexmilowski] Henry: We need all the classes we have. Renaming them may make sense. 17:07:43 [alexmilowski] Alex: Maybe we finish the diagram and see what makes sense. 17:10:09 [alexmilowski] Some discussion between Henry & Murray about the classes & validation. 17:13:25 [alexmilowski] Murray: Validating XML processors must read & process the external declarations... 17:13:38 [alexmilowski] Henry: (Reads the spec saying that as so ...) 17:14:14 [alexmilowski] A processor that validates but doesn't read external declarations isn't a conforming XML processor. 17:16:31 [alexmilowski] Henry: (Paraphrasing the discussion) Making validation optional or required is incoherent against the profiles. 17:17:53 [alexmilowski] Alex: I'm feeling uneasy about this. As a user you can pick a profile and turn on validation and do the wrong thing. 17:18:00 [alexmilowski] Henry: We need to say something about this. 17:20:29 [alexmilowski] (Murray is point at the diagram making good points about validation and profiles.) 17:26:14 [alexmilowski] Murray: In the case where you are enabling the XInclude and validation flag, can we say "it is recommended" or "required" that you validate after the XInclude? 17:27:14 [alexmilowski] Henry: Instead of three, there are only two: before or after. 17:27:29 [alexmilowski] Paul: Isn't that the [status quo]. 17:28:56 [alexmilowski] Henry: It is coherent to validate first because you'll get element content whitespace ... 17:41:06 [ht] ht has joined #xproc 17:41:15 [alexmilowski] alexmilowski has joined #xproc 17:42:04 [Cornelia] Cornelia has joined #xproc 17:42:28 [PGrosso] PGrosso has joined #xproc 17:48:49 [Cornelia] Cornelia has joined #xproc 17:49:53 [ht] RRSAgent, logs? 17:49:53 [RRSAgent] I'm logging. Sorry, nothing found for 'logs' 17:49:57 [ht] RRSAgent, log? 17:49:57 [RRSAgent] I'm logging. Sorry, nothing found for 'log' 17:50:20 [jfuller] jfuller has joined #xproc 17:55:35 [alexmilowski] Comment: We have "id xml processor profile" but the profile adds xml:id. Maybe this should be the "xml:id XML Processor profile" 17:55:55 [alexmilowski] Comment: Maybe we should make "XML Processor Profile" less redundant in the document. 18:02:17 [alexmilowski] Paul: On issue 8.5, what are the remaining questions? 18:03:22 [alexmilowski] The validation questions relate to 8.3. We need to re-open this issue. 18:04:02 [alexmilowski] Paul: We can close 8.5 by adding the diagram. 18:06:00 [alexmilowski] Henry: It is perfectly valid to provide XML Schema validation for any of the profiles. ... it is not the same for DTD validation. 18:06:14 [alexmilowski] Paul: We can close 8.3 and 8.5 and open a new issue about validation. 18:06:41 [alexmilowski] Action: Henry will draft the new issue. 18:07:02 [ht] New issue: How to expand 7 (and possible earlier bits) to clarify the distinction between DTD validation and validation in general 18:07:35 [Liam] Liam has joined #xproc 18:07:57 [ht] ... DTD validation is _not_ orthogonal, e.g. Basic+DTD Validation is not conformant with XML spec 18:08:12 [ht] ... but e.g. Schema validation is orthogonal 18:08:29 [PGrosso] s/7/section 7 in the draft/ 18:08:36 [ht] ... Also, expand the discussion of ordering of xinclude and validation 18:10:43 [alexmilowski] Issue 8.7: In profiles external declarations (2.3) and full (2.4), "reading and processing" versus "processing." 
18:11:06 [alexmilowski] Henry: That prose is directly from the XML specification and I'm reluctant to fix it. 18:11:49 [alexmilowski] Henry: [ this text intended to reproduce what the XML spec says ] 18:14:09 [alexmilowski] The link in the profiles document takes you to the location in the XML specification that has the relevant text. 18:18:45 [alexmilowski] Action: Henry will attempt to separate the two parts of #1 on 2.3/2.4. 18:19:21 [MoZ] MoZ has joined #xproc 18:22:30 [alexmilowski] Action: Issue 8.8: Editorial 18:26:15 [alexmilowski] Henry: Instead of steps necessary, they are steps "preparatory" . 18:26:27 [alexmilowski] The profile steps are not "steps" ... 18:26:57 [alexmilowski] Action: Henry will rework the introduction to section 2. 18:27:37 [alexmilowski] "Step" is the wrong word throughout the profile section ... Henry will look at this as well. 18:28:46 [alexmilowski] Issue 8.9, action to henry 18:29:14 [alexmilowski] Action: Henry to review the use of conformance in section 4. 18:29:46 [Zakim] Zakim has left #xproc 18:30:48 [alexmilowski] Issue 8.10: There have been changes that may have addressed this. 18:31:25 [alexmilowski] Henry: word 'rigid' is still there. 18:32:36 [alexmilowski] Action: Henry to soften language in the first paragraph of Section 1, Background. 18:34:00 [alexmilowski] Issue 8.12 18:34:36 [alexmilowski] Henry: remove "since this specification is not implementable as such" and this will be fixed. 18:35:29 [alexmilowski] Alex: What did the infoset do about this? 18:38:02 [alexmilowski] Alex: We use "require" in each of the profile. 18:38:32 [alexmilowski] Henry: We define conformance ... 18:38:57 [alexmilowski] ...make it be that conformance starts when some other specification references our specification. 18:39:45 [alexmilowski] Henry: It is going to define what it means to conform to a profile. 18:40:30 [alexmilowski] Henry: [ a substantive change to section 6 to address issue 8.12] 18:41:20 [alexmilowski] This specification doesn't have implementations but it does have specifications that conform to it. 18:41:54 [alexmilowski] Action: Henry to change section 6 to address 8.12 18:42:56 [alexmilowski] Issue 8.14 18:44:08 [alexmilowski] Henry: XPath 2 distinguished between implementation defined and implementation determined. 18:44:53 [alexmilowski] [choice vs unspecified] 18:46:03 [alexmilowski] implementation dependent vs implementation defined 18:46:40 [alexmilowski] Action: Henry to clarify use of term to address 8.14. Take suggested fix. 18:47:11 [alexmilowski] Issue 8.16 18:47:20 [Vojtech] Vojtech has joined #xproc 18:47:44 [alexmilowski] The names may change again 18:47:52 [alexmilowski] In progress... 18:49:35 [alexmilowski] Henry to consider moving the tabulation to the front of section 3. 18:51:09 [alexmilowski] Alex: I like having the class definitions first so you know what the table is about. 18:51:35 [alexmilowski] Henry: It might be more useful to have more descriptive names: Class A: Items and properties fundamental to all XML documents. 18:52:40 [alexmilowski] Alex: Maybe change the class definitions to have two parts: the description of the class and the requirements on the properties. 18:55:30 [alexmilowski] Issue 8.4 18:56:29 [alexmilowski] Still open, James is building a list. 18:56:45 [alexmilowski] Issue 8.11 18:58:52 [alexmilowski] Henry: We can use this for an implementation report. 18:59:22 [alexmilowski] Alex: There is a distinction between the options and the common use of those options in a product (e.g. 
Chrome/Safari) 18:59:28 [alexmilowski] James is working on this. 18:59:39 [alexmilowski] Alex: I volunteer for helping with WebKit et. al. 18:59:50 [alexmilowski] Issue 8.6 19:01:17 [Liam] Liam has joined #xproc 19:02:47 [alexmilowski] Henry/Murray: "ID type assignment" language... 19:07:23 [alexmilowski] Action: Take the suggestion by using and taking "ID type assignment" and forcing bullet #1. See minutes. 19:09:01 [ht] ht has joined #xproc 19:09:02 [alexmilowski] "Perform ID type assignment for all xml:id attributes as required by xml:id 1.0 by setting their attribute type Infoset property to type ID" 19:10:20 [alexmilowski] Issue 8.13 19:10:43 [alexmilowski] Section 6 is a start... 19:11:36 [ht] RRSAgent, make logs public 19:12:20 [PGrosso] PGrosso has left #xproc 19:35:27 [Cornelia] Cornelia has joined #xproc 19:37:19 [Liam] Liam has joined #xproc 19:56:40 [Liam] [xquery breaking for lunch] 20:03:39 [alexmilowski] alexmilowski has joined #xproc 20:11:49 [alexmilowski] alexmilowski has joined #xproc 20:41:52 [MoZ] MoZ has joined #xproc 20:42:15 [Vojtech] Vojtech has joined #xproc 20:42:23 [alexmilowski] alexmilowski has joined #xproc 20:42:37 [ht] ht has joined #xproc 20:44:36 [jfuller] jfuller has joined #xproc 20:45:03 [PGrosso] PGrosso has joined #xproc 20:46:02 [alexmilowski] 20:47:07 [PGrosso] scribe pgrosso 20:47:59 [PGrosso] 3 things left from Jim's list 20:48:26 [PGrosso] open issues against XProc itself which we need to sort into Vnext requests and potential errata. 20:51:27 [PGrosso] 20:51:56 [MoZ] 20:53:24 [Cornelia] Cornelia has joined #xproc 20:55:40 [PGrosso] Our charter ends the end of this coming January. We need to decide if we will recharter or just extend. 21:01:35 [PGrosso] The abstract says each profile defines a data model, but that isn't really true. We should consider rewording that. 21:01:59 [PGrosso] The profile determines properties that are available from which to determine a data model. 21:02:42 [PGrosso] action to henry: the abstract (and any paragraph in the Background that is almost a copy of it) needs to be rewritten. 21:03:49 [ht] ht has joined #xproc 21:05:34 [PGrosso] We find that the spec uses the term data model all over the place and perhaps in a fashion that will be confusing to people. 21:05:57 [PGrosso] Jim's terminology section should define the term, though Alex suggests perhaps we use a different word in most cases. 21:07:10 [PGrosso] action to alex: sketch out by tomorrow morning if possible how we should address the "data model" terminology in the spec. 21:11:32 [PGrosso] Perhaps we should add some words to explain why we picked each of the 4 profiles we did and admit that there could be lots more so that our choice was somewhat arbitrary although still, we hope, useful. 21:12:37 [PGrosso] For example, we believe all browsers implement (at least) basic and not all browser implement any of the larger profiles. 21:13:20 [PGrosso] Our profiles were based on sets of available properties, not on things like streaming or not or dynamic manipulation or not. 21:16:10 [PGrosso] action to jim: suggest a short rationale for our picking each of our profiles. 21:18:33 [PGrosso] Hey HENRY: what does "faithful provision" mean? 21:18:59 [ht] Where? 
21:19:19 [PGrosso] In section 2 all over 21:20:04 [ht] It means that whatever gets put in a data model does actually (enable itself to) reconstruct the information defined by the relevant infoset property 21:20:45 [PGrosso] ht: 21:21:05 [ht] So, e.g., if the parser builds a datamodel that doesn't actually discriminate between NMTOKEN and ID is not 'faithfully provisioning' wrt the attribute type 21:21:08 [ht] property 21:21:39 [PGrosso] I don't think I understand that use of the word "provision". Can you give me a synonym? 21:22:12 [ht] 'install' 21:22:18 [ht] 'install in' 21:22:59 [PGrosso] for 8.15, we will accept Michael's suggested fix and let the editor massage as necessary. 21:25:27 [PGrosso] for 8.17, Alex suggests we add a short sentence or two about each of xml:id, xml:base, and xinclude to section 2 (perhaps just the intro to 2 or maybe a new subsection). 21:25:47 [PGrosso] [and the WG agrees] 21:26:44 [PGrosso] action to Murray: suggest the wording to add about xml:id, xml:base, and xinclude. 21:28:08 [PGrosso] For 8.18, these are all editorial, and we are leaving their resolution to Jim and Norm. 21:29:06 [PGrosso] And that takes us to the end of LC comments. 21:31:40 [PGrosso] Section 4.2.3, Vojtech questions whether "Unexpanded Entity Reference Information Items" should be in there at all because he doesn't think there is any difference. 21:34:41 [PGrosso] Also in section 4, we note that all those "Entirely, for the same reason" are still confusing and need to be spelled out or something. 21:38:45 [PGrosso] We believe (though we're not positive) that "Unexpanded Entity Reference Information Items" has to be the same for the "external declaration" profile and the full profile. 21:39:37 [PGrosso] We aren't sure that we understand what happens for Unexpanded Entity Reference Information Items for either profile, so we need to re-discuss this with HST. 21:41:13 [PGrosso] Vojtech has some editorial comments that he will pass on to Jim. 21:46:03 [alexmilowski] 21:46:44 [jfuller] 21:48:15 [PGrosso] issue 001 to be filed under Vnext. 21:48:47 [PGrosso] Issues 002 and 003 are closed. 21:48:55 [PGrosso] Issue 004 is for V.next. 21:51:52 [PGrosso] Issue 005 is about conformance for the xproc (and Vojtech's comment here is about the profile spec), so this goes into the errata pile. 21:52:11 [PGrosso] Issue 006 is for V.next. 21:54:18 [PGrosso] We believe that issue 007 is a bug in Calabash. action to Norm to check and confirm. 21:55:20 [PGrosso] Issue 008 is an erratum. 22:00:43 [PGrosso] Issue 009 is asking that the xproc schema be updated to include p:template, but p:template is not part of V1, so we wonder if we can change the schema. Paul doubts it, but thinks that we could add such a schema to the p:template note. We should discuss this with Norm and Henry too, but we are leaning toward adding the augmented schema to the note. 22:02:01 [PGrosso] MoZ says that implementors cannot add something in the p namespace, so they cannot use p:template with the official xproc schema. 22:02:40 [PGrosso] Leaving Issue 009 open for discussion. 22:05:31 [PGrosso] At least most of Issue 010 is V.next. But there is one thing that Norm says "I'll put that on the bug list" and we're not sure what that is, so: 22:06:09 [PGrosso] action to Norm: Look at issue 010 and determine what aspect of it is a bug and report back. 22:30:21 [PGrosso] [break until 15:45] 22:59:00 [alexmilowski] 22:59:24 [PGrosso] Issue 013 was discussed in the minutes Alex just posted above. 
23:00:12 [PGrosso] Norm was given an action to write a proposal for V.next. 23:00:52 [MoZ] MoZ has joined #xproc 23:01:00 [PGrosso] And Alex has an action on this issue to do some more research. 23:03:45 [PGrosso] Issue 014 is an erratum. It requires some clarification in the spec as outlined in Vojtech's email at. 23:06:47 [PGrosso] Issue 015 is V.next unless Norm says it's just closable. action to Norm to confirm. 23:07:40 [PGrosso] Same with Issue 016--V.next with Norm to confirm. 23:11:13 [PGrosso] Issue 017 is V.next. 23:12:24 [PGrosso] We believe issue 018 is just fyi and is neither an erratum or A v.next request, so we will just close it. action to Norm to confirm. 23:13:19 [PGrosso] Issues 019 through 024 are already closed. 23:14:05 [PGrosso] Issue 025 is an erratum. We should clarify that xslt match patterns are evaluated using the Step xpath context. 23:20:33 [PGrosso] action to Norm (editor): clarify that xslt match patterns are evaluated using the step xpath context (to close 025). 23:23:00 [PGrosso] action to Norm (editor): Clarify that what Norm asked about is conformant to close 014. 23:25:13 [PGrosso] action to Norm (editor): to correct the obvious bug outlined in issue 008. 23:32:53 [PGrosso] Regarding 009, Paul suggests that we can add to the p:template WG Note an augmented schema, but we can't pretend that the augmented schema is the official 1.0 one. 23:34:41 [PGrosso] action to Jim: create the augmented schema (that includes p:template) and augment the WG Note to point to the augmented schema. 23:36:03 [PGrosso] So we can close 009 as neither errata nor V.next (though we'll probably put p:template into V.next) and just address it with a Second Edition of the WG Note. 23:38:37 [PGrosso] That leaves us with 005 on the conformance section of the xproc spec at 23:42:39 [PGrosso] action to Jim: Give a try to finding all conformance statements throughout the spec and putting references to them in the conformance section to address issue 005. 23:50:13 [PGrosso] meeting adjourned 16:49 local time until 9:00 tomorrow. 23:50:25 [PGrosso] PGrosso has left #xproc 23:50:53 [MoZ] RSSAgent, help 23:51:48 [MoZ] RSSAgent, make minutes 23:52:17 [MoZ] RSSAgent, make minutes worldwide visible 23:59:54 [ht] RSSAgent, make logs world-visible
http://www.w3.org/2011/10/31-xproc-irc
CC-MAIN-2017-17
refinedweb
3,511
73.88
In this post we'll be sharing the best answers for the following queries:
- How can I perform a ( INNER| ( LEFT| RIGHT| FULL) OUTER) JOIN with pandas?
- How do I add NaNs for missing rows after a merge?
- How do I get rid of NaNs after merging?
- Can I merge on the index?
- How do I merge multiple DataFrames?
- Cross join with pandas merge? join? concat? update? Who? What? Why?!

… and more. I've seen these recurring questions asking about various facets of the pandas merge functionality. Most of the information regarding merge and its various use cases today is fragmented across dozens of badly worded, unsearchable posts. The aim here is to collate some of the more important points for posterity.

Pandas merging 101- Answer #1:

This post aims to give readers a primer on SQL-flavored merging with Pandas, how to use it, and when not to use it.

In particular, here's what this post will go through:
- The basics – types of joins (LEFT, RIGHT, OUTER, INNER)
- merging with different column names
- merging with multiple columns
- avoiding duplicate merge key column in output

What this post (and other posts by me on this thread) will not go through:
- Performance-related discussions and timings (for now). Mostly notable mentions of better alternatives, wherever appropriate.
- Handling suffixes, removing extra columns, renaming outputs, and other specific use cases. There are other (read: better) posts that deal with that, so figure it out!

Note
Most examples default to INNER JOIN operations while demonstrating various features, unless otherwise specified.
Furthermore, all the DataFrames here can be copied and replicated so you can play with them. Also, see this post on how to read DataFrames from your clipboard.
Lastly, all visual representations of JOIN operations have been hand-drawn using Google Drawings. Inspiration from here.

Enough talk – just show me how to use merge!

Setup & Basics

np.random.seed(0)
left = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)})
right = pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'value': np.random.randn(4)})

left

  key     value
0   A  1.764052
1   B  0.400157
2   C  0.978738
3   D  2.240893

right

  key     value
0   B  1.867558
1   D -0.977278
2   E  0.950088
3   F -0.151357

For the sake of simplicity, the key column has the same name (for now).

An INNER JOIN is represented by [figure: INNER JOIN Venn diagram]

Note
This, along with the forthcoming figures, all follow this convention:
- blue indicates rows that are present in the merge result
- red indicates rows that are excluded from the result (i.e., removed)
- green indicates missing values that are replaced with NaNs in the result

To perform an INNER JOIN, call merge on the left DataFrame, specifying the right DataFrame and the join key (at the very least) as arguments.

left.merge(right, on='key')
# Or, if you want to be explicit
# left.merge(right, on='key', how='inner')

  key   value_x   value_y
0   B  0.400157  1.867558
1   D  2.240893 -0.977278

This returns only rows from left and right which share a common key (in this example, "B" and "D").

A LEFT OUTER JOIN, or LEFT JOIN is represented by [figure: LEFT JOIN Venn diagram]

This can be performed by specifying how='left'.

left.merge(right, on='key', how='left')

  key   value_x   value_y
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278

Carefully note the placement of NaNs here. If you specify how='left', then only keys from left are used, and missing data from right is replaced by NaN.
And similarly, for a RIGHT OUTER JOIN, or RIGHT JOIN, which is… [figure: RIGHT JOIN Venn diagram] …specify how='right':

left.merge(right, on='key', how='right')

  key   value_x   value_y
0   B  0.400157  1.867558
1   D  2.240893 -0.977278
2   E       NaN  0.950088
3   F       NaN -0.151357

Here, keys from right are used, and missing data from left is replaced by NaN.

Finally, for the FULL OUTER JOIN, given by [figure: FULL OUTER JOIN Venn diagram] specify how='outer'.

left.merge(right, on='key', how='outer')

  key   value_x   value_y
0   A  1.764052       NaN
1   B  0.400157  1.867558
2   C  0.978738       NaN
3   D  2.240893 -0.977278
4   E       NaN  0.950088
5   F       NaN -0.151357

This uses the keys from both frames, and NaNs are inserted for missing rows in both.

The documentation summarizes these various merges nicely: [figure: merge summary table from the pandas documentation]

Other JOINs – LEFT-Excluding, RIGHT-Excluding, and FULL-Excluding/ANTI JOINs

If you need LEFT-Excluding JOINs and RIGHT-Excluding JOINs, you will have to do them in two steps.

For a LEFT-Excluding JOIN, represented as [figure: LEFT-Excluding JOIN Venn diagram]

Start by performing a LEFT OUTER JOIN and then filtering (excluding!) rows coming from left only,

(left.merge(right, on='key', how='left', indicator=True)
     .query('_merge == "left_only"')
     .drop('_merge', 1))

  key   value_x  value_y
0   A  1.764052      NaN
2   C  0.978738      NaN

Where,

left.merge(right, on='key', how='left', indicator=True)

  key   value_x   value_y     _merge
0   A  1.764052       NaN  left_only
1   B  0.400157  1.867558       both
2   C  0.978738       NaN  left_only
3   D  2.240893 -0.977278       both

And similarly, for a RIGHT-Excluding JOIN, [figure: RIGHT-Excluding JOIN Venn diagram]

(left.merge(right, on='key', how='right', indicator=True)
     .query('_merge == "right_only"')
     .drop('_merge', 1))

  key  value_x   value_y
2   E      NaN  0.950088
3   F      NaN -0.151357

Lastly, if you are required to do a merge that only retains keys from the left or right, but not both (IOW, performing an ANTI-JOIN), [figure: ANTI-JOIN Venn diagram] you can do this in similar fashion—

(left.merge(right, on='key', how='outer', indicator=True)
     .query('_merge != "both"')
     .drop('_merge', 1))

  key   value_x   value_y
0   A  1.764052       NaN
2   C  0.978738       NaN
4   E       NaN  0.950088
5   F       NaN -0.151357

Different names for key columns

If the key columns are named differently—for example, left has keyLeft, and right has keyRight instead of key—then you will have to specify left_on and right_on as arguments instead of on:

left2 = left.rename({'key':'keyLeft'}, axis=1)
right2 = right.rename({'key':'keyRight'}, axis=1)

left2

  keyLeft     value
0       A  1.764052
1       B  0.400157
2       C  0.978738
3       D  2.240893

right2

  keyRight     value
0        B  1.867558
1        D -0.977278
2        E  0.950088
3        F -0.151357

left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner')

  keyLeft   value_x keyRight   value_y
0       B  0.400157        B  1.867558
1       D  2.240893        D -0.977278

Avoiding duplicate key column in output

When merging on keyLeft from left and keyRight from right, if you only want either of the keyLeft or keyRight (but not both) in the output, you can start by setting the index as a preliminary step.

left3 = left2.set_index('keyLeft')
left3.merge(right2, left_index=True, right_on='keyRight')

    value_x keyRight   value_y
0  0.400157        B  1.867558
1  2.240893        D -0.977278

Contrast this with the output of the command just before (that is, the output of left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner')), and you'll notice keyLeft is missing. You can figure out what column to keep based on which frame's index is set as the key. This may matter when, say, performing some OUTER JOIN operation.
Merging only a single column from one of the DataFrames For example, consider right3 = right.assign(newcol=np.arange(len(right))) right3 key value newcol 0 B 1.867558 0 1 D -0.977278 1 2 E 0.950088 2 3 F -0.151357 3 If you are required to merge only “new_val” (without any of the other columns), you can usually just subset columns before merging: left.merge(right3[['key', 'newcol']], on='key') key value newcol 0 B 0.400157 0 1 D 2.240893 1 If you’re doing a LEFT OUTER JOIN, a more performant solution would involve map: # left['newcol'] = left['key'].map(right3.set_index('key')['newcol'])) left.assign(newcol=left['key'].map(right3.set_index('key')['newcol'])) key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 As mentioned, this is similar to, but faster than left.merge(right3[['key', 'newcol']], on='key', how='left') key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 Merging on multiple columns To join on more than one column, specify a list for on (or left_on and right_on, as appropriate). left.merge(right, on=['key1', 'key2'] ...) Or, in the event the names are different, left.merge(right, left_on=['lkey1', 'lkey2'], right_on=['rkey1', 'rkey2']) Other useful merge* operations and functions - Merging a DataFrame with Series on index: See this answer. - Besides merge, DataFrame.updateand DataFrame.combine_firstare also used in certain cases to update one DataFrame with another. pd.merge_orderedis a useful function for ordered JOINs. pd.merge_asof(read: merge_asOf) is useful for approximate joins. This section only covers the very basics, and is designed to only whet your appetite. For more examples and cases, see the documentation on merge, join, and concat as well as the links to the function specifications. Follow the other answers of the post to continue learning. Pandas merging 101- Answer #2: This answer will go through the following topics: - Merging with index under different conditions - options for index-based joins: merge, join, concat - merging on indexes - merging on index of one, column of other - effectively using named indexes to simplify merging syntax Index-based joins TL;DR There are a few options, some simpler than others depending on the use case. DataFrame.mergewith left_indexand right_index(or left_onand right_onusing names indexes) - supports inner/left/right/full - can only join two at a time - supports column-column, index-column, index-index joins DataFrame.join(join on index) - supports inner/left (default)/right/full - can join multiple DataFrames at a time - supports index-index joins pd.concat(joins on index) - supports inner/full (default) - can join multiple DataFrames at a time - supports index-index joins Index to index joins Setup & Basics import pandas as pd import numpy as np np.random.seed([3, 14]) left = pd.DataFrame(data={'value': np.random.randn(4)}, index=['A', 'B', 'C', 'D']) right = pd.DataFrame(data={'value': np.random.randn(4)}, index=['B', 'D', 'E', 'F']) left.index.name = right.index.name = 'idxkey' left value idxkey A -0.602923 B -0.402655 C 0.302329 D -0.524349 right value idxkey B 0.543843 D 0.013135 E -0.326498 F 1.385076 Typically, an inner join on index would look like this: left.merge(right, left_index=True, right_index=True) value_x value_y idxkey B -0.402655 0.543843 D -0.524349 0.013135 Other joins follow similar syntax. Notable Alternatives DataFrame.joindefaults to joins on the index. DataFrame.joindoes a LEFT OUTER JOIN by default, so how='inner'is necessary here. 
left.join(right, how='inner', lsuffix='_x', rsuffix='_y') value_x value_y idxkey B -0.402655 0.543843 D -0.524349 0.013135Note that I needed to specify the lsuffixand rsuffixarguments since joinwould otherwise error out: left.join(right) ValueError: columns overlap but no suffix specified: Index(['value'], dtype='object')Since the column names are the same. This would not be a problem if they were differently named. left.rename(columns={'value':'leftvalue'}).join(right, how='inner') leftvalue value idxkey B -0.402655 0.543843 D -0.524349 0.013135 pd.concatjoins on the index and can join two or more DataFrames at once. It does a full outer join by default, so how='inner'is required here.. pd.concat([left, right], axis=1, sort=False, join='inner') value value idxkey B -0.402655 0.543843 D -0.524349 0.013135For more information on concat, see this post. Index to Column joins To perform an inner join using index of left, column of right, you will use DataFrame.merge a combination of left_index=True and right_on=.... right2 = right.reset_index().rename({'idxkey' : 'colkey'}, axis=1) right2 colkey value 0 B 0.543843 1 D 0.013135 2 E -0.326498 3 F 1.385076 left.merge(right2, left_index=True, right_on='colkey') value_x colkey value_y 0 -0.402655 B 0.543843 1 -0.524349 D 0.013135 Other joins follow a similar structure. Note that only merge can perform index to column joins. You can join on multiple columns, provided the number of index levels on the left equals the number of columns on the right. join and concat are not capable of mixed merges. You will need to set the index as a pre-step using DataFrame.set_index. Effectively using Named Index [pandas >= 0.23] If your index is named, then from pandas >= 0.23, DataFrame.merge allows you to specify the index name to on (or left_on and right_on as necessary). left.merge(right, on='idxkey') value_x value_y idxkey B -0.402655 0.543843 D -0.524349 0.013135 For the previous example of merging with the index of left, column of right, you can use left_on with the index name of left: left.merge(right2, left_on='idxkey', right_on='colkey') value_x colkey value_y 0 -0.402655 B 0.543843 1 -0.524349 D 0.013135 Follow the next answer to continue learning. Pandas merging 101- Answer #3: This answer will go through the following topics: - how to correctly generalize to multiple DataFrames (and why mergehas shortcomings here) - merging on unique keys - merging on non-unqiue keys Generalizing to multiple DataFrames Oftentimes, the situation arises when multiple DataFrames are to be merged together. Naively, this can be done by chaining merge calls: df1.merge(df2, ...).merge(df3, ...) However, this quickly gets out of hand for many DataFrames. Furthermore, it may be necessary to generalise for an unknown number of DataFrames. Here I introduce pd.concat for multi-way joins on unique keys, and DataFrame.join for multi-way joins on non-unique keys. First, the setup. # Setup. np.random.seed(0) A = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'valueA': np.random.randn(4)}) B = pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'valueB': np.random.randn(4)}) C = pd.DataFrame({'key': ['D', 'E', 'J', 'C'], 'valueC': np.ones(4)}) dfs = [A, B, C] # Note, the "key" column values are unique, so the index is unique. A2 = A.set_index('key') B2 = B.set_index('key') C2 = C.set_index('key') dfs2 = [A2, B2, C2] Multiway merge on unique keys If your keys (here, the key could either be a column or an index) are unique, then you can use pd.concat. Note that pd.concat joins DataFrames on the index. 
# merge on `key` column, you'll need to set the index before concatenating pd.concat([ df.set_index('key') for df in dfs], axis=1, join='inner' ).reset_index() key valueA valueB valueC 0 D 2.240893 -0.977278 1.0 # merge on `key` index pd.concat(dfs2, axis=1, sort=False, join='inner') valueA valueB valueC key D 2.240893 -0.977278 1.0 Omit join='inner' for a FULL OUTER JOIN. Note that you cannot specify LEFT or RIGHT OUTER joins (if you need these, use join, described below). Multiway merge on keys with duplicates concat is fast, but has its shortcomings. It cannot handle duplicates. A3 = pd.DataFrame({'key': ['A', 'B', 'C', 'D', 'D'], 'valueA': np.random.randn(5)}) pd.concat([df.set_index('key') for df in [A3, B, C]], axis=1, join='inner') ValueError: Shape of passed values is (3, 4), indices imply (3, 2) In this situation, we can use join since it can handle non-unique keys (note that join joins DataFrames on their index; it calls merge under the hood and does a LEFT OUTER JOIN unless otherwise specified). # join on `key` column, set as the index first # For inner join. For left join, omit the "how" argument. A.set_index('key').join( [df.set_index('key') for df in (B, C)], how='inner').reset_index() key valueA valueB valueC 0 D 2.240893 -0.977278 1.0 # join on `key` index A3.set_index('key').join([B2, C2], how='inner') valueA valueB valueC key D 1.454274 -0.977278 1.0 D 0.761038 -0.977278 1.0 Pandas merging 101- Answer #4: Let’s start by establishing a benchmark. The easiest method for solving this is using a temporary “key” column: # pandas <= 1.1.X def cartesian_product_basic(left, right): return ( left.assign(key=1).merge(right.assign(key=1), on='key').drop('key', 1)) cartesian_product_basic(left, right) # pandas >= 1.2 (est) left.merge(right, how="cross") col1_x col2_x col1_y col2_y 0 A 1 X 20 1 A 1 Y 30 2 A 1 Z 50 3 B 2 X 20 4 B 2 Y 30 5 B 2 Z 50 6 C 3 X 20 7 C 3 Y 30 8 C 3 Z 50 How this works is that both DataFrames are assigned a temporary “key” column with the same value (say, 1). merge then performs a many-to-many JOIN on “key”. While the many-to-many JOIN trick works for reasonably sized DataFrames, you will see relatively lower performance on larger data. A faster implementation will require NumPy. Here are some famous NumPy implementations of 1D cartesian product. We can build on some of these performant solutions to get our desired output. My favourite, however, is @senderle’s first implementation. def cartesian_product(*arrays): la = len(arrays) dtype = np.result_type(*arrays) arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(np.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) Generalizing: CROSS JOIN on Unique or Non-Unique Indexed DataFrames Disclaimer These solutions are optimised for DataFrames with non-mixed scalar dtypes. If dealing with mixed dtypes, use at your own risk! This trick will work on any kind of DataFrame. 
We compute the cartesian product of the DataFrames' numeric indices using the aforementioned cartesian_product, use this to reindex the DataFrames, and stack the reindexed values into the result:

def cartesian_product_generalized(left, right):
    la, lb = len(left), len(right)
    idx = cartesian_product(np.ogrid[:la], np.ogrid[:lb])
    return pd.DataFrame(
        np.column_stack([left.values[idx[:,0]], right.values[idx[:,1]]]))

cartesian_product_generalized(left, right)

   0  1  2   3
0  A  1  X  20
1  A  1  Y  30
2  A  1  Z  50
3  B  2  X  20
4  B  2  Y  30
5  B  2  Z  50
6  C  3  X  20
7  C  3  Y  30
8  C  3  Z  50

np.array_equal(cartesian_product_generalized(left, right),
               cartesian_product_basic(left, right))
True

And, along similar lines,

left2 = left.copy()
left2.index = ['s1', 's2', 's1']

right2 = right.copy()
right2.index = ['x', 'y', 'y']

left2

   col1  col2
s1    A     1
s2    B     2
s1    C     3

right2

  col1  col2
x    X    20
y    Y    30
y    Z    50

np.array_equal(cartesian_product_generalized(left, right),
               cartesian_product_basic(left2, right2))
True

This solution can generalise to multiple DataFrames. For example,

def cartesian_product_multi(*dfs):
    idx = cartesian_product(*[np.ogrid[:len(df)] for df in dfs])
    return pd.DataFrame(
        np.column_stack([df.values[idx[:,i]] for i,df in enumerate(dfs)]))

cartesian_product_multi(*[left, right, left]).head()

   0  1  2   3  4  5
0  A  1  X  20  A  1
1  A  1  X  20  B  2
2  A  1  X  20  C  3
3  A  1  X  20  D  4
4  A  1  Y  30  A  1

Further Simplification

A simpler solution not involving @senderle's cartesian_product is possible when dealing with just two DataFrames. Using np.broadcast_arrays, we can achieve almost the same level of performance.

def cartesian_product_simplified(left, right):
    la, lb = len(left), len(right)
    ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la,:lb])
    return pd.DataFrame(
        np.column_stack([left.values[ia2.ravel()], right.values[ib2.ravel()]]))

np.array_equal(cartesian_product_simplified(left, right),
               cartesian_product_basic(left2, right2))
True

Performance Comparison

Benchmarking these solutions on some contrived DataFrames with unique indices, we have [figure: log-log plot of relative runtimes vs. N]

Do note that timings may vary based on your setup, data, and choice of cartesian_product helper function as applicable.

Performance Benchmarking Code

This is the timing script. All functions called here are defined above.

from timeit import timeit
import pandas as pd
import matplotlib.pyplot as plt

res = pd.DataFrame(
    index=['cartesian_product_basic', 'cartesian_product_generalized',
           'cartesian_product_multi', 'cartesian_product_simplified'],
    columns=[1, 10, 50, 100, 200, 300, 400, 500, 600, 800, 1000, 2000],
    dtype=float
)

for f in res.index:
    for c in res.columns:
        # print(f,c)
        left2 = pd.concat([left] * c, ignore_index=True)
        right2 = pd.concat([right] * c, ignore_index=True)
        stmt = '{}(left2, right2)'.format(f)
        setp = 'from __main__ import left2, right2, {}'.format(f)
        res.at[f, c] = timeit(stmt, setp, number=5)

ax = res.div(res.min()).T.plot(loglog=True)
ax.set_xlabel("N");
ax.set_ylabel("time (relative)");
plt.show()

Follow Programming Articles for more!
https://programming-articles.com/pandas-merging-101-python-pandas-answered/
CC-MAIN-2022-40
refinedweb
3,439
69.58
- Power level no longer increased well enough in the endgame
- The player's attack and defense stats were no longer similar even though the level up has you choosing between them

As it turns out, the second problem was fixed by discovering I had a bug; my new combat system had two offensive stats (one for hit-rate, and one for damage) and I was showing the wrong one. Fixing that meant that I could now have the player choosing to level up either hit-rate or armor class (in the d20 "gives a percentage chance monsters miss you" sense and not in the tutorial's "broken damage mitigation that makes you invincible within a few experience levels" sense).

The first one was more interesting. It also gives me the last bit of useful code to share. I ended up deciding that the game needed better gear. Since this was literally the last minute, I decided that I would just keep the sword and shield from the regular game's item table but tweak the bonuses. Before, the Sword class looked like this:

class Sword(Entity):
    def __init__(self, x, y):
        super().__init__(x, y, '/', libtcod.sky, 'Sword')
        Equippable(EquipmentSlots.MAIN_HAND, power_bonus=3).add_to_entity(self)

So first things first, we tweak it to allow an arbitrary power bonus:

class Sword(Entity):
    def __init__(self, x, y, bonus=3):
        super().__init__(x, y, '/', libtcod.sky, 'Sword (+{})'.format(bonus))
        Equippable(EquipmentSlots.MAIN_HAND, power_bonus=bonus).add_to_entity(self)

This is pretty straightforward, but it creates a non-obvious problem when you consider how my drop table previously worked. One of the benefits of having all these item and monster classes is that it allowed the drop table logic to look like this:

# this gives us the probabilities
item_chances = {
    items.HealingPotion: 5,
    items.Sword: from_dungeon_level([[5, 4]], self.dungeon_level),
    items.Shield: from_dungeon_level([[15, 8]], self.dungeon_level),
    items.RejuvenationPotion: 35,
    items.LightningScroll: from_dungeon_level([[25, 4]], self.dungeon_level),
    items.FireballScroll: from_dungeon_level([[25, 6]], self.dungeon_level),
    items.ConfusionScroll: from_dungeon_level([[10, 2]], self.dungeon_level),
}

# this actually places the items
for _ in range(number_of_items):
    x = randint(room.x1 + 1, room.x2 - 1)
    y = randint(room.y1 + 1, room.y2 - 1)

    if not any([entity for entity in self.entities if entity.x == x and entity.y == y]):
        item_choice = random_choice_from_dict(item_chances)
        self.entities.append(item_choice(x, y))

Note that unlike the tutorial I've been working from, I'm using class names instead of strings as keys. This is because in Python, the class name can be used as a first-class value, and also called as a function to create the object in question. So my drop logic can just say "Whatever class we get from the randomizer, just drop one of those." The problem is that this doesn't give us an easy way to create classes that need another parameter.
It might look like I need to special-case equipment in the above loop somehow, but what I actually want is a way to create a function that says "make a sword where the bonus is some arbitrary value I decided earlier." It turns out that Python gives a really easy way to do that:

from functools import partial

# intervening code omitted

def sword_class(bonus):
    return partial(Sword, bonus=bonus)

If it's not obvious what that does, these lines are basically equivalent:

partial(Sword, bonus=3)(10, 30)
sword_class(3)(10, 30)
Sword(10, 30, bonus=3)

So this in turn means that I can just write sword_class(3) and get what used to be Sword, or even sword_class(x) where x is defined somewhere else, and then stuff that value in a drop table dictionary.

After setting all that up, and nearly identical code for shields, I can now give the player a semi-random loot progression that allows for power gains on top of XP-levelling. I'm not sure I love the balance but I do like it better than in the "oh god" state from last time. It shifts the game experience from hoarding consumables and trying to kill everything tactically to trying to find your way to the next floor while constantly keeping an eye out for better loot.

I was originally going to write about some of the ideas I had that never made it in, but then I realized that the effort needed to do that would be only slightly less than the effort needed to actually implement those ideas. Part of me really does want to try implementing some of the ideas I've had for the AI, but there's another project I've had on the backburner for awhile and all of this coding for fun has made me really want to go work on that.

Thanks to everyone who read this far. It's been a fun couple of months. If you'd like to see the Final(?) Version and an archive of previous posts about this game, you can now find the full archive here.
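For completeness, here is a hypothetical sketch of what "stuffing that value in a drop table dictionary" could look like — the depth-scaling formulas and the shield_class helper (described but not shown in the post) are my own illustrative guesses:

# Hypothetical drop table: sword/shield bonuses grow with dungeon depth.
# from_dungeon_level and the placement loop are unchanged from above.
item_chances = {
    items.HealingPotion: 5,
    sword_class(1 + self.dungeon_level // 3):
        from_dungeon_level([[5, 4]], self.dungeon_level),
    shield_class(1 + self.dungeon_level // 4):
        from_dungeon_level([[15, 8]], self.dungeon_level),
    items.RejuvenationPotion: 35,
}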
https://projectwirehead.home.blog/2019/08/10/roguelike-tutorial-its-over/
CC-MAIN-2021-04
refinedweb
831
62.38
Details

Improvement - Status: Resolved (View Workflow) Major - Resolution: Fixed

Description

Attachments

Issue Links

Activity

benji2006 please do not use JIRA as a help forum. There is a users' list and other places to ask for help using Jenkins. I went ahead and created to focus on returning the exit status and standard output at the same time. So I hope all the folks who commented asking for this feature will go up-vote that.

Code:

def output = sh(
    script: """cd ${component_root_path} && JAVA_HOME=${JAVA_HOME} mvn dependency:list -Dsort=true""",
    returnStdout: true
)

But I get no output.

[Pipeline] echo
[12-22 22:18:28] output:

Hi benji2006, there are other ways to provide a password besides stdin. Check out this example Jenkinsfile (apologies for weird formatting) and note the usage of Credentials & 'environment' in the "Build & Push Docker Image" stage.
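A common workaround for capturing the exit status and the output together (before a built-in option existed) is to redirect stdout to a file — a hedged sketch; sh with returnStatus and the readFile step are standard Pipeline steps, while the Maven command simply mirrors the one above:

def status = sh(
    script: "cd ${component_root_path} && mvn dependency:list -Dsort=true > mvn-out.txt 2>&1",
    returnStatus: true
)
def output = readFile('mvn-out.txt').trim()
echo "exit status: ${status}"
echo "output:\n${output}"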
https://issues.jenkins.io/browse/JENKINS-26133?focusedCommentId=302894&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2022-33
refinedweb
137
52.49
DEBSOURCES Skip Quicknav sources / nmap / 6.00-0.3+deb7u1 / osscan2 /*************************************************************************** * osscan2.h -- Header info for 2nd Generation: osscan.h 3636 2006-07-04 23:04:56Z fyodor $ */ #ifndef OSSCAN2_H #define OSSCAN2_H #include "nmap.h" #include "global_structures.h" #include "nbase.h" #include <vector> #include <list> #include "Target.h" class Target; /****************************************************************************** * CONSTANT DEFINITIONS * ******************************************************************************/ #define NUM_FPTESTS 13 /* The number of tries we normally do. This may be increased if the target looks like a good candidate for fingerprint submission, or fewer if the user gave the --max-os-tries option */ #define STANDARD_OS2_TRIES 2 // The minimum (and target) amount of time to wait between probes // sent to a single host, in milliseconds. #define OS_PROBE_DELAY 25 // The target amount of time to wait between sequencing probes sent to // a single host, in milliseconds. The ideal is 500ms because of the // common 2Hz timestamp frequencies. Less than 500ms and we might not // see any change in the TS counter (and it gets less accurate even if // we do). More than 500MS and we risk having two changes (and it // gets less accurate even if we have just one). So we delay 100MS // between probes, leaving 500MS between 1st and 6th. #define OS_SEQ_PROBE_DELAY 100 /****************************************************************************** * TYPE AND STRUCTURE DEFINITIONS * ******************************************************************************/ typedef enum OFProbeType { OFP_UNSET, OFP_TSEQ, OFP_TOPS, OFP_TECN, OFP_T1_7, OFP_TICMP, OFP_TUDP } OFProbeType; /****************************************************************************** * FUNCTION PROTOTYPES * ******************************************************************************/ /* This is the primary OS detection function. If many Targets are passed in (the threshold is based on timing level), they are processed as smaller groups to improve accuracy */ void os_scan2(std::vector<Target *> &Targets); int get_initial_ttl_guess(u8 ttl); int get_ipid_sequence(int numSamples, int *ipids, int islocalhost); /****************************************************************************** * CLASS DEFINITIONS * ******************************************************************************/ class OFProbe; class HostOsScanStats; class HostOsScan; class HostOsScanInfo; class OsScanInfo; /** Represents an OS detection probe. It does not contain the actual packet * that is sent to the target but contains enough information to generate * it (such as the probe type and its subid). It also stores timing * information. */ class OFProbe { public: OFProbe(); /* The literal string for the current probe type. */ const char *typestr(); /* Type of the probe: for what os fingerprinting test? */ OFProbeType type; /* Subid of this probe to separate different tcp/udp/icmp. */ int subid; /* Try (retransmission) number of this probe */ int tryno; /* A packet may be timedout for a while before being retransmitted due to packet sending rate limitations */ bool retransmitted; struct timeval sent; /* Time the previous probe was sent, if this is a retransmit (tryno > 0) */ struct timeval prevSent; }; /* Stores the status for a host being scanned in a scan round. 
*/ class HostOsScanStats { friend class HostOsScan; public: HostOsScanStats(Target *t); ~HostOsScanStats(); void initScanStats(); struct eth_nfo *fill_eth_nfo(struct eth_nfo *eth, eth_t *ethsd) const; void addNewProbe(OFProbeType type, int subid); void removeActiveProbe(std::list<OFProbe *>::iterator probeI); /* Get an active probe from active probe list identified by probe type * and subid. returns probesActive.end() if there isn't one. */ std::list<OFProbe *>::iterator getActiveProbe(OFProbeType type, int subid); void moveProbeToActiveList(std::list<OFProbe *>::iterator probeI); void moveProbeToUnSendList(std::list<OFProbe *>::iterator probeI); unsigned int numProbesToSend() {return probesToSend.size();} unsigned int numProbesActive() {return probesActive.size();} FingerPrint *getFP() {return FP;} Target *target; /* the Target */ struct seq_info si; struct ipid_info ipid; /* distance, distance_guess: hop count between us and the target. * * Possible values of distance: * 0: when scan self; * 1: when scan a target on the same network segment; * >=1: not self, not same network and nmap has got the icmp reply to the U1 probe. * -1: none of the above situations. * * Possible values of distance_guess: * -1: nmap fails to get a valid ttl by all kinds of probes. * >=1: a guessing value based on ttl. */ int distance; int distance_guess; /* Returns the amount of time taken between sending 1st tseq probe * and the last one. Zero is * returned if we didn't send the tseq probes because there was no * open tcp port */ double timingRatio(); double cc_scale(); private: /* Ports of the targets used in os fingerprinting. */ int openTCPPort, closedTCPPort, closedUDPPort; /* Probe list used in tests. At first, probes are linked in * probesToSend; when a probe is sent, it will be removed from * probesToSend and appended to probesActive. If any probes in * probesActive are timedout, they will be moved to probesToSend and * sent again till expired. */ std::list<OFProbe *> probesToSend; std::list<OFProbe *> probesActive; /* A record of total number of probes that have been sent to this * host, including restranmited ones. */ unsigned int num_probes_sent; /* Delay between two probes. */ unsigned int sendDelayMs; /* When the last probe is sent. */ struct timeval lastProbeSent; struct ultra_timing_vals timing; /* Fingerprint of this target. When a scan is completed, it'll * finally be passed to hs->target->FPR->FPs[x]. */ FingerPrint *FP; FingerTest *FPtests[NUM_FPTESTS]; #define FP_TSeq FPtests[0] #define FP_TOps FPtests[1] #define FP_TWin FPtests[2] #define FP_TEcn FPtests[3] #define FP_T1_7_OFF 4 #define FP_T1 FPtests[4] #define FP_T2 FPtests[5] #define FP_T3 FPtests[6] #define FP_T4 FPtests[7] #define FP_T5 FPtests[8] #define FP_T6 FPtests[9] #define FP_T7 FPtests[10] #define FP_TUdp FPtests[11] #define FP_TIcmp FPtests[12] struct AVal *TOps_AVs[6]; /* 6 AVs of TOps */ struct AVal *TWin_AVs[6]; /* 6 AVs of TWin */ /* The following are variables to store temporary results * during the os fingerprinting process of this host. */ u16 lastipid; struct timeval seq_send_times[NUM_SEQ_SAMPLES]; int TWinReplyNum; /* how many TWin replies are received. */ int TOpsReplyNum; /* how many TOps replies are received. Actually it is the same with TOpsReplyNum. */ struct ip *icmpEchoReply; /* To store one of the two icmp replies */ int storedIcmpReply; /* Which one of the two icmp replies is stored? 
*/ struct udpprobeinfo upi; /* info of the udp probe we sent */ }; /* These are statistics for the whole group of Targets */ class ScanStats { public: ScanStats(); bool sendOK(); /* Returns true if the system says that sending is OK. */ double cc_scale(); struct ultra_timing_vals timing; struct timeout_info to; /* rtt/timeout info */ int num_probes_active; /* Total number of active probes */ int num_probes_sent; /* Number of probes sent in total. */ int num_probes_sent_at_last_wait; }; /* This class does the scan job, setting and using the status of a host in * the host's HostOsScanStats. */ class HostOsScan { public: HostOsScan(Target *t); /* OsScan need a target to set eth stuffs */ ~HostOsScan(); pcap_t *pd; ScanStats *stats; /* (Re)Initialize the parameters that will be used during the scan.*/ void reInitScanSystem(); void buildSeqProbeList(HostOsScanStats *hss); void updateActiveSeqProbes(HostOsScanStats *hss); void buildTUIProbeList(HostOsScanStats *hss); void updateActiveTUIProbes(HostOsScanStats *hss); /* send the next probe in the probe list of the hss */ void sendNextProbe(HostOsScanStats *hss); /* Process one response. If the response is useful, return true. */ bool processResp(HostOsScanStats *hss, struct ip *ip, unsigned int len, struct timeval *rcvdtime); /* Make up the fingerprint. */ void makeFP(HostOsScanStats *hss); /* Check whether the host is sendok. If not, fill _when_ with the * time when it will be sendOK and return false; else, fill it with * now and return true. */ bool hostSendOK(HostOsScanStats *hss, struct timeval *when); /* Check whether it is ok to send the next seq probe to the host. If * not, fill _when_ with the time when it will be sendOK and return * false; else, fill it with now and return true. */ bool hostSeqSendOK(HostOsScanStats *hss, struct timeval *when); /* How long I am currently willing to wait for a probe response * before considering it timed out. Uses the host values from * target if they are available, otherwise from gstats. Results * returned in MICROseconds. */ unsigned long timeProbeTimeout(HostOsScanStats *hss); /* If there are pending probe timeouts, fills in when with the time * of the earliest one and returns true. Otherwise returns false * and puts now in when. */ bool nextTimeout(HostOsScanStats *hss, struct timeval *when); /* Adjust various timing variables based on pcket receipt. */ void adjust_times(HostOsScanStats *hss, OFProbe *probe, struct timeval *rcvdtime); private: /* Probe send functions. */ void sendTSeqProbe(HostOsScanStats *hss, int probeNo); void sendTOpsProbe(HostOsScanStats *hss, int probeNo); void sendTEcnProbe(HostOsScanStats *hss); void sendT1_7Probe(HostOsScanStats *hss, int probeNo); void sendTUdpProbe(HostOsScanStats *hss, int probeNo); void sendTIcmpProbe(HostOsScanStats *hss, int probeNo); /* Response process functions. */ bool processTSeqResp(HostOsScanStats *hss, struct ip *ip, int replyNo); bool processTOpsResp(HostOsScanStats *hss, struct tcp_hdr *tcp, int replyNo); bool processTWinResp(HostOsScanStats *hss, struct tcp_hdr *tcp, int replyNo); bool processTEcnResp(HostOsScanStats *hss, struct ip *ip); bool processT1_7Resp(HostOsScanStats *hss, struct ip *ip, int replyNo); bool processTUdpResp(HostOsScanStats *hss, struct ip *ip); bool processTIcmpResp(HostOsScanStats *hss, struct ip *ip, int replyNo); /* Generic sending functions used by the above probe functions. 
*/ int send_tcp_probe(HostOsScanStats *hss, int ttl, bool df, u8* ipopt, int ipoptlen, u16 sport, u16 dport, u32 seq, u32 ack, u8 reserved, u8 flags, u16 window, u16 urp, u8 *options, int optlen, char *data, u16 datalen); int send_icmp_echo_probe(HostOsScanStats *hss, u8 tos, bool df, u8 pcode, unsigned short id, u16 seq, u16 datalen); int send_closedudp_probe(HostOsScanStats *hss, int ttl, u16 sport, u16 dport); void makeTSeqFP(HostOsScanStats *hss); void makeTOpsFP(HostOsScanStats *hss); void makeTWinFP(HostOsScanStats *hss); bool get_tcpopt_string(struct tcp_hdr *tcp, int mss, char *result, int maxlen); int rawsd; /* Raw socket descriptor */ eth_t *ethsd; /* Ethernet handle */ unsigned int tcpSeqBase; /* Seq value used in TCP probes */ unsigned int tcpAck; /* Ack value used in TCP probes */ int tcpMss; /* TCP MSS value used in TCP probes */ int udpttl; /* TTL value used in the UDP probe */ unsigned short icmpEchoId; /* ICMP Echo Identifier value for ICMP probes */ unsigned short icmpEchoSeq; /* ICMP Echo Sequence value used in ICMP probes */ /* Source port number in TCP probes. Different probes will use an arbitrary * offset value of it. */ int tcpPortBase; int udpPortBase; }; /* Maintains a link of incomplete HostOsScanInfo. */ class OsScanInfo { public: OsScanInfo(std::vector<Target *> &Targets); ~OsScanInfo(); float starttime; /* If you remove from this, you had better adjust nextI too (or call * resetHostIterator() afterward). Don't let this list get empty, * then add to it again, or you may mess up nextI (I'm not sure) */ std::list<HostOsScanInfo *> incompleteHosts; unsigned int numIncompleteHosts() {return incompleteHosts.size();} HostOsScanInfo *findIncompleteHost(struct sockaddr_storage *ss); /* A circular buffer of the incompleteHosts. nextIncompleteHost() gives the next one. The first time it is called, it will give the first host in the list. If incompleteHosts is empty, returns NULL. */ HostOsScanInfo *nextIncompleteHost(); /* Resets the host iterator used with nextIncompleteHost() to the beginning. If you remove a host from incompleteHosts, call this right afterward */ void resetHostIterator() { nextI = incompleteHosts.begin(); } int removeCompletedHosts(); private: unsigned int numInitialTargets; std::list<HostOsScanInfo *>::iterator nextI; }; /* The overall os scan information of a host: * - Fingerprints gotten from every scan round; * - Maching results of these fingerprints. * - Is it timeout/completed? * - ... */ class HostOsScanInfo { public: HostOsScanInfo(Target *t, OsScanInfo *OSI); ~HostOsScanInfo(); Target *target; /* The target */ FingerPrintResultsIPv4 *FPR; OsScanInfo *OSI; /* The OSI which contains this HostOsScanInfo */ FingerPrint **FPs; /* Fingerprints of the host */ FingerPrintResultsIPv4 *FP_matches; /* Fingerprint-matching results */ bool timedOut; /* Did it time out? */ bool isCompleted; /* Has the OS detection been completed? */ HostOsScanStats *hss; /* Scan status of the host in one scan round */ }; /** This is the class that performs OS detection (both IPv4 and IPv6). * Using it is simple, just call os_scan() passing a list of targets. * The results of the detection will be stored inside the supplied * target objects. 
*/ class OSScan { private: int ip_ver; /* IP version for the OS Scan (4 or 6) */ int chunk_and_do_scan(std::vector<Target *> &Targets, int family); int os_scan_ipv4(std::vector<Target *> &Targets); int os_scan_ipv6(std::vector<Target *> &Targets); public: OSScan(); ~OSScan(); void reset(); int os_scan(std::vector<Target *> &Targets); }; #endif /*OSSCAN2_H*/
https://sources.debian.org/src/nmap/6.00-0.3+deb7u1/osscan2.h/
CC-MAIN-2021-04
refinedweb
1,829
51.18
go to bug id or search bugs for New/Additional Comment: Description: ------------ I have tried to access a web service whose WSDL is distributed among several files each of which contains references to several further XSDs. Now the WSDL itself is valid, as .NET or Java clients have accessed the services without any problems. But when creating a new instance of SoapClient it fails because I'm told that a certain element has already been defined. The problem is, that there are <message> elements with the same name, but they DO differ in namespaces. Reproduce code: --------------- $client = new SoapClient('someURI'); /* Please email me to get a copy of the WSDLs producing the error. They're too large to be posted here (although I tried to keep them small already) and I have no facility to make them available online. */ Expected result: ---------------- No error when parsing the WSDL. Actual result: -------------- SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: '<Element>' already defined in <FILE> Add a Patch Add a Pull Request This WSDL exhibits this behavior: Any progress on this issue? Same "already defined" problem, different cause : methods are defined for two distinct versions of SOAP. Same here: Fatal error: Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: <binding> 'BillServicePortBinding' already defined in
https://bugs.php.net/bug.php?id=45282&edit=1
CC-MAIN-2019-22
refinedweb
210
61.16
Ping since the thread tied here, and Joe's looking for an answer... William A. Rowe, Jr. wrote: > Joe Orton wrote: > >> >> What's the default behaviour of the current API across all platforms? >> I have no idea, I expect it differs wildly from AIX to Windows and >> back. What exactly do you propose? > > > AIUI (and I don't necessarily agree with these choices, but we are stuck > ;-) > > * Load Global. Every platform I'm aware of will map into the symbol > table. Now on Win32, OS2, and OS/X (using two-level namespace) these > symbols will be by-dso. E.g. these three examples actually have > multiple symbols of the same name, distinguished by the library's name. > > - disabling 'load global' is very effective for loading modules in > parallel. Name clashes in Linux cause dlopen to barf. Name clashes > on HP/UX produce 'arbitrary' results. Better to explicitly load > most libraries local, and then query the entry points, data or code, > as needed. > > * Bind Now, as we Run Constructors and Library Entry Points > > - disabling 'bind now' is actually a misnomer, what we are really > saying with bind deferred, IIUC, is that we plan to only grab > some static data or structures, and then dump the DSO without > it ever actually performing code operations. > > I think the bind now should be more abstract, to accomplish a specific > purpose. If they choose LAZY, are the dependent libraries and symbols > resolved when the library calls them? Consistant across platforms? > Would, say, LAZY on Linux allow the library to perform just-in-time sym > resolution, while LAZY on AIX might cause the library to crash when > trying to call an unresolved function? > > As long as the goal is to produce the fewest surprises for APR users > (developers) I'll be happy with the outcome :)
http://mail-archives.apache.org/mod_mbox/apr-dev/200509.mbox/%3C4331684A.1050803@rowe-clan.net%3E
CC-MAIN-2016-07
refinedweb
299
63.8
GemBox.Email supports all standard protocols: POP3, IMAP4v1 and SMTP. Messages are represented by simple MailMessage model which wraps around more complex MIME model. MIME model classes can be found in GemBox.Email.Mime namespace. Following is a more detailed list of GemBox.Email features. Create mail message with attachments and multi-format message body. Save and load mail messages to and from a MSG/EML/MHTML/Mbox file or a stream. Modify mail message headers using advanced MIME model. List and download mail messages using POP or IMAP protocol. Create and send complex mail messages using SMTP protocol. List and modify folders using IMAP protocol. List and modify message flags using ImapClient. Do custom SSL certificate validation when connecting to mail server using PopClient , ImapClient or SmtpClient . Validate single or multiple mail addresses. Create personalized mail messages based on a single template and variable data using MailMerge class. Save and Load calendars in iCalendar format. Create calendar events, tasks, reminders and send them in an email.
https://www.gemboxsoftware.com/email/help/html/Features.htm
CC-MAIN-2018-22
refinedweb
167
52.97
On Nov 29, 4:26 pm, markolopa <marko.lopa... at gmail.com> wrote: > Less than 3 hours have passed since my last post and got yet another > bug that could be prevented if Python had the functionality that other > languages have to destroy variables when a block ends. Here is the > code: > > ========= > > arg_columns = [] > for domain in self.domains: > i = self.get_column_index(column_names, domain.name) > col = column_elements[i] > if len(col) != len(val_column): > ValueError('column %s has not the same size as the value > column %s' > % (column_names[i], self.name)) > arg_columns.append(col) > > [...] > > value_dict = {} > for i, val_str in enumerate(val_column): > arg_name_row = [c[i] for c in arg_columns] > args = [domain[name] for name in arg_name_row] > value_dict[tuple(args)] = float(val_str) > repo[self.name] = value_dict > > ========= > > The bug is corrected replacing the line > > args = [domain[name] for name in arg_name_row] > > by > > args = [domain[name] for name, domain in zip(arg_name_row, > self.domains)] > > so "domain" should not exist in my namespace since I have no > information associated to "domain" only to "self.domains". Python > should allow me to write safe code! > > Antoher 15 minutes lost because of that Python "feature"... Is it only > me??? > I occasionally make the error you make, but I think the real problem you are having is lack of attention to detail. If name collisions are a common problem for you, consider writing shorter methods or develop the habit of using more descriptive variable names. In the code above, you could have easily cleaned up the namespace by extracting a method called get_arg_columns(). Having to spend 15 minutes tracking down a bug usually indicates that you are not being systematic in your thinking. If you are rushing too much, slow down. If you are tired, take a break. If you make the same mistake twice, commit to yourself not to make it a third time. Also, test your methods one at a time and get them rock solid before writing more code.
https://mail.python.org/pipermail/python-list/2009-November/559819.html
CC-MAIN-2016-50
refinedweb
321
65.01
I am trying to count the number of same elements on the net, I found this easy solution on the net but I am adjusting it so I understand it properly by myself. I am trying to count how many number of 40 are there in the array. Which is 2. #include <iostream> #include <algorithm> #include<array> using namespace std; int main(){ int array[6] = {38,38,40,38,40,37}; cout<<count(ca.begin(),ca.end(),40); return 0; } The example you have linked to is using a std::array called ca. It takes a type and number of elements, so std::array<int, 6> expects 6 ints and is a fixed length array. This has a begin and end method and plays nicely with the algorithms in the stl. If you have a C style array instead, you can use std::begin and std::end to achieve the same thing. array<int, 6> ca{ 38,38,40,38,40,37 }; cout << count(ca.begin(), ca.end(), 40) << '\n'; int c_style_array[] = { 38,38,40,38,40,37 }; cout << count(std::begin(c_style_array), std::end(c_style_array), 40) << '\n';
https://codedump.io/share/Le58ivCu0cYz/1/how-to-count-same-values-in-array-c
CC-MAIN-2017-43
refinedweb
188
73.07
We use the same bjam command line for all gcc compilers, but before calling bjam we source a tiny script to set PATH and LD_LIBRARY_PATH. For example: set path=(/usr/local_cci/gcc-3.0.3/bin $path) if ( $?LD_LIBRARY_PATH ) then setenv LD_LIBRARY_PATH "/usr/local_cci/gcc-3.0.3/lib:${LD_LIBRARY_PATH}" else setenv LD_LIBRARY_PATH "/usr/local_cci/gcc-3.0.3/lib" endif HTH, Ralf --- chuzo okuda <okuda1 at llnl.gov> wrote: > Hello, > From executing bjam, and getting output to the terminal, it is not clear > which version of g++/gcc compiler I am getting to compile sources. There > is newer g++ installed in our computer center: __________________________________________________ Do you Yahoo!? The New Yahoo! Search - Faster. Easier. Bingo
https://mail.python.org/pipermail/cplusplus-sig/2003-April/003449.html
CC-MAIN-2014-10
refinedweb
116
60.92