I’m having trouble doing a few things and would appreciate some help if possible.

- Optionally adding a context menu item if another package is installed
- Figuring out how to add the function call to do this (which file do I create? where does it go? what are the required functions to get started?)
- Figuring out how to add a context menu item and getting the textual contents of what was right-clicked

Now I think #2 is adding a ‘main’ entry to the package.json file. I think that I need to call atom.commands.add() in the module.exports.activate function of the file pointed to by main in the package.json file. I don’t know how to remove the item on deactivate, and I don’t know how to test for the installation or availability of another package before continuing.

A lot of functions have something like namespace:function-name. How do I declare that namespace and function name specifically?

Finally, there seems to be little to no documentation on how to get the data of the clicked item in the context menu hook. How to add the hook also seems a little … unclear.

Gripe #1: All of the documentation seems to assume contextual knowledge of where to put things and what to put in them. Really annoying for someone coming in and learning.

Gripe #2: Dear God, why CoffeeScript?! In a day when traceur and other ES6 options exist, why CoffeeScript?

If possible, please respond using JavaScript code snippets. I can work with CoffeeScript, but it does happen to invoke the gag reflex a bit. Thanks.
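For what it’s worth, here is a rough sketch in plain JavaScript of the plan above. The package name, command name and file names are placeholders, the other-package check is naive (activation order isn’t guaranteed), and this is untested, but the calls used (atom.commands.add, atom.contextMenu.add, atom.packages.isPackageActive, CompositeDisposable) are the standard Atom API:

    // package.json (excerpt)
    // { "name": "my-package", "main": "./lib/main" }

    // lib/main.js
    const { CompositeDisposable } = require('atom');

    module.exports = {
      subscriptions: null,

      activate() {
        this.subscriptions = new CompositeDisposable();

        // Only wire things up if the other package is around (name is a placeholder).
        if (atom.packages.isPackageActive('some-other-package')) {
          // 'my-package' is the namespace, 'grab-selection' is the function name.
          this.subscriptions.add(
            atom.commands.add('atom-text-editor', 'my-package:grab-selection', () => {
              const editor = atom.workspace.getActiveTextEditor();
              if (editor) {
                console.log(editor.getSelectedText()); // text selected where you right-clicked
              }
            })
          );

          // Context menu entry; add() returns a Disposable, so deactivate() removes it too.
          this.subscriptions.add(
            atom.contextMenu.add({
              'atom-text-editor': [{ label: 'Grab Selection', command: 'my-package:grab-selection' }]
            })
          );
        }
      },

      deactivate() {
        this.subscriptions.dispose(); // removes both the command and the menu item
      }
    };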
https://discuss.atom.io/t/sigh-docs-are-not-clear-working-with-contextmenus/14906/3
CC-MAIN-2018-13
refinedweb
270
65.32
WPF PropertyGrid expand/collapse Category

I use the WPF property grid from the Toolkit Plus. Now I want to bind specific categories to expand or collapse. How can I implement this with the current version of Toolkit Plus? I have tried to set a style setter in XAML, which binds a kind of toggle button (part of the category), but this does not synchronize IsChecked with IsExpanded. Does anyone have any advice or tips? Thank you very much.

Hi,

You can load your app with expanded/collapsed categories by setting the ExpandedCategory attribute on your class object (no binding):

    [CategoryOrder( "Information", 0 )]
    [CategoryOrder( "Conections", 1 )]
    [ExpandedCategory( "Conections", false )]
    public class MyClass
    {
        [Category( "Information" )]
        public bool IsMale { get; set; }

        [Category( "Conections" )]
        public string FirstName { get; set; }
    }

But binding on categories? We would need more details of what you are trying to do. You can have a look at our LiveExplorer to see some examples of PropertyGrid uses. It is available here:
https://forums.xceed.com/forums/topic/wpf-propertygrid-expandcollapse-category/
CC-MAIN-2021-49
refinedweb
170
52.6
Query lock status of instance

Currently we only support locking/unlocking an instance, but we are not able to query whether the instance is locked or not. This proposal is to add the lock status to the detailed view of an instance.

Problem description

We are able to lock/unlock an instance through the nova API now, but we don't return the lock status of the servers.

Proposed change

Display the lock status as part of the detailed view of an instance (that is, 'nova show').

Alternatives

The lock status can be identified by attempting to lock the instance, but if the instance is not already locked this has the side effect of locking it. If another process simultaneously tries to query the lock status in the same fashion, it may get a false positive. Equally, if another process tries to delete the instance while it is locked due to a query, it will fail when it shouldn't.

REST API impact

Add the following output to the response body of GET /v2/45210fba73d24dd681dc5c292c6b1e7f/servers/a9dd1fd6-27fb-4128-92e6-93bcab085a98. The following lock info will be added in addition to the existing output info. If locked is True, the following info will be added to the output: If locked is False, this will return the following info:

Both the v2 and v3 APIs will be affected. In the v2 API, the extension os-server-locked-status will be added to advertise the extra information.

alias: os-server-locked-status
name: ServerLockStatus
namespace:

When the new extension "os-server-locked-status" is loaded, 2 new fields, 'locked' and 'locked_by', will be added to the os-hypervisor API. In the v3 API, the locked information will be added directly to extended_status.py, since locked_by is already there.

Other end user impact

This will allow the user to query the lock status of an instance. python-novaclient will be updated in order to show the lock status in the 'nova show' command. If there is no lock status info in the output from an older v2 API, the new python-novaclient will omit the locked and locked_by fields.

Implementation

Testing

Tempest cases will be added; in particular, the lock/unlock related cases will check the new APIs, e.g. the new lock status fields will be mandatory fields.
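A sketch of how the two new fields might appear in the server detail response. The field names come from this spec; the surrounding payload shape is an assumption, not the example elided from the original document:

    {
        "server": {
            "id": "a9dd1fd6-27fb-4128-92e6-93bcab085a98",
            "locked": true,
            "locked_by": "admin"
        }
    }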
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/get-lock-status-of-instance.html
CC-MAIN-2019-35
refinedweb
376
60.24
The method named in the $Run$ tag must have at least one parameter, called options, which provides access to the script options.

    ...
    $Run$ mainMethod
    ...
    def mainMethod(options):
        print "Running the main method"

It is possible to define an entry point to the use case that requires more than one parameter. The definition of the $Run$ tag only defines the function name. The definition of the function within the script defines how many parameters are needed.

    ...
    $Run$ main
    ...
    # main method with positional arguments
    def main(options, filename, value):
        print "Running the main method"
        # Using the positional arguments supplied to the script.
        print "Using file: " + filename + " and value: " + value

In this example, main requires two parameters, filename and value, which must be supplied on the command line when the use case script is run.
http://infocenter.arm.com/help/topic/com.arm.doc.dui0446z/vvi1443623165598.html
CC-MAIN-2020-10
refinedweb
132
58.11
Observe point changes/movements? - StephenNixon last edited by gferreira

I'm hoping to record the changes made to points in glyphs. My current plan is to make a dictionary of point locations when a glyph is opened (or possibly just accessing the glif data), then comparing point locations when points are moved, and making a dictionary of point movements.

Is there an observer that can send a notification when points are moved? I'm hoping for something along the lines of pointMovedInCurrentGlyph, so I could add something like this to my code...

    addObserver(self, 'recordPointMovement', "pointMovedInCurrentGlyph")  # fake code

...but I'm not finding anything obvious. Am I missing something? How might I best approach this?

There is a Contour.PointsChanged notification for a contour object. There are no notifications for each point object.

A tiny script that shows all font level notifications:

    import vanilla

    class AllNotifications(object):

        def __init__(self):
            self.w = vanilla.Window((400, 400), minSize=(200, 200))
            self.w.e = vanilla.TextEditor((0, 0, 0, 0))
            # keep a ref otherwise weak ref will not work
            self.w._ref = self
            self.w.bind("close", self.windowClose)
            self.w.open()
            self.font = CurrentFont().naked()
            self.font.dispatcher.addObserver(self, "callback", notification=None, observable=None)

        def windowClose(self, sender):
            # remove the observer
            self.font.dispatcher.removeObserver(self, notification=None, observable=None)
            # remove the references
            del self.w._ref
            del self.font

        def callback(self, notification):
            txt = "\n%s --> %s" % (notification.name, notification.object)
            self.w.e.set(self.w.e.get() + txt)

    AllNotifications()

@frederik said in Observe point changes/movements?:

Contour.PointsChanged

Amazing, thank you! I'll give this a shot tonight and see where it gets me.

- StephenNixon last edited by gferreira

It's working nicely to show that a point moved! Is there a simple way to know which point moved, once I get the notification?

The notification I receive is:

    Contour.PointsChanged --> <lib.fontObjects.doodleContour.DoodleContour object at 0x112f24320>

Ideally, I'd want to get to data which was the point index with its parent contour index. If I print notification.data, it only gives None. Does the object at 0x112f24320 mean anything?

Would I have to do something like looping through all points to compare them to a "before" state (possibly in another layer), then assign changed points a unique identifier with generateIdentifierForPoint(point)?

The Contour.PointsChanged notification calls your callback with a notification object:

    notification.object  # in this case the contour object
    notification.data    # optional data related to the event, in this case it will be None

This notification is sent on different events: inserting, removing, changing the start point... In most cases the moved points are the selected ones:

    glyph.selectedPoints

Ohh of course, that is simpler than I was thinking! I suppose that the only time points would move without being selected is if they were moved by a script, right?

A script, or something else, like for example a transformation from the inspector.

Makes sense. Thanks so much for your help!
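Pulling the pieces of this thread together, a rough sketch (untested; the class and callback names are mine) that listens only for Contour.PointsChanged, using the same dispatcher calls as the script above, and then reports the currently selected points:

    class PointChangeLogger(object):

        def __init__(self):
            self.font = CurrentFont().naked()
            # listen only for the contour-level notification discussed above
            self.font.dispatcher.addObserver(self, "pointsChanged",
                                             notification="Contour.PointsChanged",
                                             observable=None)

        def stop(self):
            self.font.dispatcher.removeObserver(self,
                                                notification="Contour.PointsChanged",
                                                observable=None)

        def pointsChanged(self, notification):
            contour = notification.object   # the contour that changed
            glyph = CurrentGlyph()
            if glyph is not None:
                # in most cases the moved points are the selected ones
                for point in glyph.selectedPoints:
                    print(point.x, point.y)

    logger = PointChangeLogger()
    # later, when done: logger.stop()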
https://forum.robofont.com/topic/515/observe-point-changes-movements
CC-MAIN-2020-40
refinedweb
492
51.95
In any case, the need to exclude one or the other for this particular kind of identifier seems not of a piece with the rest of Scheme.
--d
--
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

>If it were the latter, how would you ever get unquote-splicing? After all,
>pretty clear that the specification "If a comma appears followed

I guess my question should have been: Why allow legal of them won't unquote them the way it does for other identifiers? The "Identifiers" section doesn't to some considerable effort toward making illegal what would have been rather harmless identifiers (e.g., 401k, 9-to-5, -!-, .iota).
-Steve

>>seem to prefer the former.)
>former would be the correct parsing.

It *will*, if only you will put some whitespace in to disambiguate

> (define foo '(bar baz))
(gorp bar baz)
(gorp bletch)

-Rob
-----
Applied Networking    Silicon Graphics, Inc.    Phone: 650-933-1673
1600 Amphitheatre Pkwy.    FAX: 650-933-0511
Mountain View, CA 94043    PP-ASEL-IA
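For reference, a small quasiquote example (mine, not from the thread) of the splicing behaviour being argued about; whether an identifier may legally begin with @ is exactly the point in dispute:

    (define foo '(bar baz))

    `(gorp ,foo bletch)    ; => (gorp (bar baz) bletch)   plain unquote
    `(gorp ,@foo bletch)   ; => (gorp bar baz bletch)     unquote-splicing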
http://computer-programming-forum.com/40-scheme/011850e021c92f57.htm
CC-MAIN-2019-13
refinedweb
298
75.5
Hello and welcome back to the channel. In this video you will learn one of the important React hooks: useMemo. It will be much easier to understand how useMemo works with a real example, so let's create a list of users, an input and a search button.

    const users = [
      { id: "1", name: "Foo" },
      { id: "2", name: "Bar" },
    ];

    function App() {
      const [text, setText] = React.useState("");
      const [search, setSearch] = useState("");

      const handleText = (event) => {
        setText(event.target.value);
      };

      const handleSearch = () => {
        setSearch(text);
      };

      return (
        <div>
          <h1>React hooks for beginners</h1>
          <input type="text" value={text} onChange={handleText} />
          <button type="button" onClick={handleSearch}>
            Search
          </button>
        </div>
      );
    }

So we created an array of initial users at the top. We also have 2 pieces of state: one for the changed text in the input, and the other we set only when we click on the button. This is the crucial difference.

Now we want to search for users, so we will render only the users filtered by search.

    function App() {
      ...
      const filteredUsers = users.filter((user) => {
        console.log("filtering users");
        return user.name.toLowerCase().includes(search.toLowerCase());
      });

      return (
        <div>
          <h1>React hooks for beginners</h1>
          <input type="text" value={text} onChange={handleText} />
          <button type="button" onClick={handleSearch}>
            Search
          </button>
          <ul>
            {filteredUsers.map((filteredUser) => (
              <div key={filteredUser.id}>{filteredUser.name}</div>
            ))}
          </ul>
        </div>
      );
    }

As you can see in the browser, our users are now rendered, and when we click the search button they are filtered correctly and rendered on the screen. But what problem do we have here? Each letter that we type leads to a rerender of the component. Which is fine. But rerendering means that every line inside the component will be called from beginning to end. And this is exactly what happens. This is why every time we type we see our console log inside the filtering of users: we are calling this filtering again and again.

The problem here is obvious. If we have lots of data in the array (like 1000 or more users) then it is not good for performance, because we are doing the filter with exactly the same data, since our search value didn't actually change.

This is exactly where useMemo comes into play. It brings the idea of memoization. This means that if the inputs didn't change, we just store the result of some slowly computed function in a variable and reuse it until our variables change. As you see, our users filter depends only on search.

    const filteredUsers = useMemo(
      () =>
        users.filter((user) => {
          console.log("filtering users");
          return user.name.toLowerCase().includes(search.toLowerCase());
        }),
      [search]
    );

So we just put our filter as the return of useMemo. We also set a dependencies array. It works the same way as in useEffect: our useMemo will call the function only when search changes. As you can see in the browser, our filtering is now not triggered when we are typing, but only when we click on the submit button.

It's also super important to remember that if you forget to write a dependency array, or it is empty, your useMemo function is doing nothing. So it is crucial to provide a dependency array.

You might also ask why we don't wrap every line and construction with useMemo if it's so effective. Because every optimisation comes at a cost: calling useMemo and additionally storing the value also consumes performance. So it is recommended to use useMemo only when you have a performance problem, or you know that exactly here it will be the bottleneck.
So this is how you can use useMemo.
https://monsterlessons-academy.com/posts/how-to-use-use-memo-hook-in-react
CC-MAIN-2022-40
refinedweb
592
57.77
# Indexes in PostgreSQL — 6 (SP-GiST) We've already discussed PostgreSQL [indexing engine](https://habr.com/ru/company/postgrespro/blog/441962/), [the interface of access methods](https://habr.com/ru/company/postgrespro/blog/442546/), and three methods: [hash index](https://habr.com/post/442776/), [B-tree](https://habr.com/ru/company/postgrespro/blog/443284/), and [GiST](https://habr.com/ru/company/postgrespro/blog/444742/). In this article, we will describe SP-GiST. SP-GiST ======= First, a few words about this name. The «GiST» part alludes to some similarity with the same-name access method. The similarity does exist: both are generalized search trees that provide a framework for building various access methods. «SP» stands for space partitioning. The space here is often just what we are used to call a space, for example, a two-dimensional plane. But we will see that any search space is meant, that is, actually any value domain. SP-GiST is suitable for structures where the space can be recursively split into *non-intersecting* areas. This class comprises quadtrees, k-dimensional trees (k-D trees), and radix trees. Structure --------- So, the idea of SP-GiST access method is to split the value domain into *non-overlapping* subdomains each of which, in turn, can also be split. Partitioning like this induces *non-balanced* trees (unlike B-trees and regular GiST). The trait of being non-intersecting simplifies decision-making during insertion and search. On the other hand, as a rule, the trees induced are of low branching. For example, a node of a quadtree usually has four child nodes (unlike B-trees, where the nodes amount to hundreds) and larger depth. Trees like these well suit the work in RAM, but the index is stored on a disk and therefore, to reduce the number of I/O operations, nodes have to be packed into pages, and it is not easy to do this efficiently. Besides, the time it takes to find different values in the index, may vary because of differences in branch depths. This access method, same way as GiST, takes care of low-level tasks (simultaneous access and locks, logging, and a pure search algorithm) and provides a specialized simplified interface to enable adding support for new data types and for new partitioning algorithms. An internal node of SP-GiST tree stores references to child nodes; a *label* can be defined for each reference. Besides, an internal node can store a value called a *prefix*. Actually this value is not obligatory a prefix; it can be regarded as an arbitrary predicate that is met for all child nodes. Leaf nodes of SP-GiST contain a value of the indexed type and a reference to a table row (TID). The indexed data itself (search key) can be used as the value, but not obligatory: a shortened value can be stored. In addition, leaf nodes can be grouped into lists. So, an internal node can reference not only one value, but a whole list. Note that prefixes, labels, and values in leaf nodes have their own data types, independent of one another. Same way as in GiST, the main function to define for search is *the consistency function*. This function is called for a tree node and returns a set of child nodes whose values «are consistent» with the search predicate (as usual, in the form "*indexed-field operator expression*"). For a leaf node, the consistency function determines whether the indexed value in this node meets the search predicate. The search starts with the root node. The consistency function permits to find out which child nodes it makes sense to visit. 
The algorithm repeats for each of the nodes found. The search is depth-first. At the physical level, index nodes are packed into pages to make work with the nodes efficient from the point of view of I/O operations. Note that one page can contain either internal or leaf nodes, but not both. Example: quadtree ----------------- A quadtree is used to index points in a plane. An idea is to recursively split areas into four parts (quadrants) with respect to *the central point*. The depth of branches in such a tree can vary and depends on the density of points in appropriate quadrants. This is what it looks like in figures, by example of the [demo database](https://postgrespro.com/docs/postgrespro/11/demodb-bookings) augmented by airports from the site [openflights.org](https://openflights.org/data.html). By the way, recently we released a new version of the database in which, among the rest, we replaced longitude and latitude with one field of type «point». ![](https://habrastorage.org/r/w1560/web/8ab/6f6/4e7/8ab6f64e7bb5484bbfc798327798ce69.png) *First, we split the plane into four quadrants...* ![](https://habrastorage.org/r/w1560/web/6b5/ce5/3e1/6b5ce53e1b57499cad2a61a86a451835.png) *Then we split each of the quadrants...* ![](https://habrastorage.org/r/w1560/web/ba5/e5d/10a/ba5e5d10a47844a2bc95e5505641e798.png) *And so on until we get the final partitioning.* Let's provide more details of a simple example that we already considered in the [GiST-related article](https://habr.com/ru/company/postgrespro/blog/444742/). See what the partitioning may look like in this case: ![](https://habrastorage.org/r/w1560/web/a98/6f1/065/a986f106505b4b0fa0b03005a2c657f3.png) The quadrants are numbered as shown in the first figure. For definiteness sake, let's place child nodes from left to right exactly in the same sequence. A possible index structure in this case is shown in the figure below. Each internal node references a maximum of four child nodes. Each reference can be labeled with the quadrant number, as in the figure. But there is no label in the implementation since it is more convenient to store a fixed array of four references some of which can be empty. ![](https://habrastorage.org/r/w1560/web/24c/3cf/97f/24c3cf97f59b4f99acb2aa992d1825d6.png) Points that lie on the boundaries relate to the quadrant with the smaller number. ``` postgres=# create table points(p point); postgres=# insert into points(p) values (point '(1,1)'), (point '(3,2)'), (point '(6,3)'), (point '(5,5)'), (point '(7,8)'), (point '(8,6)'); postgres=# create index points_quad_idx on points using spgist(p); ``` In this case, «quad\_point\_ops» operator class is used by default, which contains the following operators: ``` postgres=# select amop.amopopr::regoperator, amop.amopstrategy from pg_opclass opc, pg_opfamily opf, pg_am am, pg_amop amop where opc.opcname = 'quad_point_ops' and opf.oid = opc.opcfamily and am.oid = opf.opfmethod and amop.amopfamily = opc.opcfamily and am.amname = 'spgist' and amop.amoplefttype = opc.opcintype; ``` ``` amopopr | amopstrategy -----------------+-------------- <<(point,point) | 1 strictly left >>(point,point) | 5 strictly right ~=(point,point) | 6 coincides <^(point,point) | 10 strictly below >^(point,point) | 11 strictly above <@(point,box) | 8 contained in rectangle (6 rows) ``` For example, let's look how the query `select * from points where p >^ point '(2,7)'` will be performed (find all points that lie above the given one). 
![](https://habrastorage.org/r/w1560/web/5f3/194/855/5f3194855fe74166b56c1703f4bb9b07.png) We start with the root node and use the consistency function to select to which child nodes to descend. For the operator `>^`, this function compares the point (2,7) with the central point of the node (4,4) and selects the quadrants that may contain the points sought, in this case, the first and fourth quadrants. In the node corresponding to the first quadrant, we again determine the child nodes using the consistency function. The central point is (6,6), and we again need to look through the first and fourth quadrants. ![](https://habrastorage.org/r/w1560/web/8ac/99b/d9e/8ac99bd9e7b94f8d9683307b98ad1875.png) The list of leaf nodes (8,6) and (7,8) corresponds to the first quadrant, of which only the point (7,8) meets the query condition. The reference to the fourth quadrant is empty. In the internal node (4,4), the reference to the fourth quadrant is empty as well, which completes the search. ``` postgres=# set enable_seqscan = off; postgres=# explain (costs off) select * from points where p >^ point '(2,7)'; ``` ``` QUERY PLAN ------------------------------------------------ Index Only Scan using points_quad_idx on points Index Cond: (p >^ '(2,7)'::point) (2 rows) ``` ### Internals We can explore the internal structure of SP-GiST indexes using "[gevel](http://www.sai.msu.su/~megera/wiki/Gevel)" extension, which was mentioned earlier. Bad news is that due to a bug, this extension works incorrectly with modern versions of PostgreSQL. Good news is that we plan to augment «pageinspect» with the functionality of «gevel» ([discussion](https://www.postgresql.org/message-id/flat/accae316-5e4d-8963-0c3d-277ef13c396c@postgrespro.ru#accae316-5e4d-8963-0c3d-277ef13c396c@postgrespro.ru)). And the bug has already been fixed in «pageinspect». > Again, bad news is that the patch has stuck with no progress. > > For example, let's take the extended demo database, which was used to draw pictures with the world map. 
``` demo=# create index airports_coordinates_quad_idx on airports_ml using spgist(coordinates); ``` First, we can get some statistics for the index: ``` demo=# select * from spgist_stats('airports_coordinates_quad_idx'); ``` ``` spgist_stats ---------------------------------- totalPages: 33 + deletedPages: 0 + innerPages: 3 + leafPages: 30 + emptyPages: 2 + usedSpace: 201.53 kbytes+ usedInnerSpace: 2.17 kbytes + usedLeafSpace: 199.36 kbytes+ freeSpace: 61.44 kbytes + fillRatio: 76.64% + leafTuples: 5993 + innerTuples: 37 + innerAllTheSame: 0 + leafPlaceholders: 725 + innerPlaceholders: 0 + leafRedirects: 0 + innerRedirects: 0 (1 row) ``` And second, we can output the index tree itself: ``` demo=# select tid, n, level, tid_ptr, prefix, leaf_value from spgist_print('airports_coordinates_quad_idx') as t( tid tid, allthesame bool, n int, level int, tid_ptr tid, prefix point, -- prefix type node_label int, -- label type (unused here) leaf_value point -- list value type ) order by tid, n; ``` ``` tid | n | level | tid_ptr | prefix | leaf_value ---------+---+-------+---------+------------------+------------------ (1,1) | 0 | 1 | (5,3) | (-10.220,53.588) | (1,1) | 1 | 1 | (5,2) | (-10.220,53.588) | (1,1) | 2 | 1 | (5,1) | (-10.220,53.588) | (1,1) | 3 | 1 | (5,14) | (-10.220,53.588) | (3,68) | | 3 | | | (86.107,55.270) (3,70) | | 3 | | | (129.771,62.093) (3,85) | | 4 | | | (57.684,-20.430) (3,122) | | 4 | | | (107.438,51.808) (3,154) | | 3 | | | (-51.678,64.191) (5,1) | 0 | 2 | (24,27) | (-88.680,48.638) | (5,1) | 1 | 2 | (5,7) | (-88.680,48.638) | ... ``` But keep in mind that «spgist\_print» outputs not all leaf values, but only the first one from the list, and therefore shows the structure of the index rather than its full contents. Example: k-dimensional trees ---------------------------- For the same points in the plane, we can also suggest another way to partition the space. Let's draw *a horizontal line* through the first point being indexed. It splits the plane into two parts: upper and lower. The second point to be indexed falls into one of these parts. Through this point, let's draw *a vertical line*, which splits this part into two ones: right and left. We again draw a horizontal line through the next point and a vertical line through yet the next point, and so on. All internal nodes of the tree built this way will have only two child nodes. Each of the two references can lead either to the internal node that is next in the hierarchy or to the list of leaf nodes. This method can be easily generalized for k-dimensional spaces, and therefore, the trees are also called k-dimensional (k-D trees) in the literature. Explaining the method by example of airports: ![](https://habrastorage.org/r/w1560/web/a97/130/4d6/a971304d68c8442b86a10c0e52c3e78c.png) *First we split the plane into upper and lower parts...* ![](https://habrastorage.org/r/w1560/web/035/660/8fe/0356608fea41422eb7fab9cbb2b67669.png) *Then we split each part into left and right parts...* ![](https://habrastorage.org/r/w1560/web/bb1/26e/8db/bb126e8db6714d7bb5122f9213e5f658.png) *And so on until we get the final partitioning.* To use a partitioning just like this, we need to explicitly specify the operator class **«kd\_point\_ops»** when creating an index. ``` postgres=# create index points_kd_idx on points using spgist(p kd_point_ops); ``` This class includes exactly the same operators as the «default» class «quad\_point\_ops». 
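Since the operator set is the same, the queries from the quadtree example work unchanged; a quick sketch (nothing new here, just the earlier query, this time with the k-D index also in place):

```
postgres=# create index points_kd_idx on points using spgist(p kd_point_ops);
postgres=# explain (costs off) select * from points where p >^ point '(2,7)';
```

With both SP-GiST indexes present on the table, the planner is free to pick either one, so the plan may name points_quad_idx or points_kd_idx.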
### Internals When looking through the tree structure, we need to take into account that the prefix in this case is only one coordinate rather than a point: ``` demo=# select tid, n, level, tid_ptr, prefix, leaf_value from spgist_print('airports_coordinates_kd_idx') as t( tid tid, allthesame bool, n int, level int, tid_ptr tid, prefix float, -- prefix type node_label int, -- label type (unused here) leaf_value point -- list node type ) order by tid, n; ``` ``` tid | n | level | tid_ptr | prefix | leaf_value ---------+---+-------+---------+------------+------------------ (1,1) | 0 | 1 | (5,1) | 53.740 | (1,1) | 1 | 1 | (5,4) | 53.740 | (3,113) | | 6 | | | (-7.277,62.064) (3,114) | | 6 | | | (-85.033,73.006) (5,1) | 0 | 2 | (5,12) | -65.449 | (5,1) | 1 | 2 | (5,2) | -65.449 | (5,2) | 0 | 3 | (5,6) | 35.624 | (5,2) | 1 | 3 | (5,3) | 35.624 | ... ``` Example: radix tree ------------------- We can also use SP-GiST to implement a radix tree for strings. The idea of a radix tree is that a string to be indexed is not fully stored in a leaf node, but is obtained by concatenating the values stored in the nodes above this one up to the root. Assume, we need to index site URLs: «postgrespro.ru», «postgrespro.com», «postgresql.org», and «planet.postgresql.org». ``` postgres=# create table sites(url text); postgres=# insert into sites values ('postgrespro.ru'),('postgrespro.com'),('postgresql.org'),('planet.postgresql.org'); postgres=# create index on sites using spgist(url); ``` The tree will look as follows: ![](https://habrastorage.org/r/w1560/web/6dd/626/8d2/6dd6268d2c304e98a78609a6bad997e1.png) The internal nodes of the tree store prefixes common to all child nodes. For example, in child nodes of «stgres», the values start with «p» + «o» + «stgres». Unlike in quadtrees, each pointer to a child node is additionally labeled with one character (more exactly, with two bytes, but this is not so important). «text\_ops» operator class supports B-tree-like operators: «equal», «greater», and «less»: ``` postgres=# select amop.amopopr::regoperator, amop.amopstrategy from pg_opclass opc, pg_opfamily opf, pg_am am, pg_amop amop where opc.opcname = 'text_ops' and opf.oid = opc.opcfamily and am.oid = opf.opfmethod and amop.amopfamily = opc.opcfamily and am.amname = 'spgist' and amop.amoplefttype = opc.opcintype; ``` ``` amopopr | amopstrategy -----------------+-------------- ~<~(text,text) | 1 ~<=~(text,text) | 2 =(text,text) | 3 ~>=~(text,text) | 4 ~>~(text,text) | 5 <(text,text) | 11 <=(text,text) | 12 >=(text,text) | 14 >(text,text) | 15 (9 rows) ``` The distinction of operators with tildes is that they manipulate *bytes* rather than *characters*. Sometimes, a representation in the form of a radix tree may turn out to be much more compact than B-tree since the values are not fully stored, but reconstructed as the need arises while descending through the tree. Consider a query: `select * from sites where url like 'postgresp%ru'`. It can be performed using the index: ``` postgres=# explain (costs off) select * from sites where url like 'postgresp%ru'; ``` ``` QUERY PLAN ------------------------------------------------------------------------------ Index Only Scan using sites_url_idx on sites Index Cond: ((url ~>=~ 'postgresp'::text) AND (url ~<~ 'postgresq'::text)) Filter: (url ~~ 'postgresp%ru'::text) (3 rows) ``` Actually, the index is used to find values that are greater or equal to «postgresp», but less than «postgresq» (Index Cond), and then matching values are chosen from the result (Filter). 
First, the consistency function must decide to which child nodes of «p» root we need to descend. Two options are available: «p» + «l» (no need to descend, which is clear even without diving deeper) and «p» + «o» + «stgres» (continue the descent). For «stgres» node, a call to the consistency function is needed again to check «postgres» + «p» + «ro.» (continue the descent) and «postgres» + «q» (no need to descend). For «ro.» node and all its child leaf nodes, the consistency function will respond «yes», so the index method will return two values: «postgrespro.com» and «postgrespro.ru». One matching value will be selected of them at the filtering stage. ![](https://habrastorage.org/r/w1560/web/cec/aec/23c/cecaec23c932448098cdd360e60d3c5c.png) ### Internals When looking through the tree structure, we need to take data types into account: ``` postgres=# select * from spgist_print('sites_url_idx') as t( tid tid, allthesame bool, n int, level int, tid_ptr tid, prefix text, -- prefix type node_label smallint, -- label type leaf_value text -- leaf node type ) order by tid, n; ``` Properties ---------- Let's look at the properties of SP-GiST access method (queries [were provided earlier](https://habr.com/ru/company/postgrespro/blog/442546/)): ``` amname | name | pg_indexam_has_property --------+---------------+------------------------- spgist | can_order | f spgist | can_unique | f spgist | can_multi_col | f spgist | can_exclude | t ``` SP-GiST indexes cannot be used for sorting and for support of the unique constraint. Additionally, indexes like this cannot be created on several columns (unlike GiST). But it is permitted to use such indexes to support exclusion constraints. The following index-layer properties are available: ``` name | pg_index_has_property ---------------+----------------------- clusterable | f index_scan | t bitmap_scan | t backward_scan | f ``` The difference from GiST here is that clustering is impossible. And eventually the following are column-layer properties: ``` name | pg_index_column_has_property --------------------+------------------------------ asc | f desc | f nulls_first | f nulls_last | f orderable | f distance_orderable | f returnable | t search_array | f search_nulls | t ``` Sorting is not supported, which is predictable. Distance operators for search of nearest neighbors are not available in SP-GiST so far. Most likely, this feature will be supported in future. > It is supported in upcoming PostgreSQL 12, the patch by Nikita Glukhov. > > SP-GiST can be used for index-only scan, at least for the discussed operator classes. As we have seen, in some instances, indexed values are explicitly stored in leaf nodes, while in the other ones, the values are reconstructed part by part during the tree descent. ### NULLs Not to complicate the picture, we haven't mentioned NULLs so far. It is clear from the index properties that NULLs are supported. Really: ``` postgres=# explain (costs off) select * from sites where url is null; ``` ``` QUERY PLAN ---------------------------------------------- Index Only Scan using sites_url_idx on sites Index Cond: (url IS NULL) (2 rows) ``` However, NULL is something foreign for SP-GiST. All operators from «spgist» operator class must be strict: an operator must return NULL whenever any of its parameters is NULL. The method itself ensures this: NULLs are just not passed to operators. But to use the access method for index-only scan, NULLs must be stored in the index anyway. And they are stored, but in a separate tree with its own root. 
Other data types
----------------

In addition to points and radix trees for strings, other methods based on SP-GiST are also implemented in PostgreSQL:

* «box\_ops» operator class provides a quadtree for rectangles. Each *rectangle* is represented by *a point in a four-dimensional space*, so the number of quadrants equals 16. An index like this can beat GiST in performance when there are a lot of intersections of the rectangles: in GiST it is impossible to draw boundaries so as to separate intersecting objects from one another, while there are no such issues with points (even four-dimensional).
* «range\_ops» operator class provides a quadtree for intervals. *An interval* is represented by *a two-dimensional point*: the lower boundary becomes the abscissa, and the upper boundary becomes the ordinate.

A small index-creation sketch for both follows below.

[Read on](https://habr.com/ru/company/postgrespro/blog/448746/).
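As a rough illustration of these two operator classes (the tables and columns here are invented for the example, not taken from the demo database), creating the indexes looks the same as for points:

```
postgres=# create table boxes(b box);
postgres=# create index on boxes using spgist(b);

postgres=# create table reservations(during tsrange);
postgres=# create index on reservations using spgist(during);
```

Searches with operators such as && (overlaps) and <@ (contained in) can then use these indexes.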
https://habr.com/ru/post/446624/
null
null
3,189
55.03
Answered by: XML Schema Editor (Visual Studio 2008 RTM)?

How do I open / find the XML Schema editor in VS2008? I have an XSD file, but it only lets me edit the raw XML in text editor mode. In VS2003, we had the ability to switch the editor between visual / text modes. Having trouble finding the corresponding functionality in VS2008. Note, this is not a schema for a dataset. Thanks, scottm

Question

Answers

According to the XML team's blog the schema designer will be released later:

Hi Ian, As I've mentioned in another thread (), that schema contains a few errors (an empty choice and a reference to the Schematron namespace which wasn't imported). When you fix them, the Editor will not complain about included files.

All replies

According to the XML team's blog the schema designer will be released later:

Future version = VS2020? Please get a working version (CTP or whatever) out as soon as possible. I relied a lot on this tool in VS2005 (crash-buggy as it was) for contract-first web service development. Now, I am forced to the ludicrous alternative of opening VS2005 just for the XSD design view... Oh, and while you are at it, please forward a complain-o-gram to whichever program team owns the web form designer and let's get multiple-selection of web controls back too. Us enterprise forms guys are just cavemen developers; your modern ways frighten and confuse us.

While the XML Schema Editor may not be ready for use, I'm more concerned that the XML Editor reports errors that are easily resolved in code with the .NET 2.0 Framework (at least in the VS2008 Beta 2 release). With a schema set that is file-based, linked into a project, one XSD file of an ISO standard (ISO 19139 - 60+ files, 7 folders) has includes that have no relative path - because the included files are in the same folder as that file (coverage.xsd). All other imports / includes in the complex schema set are resolved correctly.

Code Snippet

    <include schemaLocation="feature.xsd"/>
    <include schemaLocation="valueObjects.xsd"/>
    <include schemaLocation="grids.xsd"/>
    <include schemaLocation="geometryAggregates.xsd"/>

The wavy line under the word include cannot be shown here, in the snippet that I have pasted. My project's code to load all schemas from a top or starting file is unexceptional - just schemaSet.Add - and it's automatically resolved. Having said all that, I'm not sure if VS2008 RTM's XML Editor is misbehaving. And I certainly would love to see a new release of the XML Schema Editor.
My code (validation handler) belatedly recognized that Warnings are serious when coming out of the System.Xml (etc.) assemblies - and I had to "dig deeper" into the exception-tracking classes - but eventually agreed with the IDE. As a side note, it would be helpful if the warnings included some easily identifiable text. The FileNotFoundException is one that I could pick up, but it's hard to error-trap and code sensibly for some of the warnings, like the coverage.xsd "line 423" problem with the <choice/>.

My posts miraculously re-appeared today. May I remind you that this thread is about the MISSING XML schema editor in VS2008? I cannot understand that MS promotes VS2008 with a lot of bells and whistles, while stuff you have learned to appreciate simply disappears without notice. Can anyone from MS give some insight into the time schedule for this vital module?

Sorry, but I'm more interested in some of my minor problems. You have a couple of options:
- Use VS2005 (does the Express version even have the XML Schema Editor - or not?). It works side-by-side with VS2008.
- Install VS2008 Beta 2, and download the editor/designer (XML Schema Designer) August 07 CTP. I haven't used it much, but I haven't found anything amiss, yet.

I know nothing, but it's a possibility that the XML Schema Designer that was released in 08/2007 might be enhanced some time before VS vNext to work with VS2008.

Martin, another option (not integrated into Visual Studio, so it is a fall-back only): There is a free XSD editor from Liquid Technologies, called Liquid XML Studio. It looks very capable. Actually, I haven't used it. I have Stylus Studio 2008 XML Enterprise Suite, which is nice (just a pity it is Java-based). But I am trialling their commercial product, called Liquid XML Data Binding 2008, and it is the ONLY tool that I have discovered that can generate code classes from the 54 schema files in the ISO 19139 standard without a hiccup. It has a $1000+ price tag, though.

Or you have to rely on some commercial (most of them are not free) XML designer products available on the web. Example: XMLSpy () or the graphical XML Schema editor () from Liquid Technologies.

Thanks - No, there is still no news from MS; here you can find a similar bug report which was simply ignored. Currently I'm using the Liquid Studio designer as a 30-day trial, which is not free btw. MS has also stopped the Linq to XSD project, which I was also looking forward to. Has MS abandoned XSD schemas? ... complete silence :( Hopefully waiting will pay off; a bit disappointed in MS.
- Edited by Freek Van Mensel Tuesday, May 26, 2009 7:32 AM
https://social.msdn.microsoft.com/Forums/en-US/84212ff1-ca41-4a86-bc0a-fe9d59b902f9/xml-schema-editor-visual-studio-2008-rtm?forum=xmlandnetfx
CC-MAIN-2016-44
refinedweb
1,074
64.3
hi all, what's the problem:

a test application written in C segfaults on printf() calls. The application is used to test a library I've written for distributed computing purposes; in its current state it's not that special, just sending a few UDP broadcasts and doing TCP connects.

more details on the segfault behaviour:

the prog is compiled like this:

    gcc -g -ggdb -g3 -ggdb3 -Wall -Wextra -Wpadded -O0 -I../../external -I../../src -o main ./main.c

running with gdb shows the following backtrace after the segfault:

    Program terminated with signal 11, Segmentation fault.
    #0  _IO_vfprintf_internal (s=0xbf9cff4c, format=0x2 <Address 0x2 out of bounds>, ap=0xbf9cff7a "A\037w") at vfprintf.c:1288
    1288    vfprintf.c: No such file or directory.
            in vfprintf.c
    (gdb) bt
    #0  _IO_vfprintf_internal (s=0xbf9cff4c, format=0x2 <Address 0x2 out of bounds>, ap=0xbf9cff7a "A\037w") at vfprintf.c:1288
    #1  0x006818a0 in __printf (format=0x8058d4d "\ncommand: ") at printf.c:35
    #2  0x08058009 in main (argc=5, argv=0xbf9d04f4) at ./main.c:197

let's have a look at main.c:196 - main.c:198:

    while(buffer[0]!='q' && run){
        printf("\ncommand: ");
        if(scanf("%s", buffer)<=0)
            continue;

what else can be said:

if I remove every printf in the whole app AND library it doesn't segfault, but the while loop above ends after a few loops because run is switched to 0. The weird thing: I removed the SIGTERM handler which sets run to 0, so it shouldn't break until I enter q, which in fact I didn't. That's why I assume some application space corruption I'm not experienced with. After adding some printf in the library it segfaults again after a few loops, on printf and nowhere else; the printf can also be whatever you want. I also tried valgrind and it doesn't detect any memory leaks or wrong assignments or such; it really doesn't print anything remarkable until the segfault.

what I think:

my opinion is that I maybe do something wrong with a few pointers and cause something to break. I'm using a few function pointers for library callbacks and a doubly linked list which does a few complicated pointer arithmetic operations. It seems that the problem is also tightly coupled to the test application and library I'm working on; I tried to break it down to a test case but I don't get the error to occur on smaller pieces.

questions:

since I don't think this is a problem where someone can answer "it's caused by ....", my question is: "given a correctly working libc, ubuntu installation (natty) and no hardware failure, which I just assume, what needs to be done to cause printf to segfault like this although it is called with a static string like printf("test");"? I think this goes into something like corrupting heap or stack, which I'm not that much familiar with - maybe there are some resources on how stack/heap corruption is done?
thanks for any help on this

[EDIT]
from … -the-heap/ I got a simple example for a stack overflow:

    int main()
    {
        char nStack[100000000];
        printf("test");
        return 0;
    }

which causes printf to segfault on my ubuntu installation. I found that a size of 8384000 results in a segfault pretty much the same way as I described above, so it seems that I corrupt my function stack. After reading a few articles on this topic I think I was using too many network buffer variables in combination with a bunch of function calls using local structs, although corrupting nearly 8MB of stack seems hard to believe. Anyway, I'll try to reduce the memory footprint as a first step.
[/EDIT]

I doubt it's a pure stack overflow of that sort (ie: just using too much stack space)... Much more likely is dynamic heap corruption... Though, I'm not sure why valgrind wouldn't spot it for you... It's always been great at catching such things for me... Perhaps it is stack corruption of some sort, such as overflowing a stack buffer and trashing other important stuff also on the stack (like function return addresses)... It's hard to say without seeing more code... However, just from the 3 lines you posted, I don't like the look of the third one... A scanf("%s") is unbounded, so could easily overflow your buffer there... You'd be far better off using something like fgets() or getline() for input...

From the lines posted, I would guess that 'buffer' isn't always NUL terminated. (Or line terminated, for that matter.) Make sure the receiver always terminates the buffer with a NUL before processing the data.

Stack size you can see in /proc/$PID/smaps, for instance. So you could check that when it's hanging in gdb after a crash. Took me forever to find out that X crashed with no error messages because it used more stack than I allowed with ulimit -s. I got confused by that scanf, thought 'buffer' was the input instead of stdin.

That's indeed a strange way of using scanf(); I'd do what Rob suggests and use something better fitting that's also not unbounded.

What I meant is the buffer where the UDP packet is received into. But if the UDP data aren't strings then never mind about NUL termination.

it seems I have solved the problem, although I have no idea what exactly went wrong. In general I did two things:

1) adding code to monitor the stack size during runtime
2) rearranging my source

1) as it seemed that stack corruption plays a role in this, I added a local variable in main like this:

    char *stackstart = NULL;

    int main(){
        int someotherlocalvar;
        char bottom_of_stack;
        stackstart = (char*)&bottom_of_stack;
        ...

in whatever function call I do, I get the size of the current stack using this:

    void somefunction(int someparam){
        char current_stack_position;
        printf("the current stack size is: %d bytes\n", (int)(stackstart-(&current_stack_position)) );
        ...
there are a bunch of things to note on this code:

o) it's not very portable; for example, the stack on my linux box grows from top to bottom, that's why I do stackstart-current_stack_position
o) this method is not very accurate
o) this is just a rough example of what I did and it actually worked for me - I would not recommend using this in production code

have a look at … -size-in-c for another example using this method.

I noticed that my global variables are garbaged after a few function calls after the first loop. Since I couldn't find any further reason for this, I got to step 2, which solved the problem.

2) on project start, for fast development, I made the decision that I don't split the code into .c and .h files but only use .h files with functions directly in them. I also had #ifndef/#define/#endif constructs at the top/end of every file. In the case of a small lib this method is nice to see code changes pretty fast, so not having to do make clean; make and such, but as the code grows it gets clumsy. To be honest I made some bad experiences with g++ compilation with templates and mid-sized projects, but this also just reflects my lack of knowledge of compilation units. I don't know what the problem was, but I think gcc caused it during linking in combination with some global debugging variables I was using; it could also be a gcc bug - hard to track at this point. So I rearranged everything to autotools and a minimal makefile for my own testing purposes.

that's it - the code is working now without any change of function internals; just cleanly separating into .c/.h files did it

@robseace and i3839: the reason I used scanf was that it's 'just' for testing and I'm the only user currently, but you're right, the scanf isn't very nice for obvious reasons and also shouldn't be used for testing - there are better functions for this. thx for your thoughts on this

Last edited by thinking (2011-08-31 03:26 PM)

Plus, there's stack used in libc's pre-main() code, and main()'s argc and argv (and envp) will be on the stack... Like i3839 said, "/proc/self/*maps" is a much more accurate method of figuring out stack usage on Linux...
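As an aside, a bounded-input replacement for the scanf("%s") criticised earlier in the thread might look roughly like this; the buffer size and the loop shape are my own choices, not code from the original test app:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buffer[256] = "";

        while (buffer[0] != 'q') {
            printf("\ncommand: ");
            fflush(stdout);

            /* bounded read instead of scanf("%s", buffer) */
            if (fgets(buffer, sizeof buffer, stdin) == NULL)
                break;                              /* EOF or read error */

            buffer[strcspn(buffer, "\n")] = '\0';   /* strip the trailing newline, if any */
        }
        return 0;
    }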
http://developerweb.net/viewtopic.php?id=7262
CC-MAIN-2019-13
refinedweb
1,416
54.8
Something like fifty years ago, an argument raged among computer engineers over the order in which numbers should be stored in computer memory. During the arguments, some (mostly among the least-significant-first camp, IIRC) pointed out the Lilliputian argument between the "Little-Endians" and "Big-Endians" in Gulliver's Travels. The least-significant-first camp claimed the position of Little-Endian-ness, which left the most-significant-first camp associated with High Church once the allegory was commonly accepted.

In Gulliver's Travels, the arguments between Lilliput and Blefuscu, including the endianness argument, are depicted by Swift as mere matters of fashion. Most of the least-significant-first camp took the same approach: In memory, endianness doesn't matter. This was a bit of duplicitous implicit poisoning of the well, similar to the habit Intel salescrew had a decade or two later of claiming that Turing complete CPUs were all equivalent, therefore we should all buy their self-proclaimed most popular CPU -- false analogy and a lot of logical steps being skipped, among other things.

To summarize the question of byte order, we need to take a general look at data in computer memory. Computer memory is organized as an array of sequentially addressable elements, which implies an ordering to the contents of memory:

Let's put the number 123456 (one hundred twenty-three thousand four hundred fifty-six) encoded as text after the description:

Note that text is naturally recorded most significant digit first in English. (Thousands group separators just get in the way in computers, so I just left the comma out.) If we wrote 123456 textually least significant digit first, it would look like this:

You may be wondering why someone would write numbers backwards, but there are actually language/numeric contexts in which least significant digit first is the common order. (They may be useful to refer to later.) Even in English, we have contexts like dates where the order is not most significant first:

- September 17, 1787 (mixed order) and
- 17 September 1787 (least significant first)

So we know that it is not completely unthinkable to do such a thing.

Now, text is actually encoded in computer memory as strings of numeric codes. Let's look at the data in the second example, reading it as hexadecimal numbers that represent the characters of the text instead of interpreting it as text:

That's interesting, isn't it? No? Okay, let's leave everything but the number interpreted as text:

Now, we haven't actually been changing what is in memory in Example 2. We're just changing how we look at it. We are trying to get a sense of what is actually stored in memory. (If you have a decent OS, you have command line tools like hexdump that allow you to look at files this way. You should try it some time.)

So, now let's try changing the form of the number. Instead of putting it in as text, let's put it in as a number -- an integer. (It's convenient that the address where it will go is 16, for something we call alignment, but we won't really talk about that just yet.) First, we need to rewrite 123456 (one hundred twenty-three thousand four hundred fifty-six) as a hexadecimal number:

    123456 ÷ 16⁴ = 1  rem 57920
     57920 ÷ 16³ = 14 (E₁₆) rem 576
       576 ÷ 16² = 2  rem 64
        64 ÷ 16¹ = 4  rem 0

So, 123456 == 1E240₁₆

Two hexadecimal digits take one byte in memory on a computer with an 8 bit byte. (Numbers up to 4294967295 (FFFFFFFF₁₆) can be stored in four bytes on computers with an 8 bit byte.)
Let's look at 123456 (1E240₁₆) stored at location 16, most significant byte first:

Now let's look at the same number, stored least significant byte first:

For a CPU that is MSB first, it will always store and read MSB first (as in example 3), so there's no problem. And an LSB first CPU will always store and read LSB first, so, again, no problem. The CPU is built to do it one way or the other, and it will always do it the way it's built, so there's no problem here.

That's the essence of the argument. It's no longer true, and it really was never true. All bus masters have to agree on how they store numbers in a particular chunk of data or the numbers get turned upside down. (Or in the case of mixed mode integers, inside out and upside down, etc.) Back then, however, CPUs did not usually have the ability to switch byte order without a bit of work. And alternate bus masters were not as common as now, and usually tended to be built specifically for the CPU. These days, with intelligent I/O devices, alternate bus masters are rather common. (Graphics co-processors, network interfaces, disk drive interfaces, etc.) If one is going to be a bad boy and do byte order backwards from the rest, unless you isolate the bad boy somehow, things tend to fall apart.

But even the ability to switch byte order does not materially change the arguments. On a CPU that can switch byte order natively, byte order becomes just another property of the integer stored in memory, which the programmer must keep track of, along with the address, size, presence of sign, etc. As long as the software and hardware respect the properties of the integer in memory, there is no problem.

Well, no problem in isolation. But there is one virtual bus master that tends, in most of the world, to be most significant first when looking at numbers -- the human who might debug the program by looking at the raw contents of memory without access to the detail of the compiled program.

No number exists in isolation. There it is, the fatal assumption of the argument: ... in isolation ... Nothing in this world exists in isolation.

Why am I going around in circles on this subject? In modern hardware, we have multiple CPUs and other devices on the computer bus. Even in the past, the programmer often had to look at what was in memory in order to tell what the program was doing. He was, in effect, another CPU on the bus, as I said above.

Before we take a look at the reasons not to use least significant first byte order, let's look at the primary argument in favor: It theoretically speeds up some hardware processes, and it made the 8080 and 6502 (among other CPUs) cheaper to produce. To figure out why, note that when you perform math on numbers, you start at the least significant end. Let's do a subtraction of two moderately large numbers:

      123456
    -  98765
    --------
       24691

You started with the column on the right, 6 - 5 = 1, right? CPUs have to point at what they work on, and the idea is that, if they are pointing at the number already, it's handy to be pointing at the first byte to do the math on. It sounds reasonable, now that you think of it, right? There are some other issues, like aligning the number before you start, which also appear to have some advantage when the small end is what you point at.

Sounds like maybe the Little-Endian engineers know what they are onto? Oh, dear. Maybe the Big-Endians should just shut up.
Well, let's put those arguments aside for a moment and talk about what the human who is trying to debug a program is going to see when he or she looks at a number stored least significant byte first. I'm pretty sure I can show you some problems with the Little-Endian attitude here. Simple tools are the ones that are usually available. We'll make use of hexdump. If you are working on a Microsoft Windows workstation, you can install Cygwin to get Unix tools, and Cygwin can give you access to hexdump and the gnu C compiler, gcc, and gforth (and lots of other good stuff like bc). We'll also make use of a little programming in C: /* Program to demonstrate the effect of LSB1st vs. MSB1st integers // by Joel Matthew Rees, Amagasaki, Japan // Permission granted to use for personal purposes. // See // Can be downloaded here: // */ #include <stdio.h> #include <stdlib.h> /* #define NO__DEPENDENCY_ON_LIMITS_H */ #if !defined NO__DEPENDENCY_ON_LIMITS_H #include <limits.h> # define byteWidth ( (size_t) CHAR_BIT ) # define byteMask ( (unsigned long) (unsigned char) ( (unsigned long) -1 ) ) # define ulBytes ( sizeof (unsigned long) ) /* a run-time constant */ #else unsigned byteWidth = 8; /* Not depending on limits.h . */ unsigned long byteMask = 0xFF; unsigned ulBytes = 4; /* Sane throw-away initial values. */ void setULbytes( void ) { int i = 0; unsigned char chroller = 1; unsigned char chMask = 1; unsigned long ulroller = 1; while ( chroller != 0 ) { chroller <<= 1; chMask = ( chMask << 1 ) | 1; ++i; } byteMask = chMask; byteWidth = i; i = 0; while ( ulroller != 0 ) { ulroller <<= 1; ++i; } ulBytes = i / byteWidth; } #endif int putLSB( unsigned long ivalue, int early ) { int i = 0; do { putchar( ivalue & byteMask ); ++i; ivalue >>= 8; } while ( ( i < ulBytes ) && !( early && ( ivalue == 0 ) ) ); return i; } int putMSB( unsigned long ivalue, int early ) { int i = 0; do { putchar( ( ivalue >> ( ( ulBytes - 1 ) * byteWidth ) ) & byteMask ); ++i; ivalue <<= byteWidth; } while ( ( i < ulBytes ) && !( early && ( ivalue == 0 ) ) ); return i; } void fillch( int count, char ch ) { while ( count-- > 0 ) { putchar( ch ); } } int printInteger( unsigned long ivalue, unsigned base ) { char buffer[ 65 ]; char * cpt = buffer + 65; * --cpt = '\0'; if ( base > 36 ) { base = 10; } do { int ch = ivalue % base; ivalue /= base; ch += '0'; if ( ch > '9' ) { ch += 'A' - '9' - 1; } * --cpt = ch; } while ( ivalue > 0 ); fputs( cpt, stdout ); return 64 - ( cpt - buffer ); } int main( int argc, char *argv[] ) { unsigned long my_integer = 123456; int index = 1; int length; #if defined NO__DEPENDENCY_ON_LIMITS_H setULbytes(); #endif if ( argc > 1 ) { char * endpt = argv[ 1 ]; my_integer = strtoul( argv[ 1 ], &endpt, 0 ); if ( endpt > argv[ 1 ] ) { ++index; } else { my_integer = 123456; } } printf( "Data in memory: " ); length = printInteger( my_integer, 10 ); fillch( 32 - length, '\0' ); length = printInteger( my_integer, 16 ); fillch( 32 - length, '\0' ); printf( "LSB1st early: " ); length = putLSB( my_integer, 1 ); fillch( 16 - length, '-' ); printf( "LSB1st full: " ); length = putLSB( my_integer, 0 ); fillch( 16 - length, '-' ); printf( "MSB1st early: " ); length = putMSB( my_integer, 1 ); fillch( 16 - length, '-' ); printf( "MSB1st full: " ); length = putMSB( my_integer, 0 ); fillch( 16 - length, '-' ); putchar( '\n' ); return EXIT_SUCCESS; } [JMR201704281355: This can be downloaded at previous version at will eventually be taken off line.will eventually be taken off line. 
] Compile it with the usual cc -Wall -o lsbmsb lsbmsb.cand run it with something like - ./lsbmsb | hexdump -C - ./lsbmsb 1234567890 | hexdump -C - ./lsbmsb 0x12345 | hexdump -C - ./lsbmsb 0x12345 | hexdump # look at it two-byte. - ./lsbmsb $(( 123456 * 256 )) | hexdump -C - # etc. Hmm. me@mycomputer:~/work/mathgames/eco101$ ./lsbmsb | hexdump -C 00000000 44 61 74 61 20 69 6e 20 6d 65 6d 6f 72 79 3a 20 |Data in memory: | 00000010 31 32 33 34 35 36 00 00 00 00 00 00 00 00 00 00 |123456..........| 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000030 31 45 32 34 30 00 00 00 00 00 00 00 00 00 00 00 |1E240...........| 00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000050 4c 53 42 31 73 74 20 65 61 72 6c 79 3a 20 20 20 |LSB1st early: | 00000060 40 e2 01 2d 40 e2 01 00 00 01 e2 40 00 01 e2 40 2d 2d 2d 2d 2d 2d 2d 2d |.......@--------| 000000d0 0a |.| 000000d1 me@mycomputer:~/work/mathgames/eco101$ ./lsbmsb | hexdump 0000000 6144 6174 6920 206e 656d 6f6d 7972 203a 0000010 3231 3433 3635 0000 0000 0000 0000 0000 0000020 0000 0000 0000 0000 0000 0000 0000 0000 0000030 4531 3432 0030 0000 0000 0000 0000 0000 0000040 0000 0000 0000 0000 0000 0000 0000 0000 0000050 534c 3142 7473 6520 7261 796c 203a 2020 0000060 e240 2d01 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 0000070 534c 3142 7473 6620 6c75 3a6c 2020 2020 0000080 e240 0001 0000 0000 2d2d 2d2d 2d2d 2d2d 0000090 534d 3142 7473 6520 7261 796c 203a 2020 00000a0 0000 0000 0100 40e2 2d2d 2d2d 2d2d 2d2d 00000b0 534d 3142 7473 6620 6c75 3a6c 2020 2020 00000c0 0000 0000 0100 40e2 2d2d 2d2d 2d2d 2d2d 00000d0 000a 00000d1 me@mycomputer:~/work/mathgames/eco101$ ./lsbmsb 0x1E24000 | hexdump -C 00000000 44 61 74 61 20 69 6e 20 6d 65 6d 6f 72 79 3a 20 |Data in memory: | 00000010 33 31 36 30 34 37 33 36 00 00 00 00 00 00 00 00 |31604736........| 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000030 31 45 32 34 30 30 30 00 00 00 00 00 00 00 00 00 |1E24000.........| 00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000050 4c 53 42 31 73 74 20 65 61 72 6c 79 3a 20 20 20 |LSB1st early: | 00000060 00 40 e2 01 00 40 e2 01 01 e2 40 2d 01 e2 40 00 2d 2d 2d 2d 2d 2d 2d 2d |......@.--------| 000000d0 0a |.| 000000d1 me@mycomputer:~/work/mathgames/eco101$ ./lsbmsb 0x1E24000 | hexdump 0000000 6144 6174 6920 206e 656d 6f6d 7972 203a 0000010 3133 3036 3734 3633 0000 0000 0000 0000 0000020 0000 0000 0000 0000 0000 0000 0000 0000 0000030 4531 3432 3030 0030 0000 0000 0000 0000 0000040 0000 0000 0000 0000 0000 0000 0000 0000 0000050 534c 3142 7473 6520 7261 796c 203a 2020 0000060 4000 01e2 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 0000070 534c 3142 7473 6620 6c75 3a6c 2020 2020 0000080 4000 01e2 0000 0000 2d2d 2d2d 2d2d 2d2d 0000090 534d 3142 7473 6520 7261 796c 203a 2020 00000a0 0000 0000 e201 2d40 2d2d 2d2d 2d2d 2d2d 00000b0 534d 3142 7473 6620 6c75 3a6c 2020 2020 00000c0 0000 0000 e201 0040 2d2d 2d2d 2d2d 2d2d 00000d0 000a 00000d1 Now you may be saying you'd rather not be looking at any of that, but if you really had to, if you had no choice but to look at one or the other, which would you rather look at? LSB1st or MSB1st? Remember, the numbers you are looking for will usually be mixed with text, and the text will likely help you find what you are looking for. If the text gets byte-swapped on you, it's going to be just that much harder. The salesman says he has tools to let you look at the data, so you don't have to worry. 
That's all well and good, but it makes you dependent on the vendor, even when the vendor has time and budget to help you. When the vendor doesn't have time or budget, wouldn't you rather be able to use simple tools, at any rate -- as a start before you set to making your own tools? Somebody usually pipes up with, "Well, if you guys would all join us Little-Endians, if everybody did it all the same, there'd be no problems!" So. From now on, everyone does Little-Endian. Blogs? News aggregators? Textbooks? Novels? Are we going to go back and reprint all the classics with Little-Endian numbers? - 71 September 7871? Somebody pipes up about now saying everything I'm talking about is human stuff, not technical at all. The Unicode Consortium determined that they did not want to be caught up in the argument. So they decided that Unicode characters could be encoded either direction. They even figured out how to put a flag called the Byte Order Mark at the beginning of a stream of Unicode text, to warn the consumer of the stream what order to expect the characters in. Characters, you see, are not integers after all, contrary to the opinions of many a respected computer scientist. Little-Endian byte order enforces this factoid. Well, the folks who defined the Internet decided they did not want to be reading data streams and crossing their eyes to read the IP addresses and other numeric data buried in the stream. So network byte order is the one that is easy to read, most significant first. If one hundred twenty-three thousand four hundred fifty-six is in the data stream, it shows up as 123456, not 654321. In setting up the demonstrations of byte order differences, I went to some pain to demonstrate one big difference between the byte orders. If you are looking carefully at the dashes, you may see how least significant first allows you to optimize math. If you can track the presence of carries, you can stop adding small numbers to big ones as soon as the carries disappear. Looks interesting, right? Tracking the carries takes more processor time and resources than simply finishing the addition out. This is one of those false early optimizations that has historically killed a lot of software projects. Worse, the programmer can look at one of these and think a particular case will never generate carries. This is almost always self-deception. The assumptions required to keep the carries from happening turn out to be invalid in the end-user's context just often enough to cause hidden bugs of the integer overflow variety. Isn't that strongly enough stated? When we humans look at numbers, we perceive them as text. That allows us to do many things without consciously thinking of them, like move to the end or align them. CPUs have to do the same things with numerical text, as we can intuit by looking back at example 2. When CPUs work with numbers, they have to figure out all sorts of things about the number which we subconsciously read from the text -- Is there a sign? Is there a decimal point? How big is the number? If there is no text, they have no clue ... unless the programmer has already told them. Here is perhaps the strongest argument against least significant first: It induces bugs into software. Some people think it's a good thing to induce bugs into software. They think it guarantees their after-market revenue stream. I think there will always be enough bugs without practicing dodgy false optimizations, but what do I know? I've wasted two days I didn't have tilting at this, erm, rainbow.
(Or chasing this windmill, maybe?) One of these days I'll get someone to pay me to design a language that combines the best of Forth and C. Then I'll be able to leap wide instruction sets with a single #ifdef, run faster than a speeding infinite loop with a #define, and stop all integer size bugs with a bare cast. And the first processor it will target will be a 32/64 bit version of the 6809 which will not be least significant bit first.
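As a concrete illustration of the network-byte-order point above, here is a minimal C sketch using the standard htonl() conversion; the memcpy probe is just a common idiom for looking at the raw bytes, and what the "host order" line prints depends on the machine it runs on.

/* Minimal sketch: host byte order vs. network byte order. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl() -- POSIX; Windows builds would use winsock2.h */

int main( void )
{
    uint32_t value = 123456;          /* 0x0001E240 */
    uint32_t wire  = htonl( value );  /* network order: most significant byte first */
    unsigned char host_bytes[ 4 ];
    unsigned char wire_bytes[ 4 ];
    int i;

    memcpy( host_bytes, &value, 4 );  /* however the host happens to store it */
    memcpy( wire_bytes, &wire, 4 );

    printf( "host order:    " );
    for ( i = 0; i < 4; ++i )
        printf( "%02X ", host_bytes[ i ] );
    printf( "\nnetwork order: " );
    for ( i = 0; i < 4; ++i )
        printf( "%02X ", wire_bytes[ i ] );
    putchar( '\n' );

    /* On a little-endian host this prints 40 E2 01 00 / 00 01 E2 40;
       on a big-endian host both lines read 00 01 E2 40. */
    return 0;
}

Piping the raw bytes through hexdump -C, as above, shows the same contrast.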
http://defining-computers.blogspot.com/2017/04/
CC-MAIN-2018-26
refinedweb
3,142
66.47
TypeError: "x" is not a function The JavaScript exception "is not a function" occurs when there was an attempt to call a value from a function, but the value is not actually a function. TypeError: Object doesn't support property or method {x} (Edge) TypeError: "x" is not a function Error type What went wrong? It attempted to call a value from a function, but the value is not actually a function. Some code expects you to provide a function, but that didn't happen. Maybe there is a typo in the function name? Maybe the object you are calling the method on does not have this function? For example, JavaScript Objects have no map function, but the JavaScript Array object does. There are many built-in functions in need of a (callback) function. You will have to provide a function in order to have these methods working properly: - When working with Arrayor TypedArrayobjects: - When working with Mapand Setobjects: Examples A typo in the function name In this case, which happens way too often, there is a typo in the method name: let x = document.getElementByID('foo'); // TypeError: document.getElementByID is not a function The correct function name is getElementById: let x = document.getElementById('foo'); Function called on the wrong object For certain methods, you have to provide a (callback) function and it will work on specific objects only. In this example, Array.prototype.map() is used, which will work with Array objects only. let obj = {a: 13, b: 37, c: 42}; obj.map(function(num) { return num * 2; }); // TypeError: obj.map is not a function Use an array instead: let numbers = [1, 4, 9]; numbers.map(function(num) { return num * 2; }); // Array [2, 8, 18] Function shares a name with a pre-existing property Sometimes when making a class, you may have a property and a function with the same name. Upon calling the function, the compiler thinks that the function ceases to exist. var Dog = function () { this.age = 11; this.color = "black"; this.name = "Ralph"; return this; } Dog.prototype.name = function(name) { this.name = name; return this; } var myNewDog = new Dog(); myNewDog.name("Cassidy"); //Uncaught TypeError: myNewDog.name is not a function Use a different property name instead: var Dog = function () { this.age = 11; this.color = "black"; this.dogName = "Ralph"; //Using this.dogName instead of .name return this; } Dog.prototype.name = function(name) { this.dogName = name; return this; } var myNewDog = new Dog(); myNewDog.name("Cassidy"); //Dog { age: 11, color: 'black', dogName: 'Cassidy' } Using brackets for multiplication In math, you can write 2 × (3 + 5) as 2*(3 + 5) or just 2(3 + 5). Using the latter will throw an error: const sixteen = 2(3 + 5); alert('2 x (3 + 5) is ' + String(sixteen)); //Uncaught TypeError: 2 is not a function You can correct the code by adding a * operator: const sixteen = 2 * (3 + 5); alert('2 x (3 + 5) is ' + String(sixteen)); //2 x (3 + 5) is 16 Import the exported module correctly Ensure you are importing the module correctly. An example helpers library ( helpers.js) let helpers = function () { }; helpers.groupBy = function (objectArray, property) { return objectArray.reduce(function (acc, obj) { var key = obj[property]; if (!acc[key]) { acc[key] = []; } acc[key].push(obj); return acc; }, {}); } export default helpers; The correct import usage ( App.js): import helpers from './helpers'
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/Not_a_function
CC-MAIN-2021-17
refinedweb
550
59.8
otherstuf 0.902 Given recent versions (e.g. 0.9.10) of stuf, one could simply use from stuf import chainstuf. This portion of the otherstuf sidecar is now superfluous. For counterstuf: from otherstuf import counterstuf c = counterstuf() c.update("this and this is this but that isn't this".split()) c.total = sum(c.values()) print "everything:", c.total print "'this' mentioned", c.this, "times" print "'bozo' mentioned", c.bozo, "times" print c Notes - Commenced automated multi-version testing with pytest and tox. - Now successfully packaged and tested for Python 2.6, 2.7, 3.2, 3.3 and PyPy 1.9 (aka Python 2.7.2). Previous exceptions for Python 2.6 have been eliminated. - Now packaged as a package rather than modules. This changes the import from from chainstuf import chainstuf and from counterstuf import counterstuf to from otherstuf import chainstuf and from otherstuf import counterstuf, respectively. - Downloads (All Versions): - 42 downloads in the last day - 445 downloads in the last week - 1423 downloads in the last month - Author: Jonathan Eunice - Keywords: Counter ChainMap stuf attribute mapping nested - Categories - Development Status :: 4 - Beta - Intended Audience :: Developers - License :: OSI Approved :: BSD License - Operating System :: OS Independent - Programming Language :: Python
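There is no chainstuf example on this page; the following sketch assumes it is constructed like a ChainMap (earlier layers win on lookup) with attribute access layered on top, in the same Python 2 print style as the counterstuf example above. The layer names and keys are made up for illustration.

from otherstuf import chainstuf

defaults = dict(color='red', width=10)
overrides = dict(width=20)

# assuming ChainMap-style lookup: the first mapping that has the key wins
c = chainstuf(overrides, defaults)

print "width:", c.width    # 20, taken from overrides
print "color:", c.color    # 'red', falls through to defaults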
https://pypi.python.org/pypi/otherstuf/0.902
CC-MAIN-2014-15
refinedweb
206
52.26
Example code of jQuery animation: jQuery is a very powerful framework. It provides many animation effects that can... Animating Images in Java Application: This section shows you how to create an animation with multiple images. You can see how animation has been implemented in the following J... Java Sleep Thread: Java Thread sleep() is a static method. It sleeps the thread for the given time in milliseconds. It is used to delay the thread. It is used in Applet or GUI programming for animation. Java Sleep Thread Example: public class... Animate Div in Jquery: ...within one animation. $( "#btn" ).click(function() { $( "... Java Shapes Bouncing App: hi guys, im trying to implement the following into my java app code: Here's my code: first ill post the main class, then the animation panel class, then moving shape and moving rectangle class. Sliding window protocol: Hello Everyone. Will anybody send me the code to make sliding window protocol animation using java swings. I have to do... only animation to show how packets are moving from one end to the other... Thank u. Free Java Applets: There are many free java applets available on the web that you can use in your website. These applets include menu, clock, slide show, image animation. Swings - Java Beginners: ("java-swing-tutorial.JPG","My Website"); If the above is not correct.. Can...("Animation Frame"); th = new Thread(); lbl = new JLabel(); Panel...){} } } For more information on Swing visit to:... VoIP Java: Java VoIP Telephony Servers? My company develops pure Java web.... Java Chart Designer 4.0: Java Chart Designer is a powerful...
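To make the Thread.sleep() animation idea in the snippets above concrete, here is a minimal sketch; the class name and the moving label are invented for illustration, and a Swing Timer would be the cleaner approach in real code.

// Minimal sketch: using Thread.sleep() to pace a simple Swing animation.
import javax.swing.JFrame;
import javax.swing.JLabel;

public class AnimateLabel {
    public static void main(String[] args) throws InterruptedException {
        JFrame frame = new JFrame("Animation Frame");
        JLabel label = new JLabel("*");
        frame.setLayout(null);
        label.setBounds(0, 50, 20, 20);
        frame.add(label);
        frame.setSize(300, 150);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        for (int x = 0; x < 250; x += 5) {
            label.setLocation(x, 50);   // move the label one step to the right
            Thread.sleep(100);          // delay so the motion is visible
        }
    }
}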
http://www.roseindia.net/discussion/18831-Line-Animation-in-Java.html
CC-MAIN-2015-48
refinedweb
293
55.44
by David Amid and David Boaz David Amid is an IBM master inventor and the technical lead of the Watson Decision Analytics team based in the IBM Research and Development Lab in Israel. David Boaz is a Decision Analytics researcher who works out of the IBM Research and Development Lab in Israel. Watson Tradeoff Analytics has been out for over five months. Since then, we have been asked on numerous occasions: How does Watson Tradeoff Analytics actually help me to make better decisions?? In this blog post we will try to answer this question. To make it clear and enjoyable we decided to follow Explained Visually’s paradigm — making difficult ideas intuitive. Watson Tradeoff Analytics has three pillars: - Pillar 1: Finding the top options, out of many possible options - Pillar 2: Visually exploring these options - Pillar 3: Simulating human judgment to guide you through the decision-making process Pillar 1: Finding the top options In most scenarios we all want to find the top option, not the top options. But, life is not that easy. In every decision we make in life, we have a dilemma around which option to choose, because there is always a tradeoff between competing goals. And it’s not always about money! For example, “what do I wear today” entails: how appropriate it is vs. how comfortable it is vs. did I wear it last week? “What car do I buy” involves: is it safe? How much does it cost? What is its fuel consumption? Serious decisions are the most difficult, like “what treatment best meets a patient’s specific needs?” Is it the treatment with a high success rate, or the one with low adverse affects, or simply the one they can afford? If there is no obvious best option, how do we find the option that best meets our preferences and needs? The best approach is to first identify the top options, because they represent the best deal you can get out of the tradeoff. Filtering versus Tradeoff Analytics What do people do if they don’t have Watson Tradeoff Analytics? How do they identify the top options? The common approach is to use traditional filtering. This is available on every shopping site. However, Filtering has drawbacks. In the example below, we have a set of mutual funds, and we want to find the top investment option. Relying solely on the traditional filtering approach, we've begun to narrow down the number of options by setting the Risk threshold to 0.8. Did you notice that when we started to filter by Risk, we missed good opportunities for high Mid Term Value investments? Those opportunities are now grayed out. Even more problematic, we've filtered out quite a lot, but we are still left with many options (blue circles) that aren't actually good options. There are options marked with an orange cross that are better than the blue ones in both Risk and Mid Term Value. The options marked with an orange cross are Watson Tradeoff Analytics. Those are the top options. In economics, this is called the Pareto Frontier, or Efficient Frontier (watch the movie), in Mathematics it is called Pareto Optimality. For each of the options marked with an orange cross, there is no other option that is better on both Risk and Mid Term Value goals. Try it out! You can change the goals and see how the graph changes, or you can play with the filters to understand how filtering is misleading. Identifying top options OK, so we understand that there is more to it than filtering. But how can one identify those top options? 
In our example, the set of top options is pretty easy to discern on this (2D) diagram, since they were the ones closest to the "you want to be here" point. However, in practice, this gets complicated, since it is rarely the case that we have only two goals or criteria. For example, selecting a car requires us to balance between safety, price, environmental impact, comfort and many other variables. In the Mutual Funds example, an individual making this decision would actually be looking at many other goals besides our two selection criteria, such as maximizing our Short Term Value, Mid Term Value, and Year-to-Date return (YTD) while minimizing our risk. So can we look at each two goals separately? Unfortunately, the answer is no. If we apply Pareto Optimality multiple times, each time for only two goals, the results between each set of two goals will differ. On the visual below, hover your mouse over a specific option. When you hover over an option, that option is highlighted on all of the other views. Notice that an option that is considered "top" (orange cross) in one view is usually not "top" in at least one other view. (If you find one that is "top" on all views, it means that it is best in every respect. If this data was real and up-to-date, we would have told you to drop everything and go buy it!) Luckily, Pareto Optimality can take into account more than two objectives at a time. That is the not-so-secret sauce behind the first pillar of Watson Tradeoff Analytics. When Tradeoff Analytics gets its inputs (a table of options and a list of the objectives), it first executes a multi-objective Pareto filtering process, which picks for us the top options with respect to all of the objectives in that list. This allows users of Tradeoff Analytics to focus only on the top options, excluding the inferior ones, thus making the decision process much easier. With the top options list in hand, we are now ready to go to the next stage - visually exploring the trade-offs between those options. Pillar 2: Visually exploring your top options So now that we have found our top options (and got rid of the inferior ones) - how can we distinguish between them? How can we better explore and understand their trade-offs? Using conventional diagrams, we are limited to considering two objectives at a time. So how about using colors or size for additional attributes on the diagram? On the visual below, we added size to the bubble (or cross) as an additional visual attribute. Well, it's a start, but humans tend to associate different importance to different visual gestures such as size, position, and color, since it is just easier to discern some visual cues. This may cause unintentional bias towards one goal. Finally, what will we do if we have six objectives? What additional visual attributes can we add? To overcome this, the Operational Research community has traditionally used a visualization called Parallel Coordinates. Unfortunately, Parallel Coordinates only do a decent job when you have either just a handful of options or a great many of them (enough for trends to emerge). But in the Mutual Fund example, where there are 115 funds, and 16 are considered top options (this time, according to 5-objective Pareto optimality), you won't be able to reveal the tradeoffs, since you have more than a few options, but not nearly enough to look for trends. It's simply overwhelming (see below). Once again Watson Tradeoff Analytics can help you. Watson Tradeoff Analytics provides a Client Library with novel visualizations that foster exploration. Consider the map below.
The closer an option (a colored bubble) is to one of the corners/vertices (where each represents a goal), the better its value on that specific goal and the greater the intensity of the sector that corresponds to that goal. In the Watson Tradeoff Analytics tool this is an interactive visualization where you can explore options and the tradeoffs between them. Pillar 3: Simulating human judgment to guide you through the decision-making process So, what do people do when they explore funds? They compare between them, and they employ their judgment in order to decide if the tradeoff between one goal and another is worth it. But we are humans, and in many cases, a computer can employ similar judgment and save us from making mistakes or missing out on more suitable options. That's the purpose of the third pillar of Tradeoff Analytics: simulating human judgment to guide you through the decision-making process. This example scenario shows how Tradeoff Analytics can simulate human judgment to predict that you may mistakenly consider dropping a mutual fund before you can benefit from its other significant gains: I am a delayed gratification kind of a person, so I am interested in funds that will benefit me in the long run. So I set Long Term to be above 13%. Now, I can obviously select the mutual fund that is highest on Long Term revenue, but I am aware that this is highly risky and if I change my mind in a few years, I won't have as good Mid Term gains. So I select the one to its right, marked with a star. However, Watson Tradeoff Analytics shows me that I can get more out of another option. If I am willing to endure a small, insignificant change in Mid Term Value, I can have large gains on YTD, Short Term Value, Risk, and even some Long Term. What I eventually select is completely up to me, but Tradeoff Analytics helps me to see a more complete picture. Watson Tradeoff Analytics provides you with a unique way of making informed decisions. Personally, I make all decisions this way (just don't tell my wife… Oops, too late?). Watson Tradeoff Analytics has graduated from Beta and is now Generally Available.
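To make the multi-objective Pareto filtering behind Pillar 1 concrete, here is a minimal Python sketch; the fund names and scores are invented for illustration, and this plain quadratic-time filter only demonstrates the idea, not the service's actual implementation.

# Each option is scored on several objectives; higher is treated as better here,
# so a "minimize" goal such as risk is stored negated.
funds = {
    "Fund A": (0.12, 0.30, -0.80),   # (short_term, mid_term, -risk)
    "Fund B": (0.10, 0.35, -0.40),
    "Fund C": (0.09, 0.28, -0.90),   # dominated by Fund A
    "Fund D": (0.15, 0.20, -0.60),
}

def dominates(a, b):
    """a dominates b if it is at least as good on every objective and better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(options):
    return {name: scores for name, scores in options.items()
            if not any(dominates(other, scores)
                       for other_name, other in options.items() if other_name != name)}

print(pareto_front(funds))
# Fund C drops out: Fund A is at least as good on every objective and strictly better on some.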
https://developer.ibm.com/watson/blog/2015/07/29/what-makes-watson-tradeoff-analytics-is-so-different/
CC-MAIN-2017-47
refinedweb
1,595
61.46
- DataGrid refresh - Please help, i am tired of VS.NET nonsense error message - Any user control tutorial ? - Need DateTimePicker to display blank - System.Environment.Exit - Printing Escape codes to a printer - Adding MyComputer Icon to TreeView - Check code - PictureBox DragDrop and Click coexist - process usage problem with filling a large list view - ShowDialog() problem - Intercepting Another Application's Events - C Api call from VB.Net - CollectionBase - combobox - DataRelations - Convert string to DateTime and compare with another???? - How to make the cursor follow the screen pointer - Janus controls suite for net v1.0 - Background colour of column in datagrid - ColumnHeader Control? - Creating a Setup file - Use DataEnvironment in VB.NET - how to find the application is already running on a PC - Redirect Output - What is the equivalent to writing to the server? - Install WMI with a Setup - Referring to main form from nested controls? - Database Connectivity - Form Closing Event - IQueryCancelAutoPlay::AllowAutoPlay Method - Array Syntax? - Attn Jay Great Help THANKS RE Is there a function which takes string converts it to decimal WITHOUT rounding ? - Dynamic temp. datagrid col.gen. -Session access inside a class inside a UserCtrl - Is there a function which takes string converts it to decimal WITHOUT rounding ? - Default Images for ImageList?? - AutoScroll question... - distributing .net framework - Hosting Windows App via Browser - execute string - data bind list view? - use splitter with datagrid on form? - SelectSingleNode and Regular Expressions? - Novell Control's - Constructors with arguments - Attributes for Enumerations - Friendly Name - QueryInterface for interface Word._Application failed. - I'm shocked - Inheriting from UserControls - Need help with DataGrid in VB.net - Save DataGrid conetents in a text file? - get user object - Setup project returning error - Assembly Reference Problems - popup blocking - Starting / Stopping a Windows Service - Hide a column in a datagrid - List of all Forms in Project - ExtendedDataGrid - how pass variables within diferent assemblies ? w/t - Login timeout - Windows Form startup position - Calling Javascript Function from within VB Procedure? - moving Listview column changes SubItems indexes in the collection??? - pass form to function - Strong-Typing? Or Nested Collections - dynamic windows service name - ListView Double Click not Firing - threads, forms, and com automation - HttpWebRequest.GetResponse() problem - Instructions... Wonderful thing... - Datagrid - Replace methode, Replace Function, Stringbuilder replace, Regex Replace, Split - NET controls - remove safely USB memory stick - SoapFormatter, Serialization of a DictionaryBase derived class - Please help - SoapFormatter, Serialization of a DictionaryBase derived class - Please help - Using a datetimepicker in a datagrid - Excel: Check for borders of cells in all workbooks (vb.net) - How PtInRegion works under VB.NET - AutoScroll in a UserControl - how do i get the value of my primary key in datagrid? - execute statement - Date Time Picker For Time Alternative - how to access 2 namespace? 
- Disposing of a log in window - Reflection from System assemblies - Threading Question - Accessing forms controls in module - how to save copy of project from inside visual studio - Cant attach to dll error 126 - Finding string in text file - sending html formatted mail through vb .net - Class Definitions and Performance - Skinning - What code can my program Email me with if a condition happens? - help someone read data from a web page - Cannot get rid of these exceptions - Browser Control Events ? - 'Sub Main' was not found .... - Assembly language link with VB.NET - Regular Expression brain teaser - Sending email! - Formatting textbox on the fly - List Box - CheckBox - Writing client-server sysetem using VB.NET - Word Template with XML - "Filter" data from XML datafile into datagrid - What langugage is used to program EA Games Battlefield 1942? - Caspol / One Touch Deployment / Security Issues - Any Idea's ? - VB6 to .NET - What can slow down VB.NET windows database applications - make pop account on mail server - Validating file names - Outlook programming - Exception setting SocketOption - Dynamic MDI Menus? - mulitline textboxes - Terminal Server ActiveX Control - writing to CD from visual basic - Best Resource for learning .NET (from a VB6 User's point of view) - Help with vb.net controling excel and excel is hanging in mem - ### vb.net DataGrid problem, cant make unbound columns ### - Project Items - What's the purpose? - Re : Re ## Startup Forms and Splash screen Problem - Please helpme!! ## - Compress dataset sent over a web service - listview index - Drag and Drop on Web page - Active directory - combobox in datagrid scrolling behavior - Help with Crystal Report - ## Startup Forms and Splash screen Problem - Please help me!! ## - determing when form has stopped moving - 2nd Post Continuous subForm Tabular, Not DataGrid - reflection and picturebox - Things left to be desired in dot net - searching telnet example. - need some help with access types. - TAPI or similair? hat do I need? - Include dotnetfx and mdac in setup - VB.NET Project with no SLN or VBPROJ file - Beginner question - Class library search order? - Problem with bitmap and Crystal Report - Kill the desktop - Preventing External File Access - Creating modular program? - String Pattern Search Ideas - Referencing MSHTML Library - Basic Listview question - Connection syntax not right - VB6 to VB.NET - HTML Editor Control? - Convert string to decimal - data at the root level is invalid - papersource - SOLUTION : indirect reference to assembly - General Question - ShowDialog() problem - set custom date format for bound textbox? - Split string to char ? - Tab Control Problems - Get Result OF Console - connecting remore access 97 database - Check for another application running in Windows - ARRAYs in VB.Net - image from web - Datagrid - Configuration appsettings - Extending the string class? - Move to a previous record while on a new record - HTMLHelp Search - WebDAV Protocol - PtInRegion always returns true - Trying to Deserialized from a String variable holding the SOAP representation. - FTP "LIST" command help - data bound listbox + data table select command - .Net and pocket pc 2003 and sql ce - Enumeration using reflection - code generator received malformed input - Using Dexplore in VB code - .NET Components, COM and Firewall??? - Controls For Time - on closing event of the form - need complete bound form examples - VB.NET Crystal Select statement for Viewer - Private class public Members? 
- entering data in a datagrid - About L2CacheSize - Where is the form minimize event? Thanks (Null) - What happens with this 2 alternative - Grouping 1st 2 characters (2nd try) - Grouping 1st 2 characters - Double buffering on the Paint Event - Movie to Thumbnail - VB.NET service does not start - VB.nET service doesn't start - Image button. - Quick Question - dll and import statements problem - Exit program - How to send a char to TextBox - Determining the current procedure name and the namespace for the class it is in - How to limit my VB.NET app to show only one taskbar entry - Importing namespace does not work?? - DateTimePicker - Trying to get a Date Breakdown - Help with data grid - PrintDocument: PaperSize of custom paper - please help - Default values in Access database not applied to Dataset datarow... - Return DataRow or Row index from DataView - clearing a listbox - HOWTO?: decimal property to supply null - Sending arguments to a context menu event handler - RegEx Replace - Network State - custom drawn databound listbox - Serial Communications: best way to start?? - data types - building a sort - How to view MS Access reports from VB.NET - specifying selected item in a listbox - API call to GetTopIndex not working for Listview (vb.net) - Base table - Windows Service Description - Nested Collections - Cast from 'DBNull' to "string' is not valid - Client/Server sockets and WMP Problem! Need help! - Generating XML from Excel / Trapping saved Excel file using FileSystemWatcher - Is it time to change from vb to VB.Net? - Showing treeview nodes in Bold - How to Access Data Using Query? - ?disable txtbox without graying out font? - dbcontrol - Print button on PrintPreview dialog box not working?! - Focus of RadioButton - etched line look quickly - How do I step through my DLL? - Timer as form inactivity monitor - tick event firing unexpectedly - TO LEARN PROGRAMMING - using windows services - Databinding - Exporting Table in a Dataset to Excel - Find Dialog - Urgent: Outlook with VB.net - Formsize larger than screensize - How do I do this? - Comparison Operators - Looping through a bunch of text boxes - Create DLL in VB.NET - Vb.Net & Win98 - Update a Progress Bar asynchronously - Treeview Help - Problem with different time zone - How to use QBColor in VB.NET? - test production - How to subclass a treeview control - Exposing System.IO.FileInfo in a PropertyGrid - Grab filename from drag&drop into ListView? How? - how can i browse the pictrue quickly? - please help me.textbox do not work well - building vb.net with nant - Help with Directory Properties - Directory Properties - Directory Properties - Problem with connect database(SQL Server) - Reconnecting to a remote MessageQueue when it has to reboot - distributing windows services - Purging Dataset - Adding Double Clcik to DataGrid - Exporting to Word from Web App - Shift-F2 Functionality in VB.Net - XSLT extension function fails when not running under aspnet_wp. - Character separated lines into a collection - Possible to get project's output automatically registered to the GAC on compile? - Design question - Edit and Deleting Record in a listview in VB.NET - CObj() Function -- What's the point? - Automating Access Report Printing in VB.NET - how to de-select datagrid row with currency manager? - Print Icons - datagrid - wmv playtime (?) - test - Weird VB .Net Question - datagrid - Is there a Now time function that gets the Servers Now? 
- datagrid to dataset - impersonation - typed dataset for crystal rport - Tab pages rearrangeing themselfs - split question - How to Map IIS Virtual Directory to Path Name - Event Logging - what is wrong? - fastest, easiest way to remove item from a string array? - Simple Stock Control - ColumnHeaderHeight in DataGrid - datagrid question - DataGrid - Multiple Selections - Passing Classes - cdate is slow... any alternatives? - wordwrap a text with stringtrimming does not work properly (bug?)! - requirement to cast from one custom object to another custom object - Crystal report - suppresion formulas - Continuous sub Form Tabular, not Data Grid - CommonAppDataRegistry - How can I tell if the system is 'idle', or the screen saver is running, or the machine is locked? - Self updating applications - Possible to read information for the Application Pool on IIS 6 - Printing - Select Media Type from VB - Text Height - Printing a series of PDF files - cant populate comboboxes in XP - System Restore WMI classes - HTML in Textbox - How to create a file link in RichTextBox ? - Force close of open file - Do any of the .NET languages offer a way to assure recovery after a stack overflow? - mailmerged document by using automation from VB.NET to OFFICE 2000 - VB .NET Enumerating Machines - Form Size Problem - Help with StreamWriter and StreamReader... - way to check if program is already running - how to check if a form is already open - Try...Catch question - Accessing and displaying SSL web pages and cookies from a windows form - SQL operation in Server Explorer differ from Enterprise Manager - can't get an answer to a crystal rpt question re database connection - Cor, Herfried, Armin, Ken Tucker? - Character Escapes Don't Work in VB Regex Replace? - Debugging an app - ComboBoxes in vb.net - Select and modify data within a RichTextBox by mouse click - XML to plain text - Using fax in vb.net - Linking to Access in VB - Calculated Columns in a Datagrid - regular expression question - manually loaded assembly fails to find its referenced assemblies - WTF?? - textfile into datagrid - How do you set focus to a control - Print RTF text that overflows one page - VB SCript in .NET? - Hiding a tab in a TabControl? - Update progress bar to reflect database update? - Visual Studio Memory Usage - Creating COM objects with VB.NET - Registry Keys and Administrators - Please HELP. VS.NET report wrong compile errors - listview typing - download files from a server? - conversion problem string to GUID - Format Datagrid Unicode Chars - ini file - VB.NET Security Question - Question related to dataset - Inet control equivalent - VB.NET Deployment - Checking for user-modified files - howto use params with insertCmd for DataAdapter? - Setting variables at Compile Time using #Const - Images within Richtextbox... - Inheritance - How can I improve this code please? - web browser control - Working with the Windows Messenger API - ghostscript leaves windword processes - Windows styles - Looking for suggestions: VB.NET Learning Project - IIdentity casting problem - Registering font at runtime under Windows XP - Reflection - Distribution Question - client/server - Themes - reading a file - VB.NET windows service doesn't start - prevent form from disposing - help with comparing strings.... - MouseUp-Event gets 'lost' - ASP.Net - disabling threestate of checkbox in datagrid (datacolumn) - Add a Timer to a form at run time - vb.net deployment - Executing the code contained in a string? 
- About Dataset & DataGrid - Crystal reports help possible here? - Dataview displaying several tables - Howto detect the calling program of a library - license question - IE Toolbar in vb - Ftp client example in VB.net - Help for a vb.net application - global changes for crystal reports connection strings - Format method of String Type - Problem with MDI children maximization (maximize bug) - Is there any vb.net code to email from a webform? - Numeric to string problems. - Try this - Deleting Tabs from Tab Control - .Net equal to the VB Right string function - The ListView Control? - Raise Events vs Timmers recognizing thread instantiations. - keypress question - XmlReader - RowFilter in a dataview - Timer resources - Killing a thread - Impersonation half way working - Position MDI child at lower right corner of MDI parent - error when loading an old style dll - Windows Control Library - convert string variable to uniqueidentifier - Vb Setup/Obfuscating questions - Issue with calling a webservice from my IP - Listview Visible Row Count - Problem with third party dll, and application shut down - help! - Printing Text - transparency in backcolor does not work - newbe: create new event with designer - Need help with regular expression - how big can disconnected dataset be? - Outlook/Mapi Object, Help saving attachments using VB.NET Forms - catch event in non-visual class? - Bug Tracking Software (Web Based?) - COMInterop problem ! - Another Data Grid question - Name/Value Properties - excel application does not shut down after program exit - Scroll button not working now in VB6 - Ping code - Install internet package - Add new IFieldObject in Crystal Report - web service access - VB vs VB.NET - Reflection and PropertyInfo - HowTo get proxy settings from applications? - shell,console and standard input and output - Re. deriving from eventArgs( ) - XP Style question - Help on Line Control for Vb.net - eventArgs( ) in customEventArgs( ) - purpose? - eventArgs( ) in customEventArgs( ) - purpose? - eventArgs( ) in customEventArgs( ) -Purpose? - ascii length?? - Toolbar Button / Dropdown_click if firing twice - Stepping into - A couple questions - SmtpMail error - Try/Catch does not detect out of stack space error, under certain circumstances - ListView issue - relink access tables using vb.net coding.... Help - What is the difference between If and #If? - is there another product - other than Visual Studio .net - Addhandler & Collections - Regular Expressions - COM interop q - HTML code for email - Using VB .NET 2003 project in VB .NET 2002 - ListBox's first shown item - Cloning an object - UCOMIStream, MSHTML and WebBrowser control Persistence Problem - Daily calendar - HHTP file download: - Programatically Add Tab with Controls - view Trace output OUTSIDE of the ide - Deployment Problem - Multicolumn combobox in datagrid - Help part in a VB program - group of imports statements - Updating Textbox Text - Long String - Docking / Auto Hide Toolbars - Thanks for the help.... - Homework help for If Then Else statements Could someone check my work? - MySQL, BDE and VB.Net 2003/ADO.NET Question - VB.Net and MSDE Replication - XP Theme Colours - Draggable controls possible at run-time? 
- Overwriting "Equals" - Casting problem with C# DLL - calling a function in a managed .dll - losing waitcursor - Open Application Files - Copy a folder/Directory - clecklistbox vertical size - Showdialog from 1.0 to 1.1 framework - form.text = "" and controlbox = false - GetResponseStream - Scanning Documents from VB.Net 2003 - Please Help! Arraylist of structures - Performance of SubString and IndexOf methods - Calling DLL from VB.NET - Calling unmanaged dll from VB.NET - After days and hours, could not achieve to do this ... - Clarification in COM - converting to excel - strings that look like numbers - Concurrent routines - converting to excel and column data with carriage returns - Disable Outlook 2002 / 2003 Warning when accessing the Outlook object model - var name same as type-declared-as' name - How convert path > 8.3 - Web Scraping with WebClient - windows service calling another application - Not a member of the debugger users group - MSFT and random numbers - Typed Dataset and Find row method - Search for Images in Word using VS Tools for Office Systems - How to Autosize DataGrid Columns - How to connect to device - progress bar - what am I missing? - Windows Impersonation - a question about authoring user controls - External Window Location - Multi Monitor - referencing existing object - Thread A notifying Thread B of an Event - Global Change to DataGrid - SQL Query - Declaring constructor interfaces [in interfaces] - Capitalize First letter of keywords - Passing Datasets as Object - Strict On - How get type? - Structure within or outside a class? - Load a new dataset from Excel file?? - COM object separated from its underlying RCW error - Setting MDI Child Form Size to fill parent without maximising - ComboBoxes - Change color of a Statusbar? - is there a way to do a C++ style friend class/function etc - Disable Excel Macro warning? - string wrap in between sql statement - flatstyle setting across all app change... - Code Window - Textbox cursor position - Unable to connect to SQL database - Problem with using Excel.Application object - ListViewSubItem - User Control in a Library - String and linefeed - MSI - need a tutorial - Printing from classes and modules - Drag & Drop Object implementing an Interface - Log off user - How to check a network server is available - Code Access Security - Anonymous array creation possible in VB.NET? - CreateWindowStation - very strange behavor - Doubt about Visual Basic 6.0
https://bytes.com/sitemap/f-332-p-97.html
CC-MAIN-2019-43
refinedweb
2,847
56.55
Base types, Collections, Diagnostics, IO, RegEx… This post discusses features in the preview version of MEF, and some details may change between now and the full release. In the first version of MEF there are only two notions of lifetime: A shared global lifetime or a per instance lifetime. In MEF 2 Preview 4 new support has been added to enable a finer grained control over the lifetime and sharing of parts. Before we can cover those changes in detail, we need to introduce ExportFactory<T>. Let us consider a simple application scenario of a RequestListener, which spawns a RequestHandler, which uses a DatabaseConnection to connect to a data source (Fig 1): Let us also consider the following parts that implement the RequestHandler and DatabaseConnection. [Export] public class RequestHandler { [ImportingConstructor] public RequestHandler(DatabaseConnection connection) { } } [Export] public class DatabaseConnection { public DatabaseConnection() { } } To better illustrate the new capabilities we’ve added in MEF 2, some of the scenarios that could be achieved in MEF version 1 are discussed below. When we first start building our app, we are being very conservative about resources and not knowing our load we spin up a single instance of RequestHandler and DatabaseConnection We could instantiate an instance of the RequestHandler using the following code. TypeCatalog global = new TypeCatalog( typeof(RequestHandler), typeof(DatabaseConnection)); var container = new CompositionContainer(global); var requestHandler = container.GetExportedValue<RequestHandler>(); Regardless of the number of times GetExportedValue() is called, the same instance of the RequestHandler is always returned (Fig 2). Now we realize that the throughput for our system is really poor, since we can only process one request at a time. So in order to increase our throughput, we spin up new instance of RequestHandler and DatabaseConnection for every request that comes in. In order to achieve this with MEF we can put a PartCreationPolicyAttribute on RequestHandler and DatabaseConnection: [PartCreationPolicy(CreationPolicy.NonShared)] public class RequestHandler { … } [PartCreationPolicy(CreationPolicy.NonShared)] public class DatabaseConnection { … } Now, with the same invoking code to GetExportedValue(), we now get new instances for RequestHandler and DatabaseConnection every time. Our throughput is now up, but creating a new database connection per request is really taking a toll on our database server which can handle only a few open connections at a time. To ease the load on our database server we hence decide, that we will share a single instance of a DatabaseConnection, among multiple RequestHandlers. This will give us our throughput improvements without overloading our database server. We can achieve the following by getting rid of the PartCreationPolicy we have on the DatabaseConnection class, so it would now look like the following: public class DatabaseConnection { … } With the removal of this attribute, we now get only a new instance of the RequestHandler every time GetExportedValue() is called, which all share a single instance of the DatabaseConnection. The above scenarios that we have seen are the sharing scenarios that could be achieved using MEF in the past releases. However there is one glaring omission in the matrix that we presented above. What if I wanted a multiple instance of a dependency from a single instance of what it was depending on? 
To put it in the context of our example, how does the RequestListener class which was the first block in our diagram create multiple instances of the RequestHandler. This brings us to: In order to enable this scenario, in the latest version of MEF, we introduce a new class called the ExportFactory<T>. ExportFactory<T> may be familiar to Silverlight 4 developers, since it was shipped as a part of the Silverlight 4 SDK. The export factory allows us to create new instances of dependencies and control the lifetime of the created parts. Let us go back to our initial block diagram and write out the code for the RequestListener class. [Export] public class RequestListener { ExportFactory<RequestHandler> _factory; [ImportingConstructor] public RequestListener(ExportFactory<RequestHandler> factory) { _factory = factory; } public void HandleRequest() { using (var instance = _factory.CreateExport()) { instance.Value.Process(); } } } Now we see in the code that instead of importing a RequestHandler as a depedency, we import an ExportFactory<RequestHandler>. MEF treats ExportFactory<T> as a special class, and will automatically hook it up to the part providing T. We then added a HandleRequest() member, which does the work of instantiating the RequestHandler. The CreateExport() method creates an ExportLifetimeContext<T>, which implements IDisposable, which is used to control the lifetime of the objects created by the factory. Since the listener is creating the parts, we can get also get rid of the PartCreationPolicy attribute on the RequestHandler. Putting all the pieces together a call to: TypeCatalog global = new TypeCatalog(typeof(RequestHandler), typeof(DatabaseConnection) , typeof(RequestListener)); var container = new CompositionContainer(global); var requestListener = container.GetExportedValue<RequestListener>(); requestListener.HandleRequest(); will result in the following composition graph: Stay tuned for part two of this post where we talk about some more sharing scenarios and how we can use the new scoping enhancements in MEF in conjunction with ExportFactory<T> to accomplish these. Hi, I'm glad to see the way MEF is going and the fact that more and more people are using MEF even when extensibility is not an external request but internal need. I also like the fact that Microsoft MEF and Google Guice libraries are getting more and more closer to each other - and this raise the question why not simply go ahead and use ExportFactory<T> the way guice's FactoryModuleBuilder is built. It is quite obvies that the next request after ExportFactory will work is to pass some state into the part that needs to be created. Google already solve that. Keep up the great work. Thank you, Ido. Hi Ido, thanks very much for the encouragement. You might be interested to find that parameterized construction is supported in different forms by many composition frameworks on .NET as well. This scenario hasn't shown up frequently in requests from MEF users though - do you find you need parameterized construction often? If so it would be interesting to hear more about where this is necessary. Regards, Nick Hi guys. This is actually a comment for the previous post, but it seems that the blog engine would only allow me to comment on the latest one. Anyway. The question: is there a way to explicitly control composition scopes? On one hand, in the previous post about MVC integration, MEF is nice enough to take care about request scope for me, but I don't see a way to plug into that process. 
On the other hand, several posts ago ("What's new in MEF2"), I've seen this thing called CompositionScopeDefinition that looks like an explicit specification for scopes, but with that one, I don't see a way to "close" the scope. To put it in other words: how do you determine when to dispose of components that were created within a scope? And on third hand (yes, I'm from Mars :-), with MEF1, I used to deal with scoping by creating nested CompositionContainers, but that doesn't work very well with custom ExportProviders. What would really like to see is something like: using( var scope = compositionContainer.OpenScope( /* some scope definition here */ ) ) { var rootComponent = scope.GetExport<MyRootComponent>(); // The component tree gets composed at this point rootComponent.DoYourScopedThing(); } // The component tree gets disposed at this point If I had that thing, I could easily build MVC integration on top of it, but I could also use it in other contexts. So, the question again: do you have a generic (as in, not MVC-bound) way to completely (as in, "open" and "close") control scope? Hi Fyodor - I've responded via your similar question on Stack Overflow: stackoverflow.com/.../8547204 In short, we still have some work to do in this area and will be paying your scenario close attention. There's also some additional information now on this blog describing CompositionScopeDefinition in more detail: blogs.msdn.com/.../sharing-with-compositionscopedefinition-in-mef2-alok.aspx
http://blogs.msdn.com/b/bclteam/archive/2011/11/17/exportfactory-amp-lt-t-amp-gt-in-mef-2-alok.aspx
CC-MAIN-2015-48
refinedweb
1,319
50.16
#include <ares.h> typedef void (*ares_callback)(void *arg, int status, int timeouts, unsigned char *abuf, int alen) void ares_search(ares_channel channel, const char *name, int dnsclass, int type, ares_callback callback, void *arg) The callback argument arg is copied from the ares_search argument arg. The callback argument status indicates whether the query sequence ended with a successful query and, if not, how the query sequence failed. It may be ARES_SUCCESS on success, or one of the library's ARES_E* error codes describing how the query failed. The callback argument timeouts reports how many times a query timed out during the execution of the given request. If a query completed successfully, the callback argument abuf points to a result buffer of length alen. If the query did not complete successfully, abuf will usually be NULL and alen will usually be 0, but in some cases an unsuccessful query result may be placed in abuf.
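A minimal usage sketch, assuming the classic c-ares flow of ares_init(), ares_search(), and a select()-based loop driven by ares_fds()/ares_process(); the host name and the DNS class/type constants taken from <arpa/nameser.h> are illustrative choices, and error handling is mostly omitted.

#include <stdio.h>
#include <sys/select.h>
#include <arpa/nameser.h>   /* ns_c_in, ns_t_a */
#include <ares.h>

static void search_done(void *arg, int status, int timeouts,
                        unsigned char *abuf, int alen)
{
    (void) arg; (void) timeouts;
    if (status != ARES_SUCCESS) {
        fprintf(stderr, "lookup failed: %s\n", ares_strerror(status));
        return;
    }
    printf("got %d bytes of raw DNS response\n", alen);
    /* abuf would normally be handed to ares_parse_a_reply() or similar. */
    (void) abuf;
}

int main(void)
{
    ares_channel channel;
    if (ares_init(&channel) != ARES_SUCCESS)
        return 1;

    ares_search(channel, "www.example.com", ns_c_in, ns_t_a, search_done, NULL);

    /* Standard c-ares event loop: wait on the library's sockets until no queries remain. */
    for (;;) {
        fd_set readers, writers;
        struct timeval tv, *tvp;
        int nfds;

        FD_ZERO(&readers);
        FD_ZERO(&writers);
        nfds = ares_fds(channel, &readers, &writers);
        if (nfds == 0)
            break;                      /* no pending queries left */
        tvp = ares_timeout(channel, NULL, &tv);
        select(nfds, &readers, &writers, NULL, tvp);
        ares_process(channel, &readers, &writers);
    }

    ares_destroy(channel);
    return 0;
}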
http://www.makelinux.net/man/3/A/ares_search
CC-MAIN-2016-26
refinedweb
131
53.95
Hello, I have just got into learing to program (In C), and I'm having a problem with floating point. It's a simple get the volume of a circle if given a radius problem. #include <stdio.h> char user[100]; float radius; const float PI = 3.1415; float volume; int main() { printf("Enter radius of circle: "); fgets(user, sizeof(user), stdin); sscanf(user, "%f", &radius); volume = ((4.0 / 3.0 * PI) * radius) ^ 3.0; printf("The volume of the circle is %f\n", volume); return (0); } Now I don't know if this message board has problems with posts having source code in them, but it's a short program, so don't kill me. What I want to know from anyone that knows C, is to tell me if I did anything wrong, because when i compile the code, it brings up a error message saying illegal use of float point. I belive the problem lies in with the squaring the problem by 3. Maybe you can't square floating point numbers? Or a special function or key to do it? I dont know, please help.
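For what it's worth, the ^ operator in C is bitwise XOR, not exponentiation, which is why the compiler complains about using it on floating-point operands. Assuming the poster is after the volume of a sphere, (4/3)πr³, one corrected sketch using pow() from math.h (link with -lm) might look like this:

#include <stdio.h>
#include <math.h>      /* pow() -- link with -lm */

int main(void)
{
    char user[100];
    float radius = 0.0f;
    const float PI = 3.1415f;
    float volume;

    printf("Enter radius of sphere: ");
    fgets(user, sizeof(user), stdin);
    sscanf(user, "%f", &radius);

    /* ^ is bitwise XOR in C, not "to the power of";
       use pow() (or radius * radius * radius) instead. */
    volume = (4.0f / 3.0f) * PI * pow(radius, 3);

    printf("The volume of the sphere is %f\n", volume);
    return 0;
}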
https://cboard.cprogramming.com/c-programming/40105-illegal-use-floating-point-c-printable-thread.html
CC-MAIN-2017-09
refinedweb
188
81.12
Whenever we work with data of any sort, we need a clear picture of the kind of data that we are dealing with. For most of the data out there, which may contain thousands or even millions of entries with a wide variety of information, it’s really impossible to make sense of that data without any tool to present the data in a short and readable format. Also read: Get Head and Tail of a Pandas Dataframe or Series Most of the time we need to go through the data, manipulate it, and visualize it for getting insights. Well, there is a great library which goes by the name pandas which provides us with that capability. The most frequent Data manipulation operation is Data Filtering. It is very similar to the WHERE clause in SQL or you must have used a filter in MS Excel for selecting specific rows based on some conditions. pandas is a powerful, flexible and open source data analysis/manipulation tool which is essentially a python package that provides speed, flexibility and expressive data structures crafted to work with “relational” or “labelled” data in an intuitive and easy manner. It is one of the most popular libraries to perform real-world data analysis in Python. pandas is built on top of the NumPy library which aims to integrate well with the scientific computing environment and numerous other 3rd party libraries. It has two primary data structures namely Series (1D) and Dataframes(2D), which in most real-world use cases is the type of data that is being dealt with in many sectors of finance, scientific computing, engineering and statistics. Let’s Start Filtering Data With the Help of Pandas Dataframe Installing pandas !pip install pandas Importing the Pandas library, reading our sample data file and assigning it to “df” DataFrame import pandas as pd df = pd.read_csv(r"C:\Users\rajam\Desktop\sample_data.csv") Let’s check out our dataframe: print(df.head()) Now that we have our DataFrame, we will be applying various methods to filter it. Method – 1: Filtering DataFrame by column value We have a column named “Total_Sales” in our DataFrame and we want to filter out all the sales value which is greater than 300. #Filter a DataFrame for a single column value with a given condition greater_than = df[df['Total_Sales'] > 300] print(greater_than.head()) Method – 2: Filtering DataFrame based on multiple conditions Here we are filtering all the values whose “Total_Sales” value is greater than 300 and also where the “Units” is greater than 20. We will have to use the python operator “&” which performs a bitwise AND operation in order to display the corresponding result. #Filter a DataFrame with multiple conditions filter_sales_units = df[(df['Total_Sales'] > 300) & (df["Units"] > 20)] print(Filter_sales_units.head()) Method – 3: Filtering DataFrame based on Date value If we want to filter our data frame based on a certain date value, for example here we are trying to get all the results based on a particular date, in our case the results after the date ’03/10/21′. #Filter a DataFrame based on specific date date_filter = df[df['Date'] > '03/10/21'] print(date_filter.head()) Method – 4: Filtering DataFrame based on Date value with multiple conditions Here we are getting all the results for our Date operation evaluating multiple dates. 
#Filter a DataFrame with multiple conditions our Date value date_filter2 = df[(df['Date'] >= '3/25/2021') & (df['Date'] <'8/17/2021')] print(date_filter2.head()) Method – 5: Filtering DataFrame based on a specific string Here we are selecting a column called ‘Region’ and getting all the rows that are from the region ‘East’, thus filtering based on a specific string value. #Filter a DataFrame to a specific string east = df[df['Region'] == 'East'] print(east.head()) Method – 6: Filtering DataFrame based on a specific index value in a string Here we are selecting a column called ‘Region’ and getting all the rows which has the letter ‘E’ as the first character i.e at index 0 in the specified column results. #Filter a DataFrame to show rows starting with a specfic letter starting_with_e = df[df['Region'].str[0]== 'E'] print(starting_with_e.head()) Method – 7: Filtering DataFrame based on a list of values Here we are filtering rows in the column ‘Region’ which contains the values ‘West’ as well as ‘East’ and display the combined result. Two methods can be used to perform this filtering namely using a pipe | operator with the corresponding desired set of values with the below syntax OR we can use the .isin() function to filter for the values in a given column, which in our case is the ‘Region’, and provide the list of the desired set of values inside it as a list. #Filter a DataFrame rows based on list of values #Method 1: east_west = df[(df['Region'] == 'West') | (df['Region'] == 'East')] print(east_west) #Method 2: east_west_1 = df[df['Region'].isin(['West', 'East'])] print(east_west_1.head()) Method – 8: Filtering DataFrame rows based on specific values using RegEx Here we want all the values in the column ‘Region’, which ends with ‘th’ in their string value and display them. In other words, we want our results to show the values of ‘North‘ and ‘South‘ and ignore ‘East’ and ‘West’. The method .str.contains() with the specified values along with the $ RegEx pattern can be used to get the desired results. For more information please check the Regex Documentation #Filtering the DataFrame rows using regular expressions(REGEX) regex_df = df[df['Region'].str.contains('th$')] print(regex_df.head()) Method – 9: Filtering DataFrame to check for null Here, we’ll check for null and not null values in all the columns with the help of isnull() function. #Filtering to check for null and not null values in all columns df_null = df[df.isnull().any(axis=1)] print(df_null.head()) Method – 10: Filtering DataFrame to check for null values in a specific column. #Filtering to check for null values if any in the 'Units' column units_df = df[df['Units'].isnull()] print(units_df.head()) Method – 11: Filtering DataFrame to check for not null values in specific columns #Filtering to check for not null values in the 'Units' column df_not_null = df[df['Units'].notnull()] print(df_not_null.head()) Method – 12: Filtering DataFrame using query() with a condition #Using query function in pandas df_query = df.query('Total_Sales > 300') print(df_query.head()) QueryFunction Method – 13: Filtering DataFrame using query() with multiple conditions #Using query function with multiple conditions in pandas df_query_1 = df.query('Total_Sales > 300 and Units <18') print(df_query_1.head()) QueryFunction Method – 14: Filtering our DataFrame using the loc and iloc functions. 
#Creating a sample DataFrame for illustrations import numpy as np data = pd.DataFrame({"col1" : np.arange(1, 20 ,2)}, index=[19, 18 ,8, 6, 0, 1, 2, 3, 4, 5]) print(data) Explanation: iloc considers rows based on the position of the given index, so that it takes only integers as values. For more information please check out Pandas Documentation #Filter with iloc data.iloc[0 : 5] iloc Explanation: loc considers rows based on index labels #Filter with loc data.loc[0 : 5] loc You might be thinking about why the loc function returns 6 rows instead of 5 rows. This is because loc does not produce output based on index position. It considers labels of index only which can be an alphabet as well and includes both starting and endpoint. Conclusion So, these were some of the most common filtering methods used in pandas. There are many other filtering methods that could be used, but these are some of the most common. When choosing a filtering method, it is important to consider the data you are trying to filter, the type of data, and the type of filtering you are trying to do. Hope you enjoyed this article. To learn more, don’t forget to read out Pandas tutorials.
https://www.askpython.com/python-modules/pandas/filter-pandas-dataframe
CC-MAIN-2022-33
refinedweb
1,311
59.64
Task #4633 Define Collection model 100% Description Define one model type, Collection, that inherits from Content. It will store 4 string values and a single artifact. namespace: "acme" name: "jenkins" version: "3.5.0" min_ansible_version: "2.4" Related issues Associated revisions Revision 51bf29c8 View on GitHub Adds basic Collection model closes #4633 History #3 Updated by daviddavis 7 months ago Two reasons. The first was normalization. The second is that we were basing our models off the ones in Galaxy which also had a Role [0] and a RoleVersion [1]. I'm not opposed to having one model especially since it seems like Collection is a lot smaller than Role was. [0] [1] #5 Updated by bmbouter 7 months ago - Blocks Story #4635: As a user I can use Mazer to install a Collection published by Pulp added #6 Updated by bmbouter 7 months ago - Blocks Story #4634: As a user I can use mazer to publish to Pulp added #7 Updated by daviddavis 7 months ago - Groomed changed from No to Yes - Sprint set to Sprint 51 #9 Updated by bmbouter 7 months ago - Status changed from ASSIGNED to POST PR available at: #12 Updated by bmbouter 6 months ago - Status changed from POST to MODIFIED - % Done changed from 0 to 100 Applied in changeset pulp_ansible|51bf29c8f554aad0f6ec6b7feac0c462be209ce9.
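A minimal sketch of what such a model could look like (field names are taken from the example values above; the base class import, field types, uniqueness constraint, and TYPE attribute are assumptions, not the merged implementation, and the artifact association is omitted because Pulp tracks content-to-artifact relations separately):

```python
# Hypothetical sketch of the Collection content model described in this task.
from django.db import models
from pulpcore.plugin.models import Content  # assumed base class location


class Collection(Content):
    TYPE = "collection"

    namespace = models.TextField()            # e.g. "acme"
    name = models.TextField()                 # e.g. "jenkins"
    version = models.TextField()              # e.g. "3.5.0"
    min_ansible_version = models.TextField()  # e.g. "2.4"

    class Meta:
        unique_together = ("namespace", "name", "version")
```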
https://pulp.plan.io/issues/4633
CC-MAIN-2019-47
refinedweb
227
57.4
15. Re: Injection And Child Objects - Tom Goring, Feb 27, 2008 3:30 PM (in response to Tom Goring) Hi, Yes that does work but it's not what Pete suggested I think. Is it the case then you should never store a reference to a Seam object. i.e. only use injection... Workaround is to rather than store the parent reference in the PojoChild look it up every time it is required. PojoChild in my case is like a utility class that can add functionality to the parent... Doing the look up or using injection makes the utility more cumbersome as it does not know the parent seam component name at runtime (and so this would have to be passed in). 16. Re: Injection And Child Objects - Pete Muir, Feb 29, 2008 11:35 AM (in response to Tom Goring) Tom Goring wrote on Feb 27, 2008 03:30 PM: Is it the case then you should never store a reference to a Seam object. i.e. only use injection... Yes, basically. Whilst in the same request it should be ok though. Certainly safer to always use lookup in the child. 17. Re: Injection And Child Objects - Matt Drees, Mar 2, 2008 9:45 PM (in response to Tom Goring) Hmm. And you're also doing public PojoChild getPojoChild() { if ( pojoChild==null ) { System.out.println("creating pojo"); this.pojoChild = new PojoChild(instance()); } return pojoChild; } ... public static SeamParent instance() { return (SeamParent)Component.getInstance("seamParent"); } as Pete said, right? If so, then I think that should work. 18. Re: Injection And Child Objects - Tom Goring, Mar 3, 2008 12:02 PM (in response to Tom Goring) Hi, Yes I tried that.... it does not work. It only works if you look up the SeamParent from the PojoChild every time it is required.... i.e. you can't store any kind of reference to it. This is a bit of a shame for me as I was planning small utility classes (e.g. action handler classes) to work with the parent.... The only way this works is if I look up the parent every time the parent is required. 19. Re: Injection And Child Objects - Matt Drees, Mar 4, 2008 4:42 AM (in response to Tom Goring) You're right, you're right. The culprit here is the org.jboss.seam.core.MethodContextInterceptor, which, during the execution of a component's method (such as create() above), makes it impossible to obtain a reference to that component's proxy object. It turns out there is a jira issue for this: JBSEAM-2221. So, maybe go ahead and vote for that issue. It's something that I think needs to be added to Seam, too.
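A sketch of the lookup-per-use workaround being described (illustrative only: the component name "seamParent" is taken from the snippets above, Component is org.jboss.seam.Component, and someBusinessMethod() is a hypothetical method on the parent):

```java
// Illustrative sketch: the child never stores the Seam-managed parent,
// it resolves the proxy freshly on every use.
public class PojoChild {

    private final String parentComponentName;

    public PojoChild(String parentComponentName) {
        this.parentComponentName = parentComponentName;
    }

    private SeamParent parent() {
        // Fresh lookup each time; avoids holding a stale proxy reference.
        return (SeamParent) Component.getInstance(parentComponentName);
    }

    public void doSomethingWithParent() {
        parent().someBusinessMethod();  // hypothetical call on the parent
    }
}
```

Passing the parent component name into the constructor is one way around the problem the thread mentions, namely that the utility class does not know the parent's Seam component name at runtime.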
https://developer.jboss.org/thread/180481?start=15&tstart=0
CC-MAIN-2020-05
refinedweb
447
74.39
If our computer is on a LAN and has a DNS server, we can use the code below to get its IP address from the Web host. This code is very simple. We use the Dns class to connect to the DNS server on our Local Area Network. It then returns an IPHostEntry object as IPHost. IPHost contains properties including the IP address list. using System; using System.Net; using System.Net.Sockets; class GTest { public static void Main() { string strHost; Console.Write("Input host : "); // Input web host name as string strHost = Console.ReadLine(); IPHostEntry IPHost = Dns.Resolve(strHost); // through Dns to get the IP host entry Console.WriteLine(IPHost.HostName); // Output name of web host IPAddress [] address = IPHost.AddressList; // get list of IP addresses Console.WriteLine("List IP {0} :",IPHost.HostName); for(int i = 0;i< address.Length; i++) // output list of IP addresses Console.WriteLine(address[i]); } }
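Dns.Resolve has since been marked obsolete in the .NET Framework; an equivalent sketch using Dns.GetHostEntry (same idea, slightly different API) might look like this:

```csharp
// Roughly equivalent lookup using Dns.GetHostEntry instead of the
// obsolete Dns.Resolve: resolve a host name, then print the canonical
// name and every address returned for it.
using System;
using System.Net;

class GTest2
{
    public static void Main()
    {
        Console.Write("Input host : ");
        string strHost = Console.ReadLine();

        IPHostEntry ipHost = Dns.GetHostEntry(strHost);
        Console.WriteLine(ipHost.HostName);

        foreach (IPAddress address in ipHost.AddressList)
        {
            Console.WriteLine(address);
        }
    }
}
```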
https://www.rootadmin.com/Articles/235/Get-IP-Address-from-Web-host-name
CC-MAIN-2021-04
refinedweb
225
69.58
I took a look at the std tuple header file (which I believe is authored/maintained by Stephan T. Lavavej of the VC++ team – original code by P.J. Plauger) and it would be an understatement to say that my head was spinning for a while just going through the code. To understand better how it was implemented, I simplified it down and extracted out the minimal code required to instantiate a tuple and to access its values (and be able to both read and write). It helped me understand how variadic templates are typically used with recursive expansion to significantly reduce lines of code when designing template classes. // tuple template<class... _Types> class tuple; // empty tuple template<> class tuple<> {}; // recursive tuple definition template<class _This, class... _Rest> class tuple<_This, _Rest...> : private tuple<_Rest...> { public: _This _Myfirst; }; The recursive specialization uses inheritance so that we’ll end up with members for every argument type specified for the tuple. To access a tuple value, a tuple_element class is used as a sort of accessor class. // tuple_element template<size_t _Index, class _Tuple> struct tuple_element; // select first element template<class _This, class... _Rest> struct tuple_element<0, tuple<_This, _Rest...>> { typedef _This& type; typedef tuple<_This, _Rest...> _Ttype; }; // recursive tuple_element definition template <size_t _Index, class _This, class... _Rest> struct tuple_element<_Index, tuple<_This, _Rest...>> : public tuple_element<_Index - 1, tuple<_Rest...> > { }; Again, recursive inheritance is used, and the 0th case is specialized as well. Notice the two typedefs, one of them is a reference to the type of the value, and the other represents the tuple with the same arguments as the tuple_element. So, given an _Index value, we can retrieve the type of the tuple and the type of the tuple value for that recursion level. This is used in the get method. // get reference to _Index element of tuple template<size_t _Index, class... _Types> inline typename tuple_element<_Index, tuple<_Types...>>::type get(tuple<_Types...>& _Tuple) { typedef typename tuple_element<_Index, tuple<_Types...>>::_Ttype _Ttype; return (((_Ttype&) _Tuple)._Myfirst); } Notice the return type, it uses the type typedef defined above. Similarly, the tuple is cast to the _TType typedef defined above and then we access the _Myfirst member (which represents the value). Now you can write code as follows. tuple<int, char> t1; get<0>(t1) = 959; get<1>(t1) = 'A'; auto v1 = get<0>(t1); auto v2 = get<1>(t1); Now, this goes without saying, but I’ll say it just to be sure – but this is just for demonstration. Do not use this in production code, instead use std::tuple which does all this and lots more (there’s a reason it’s 800 lines long). 10 thoughts on “Tuple implementation via variadic templates” Given that you’re not implementing the C++ Standard Library 😉 it’s not a good idea to use leading underscores: Even if this is purely for illustrative purposes, it’s a good habit to avoid illegal constructs in general. BTW, I’ve found the following to be quite informative on the subject matter: Thanks – and I accept that. I was merely re-using the existing code, but it was still a mistake. 🙂 And thank you for that link, looks like an interesting read to me. PJP wrote all of VC’s , with one exception. Ordinarily I just fix bugs, but I rewrote tuple_cat() from scratch to be non-runtime-recursive. 
Note that STL maintainers, in addition to having the right and responsibility to use _Ugly names, often have to do unusual things in the name of extreme genericity or other concerns. So, I want to emphasize: you should not unthinkingly imitate the STL's implementation. I'd like to believe the optimizing compiler takes care of the runtime recursion. Does it? Thanks, 🙂 Thanks Stephan. Good advice there! R. Martinho Fernandes did a really good set of articles covering Tuple as well. Thanks for the link, moswald. You should have demonstrated the non-recursive implementation as well. It gives (slightly) better results (notably, it lays out the members in the order they're given). There's a talk available online done at the last boost con er… C++ Now. Thanks for the suggestion, Julien.
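For comparison, the equivalent usage with the standard-library facilities the post recommends for real code (plain std::tuple, std::get and std::make_tuple; this is not the post's simplified implementation):

```cpp
// Equivalent usage with the real std::tuple facilities.
#include <tuple>
#include <string>
#include <iostream>

int main()
{
    std::tuple<int, char> t1{959, 'A'};

    std::get<0>(t1) = 960;         // element access is writable, as in the post
    auto v1 = std::get<0>(t1);     // 960
    auto v2 = std::get<1>(t1);     // 'A'

    // std::make_tuple deduces the element types.
    auto t2 = std::make_tuple(std::string("hello"), 3.14);

    std::cout << v1 << ' ' << v2 << ' '
              << std::get<0>(t2) << ' ' << std::get<1>(t2) << '\n';
}
```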
https://voidnish.wordpress.com/2013/07/13/tuple-implementation-via-variadic-templates/
CC-MAIN-2018-26
refinedweb
693
56.96
Hello everyone... I've played around with my code from a previous posting (outputing spintf to array) to make things a bit easier to figure out. I'm trying to populate one character array with data from a second that's constantly changing. In the case below, the array "char decktype" is what I'm trying to populate. The second array "char value" will be changing as it goes through the FOR loop. But, it's not allowing me to do this. I'm running code blocks on a windows machine. Does anyone have any suggestions? Thanks very much for any help! Code: #include <stdio.h> #include <string.h> main(){ int d, j; d = 0; char decktype[312]; // Controlled by int j - Will store type of card char *value[13] = {"Ace", "Deuce", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King"}; //controlled by int d for (j=0; j<=311; j++) { decktype[j]=*value[d]; d++; if (d>12) {d=0} //Resets the card value to Ace after King has been assigned. }
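For reference, the assignment decktype[j] = *value[d] copies only the first character of each name, because value[d] is a whole string while decktype[j] is a single char. A sketch of two common fixes (pointer array, or a 2D char array filled with strcpy; the names and sizes below are just illustrative):

```c
/* Sketch: two ways to hold 312 card-name strings. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *value[13] = {"Ace", "Deuce", "3", "4", "5", "6", "7",
                             "8", "9", "10", "Jack", "Queen", "King"};

    const char *decktype_ptrs[312];   /* option 1: just point at the strings */
    char decktype[312][8];            /* option 2: copies (8 chars fits "Deuce" + NUL) */

    int d = 0;
    for (int j = 0; j < 312; j++) {
        decktype_ptrs[j] = value[d];      /* option 1 */
        strcpy(decktype[j], value[d]);    /* option 2 */
        d++;
        if (d > 12) {
            d = 0;  /* reset to Ace after King (note the semicolon) */
        }
    }

    printf("%s %s\n", decktype_ptrs[0], decktype[13]);  /* "Ace Ace" */
    return 0;
}
```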
http://cboard.cprogramming.com/c-programming/114807-how-do-i-set-char-%5Bj%5D-%3D-*char-b%5Bd%5D-printable-thread.html
CC-MAIN-2014-35
refinedweb
174
73.88
Synopsis Create and view spectral files for ChaRT2 (CIAO contributed package). Syntax from sherpa_contrib.chart import * Description The sherpa_contrib.chart module provides routines for creating and viewing the input spectral files for users of ChaRT2 and is provided as part of the CIAO contributed scripts package. The module can be loaded into Sherpa by saying either of: from sherpa_contrib.chart import * from sherpa_contrib.all import * where the second form loads in all the Sherpa contributed routines, not just the chart module. Contents The chart module currently provides the following routines: See the ahelp file for the routine and the contributed scripts page for further information. The extra labelling in the plot - which gave the model name and dataset identifier - has been removed (although the model name is now included in the plot title). Changes in the scripts 4.8.2 (January 2016) release The routines have been updated to work with version 2 of ChaRT. Bugs See the bugs pages for an up-to-date listing of known bugs.
https://cxc.cfa.harvard.edu/sherpa4.11/ahelp/sherpa_chart.html
CC-MAIN-2020-16
refinedweb
170
66.23
How Can I Change the Internet Explorer Home Page? Hey, Scripting Guy! Can I change the Internet Explorer home page by using a script? — AH Hey, AH. It’s Monday morning, and after a hard weekend of sitting around watching football, we decided to take it easy this morning. Yes, you can change the Internet Explorer home page by using a script; all you have to do is write a WMI script that modifies the HKCU\ SOFTWARE\Microsoft\Internet Explorer\Main\Start Page registry value: Const HKEY_CURRENT_USER = &H80000001 strComputer = “.” Set objReg = GetObject(“winmgmts:\\” & strComputer & “\root\default:StdRegProv”) strKeyPath = “SOFTWARE\Microsoft\Internet Explorer\Main” ValueName = “Start Page” strValue = “” objReg.SetStringValue HKEY_CURRENT_USER, strKeyPath, ValueName, strValue Boy, when we said we were going to take it easy this morning, we weren’t kidding, were we? As you can see, there’s not much to this script. We start by defining the constant HKEY_CURRENT_USER, setting the value to &H80000001; as we’ve noted in previous columns, this value tells our script to work with the HKCU portion of the registry. We then connect to the WMI service; note that the class used to modify registry values – StdRegProv – is in the root\default namespace (which makes it different from the hundreds of WMI scripts you’re probably most familiar with, almost all of which connect to the root\cimv2 namespace). After that, we assign the registry path and the registry value to a pair of variables (strKeyPath and ValueName, respectively). We then assign our new home page — — to the variable strValue. Once we get all these variables set, we then call the SetStringValue method to actually change the registry and, in turn, change the Internet Explorer home page. (As you no doubt figured out, each time Internet Explorer starts up, it checks HKCU\ SOFTWARE\Microsoft\Internet Explorer\Main\Start Page to determine the home page.) The end result: a script that sets the home page of the current user to the TechNet Script Center. (Yeah, we know: that is kind of silly. After all, who doesn’t already have their home page set to the TechNet Script Center?) Incidentally, Internet Explorer makes extensive use of the registry; in turn, that means you can easily write scripts that configure Internet Explorer settings. If you just can’t wait to do this, check out the Tweakomatic utility, which includes scores of scripts useful in managing Internet Explorer. Or, just sit tight for a couple weeks and wait until we get all those scripts added to the Script Repository. Why don’t we just add all those scripts today? Didn’t we mention that it was Monday? We’ll get them up soon. By the way, if all you want to do is determine the current home page configured for a user, try this script: On Error Resume Next Const HKEY_CURRENT_USER = &H80000001 strComputer = “.” Set objReg = GetObject(“winmgmts:\\” & strComputer & “\root\default:StdRegProv”) strKeyPath = “SOFTWARE\Microsoft\Internet Explorer\Main” ValueName = “Start Page” objReg.GetStringValue HKEY_CURRENT_USER, strKeyPath, ValueName, strValue If IsNull(strValue) Then Wscript.Echo “The value is either Null or could not be found in the registry.” Else Wscript.Echo strValue End If
https://devblogs.microsoft.com/scripting/how-can-i-change-the-internet-explorer-home-page/
CC-MAIN-2019-26
refinedweb
523
54.42
Battery life Being conservative with power consumption and having a great user experience aren't mutually exclusive concepts. You might notice that when you are conservative with power usage that your users are more satisfied with your app. Stop animations when they're not needed Although graphics and animations are important to the look and feel of most applications, if they're overused, they can affect the life of the device battery. Generally, you should always stop graphics and animations when they're no longer needed. When a Cascades animation runs (either implicitly or explicitly), the rendering thread redraws the scene at a rate of about 60 FPS. A significant amount of processing power is being used when the app could be inactive. For example, consider an app that frequently requests remote data, or performs intensive operations, such as running SQL queries. Although it's important to provide a visual cue indicating that this process is underway, you need to consider how often the animation runs (whether it's an ActivityIndicator, ProgressIndicator, or your own animation). If it turns out that the animation is running constantly, you might want to reconsider how you present this information to your users. In most cases, you can replace an animation with a static image that the user recognizes and associates with a particular process. For example, in a mapping app, an image of a compass rose or an arrow might indicate that the app is receiving location updates. Cascades stops rendering visual updates when the target node is no longer visible on the screen. This includes instances where the app moves to the background, and (most) instances where the node becomes hidden by another node (for example, if a Page is pushed on top of a Page that has a running animation). However, it's still a good idea to stop animations manually instead of relying on the framework to know when to stop rendering visual updates. For more information about graphics, see Graphics & Multimedia. For more information about Cascades animations, see Animations. Use moderation when polling sensors Retrieving sensor readings from the device's hardware can be costly to battery life when your app receives updates too frequently. Despite the impact on battery life, receiving sensor updates at short intervals is necessary for many apps and games. For example, in a racing game that allows the user to steer the vehicle by rotating the device, you probably need to receive a steady stream of accelerometer updates when the app is running. An app receives sensor updates in the same way regardless of the type of sensor. To capture updates for Qt sensors, you create an instance of the QSensor subclass, you set the active property to true, and you connect to the readingChanged() signal. To capture updates for BPS sensors, you first initialize BPS by calling bps_initialize() and then start receiving sensor events by calling sensor_request_events(). For some sensors, such as the holster, orientation, and proximity sensors, this approach is efficient. Because these sensors have a small set of default values, the number of updates that are reported to the app are small (for example, the holster sensor returns only a value of true or false). This situation also makes the sensors straightforward to manage because you don't need to worry about polling the sensors yourself. 
This approach doesn't work well when you are accessing sensors that are able to monitor small changes in the physical environment, such as the accelerometer, gyroscope, and magnetometer sensors. These sensors can potentially send numerous updates. There are some ways that you can reduce the number of updates that are sent to the app when you work with these sensors. For Cascades, you can use the QSensor::skipDuplicates property, which, when it's set to true, omits and skips successive readings with the same or similar values. For a C app, you can configure the sensor by calling sensor_set_rate() and sensor_set_skip_duplicates(), depending on your app's needs. You can also use a sensor filter to specify the frequency or number of new readings that the sensor delivers to your app, instead of having each new reading delivered. For more information about how to access sensors efficiently, see Sensor efficiency.. Although it might seem useful to have the user's precise location always available, this approach can cause a significant drain on the device battery. For information about how you can retrieve the location of the device, see Retrieving a single fix and Retrieving multiple fixes. Stop background processes Many apps depend on direct interaction from the user, so that all of their processing can be done while the app is open and being used. Apps that run in the background are generally those apps that need the ability to listen for a particular event and notify the user of the event, even while the app isn't in use. Depending on the configuration of the app, running in the background can mean different things. All apps are considered to be running in the background while they're visible as an Active Frame (also called an app cover) on the home screen. Even though an app can continue running processes in this state, it should stop any extraneous processing and use only the resources it needs to update the app cover. To listen for this event, you can connect a slot to the bb::Application::thumbnail() signal or process the NAVIGATOR_WINDOW_THUMBNAIL navigator event to notify the app of the change in the window state and the app is placed into the NAVIGATOR_APP_BACKGROUND run partition. When the user opens another app or the backlight turns off, apps that are running as Active Frames automatically enter the Stopped state, at which point they can't perform any operations. This approach helps to minimize battery power consumption and maximize the performance of the device. However, apps that request the run_when_backgrounded permission continue to run in the background, even after the user opens another app or the backlight turns off. If your app does require this permission, you should still make sure to stop any operations that aren't essential. To listen for this event, you can connect a slot to the bb::Application::invisible() signal. The implementation is similar to the example above. For more information about app states and running apps in the background, see App life cycle. Update Active Frames efficiently Active Frames (also known as app covers) provide your users with information without requiring them to reopen your app. An app cover can be updated dynamically at any time, but you should always be conservative with the frequency that you make updates. The image to the right shows a few examples of app covers used in the BlackBerry 10 OS. 
Even though these app covers contain dynamic content (BBM displays recent contact updates and the Calendar app displays upcoming events), the content remains static after it's displayed. The following code samples show different implementations of an Active Frame. One sample updates the Active Frame inefficiently and the other uses the thumbnail() signal to update the Active Frame. Not recommended The code sample below is a basic implementation of an app cover that displays a single integer. After the app cover is created, an update() function is called, which increments the integer and starts a timer to call the function again after 4 seconds have elapsed. #include "ActiveFrameInefficient.h" #include <bb/cascades/SceneCover> #include <bb/cascades/Container> #include <bb/cascades/Application> ActiveFrameInefficient::ActiveFrameInefficient(QObject *parent) : SceneCover(parent) { Container *root = new Container(); m_theLabel = Label::create().text("0"); root->add(m_theLabel); setContent(root); update(); } void ActiveFrameInefficient::update() { QTimer::singleShot(4000, this, SLOT(update())); int labelNum = m_theLabel->text().toInt() + 1; m_theLabel->setText(QString::number(labelNum)); } There are a couple of issues with this approach. The app cover should never need to be updated with this frequency. If updates are necessary, a frequency of every 30 seconds is probably sufficient. Another issue is that updates continue to occur even while the app is running in the foreground. Even though the visual updates aren't rendered, the app still increments the number every 4 seconds. In this example, the updates don't seem too costly, but with more elaborate app covers these updates could consume a significant amount of processing power. Recommended The code sample below shows you how to implement your app covers. Instead of calling the update() function when the app cover is created, you can connect to the thumbnail() signal so that updates occur only when the app becomes an Active Frame. When the app moves to the foreground, a Boolean variable called isActiveFrame is set to false, letting the app know that it can stop updating the app cover. #include "ActiveFrame.h" #include <bb/cascades/SceneCover> #include <bb/cascades/Container> #include <bb/cascades/Application> ActiveFrame::ActiveFrame(QObject *parent) : SceneCover(parent) , isActiveFrame(false) { Container *root = new Container(); m_theLabel = Label::create().text("0"); root->add(m_theLabel); setContent(root); bool connectResult; Q_UNUSED(connectResult); connectResult = QObject::connect (Application::instance(), SIGNAL(thumbnail()), this, SLOT(backgrounded())); Q_ASSERT(connectResult); connectResult = QObject::connect (Application::instance(), SIGNAL(fullscreen()), this, SLOT(foregrounded())); Q_ASSERT(connectResult); } void ActiveFrame::foregrounded() { isActiveFrame = false; } void ActiveFrame::backgrounded() { isActiveFrame = true; update(); } void ActiveFrame::update() { if (isActiveFrame) { QTimer::singleShot(3000, this, SLOT(update())); qDebug() << "Cover updated!"; int labelNum = m_theLabel->text().toInt() + 1; m_theLabel->setText(QString::number(labelNum)); } } For more information about creating app covers, see Active Frames. Push content to your app If your app relies on updates from a server-side app, you have probably thought about how to design the app to keep data current for the user. One approach is to use a headless app to retrieve data in the background and notify the client app when new data arrives. 
In this scenario, the headless app performs scheduled network requests to pull the data from a remote server. The problem with this approach is that scheduled queries require processing power and network access, and the headless app might make numerous requests even when the server has no data to provide. The preferred approach for receiving remote data is to use the Push Service. Push technology allows a server-side app to notify a client app when new data is available by pushing a notification with a small payload to the device. You can conserve processor power and network usage because the client app sends requests only when it knows that there's data available. The result is an app that not only conserves battery power, but also has the added benefit of near real-time updates. For more information, see Push Service. Reduce wakeups during standby mode To reduce power consumption, you can decrease the number of times that a device wakes up when it is in standby mode. A device is in standby mode when it has been put to sleep or has timed out due to inactivity, and has a blank screen. During standby mode, the device tries to enter lower power states, which it can enter only when there is no activity. A wakeup is an interrupt event that causes the kernel to schedule a thread to do some work. Wakeups can be caused by misbehaving apps or by hardware interrupts. Even when apps are in standby mode, they need to receive push events from the network or poll network services for new content. As a developer, you need to ensure that your app is not waking up unnecessarily while the device is asleep. The standby monitor runs in the background on all BlackBerry 10 devices. The standby monitor watches for misbehaving apps and writes to a log file when it detects them. To determine whether apps are misbehaving, the standby monitor considers wakeups and CPU sleep states. Typically, an app is misbehaving when it causes wakeups more frequently than once per minute while in standby mode. You don't need to include any libraries to use the standby monitor. On the device, the standby monitor writes to a log file under /accounts/1000/shared/misc/standby.log. In the Momentics IDE for BlackBerry, you can use the Target File System Navigator to locate and open the log file. For more information, see Check the standby monitor log file. Last modified: 2015-04-30 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/documentation/best_practices/performance/battery.html
CC-MAIN-2018-51
refinedweb
2,053
51.07
I am trying to create an online directory with vendors listed by COMPANY, STATE and PROFESSION. To begin, any help on creating a drop down menu that allows a user to select a COMPANY and then be automatically directed to the Dynamic Page of that Company. Example: Open drop down menu on homepage > Select "SAID COMPANY" > User is automatically sent to dynamic page which shows that Company. URL's: Current Dropdown: Dynamic Page for Hawk and Rose Creative I am wanting to link to dropdown: Any help is appreciated. Can you point me to where the example for listing the items in your collection which I presume you have been able to do? I have not created a list of all the items in my collection yet. But I just created an interim list on the same URL I referenced previously. Let me know if this helps. Yes, that is what I'm wanting to do. I have figured out that the functions I want in WIX are coming soon, just not sure how soon. How did you create the interim list? Suggestions appreciated but no worries if you feel this is remedial. Definitely not remedial. I am no developer so most of this Wix Code is well over my head. For the table that connects to the dynamic pages, I followed step-by-step this video on youtube. hope it helps! Hi, What you're trying to achieve is most easily built with a table that has built in functionality for linking to other pages when items are clicked. However, you can achieve a similar result with a dropdown if you don't mind writing a bit of code. The idea here is to create an event handler for the dropdown's onChange event. In that event handler, you can use the wix-location to( ) function to send the user to the relevant dynamic page. Thanks, Sam. Writing Code is a bit beyond my abilities / understanding. Can I find a sample code to copy and paste? Do I simply plug that into the spot where it says: "write your page related code here..." To reiterate my exact request...If a user clicks the dropdown and selects Hawk and Rose Creative, they would be sent to Thanks. Thanks, the video is exactly what I needed. Hay hawkandrosecreative, You will basically need to write an onChange event handler for the dropdown and use the location API to navigate to the actual page. You add the event from the property editor while selecting the dropdown. Once you add the event, you should write code that looks like Notes: dropdown is the id of your dropdown element collection is the collection name field is the name of the field in the collection you use to identify items in it. It is probably the title, unless you selected a different field for the dropdown options. dynamicUrlField is a field in the collection generated for the dynamic pages. It is the field key of the field with the link to the pages. Thank you very much Yoav. It seems like you have laid this out very clearly. However, it is still well over my head and can't seem to implement it. Thanks for your input on this. I will add a few comments to what Yoav has wrote, hopefully making it easier to implement. the idea is to add this code snippet to the page that has the dropdown on it. here's the code, this time with comments: hope it helps. Here's what I have: I am sure I'm inputting info incorrectly. 
//TODO: import location from 'wix-location'; import wixData from 'wix-data'; export function dropdown_onChange(event) { let value = $w('#Aarika Day Photography').value; wixData.query('Vendors') .eq('Title', value) .find() .then(res => { if (res.length > 0) location.to(res.); }) } }); Hay hawkandrosecreative, the line does not look right. It should look like The dynamicUrlField is the field key of the field generated by the dynamic page. You can see this field key in the database view when clicking the field properties on the link field. 1 $w.onReady(function () { 2 //TODO: import location from 'wix-location'; X 3 import wixData from 'wix-data'; "import and export may only appear at the top level" 4 ! 5 export function dropdown_onChange(event) { "Parameter 'event' is never used" X 6 let value = $w('Aarika Day Photography').value; "Aarika Day Photography is not a valid selector name" 7 wixData.query('Vendors') 8 .eq('title', value) 9 .find() 10 .then(res => { 11 if (res.length > 0) X 12 location.to(res.link-Vendors-title); " 'location' is undefined 'Vendors' is undefined 'title' is undefined" 13 }); 14 } 15 }); k. here's what I have :) Thanks to everyone who has been so helpful to try and get this functional. I'm getting a few errors / warnings throughout the code. thoughts? I have changed a few things moved the import line to be the first line in the file added import for location. You can ignore the message about the event parameter not being used. It is just a notification, not an error. res has an array of results called items - I have changed the code to use that - so now we have res.items[0] instead of res. the link-Vendors-title is not a valid javascript term because of the '-' inside of it. because of this I have changed it to be used as using the javascript member access notation res.items[0]['member-name'] instead of res.link-Vendors-title. There is still one thing to fix here - the 'Aarika Day Photography' id. 'Aarika Day Photography' is not a valid id - ids in Wix Code look like strings without spaces. You can find the id in the top of the property editor when you select the element in question. Once you find the id (lets assume the id is myElementId), you will need to write $w('#myElementId') Thanks Yoav. Sorry you are having to hold my hand through all this. Can you explain a bit further what you meant by this: 'You can find the id in the top of the property editor when you select the element in question.' Property Editor as in the database? The dropdown? Do I need to be in preview mode? This will be my last question! If I don't get it from here, I will figure our a way to get it up and going. Thanks for all your time. You access the property panel from "tools" ==> "developer tools" ==> "property panel". once you select a component, the ID is shown at the top of the panel. also, did you try looking for information on this in our videos? articles? for example see this: Hello, I am a beginner in code and need help on creating a drop down menu that allows a user to select a items and then be automatically directed to the Dynamic Page . I tried this code.. Thanks for all your time..... Looks pretty good to me. There are a couple of things to check that I can't see in your screenshots. The field key for the Categoria field is lowercase as you have it in the code. The value in onChange event in the properties panel must match the name of your event handler function (dropdown_onChange). Thanks Sam ! for your feedback I appreciate your knowledge and time Now I can see the items .. 
but I cannot link to the collection. Could you suggest how to do it?
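Pulling the thread's pieces together, one possible complete handler looks like the sketch below (the dropdown id #dropdown, the 'Vendors' collection, the 'title' field and the 'link-Vendors-title' field key are taken from the earlier posts and must match your own site's IDs and field keys; the onChange event still has to be wired up in the Properties panel):

```javascript
// Sketch of the assembled dropdown handler discussed in this thread.
import wixData from 'wix-data';
import wixLocation from 'wix-location';

export function dropdown_onChange(event) {
    const value = $w('#dropdown').value;   // the selected company name

    wixData.query('Vendors')
        .eq('title', value)
        .find()
        .then((res) => {
            if (res.items.length > 0) {
                // 'link-Vendors-title' is the field key generated for the
                // dynamic page link; check it in the field's properties.
                wixLocation.to(res.items[0]['link-Vendors-title']);
            }
        });
}
```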
https://www.wix.com/corvid/forum/community-discussion/online-directory-creation
CC-MAIN-2019-47
refinedweb
1,253
74.29
Welcome to another OpenCV with Python tutorial, in this tutorial we are going to be covering some simple arithmetic operations that we can perform on images, along with explaining what they do. To do this, we will require two images of equal size to start, then later on a smaller image and a larger one. To start, I will use: and First, let's see what a simple addition will do: import cv2 import numpy as np # 500 x 250 img1 = cv2.imread('3D-Matplotlib.png') img2 = cv2.imread('mainsvmimage.png') add = img1+img2 cv2.imshow('add',add) cv2.waitKey(0) cv2.destroyAllWindows() Result: It is unlikely you will want this sort of messy addition. OpenCV has an "add" method, let's see what that does, replacing the previous "add" with: add = cv2.add(img1,img2) Result: Probably not the ideal here either. We can see that much of the image is very "white." This is because colors are 0-255, where 255 is "full light." Thus, for example: (155,211,79) + (50, 170, 200) = 205, 381, 279...translated to (205, 255,255). Next, we can add images, and have each carry a different "weight" so to speak. Here's how that might work: import cv2 import numpy as np img1 = cv2.imread('3D-Matplotlib.png') img2 = cv2.imread('mainsvmimage.png') weighted = cv2.addWeighted(img1, 0.6, img2, 0.4, 0) cv2.imshow('weighted',weighted) cv2.waitKey(0) cv2.destroyAllWindows() For the addWeighted method, the parameters are the first image, the weight, the second image, that weight, and then finally gamma, which is a measurement of light. We'll leave that at zero for now. Result: Those are some addition options, but what if you quite literally want to add one image to another, where the newest overlaps the first? In this case, you would start with the largest, then add the smaller image(s). For this, we will use that same 3D-Matplotlib.png image, but use a new one, a Python logo: Now, we can take this logo, and place it on the original image. That would be pretty easy (basically using the same-ish code we used in the previous tutorial where we replaced the Region of Image (ROI) with a new one), but what if we just want the logo part, and not the white background? We can use the same principle as we had used before for the ROI replacement, but we need a way to "remove" the background of the logo, so that the white is not needlessly blocking more of the background image. First I will show the full code, and then explain: import cv2 import numpy as np # Load two images img1 = cv2.imread('3D-Matplotlib.png') img2 = cv2.imread('mainlogo.png') # I want to put logo on top-left corner, So I create a ROI rows,cols,channels = img2.shape roi = img1[0:rows, 0:cols ] # Now create a mask of logo and create its inverse mask img2gray = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY) # add a threshold ret, mask = cv2.threshold(img2gray, 220, 255, cv2.THRESH_BINARY_INV) mask_inv = cv2.bitwise_not(mask) # Now black-out the area of logo in ROI img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv) # Take only region of logo from logo image. img2_fg = cv2.bitwise_and(img2,img2,mask = mask) dst = cv2.add(img1_bg,img2_fg) img1[0:rows, 0:cols ] = dst cv2.imshow('res',img1) cv2.waitKey(0) cv2.destroyAllWindows() A decent amount happened here, and a few new things showed up. The first thing we see that is new, is the application of a threshold: ret, mask = cv2.threshold(img2gray, 220, 255, cv2.THRESH_BINARY_INV). 
We will be covering thresholding more in the next tutorial, so stay tuned for the specifics, but basically the way it works is it will convert all pixels to either black or white, based on a threshold value. In our case, the threshold is 220, but we can use other values, or even dynamically choose one, which is what the ret variable can be used for. Next, we see: mask_inv = cv2.bitwise_not(mask). This is a bitwise operation. Basically, these operators are very similar to the typical ones from python, except for one, but we won't be touching it anyway here. In this case, the invisible part is where the black is. Then, we can say that we want to black out this area in the first image, and then take image 2 and replace its contents in that empty spot. Result: In the next tutorial, we discuss thresholding more in depth.
https://pythonprogramming.net/image-arithmetics-logic-python-opencv-tutorial/
CC-MAIN-2021-39
refinedweb
757
74.39
Obtaining Data: A Journalist's Guide to Scraping PDFs and Converting Them to CSVs This post is the first of a series dedicated to obtaining, cleaning, analyzing and visualizing the causes of death in North Carolina. We start our journey with two basic questions: 1) What are the most common causes of death in North Carolina by county? 2) What kind of trends can we see over time? The North Carolina Department of Health and Human Services maintains detailed mortality statistics for the state, and that's where we'll start. For our first prototype, we will focus on the 2015 data found on their website. The first problem I ran into was that all of the detailed mortality statistics were stored in individual county pages accessible via a drop-down menu. Manually opening and saving each of the 100 PDFs would be a pain. Time for some web scraping! Building the Scraper The first step to building a scraper is finding a pattern in the URL that you can follow to loop through the pages. The pattern for these links is easy to see:[COUNTYNAME].pdf For each link, we just need to replace [COUNTYNAME] with the all-lowercase, no-spaces county name (New Hanover, the only county that has more than one word in its name, simply translates to newhanover in the link). I chose Python to write this scraper because it's lightweight and, because I have a Mac, comes preinstalled on my computer. Building a basic scraper isn't terribly difficult, even if you've never written Python before. If you Google "download file from url python," the second link will get you the few lines you need. If you open up the text editor of your choice, you can start with these few lines of Python to scrape the site: import urllib testfile = urllib.URLopener() testfile.retrieve("", "alamance.pdf") print("script complete!") Save this file as a .py file and you've got a Python script! To run it, simply navigate to the directory where your .py file is located and run: python [yourfilename].py You should see "script complete!" printed and the alamance.pdf file will appear in your directory. We don't want to have to manually write out a new line for each county, so we'll have to stick it in a loop. To do that, we'll need a Python array with a string for each county. Our array will look something like this, with all 100 counties included: counties = [ "alamance", "alleghany", ... ] How can we make this array as efficiently as possible without having to handwrite it? I already had a list of each N.C. county in a spreadsheet from a previous project. In one column, I used the lower() function to get our string into all lowercase. I also made it a little easier to directly paste those counties into the array format by adding some quotes and commas: Getting the quotes and commas in the second column is a little tricky: Because quotes in most programming languages (including Google Sheets syntax) indicate strings, if you want to use a quote in a concatenate() function, you must escape it. If you Google "google sheets escape quotes," the first link will tell you that Google Sheets has a char() function that can read Unicode digits and translate those into characters. The unicode number for a quote is 34, so we just use CHAR(34) to indicate a quote in our concatenate function. Now we can paste that column into our Python array between the brackets (**remember, you'll have to remove the space in "new hanover"!**), and we're ready to write our loop! 
for i in counties: testfile.retrieve("" + i + ".pdf", i + ".pdf") If you run this script, you’ll get all of your PDFs in your current directory. Of course, we want our data in a delimited format—so now we’ll have to use Tabula to translate those PDFs into CSVs. Using Tabula If you install Tabula and pull in each PDF individually, you can get the CSVs you need. I was having some trouble with the time it was taking my computer to process the PDFs, so I manually wrote a script to run Tabula through my terminal. If you go to the Tabula GitHub, you’ll find a Ruby version of Tabula that you can install on your computer. You’ll need JRuby for it to work. I used RVM to install JRuby, so first you should install RVM, which is just a matter of pasting these two commands in your terminal: gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 And then: \curl -sSL | bash -s stable Once you have RVM installed, just run this command to install JRuby: rvm install ruby And now we can install Tabula: jruby -S gem install tabula-extractor Now, if you navigate to the directory where your PDFs are located, you can try translating one of your PDFs with this command: tabula --outfile alleghany.csv alleghany.pdf You’ll now see alleghany.csv in your directory, but it’s a little smaller than you might expect. That’s because by default, Tabula will only translate the first page of the PDF. We’ll need to add on the –pages option to tell the script to get more pages. The Alleghany PDF has 52 pages, so to get all of the pages translated we’ll run: tabula --pages 1-52 --outfile alleghany.csv alleghany.pdf Now we know the format our commands need to run in, but it would be a huge pain to have to write down the page counts for each PDF and then write each command! But hey, that’s what code is for: automating tedious tasks. Automating Tabula It was at this point that I switched to writing a shell script, because I was trying to loop through the tabula command, which is a shell script. If you know Ruby, you could also use the example in the Tabula extractor repository to write in Ruby. At this point, I needed just a few elements to my shell script: an array of the county names, an array of the number of pages in each respective PDF, and a loop to put it all together. You can Google “shell script array” and “shell script loop” to figure out these basic syntax structures, but I’ll save you some time. The county array loop we need looks like this (remember to change “new hanover” to “newhanover”!): declare -a COUNTIES=( alamance alexander alleghany … ) We can paste the county list we had before without the quotes and commas to quickly populate the array. Once you have the array in a new file, save that file as [yourfilename].sh (I called mine process.sh). Our pages array will have the same syntax, but the question is, how can we get the script to count the pages in each PDF for us? If you Google: “count pages in pdf shell script,” the first result will give you some options. The long script that is the first option wasn’t working for me, so I decided to go with the pdfinfo package option: foo=$(pdfinfo pdffile.pdf | grep Pages | awk '{print $2}') How do you install the pdfinfo package? Googling “how to install pdfinfo” will tell you that it’s part of the xpdf package, which you can install with this command if you have Homebrew: brew install homebrew/x11/xpdf If you don’t have Homebrew, then you should be able to manually install the xpdf package here. 
Now we can write a loop using our COUNTIES array to return a list of all the PDFs' respective page numbers: for ((i=0;i<${#COUNTIES[@]};++i)); do echo $(pdfinfo "${COUNTIES[i]}".pdf | grep Pages | awk '{print $2}') done echo "pages printed." Now you can save your .sh file and run it in the terminal with the command: sh [yourfilename].sh You should see a list of all the page numbers in your terminal. Copy that list and make yourself a pages array! You can now comment out the page printing loop with some hashtags on each line. declare -a PAGES=( 52 23 13 … ) Now we're ready to write our master loop! It'll look like this: for ((i=0;i<${#COUNTIES[@]};++i)); do printf "%s has %s pages\n" "${COUNTIES[i]}" "${PAGES[i]}" tabula --pages 1-"${PAGES[i]}" --outfile "${COUNTIES[i]}".csv "${COUNTIES[i]}".pdf done echo "TABULA COMPLETE." You don't necessarily need the printf or echo lines, but it's nice to see the progress of your script as it runs. This process will take a little while to complete, but when it's done, you'll have a CSV version of every PDF in your folder! For our next post, we'll take a look at the format of these CSVs and see how we can clean them up for analysis.
http://www.carolinadatadesk.org/a-journalists-guide-to-scraping-pdfs-and-converting-them-to-csvs/
CC-MAIN-2019-35
refinedweb
1,495
69.62
I've installed SublimeRope for my Python usage, and have built a new rope project with my existing sources. However, it seems the plugin still doesn't offer basic Python completions. For example, I start writing: - Code: import stru.. - in hopes that rope will offer me the struct module, but it doesn't. Is this beyond the plugin's capabilities? I also noticed that if I import struct, I still don't get the struct.Struct completion. I was hoping to replace PyCharm (which has superb completion) with SublimeText, but I'm really missing this capability at the moment. Thanks for your help
http://www.sublimetext.com/forum/viewtopic.php?f=3&t=11011
CC-MAIN-2014-15
refinedweb
105
67.45
Completion Lists in C# The IntelliSense completion lists in Visual C# contain tokens from List Members, Complete Word, and more. They provide quick access to: members of a type or namespace (see List Members); variable, command, and function names (see Complete Word); code snippets; language keywords; and extension methods. Note: The completion list does not display all extension methods for String objects. For more information, see Filtered Completion Lists in C#. Extension methods use a different icon than instance methods. For a listing of list icons, see Class View and Object Browser Icons. When an instance method and extension method with the same name are both in scope, the completion list displays the extension method icon. See Also Reference Concepts Filtered Completion Lists in C# Pre-selected Completion List Items in C#
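As a quick illustration of the instance-method versus extension-method distinction the page refers to (a generic C# example, not taken from the linked documentation):

```csharp
// An extension method appears in the completion list for string instances,
// but it is defined in a separate static class rather than on String itself.
public static class StringExtensions
{
    public static string Shout(this string value)
    {
        return value.ToUpper() + "!";
    }
}

public class Demo
{
    public void Run()
    {
        string greeting = "hello";
        // IntelliSense lists both instance methods (ToUpper) and
        // extension methods (Shout) here, with different icons.
        string loud = greeting.Shout();
    }
}
```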
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/ms165383%28v%3Dvs.100%29
CC-MAIN-2019-39
refinedweb
131
63.39
One of the nice things about data science is that it allows us to explore the symmetry between mathematics and coding. We saw this with translating equations into functions. For example, we have now seen how to translate something like this: $y = 1.3x + 20$, into code like this: def y(x): return 1.3*x + 20 In this section we'll see how we can use coding to better understand a mathematical series and summation. In Python, you are familiar with a list as looking like the following. my_list = [1, 6, 11, 16, 21] Now looking at the numbers above, you can see that these numbers start at one and increment by five for every number. So below we generate these numbers with code, and also use code to express how these numbers increment. initial_i = 0 ending_i = 4 terms = [] for i in range(initial_i, ending_i + 1): current_element = 5*(i) + 1 terms.append(current_element) terms Our code above expresses a pattern in the numbers: In Python we call this ordered collection a list. In math, an ordered list of numbers is called a sequence. We express sequences not with [], but with parentheses or curly brackets. For example: Each component of a sequence is called an element, or a term. So in our sequence above, 1 is a term, as is 6 and 21. Another mathematical notation for expressing the above sequence is the following: $(x_i)^4_{i=0}$ where $x_i =5*i + 1$ Read the above line as the following: In describing a sequence in math, the initial index is placed at the bottom, and the stopping point at the top. And the common term for the sequence is also described. Notice the similarity to our previous code: initial_i = 0 ending_i = 4 terms = [] for i in range(initial_i, ending_i + 1): current_element = 5*(i) + 1 terms.append(current_element) terms In our first sequence, the term changed based on the value of $i$. $(x_i)^4_{i=0}$ where $x_i =5*i + 1$ But sequences don't have to depend on the value of the index. For example, here is another sequence: $(x_i)^5_{i=1}$ where $x_i =5$ Can you determine how this sequence would look? $(5, 5, 5, 5, 5)$. Here we describe the starting point at $i = 1$ and the ending point at $ i = 5$. For every element in the sequence, $x_i$, the value is 5. And the code would be: initial_i = 0 ending_i = 4 terms = [] for i in range(initial_i, ending_i + 1): current_term = 5 terms.append(current_term) terms A series in mathematics is just the sum of the terms of a sequence. So the series of the sequence above is this: $(x_i)^4_{i=0}$ where $ x_i = 5*i + 1 = 1 + 6 + 11 + 16 + 21 = 55$. Let's see how we would write something like this using Python. Here is the sequence that we previously generated written another way. Python's range function can accept a third argument, which tells it how much to increment each element. In other words, we start at 1, increment each element by 5, then stop at 21 because 26 < 22 is False. num_range = list(range(1, 22, 5)) num_range Now to turn this into a series, we just add up those terms. total = 0 for i in num_range: total += i total Or we can use Python's built-in sum function to add up the elements in num_range. num_range sum(num_range) We have just seen different ways for adding a sequence of numbers in code. Adding a sequence of numbers is called a summation. Let's see how to express a summation of numbers in mathematics using sigma notation. This is sigma notation for adding up the numbers from one to ten, {1 + 2 + 3+ 4+ 5+ 6+ 7 + 8 + 9+ 10}. $$\sum_{i=1}^{10} i$$ Ok, so let's break down the syntax. 
The greek letter $\sum$, means that we will add up a set of numbers, one after the other. The $i$ specified at the bottom of the sigma indicates our starting point, just like with sequences. So, at the bottom of the $\sum$, we see $i = 1 $. This means we will be initializing $i$ to equal 1. The number at the top still indicates where we will be stopping, with $i = 10 $. Now, what comes after the $\sum$ indicates the term that we are adding. Here, we are adding $i$, where $i$ is the numbers one through ten. total = 0 for i in range(1, 11): total += i total Let's try another of these. What does adding up the numbers one through five look like in sigma notation? Well, we know to start at one and end at five. Then we add each of those numbers, $i$. $$\sum_{i=1}^{5} i$$ Ok, not too bad. The above is the equivalent of $ 1 + 2 + 3 + 4 + 5 = 15 $. If we want to be a little more long winded about this, we can express this with our series notation as the following: For $(x_i)^5_{i=1}$ with $x_i = i$, $y = x_i + x_{i + 1} ... x_5 $ So far we have discussed adding numbers one after the other. Now let's use our notation to describe adding any terms in a sequence. This is one way to describe a sum of a sequence: $(x_i)^5_{i=0}$ where $x_i = 5*i$, $y = x_i + x_{i + 1} ... x_5 $ We can read that as, for an element in the series with $i$ starting at 0 and going to 5, and each element equal to $5 * i$, add up all of those elements of the series and set it equal to $y$. Here is that same sequence with sigma notation. $$\sum_{i=0}^{5} 5*i$$ This equals 75. Do you see why? So by using sigma notation we can express the a summation more succinctly. And in doing so, we can get more to the core of what we are trying to express. In this section, we saw the similarity between abstractions in mathematics and in coding. We saw that a sequence is an ordered list of elements or terms. And how we can describe a sequence by indicating the initial value, the stopping point and the general case, as in: $(x_i)^5_{i=0}$ where $x_i = 5*i $ A statement like that means start where $i = 0 $ and end where $i = 5$. The elements in the series are $5 * i$, or ${0, 5, 10, 15, 20, 25}$. We then saw how to add the terms in a sequence using the sigma notation as in: $$\sum_{i=0}^{5} 5*i$$ which translates to $0 + 5 + 10 + 15 + 20 + 25 $. Going forward we will use sigma notation to explain concepts in math and data science.
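As a closing check, we can confirm that last summation numerically with the same pattern we used earlier. This is just a small sketch; the variable names are only illustrative.

# verify that the sum of 5*i for i = 0 through 5 equals 75
total = 0
for i in range(0, 6):
    total += 5*i

print(total)  # 75

# the same result using Python's built-in sum with a generator expression
print(sum(5*i for i in range(0, 6)))  # 75

Both lines print 75, matching the value we computed by hand above.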
https://learn.co/lessons/sigma-notation
CC-MAIN-2019-43
refinedweb
1,123
80.72
Different Ways of Creating a List of Objects in C# In this post, we look at all the different approaches available to create a list of objects in C#. Do you know of any more? It has always been fun to play with C#. In this post, we will see how we can create a list of objects with different approaches. The scenario is: for one of my MVC applications I need to bind 5 empty rows (a list of 5 objects) to the Kendo grid for a bulk insert of records, so that whenever I open that page, the Kendo grid renders 5 empty rows in editable mode. In this post, for a better illustration, I have used the example of "Book." Let's say I want to add multiple books to library management software. First, let's create one basic POCO class - Book - with some properties: public class Book { public string BookName { get; set; } = string.Empty; public string Author { get; set; } = string.Empty; public string ISBN { get; set; } = string.Empty; } The traditional way is to construct the list first and then call Add() for each new Book. Then C# 3.0 came along with a lot of enhancements, including collection initializers, a shortened syntax to create a collection. // using collection initializer var bookList = new List<Book>() { new Book(), new Book(), new Book() }; In the .NET Framework, there is one class - Enumerable - that resides under the "System.Linq" namespace. This class contains some static methods, which we can use to create a list of objects, for example Enumerable.Repeat() and Enumerable.Range(); a short sketch of both follows below. The Range() method generates a collection within a specified range. Kindly note there are so many use cases for this method. Opinions expressed by DZone contributors are their own.
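Here is the kind of sketch the Enumerable remark above points at. It is a hedged illustration using the Book class from this post; the element counts are arbitrary.

using System;
using System.Collections.Generic;
using System.Linq;

public class EnumerableExamples
{
    public static void Main()
    {
        // Enumerable.Repeat: one Book reference repeated five times.
        // Note: all five list entries point to the SAME Book instance.
        List<Book> repeated = Enumerable.Repeat(new Book(), 5).ToList();

        // Enumerable.Range: five distinct Book instances, one per index.
        List<Book> ranged = Enumerable.Range(1, 5)
                                      .Select(i => new Book { BookName = "Book " + i })
                                      .ToList();

        Console.WriteLine(repeated.Count); // 5
        Console.WriteLine(ranged.Count);   // 5
    }
}

For the Kendo-grid scenario described above, the Range/Select form is usually the safer choice, because Repeat reuses a single object reference for every row.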
https://dzone.com/articles/different-ways-of-creating-list-of-objects-in-c
CC-MAIN-2020-45
refinedweb
290
74.19
Run-Time Use of Metadata To better understand metadata and its role in the common language runtime, it might be helpful to construct a simple program and illustrate how metadata affects its run-time life. The following code example shows two methods inside a class called MyApp. The Main method is the program entry point, while the Add method simply returns the sum of two integer arguments. Public Class MyApp Public Shared Sub Main() Dim ValueOne As Integer = 10 Dim ValueTwo As Integer = 20 Console.WriteLine("The Value is: {0}", Add(ValueOne, ValueTwo)) End Sub Public Shared Function Add(One As Integer, Two As Integer) As Integer Return (One + Two) End Function End Class [C#] using System; public class MyApp { public static int Main() { int ValueOne = 10; int ValueTwo = 20; Console.WriteLine("The Value is: {0}", Add(ValueOne, ValueTwo)); return 0; } public static int Add(int One, int Two) { return (One + Two); } } When the code runs, the runtime loads the module into memory and consults the metadata for this class. Once loaded, the runtime performs extensive analysis of the method's Microsoft intermediate language (MSIL) stream to convert it to fast native machine instructions. The runtime uses a just-in-time (JIT) compiler to convert the MSIL instructions to native machine code one method at a time as needed. The following example shows part of the MSIL produced from the previous code's Main function. You can view the MSIL and metadata from any .NET Framework application using the MSIL Disassembler (Ildasm.exe). .entrypoint .maxstack 3 .locals ([0] int32 ValueOne, [1] int32 ValueTwo, [2] int32 V_2, [3] int32 V_3) IL_0000: ldc.i4.s 10 IL_0002: stloc.0 IL_0003: ldc.i4.s 20 IL_0005: stloc.1 IL_0006: ldstr "The Value is: {0}" IL_000b: ldloc.0 IL_000c: ldloc.1 IL_000d: call int32 ConsoleApplication.MyApp::Add(int32,int32) /* 06000003 */ The JIT compiler reads the MSIL for the whole method, analyzes it thoroughly, and generates efficient native instructions for the method. At IL_000d, a metadata token for the Add method ( /* 06000003 */) is encountered and the runtime uses the token to consult the third row of the MethodDef table. The following table shows part of the MethodDef table referenced by the metadata token that describes the Add method. While other metadata tables exist in this assembly and have their own unique values, only this table is discussed. Each column of the table contains important information about your code. The RVA column allows the runtime to calculate the starting memory address of the MSIL that defines this method. The ImplFlags and Flags columns contain bitmasks that describe the method (for example, whether the method is public or private). The Name column indexes the name of the method from the string heap. The Signature column indexes the definition of the method's signature in the blob heap. The runtime calculates the desired offset address from the RVA column in the third row and returns this address to the JIT compiler, which then proceeds to the new address. The JIT compiler continues to process MSIL at the new address until it encounters another metadata token and the process is repeated. Using metadata, the runtime has access to all the information it needs to load your code and process it into native machine instructions. In this manner, metadata enables self-describing files and, together with the common type system, cross-language inheritance. See Also Structure and Use of Metadata | Compiling to MSIL | Compiling MSIL to Native Code
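As a supplementary sketch (not part of the original article), the same metadata can be inspected from managed code through reflection. The token printed for Add will be a MethodDef token of the form 0x06xxxxxx, although the exact row number depends on the assembly.

using System;
using System.Reflection;

public static class MetadataDemo
{
    // Call this from Main(); it assumes the MyApp class from the example
    // above is compiled into the same assembly.
    public static void DumpAddMetadata()
    {
        MethodInfo add = typeof(MyApp).GetMethod("Add");

        // High byte 0x06 identifies the MethodDef table; the low bytes are the row number.
        Console.WriteLine("Metadata token: 0x{0:X8}", add.MetadataToken);
        Console.WriteLine("Method name:    {0}", add.Name);

        // The MSIL bytes the JIT compiler reads for this method body.
        byte[] il = add.GetMethodBody().GetILAsByteArray();
        Console.WriteLine("IL body size:   {0} bytes", il.Length);
    }
}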
http://msdn.microsoft.com/en-us/library/vstudio/65a37919(v=vs.71)
CC-MAIN-2014-15
refinedweb
578
54.12
Hi Guys, I wrote some code in Python and it works perfectly in one platform called pythontutor (to learn Python), but when I put the code in Dynamo, it does not work so well and shows me different results. Does someone know the problem?? pythontutor: Dynamo: The script:

from math import sqrt

Q = [[0.25,0.25,0.25,0.25,0.16,0.16],[0.25,0.25,0.25,0.25,0.19,0.16]]
D = [[21.6,21.6,21.6,17,17,1],[21.6,21.6,21.6,17,17,17]]
B = [17,21.6,27.8,35.2,44,53.4,66.6,97.8] #List Base.

def recalculate(a,b):
    V = 4000*a/(b*b*3.1415)
    Vaz = 14*sqrt((b/1000))
    if V < 3 and V < Vaz:
        return False
    else:
        return True

for i, list in enumerate(Q):
    for j, number in enumerate(list):
        a = recalculate(number, D[i][j])
        if a == False:
            print("OK")
        else:
            print("Recalculate")
            aux = 0
            while True:
                b = recalculate(number, B[aux])
                if b == False:
                    break
                else:
                    aux += 1
            D[i][j] = B[aux]

OUT = D
https://forum.dynamobim.com/t/same-code-python-with-a-different-result-in-dynamo-and-pythontutor/36088
CC-MAIN-2020-45
refinedweb
187
73.27
oleg at pobox.com wrote:
> Simon Marlow wrote:
>> Anyway, I just wanted to point out that nowadays we have the option of
>> using imprecise exceptions to report errors in lazy I/O.
>
> If getContents raises an imprecise exception, what do we do?
>
> return (map process (catch l (\e -> syslog e >> return [])))
>
> That of course won't work: l is a value rather than an effectful
> computation; besides, catch can't occur in the pure code to start
> with.

I think I agree with that. Resumable exceptions don't make any sense for pure code (I can certainly imagine implementing them though, and they make sense in the IO monad). But all is not lost: if an exception is raised during getContents for example, you still have the partial results: the list ends in an exception, and I can write a function that returns the non-exceptional portion (in IO, of course).

Cheers,
Simon
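A rough sketch of such a function, assuming the imprecise exception surfaces while the list's spine is being forced (this is an illustration, not code from the thread):

{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)

-- Return the prefix of a lazily produced list that can be forced
-- without hitting the exception hidden in its spine.
nonExceptionalPrefix :: [a] -> IO [a]
nonExceptionalPrefix xs = do
  r <- try (evaluate xs)
  case r of
    Left (_ :: SomeException) -> return []
    Right []                  -> return []
    Right (y:ys)              -> do rest <- nonExceptionalPrefix ys
                                    return (y : rest)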
http://www.haskell.org/pipermail/haskell-cafe/2007-March/023240.html
CC-MAIN-2013-48
refinedweb
153
67.08
GDI+ is supported in the CLR by the System.Drawing namespace and subordinate elements. GDI+ includes classes representing the DC, brushes, pens, fonts, and basic shape-drawing capabilities, as well as support for imaging and rendering graphics on devices in addition to Windows Forms. Support for drawing in Windows Forms is supported at the API level. GDI+ is a framework within the CLR that provides us with refined metaphors that make what we have always been able to do easier. The Graphics class is the metaphor that represents the API device context. Methods , fields, and properties in the Graphics class conceal working with handles and HRESULT return values. Generally, you have access to a Graphics object when you explicitly call CreateGraphics or as a member of the PaintEventArgs Paint event argument. Because the Graphics object is stateless, you should not cache a Graphics object; you should request a Graphics object each time you need to perform a graphics operation. Additionally, the Graphics object represents a valuable system resource; thus, following advice earlier in this book, we should call Dispose when we are done with any Graphics object. Tip Any object you are going to create and destroy should be wrapped in a resource protection block using the Try..Finally idiom introduced in VB .NET. As with any resource, if you want to ensure that the Dispose method is called, wrap the call to CreateGraphics and Dispose in a resource protection block. Classes in Visual Basic .NET support constructors instead of the Finalize event called internally. The availability of constructors allows additional idioms to be employed. One such idiom is to devise a factory method and make the constructor protected or private. The Graphics class uses this technique, with the end result being that you might only create instances via the factory method. The reason for employing a factory method is usually to ensure that a construction and initialization order occurs reliably. Graphics objects are associated by the handle of a device context (hDC) for a specific control. As with any other object, when you have an instance, you can invoke any of the instance methods or properties. To create an instance of a Graphics object, call Control. CreateGraphics where Control is the instance of the control whose DC, or canvas, you want to draw on. The following code demonstrates how to get an instance of a Graphics object for a Form: Dim G As Graphics = CreateGraphics() Note You can create Graphics objects from static methods of the Graphics class by passing the HWnd, or windows handle, using the Handle property. There are several other shared Graphics methods that allow you to create instances, but the constructor is Private. Unless there are exigent circumstances, you should use the Control. CreateGraphics method. Do not cache Graphics objects. Graphics objects are defined to be stateless. As a consequence of their statelessness, if you cache a Graphics object for a form and the form is resized, the cached Graphics object will not render the output correctly. Each time you need a Graphics object to paint a particular control, call CreateGraphics. You can use a single Graphics instance within a single method and called methods; when the method that created the Graphics object exits, call Dispose. Another way you get a Graphics object is when you write an event handler for a control's Paint event. The first argument in the Paint event is the Object parameter and the second is a PaintEventArgs object. 
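To make the resource-protection tip above concrete, here is a minimal sketch of the Try..Finally idiom around that call; the drawing statement is only a placeholder.

Dim G As Graphics = CreateGraphics()
Try
    ' Use the Graphics object while it is valid.
    G.DrawRectangle(Pens.Black, 10, 10, 100, 50)
Finally
    ' Always release the device context, even if drawing fails.
    G.Dispose()
End Try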
The parameter e contains a property that is an instance of the Graphics object for the control that raised the Paint event. Listing 17.1 demonstrates a shadowed text effect created in a form's Paint event. 1: Private Sub ShadowText(ByVal G As Graphics, _ 2: ByVal Text As String, ByVal X As Integer, _ 3: ByVal Y As Integer) 4: 5: G.DrawString(Text, _ 6: New Font("Times New Roman", 16, FontStyle.Bold), _ 7: Brushes.Silver, X - 3, Y - 3) 8: 9: G.DrawString(Text, _ 10: New Font("Times New Roman", 16, FontStyle.Bold), _ 11: Brushes.Black, X, Y) 12: 13: End Sub 14: 15: Private Sub Form1_Paint(ByVal sender As Object, _ 16: ByVal e As System.Windows.Forms.PaintEventArgs) _ 17: Handles MyBase.Paint 18: 19: ShadowText(e.Graphics, "GOTDOTNET", 11, 11) 20: End Sub Line 19 calls a well-named method, passing basic information to create a shadowed-text effect. The Graphics.DrawString method is called on lines 5 and 9. The first time the text is drawn, the X and Y values are offset and the text is drawn using a Silver brush. The foreground text is drawn at the X and Y positions using a darker , contrasting foreground brush color . You could create custom text in VB6. There are, however, some distinct differences between VB6 and VB .NET. In VB6 you probably had to import API methods, like WriteText and GetDC. Also, in VB6 many controls did not raise a Paint event. With Visual Basic .NET, more controls implement the Paint method and raise a Paint event, enabling you to perform custom painting on a wider variety of Windows Forms controls. Using variations of offsetting values, you can create embossed , shadowed, engraved, or outlined text with the code demonstrated in the listing. The System.Drawing namespace contains fundamental classes for drawing. GDI+ is a stateless implementation. The result is that you will need to pass in instances of fonts, colors, brushes, pens, and shapes depending on the needs of the class and methods each time you invoke a drawing operation. There are collections, such as Brushes, that contain dozens of properties that return predefined items (such as brushes), making the task of acquiring a brush (color, pen, and so on) easier. The Color class implements about 50 Shared properties that return a specific color value. (The list is extensive enough to include colors like AliceBlue.) Use the class and refer to the color property by name to get an instance of that color. For example, Color.AliceBlue returns an instance of a Color object that is the color AliceBlue. (I wonder what Color.AliceCooper would return?) The Color class also allows you to manually initialize a Color object and define the color by specifying the A, R, G, and B values. R, G, and B are the red, green, and blue percentages, and A is the alpha component of the color. (The alpha part of color has to do with the transparency of a color relative to the background.) Additionally, you can create a Color by calling the shared methods FromArgb or FromName. For example, BackColor = Color.FromName("PapayaWhip") will turn the background color of a form to a salmony-beige color. If you are comfortable with or want to experiment with RGB colorsor know a color by nameyou can get Color objects using these two shared methods. The Brushes class is similar to the Color class in the members it contains. Brushes contains shared properties defined by the color of the brush and has properties with the same names as the color properties. Keep in mind that Brushes returns an instance of a Brush object, not a color. 
It would have been consistent if there were simply a Brush class with the shared properties that returned brushes, but if you need an instance of a custom Brush, you have to construct an instance of the Brush class, not Brushes. (There is no Colors equivalent.) A Brush is used to describe how the interiors of shapes are filled, including the color component. (See Listing 17.1 for an example of how to use the Brushes class in context.) The Pens class is analogous to the Brushes class. There are shared properties named by the color of the pen that you can access to quickly get a Pen object. Pens are used to define the color of lines and curves. If you need to manually create an instance of a Pen, you will need to construct a Pen, rather than a Pens object. (See the section on the Rectangle class for an example of using the Pens class.) Using the simple primitives, you can provide the appropriate arguments necessary to satisfy the stateless behavior of GDI+. Listing 17.2 demonstrates using the Pens, Graphics, and RectangleF classes to perform some basic architectural drawing. 1: Private Sub DrawArc(ByVal G As Graphics) 2: Const Max As Integer = 36 3: Dim I As Integer 4: For I = 1 To Max 5: G.DrawArc(Pens.Red, RandomRect(Max), 0, 180) 6: Next 7: End Sub 8: 9: Private Function RandomRect(_ 10: Optional ByVal Max As Integer = 36) As RectangleF 11: 12: Static Count As Integer = 0 13: If (Count >= Max) Then Count = 0 14: Count += 1 15: 16: Return New RectangleF(_ 17: (ClientRectangle.Width - (Count * 10)) / 2, _ 18: 10, Count * 10, 300) 19: 20: End Function The DrawArc procedure beginning on line 1 is passed an instance of a Graphics object and draws 36 arcs using the Red pen property from the Pens class. The last two parameters of DrawArc express the start angle and sweep. The parameters 0 and 180 draw the bottom half of an ellipse. The RectangleF object returned by RandomRect defines a rectangle that acts as the constraining boundary of the ellipse. (See the upcoming sections on DrawArc and Rectangle for some additional information.) Using the primitives for pens, colors, brushes, and fonts, GDI+ provides methods for drawing shapes, text, and graphics, like icons. There seems to be little comparative difference between the ValueType structures RectangleF and Rectangle. Some methods seem to require the RectangleF and others the Rectangle. Rectangles are structures that define a rectangular area by tracking the X and Y ordered pair and Width and Height properties. The difference between structures like Rectangle and RectangleF is that the suffix indicates that the fields are floating-point rather than integer fields. There are overloaded versions of methods that take both integer and floating-point versions of similar structures. Why this difference exists is presumed to be related to the range of possible values supported by each type. Recall that you do not have to use the New keyword when declaring a ValueType variable; however, if you want to call a parameterized constructor for a ValueType, declare the variable with the New keyword. For example, the following line declares a variable R that is a ready-to-use Rectangle: Dim R As Rectangle If you want to initialize the Rectangle with the X, Y and Width, Height arguments, use the parameterized constructor version with the New keyword: Dim R As New Rectangle(1,1, 10,10) The Cartesian quadrant in Windows Forms is quadrant II, where the origin 0, 0 is the upper-left corner of the Form (or control). 
If you need more advanced region management than provided by the Rectangle or RectangleF structures, use the Region class. Region is covered in the section titled, "Advanced Drawing." (For additional code samples demonstrating Region, refer to Chapter 15, "Using Windows Forms.") Point and PointF are ValueTypes that represent an X,Y ordered-pair in the Cartesian plane. Points are used to represent a position in a two-dimensional drawing surface, a Windows Form. The Graphics.DrawArc method is used to draw arcs (see Figure 17.1), or part or all of an ellipse. If you want a closed arc, call DrawEllipse. If you want an open arc, use DrawArc. There are four versions of DrawArc. Essentially an arc requires a Pen, a bounding Rectangle, a start angle, and the sweep of the arc. A start angle of 0 is the right-horizontal from the center of the arc, and the sweep proceeds clockwise. For example, a start angle of 0 and a sweep of 360 would draw a closed arc. Figure 17.1 illustrates the 0-angle and direction and relative position of a 90-degree sweep. The Graphics.DrawEllipse method draws a closed arc, so it does not need the angle and sweep parameters. The Pen argument describes the color of the line used to draw the ellipse, and the Rectangle parameter defines the bounding region. Calling Graphics.DrawEllipse(Pens.Blue, New RectangleF(10, 10, 100, 50) yields the same result as Graphics.Draw(Pens.Blue, New RectangleF(10, 10, 100, 50), 0, 360). Replace the call to DrawArc on line 5 of Listing 17.2, losing the angle and sweep arguments to create the visual effect of a 3D sphere. Graphics.DrawRectangle has three overloaded versions that take a Pen and a Rectangle object or the four coordinates of a rectangle. Polygons are many-sided shapes whose edges are defined by connecting points. Graphics.DrawPolygon takes a Pen argument and an array of points. The first point is the last point; consequently, you do not have to repeat the first point to close the polygon. The number of sides in the polygon will be equal to the number of points, although if you define only two points, the polygon will appear to be a line. Listing 17.3 demonstrates an array of five points that roughly define a pentagon. 1: Private Sub DrawPolygon(ByVal G As Graphics) 2: 3: Dim Points() As Point = {New Point(10, 30), _ 4: New Point(100, 10), New Point(150, 75), _ 5: New Point(100, 150), New Point(10, 130)} 6: 7: G.DrawPolygon(Pens.Purple, Points) 8: 9: End Sub The Point type is a ValueType; however, to use the overloaded, parameterized constructor of a ValueType, we need to use the New keyword. Points defines an array of five point structures. DrawPolygon, on line 7, draws the polygon defined by the array of points. DrawPolygon closes the polygon defined by the points, mitigating the need for specifying a last point matching the first point. The Graphics.DrawPie method works similarly to the DrawArc method. DrawPie takes a bounding rectangle, a start angle, and a sweep. The distinction between DrawArc and DrawPie is that the latter closes the arc with line segments. Listing 17.4 demonstrates the DrawPie method, using a TextureBrush, to fill a wedge of the pie (see Figure 17.2). 1: Private Sub DrawPie(ByVal G As Graphics) 2: Dim R As Rectangle = New Rectangle(50, 50, 125, 150) 3: Dim B As New TextureBrush(Icon.ToBitmap) 4: 5: G.FillPie(B, R, 0, 90) 6: G.DrawPie(Pens.Black, R, 0, 270) 7: End Sub Line 2 calls the parameterized constructor for the Rectangle ValueType, so we must invoke the constructor using New. 
Line 3 creates a subclass of BrushTextureBrushusing the form's Icon to create the texture. (TextureBrush is overloaded and takes a Bitmap; to satisfy the overloaded TextureBrush constructor, we call Icon.ToBitmap to convert the icon to a bitmap.) Line 5 creates the wedge first using the texture brush and line 6 draws the pie, outlining the wedges. As demonstrated in this chapter's examples so far, rendering graphics-based images is much easier in Visual Basic .NET. You can also use some of the techniques demonstrated thus far to create some neat string-drawing effects. The basic Graphics.DrawString method takes a string, font, brush, and an ordered-pair indicating the starting left-top position of the text. The color of the string is determined by the font argument, and the fill characteristics are determined by the brush. For example, if you pass a textured brush, you can get some unique-looking strings. Listing 17.5 demonstrates using a textured brush that uses the JPEG image of this book's jacket to create some fiery text. 1: Private Sub DrawText(ByVal G As Graphics) 2: Dim Image As New Bitmap("..\ ..\ Images\ VBNET.jpg") 3: 4: Dim ATextureBrush As New TextureBrush(Image) 5: Dim AFont As New Font("Garamond", 50, _ 6: FontStyle.Bold, GraphicsUnit.Pixel, 0) 7: 8: G.DrawString("Unleashed!", AFont, ATextureBrush, 10, 10) 9: End Sub Line 2 creates a Bitmap object from a JPEG file, which provides for high levels of compression. (You might want to double-check the location and existence of the image file in production code.) Line 4 instantiates a TextureBrush from the JPEG image. Line 5 instantiates a Garamond bold font 50 pixels high at the ordered-pair position X = 10 and Y = 10. Line 8 invokes the DrawString method passing a literal string message, the font and brush objects, and the X,Y offset. You could use an existing Brush from the Brushes class and the form's Font, resulting in a simpler appearance that requires fewer lines of code: G.DrawString("Unleashed!", 50, Font, Brushes.Orange, 10, 10). The previous statement uses an existing font and brush and would output the text with a single statement.
https://flylib.com/books/en/1.488.1.171/1/
CC-MAIN-2020-29
refinedweb
2,796
63.59
Opened 8 years ago Closed 8 years ago #2122 closed defect (fixed) Race condition causes error describing step Description Not sure on the details, but it seems that sometimes when the waterfall page is refreshed it causes the following exception in twistd.log (and one of the commands on the waterfall just shows '???'). It seems to be some kind of race condition, since most of the time the same command will display normally and not generate an exception. As suggested in the traceback, the command uses WithProperties?... I don't know how those are actually handled, but at a guess something is meant to parse it into a string before describe is called? Let me know if you need any further info. 2011-10-05 14:06:07+1300 [-] Error describing step Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/buildbot-0.8.5-py2.6.egg/buildbot/process/build.py", line 389, in _stepDone return self.startNextStep() File "/usr/local/lib/python2.6/dist-packages/buildbot-0.8.5-py2.6.egg/buildbot/process/build.py", line 378, in startNextStep d = defer.maybeDeferred(s.startStep, self.remote) File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/defer.py", line 133, in maybeDeferred result = f(*args, **kw) File "/usr/local/lib/python2.6/dist-packages/buildbot-0.8.5-py2.6.egg/buildbot/process/buildstep.py", line 514, in startStep self.step_status.setText(self.describe(False)) --- <exception caught here> --- File "/usr/local/lib/python2.6/dist-packages/buildbot-0.8.5-py2.6.egg/buildbot/steps/shell.py", line 173, in describe if len(words) < 1: exceptions.AttributeError: WithProperties instance has no attribute '__len__' Change History (7) comment:1 Changed 8 years ago by dustin - Milestone changed from undecided to 0.8.6 comment:2 Changed 8 years ago by seb_kuzminsky I get this same traceback. It seems to happen when I run a SetProperty?() buildstep that has a WithProperties?() for a command. I'm on 0.8.5, on Ubuntu Precise 12.04 Alpha, fwiw. comment:3 follow-up: ↓ 4 Changed 8 years ago by dustin Does that WithProperties? reference something that the SetProperty? should set? comment:4 in reply to: ↑ 3 Changed 8 years ago by seb_kuzminsky Does that WithProperties? reference something that the SetProperty? should set? I have two SetProperty?() calls that have trouble; one tries to use the property it sets and one doesn't. Here's the one that uses its own property: set_property_branch = shell.SetProperty( command = WithProperties('if [ ! -z "%(branch)s" ]; then echo %(branch)s; else echo master; fi'), property = 'branch', haltOnFailure = 1 ) And here's the one that doesn't: def get_version_extractor(rc, stdout, stderr): s = stdout.strip() if s == '': return None # how do you report error from an extract_fn? return { 'version': s } set_property_version = shell.SetProperty( command = WithProperties("scripts/get-version-from-git %(branch)s"), extract_fn = get_version_extractor, haltOnFailure = 1 ) comment:5 Changed 8 years ago by seb_kuzminsky Here is my master config, and two relevant twistd.logs: At 2012-01-13 09:00:47-0700 in twistd.log.2, there's a traceback from the first SetProperty?(). Then at 2012-01-13 09:01:44-0700 in the same log there's a traceback from the second SetProperty?(). comment:6 Changed 8 years ago by dustin Sorry, my question in comment 3 was bogus -- it doesn't matter. I think that the fix for this is to note when the step is rendered, and then use self.build.render in describe if the command hasn't been rendered yet. 
This may result in incorrect descriptions while the build is still running, but I think that's an acceptable problem -- what else could we do? comment:7 Changed 8 years ago by tom.prince - Resolution set to fixed - Status changed from new to closed I'm guessing that this is happening before the step is run, so its properties haven't been rendered yet. It's pretty tricky to determine whether self.command needs to be rendered (since the renderables may be buried in a list). This is relatively harmless, as you've seen - the waterfall renders, albiet with a traceback in the logfile. Still, I bet we can find a fix -- describe() should be resilient.
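A hedged sketch of what a more resilient describe() could look like (the names follow the ShellCommand code quoted in the traceback, but this is an illustration rather than the committed fix):

def describe(self, done=False):
    command = self.command
    try:
        # plain strings and lists behave as before
        words = command.split() if isinstance(command, str) else list(command)
    except TypeError:
        # still an unrendered renderable such as WithProperties, so fall back
        # to a placeholder instead of raising AttributeError
        return ["???"]
    if not words:
        return ["???"]
    return ["'%s'" % " ".join(words[:2])]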
http://trac.buildbot.net/ticket/2122
CC-MAIN-2019-39
refinedweb
717
59.8
In this article I present a namespace of managed types that provide a wrapper to some of the standard functionalities exported by ZLib. ZLib is a well known free, general-purpose lossless data-compression library for use on any operating system (1). Visual C++ allows you to produce managed and unmanaged code into the same assembly. The following example demonstrates the use of mixing unmanaged (native) code with managed one. This technique is very useful when building managed type that are wrappers around unmanaged ones, allowing you to migrate code and still maintain good efficiency. Another good point about mixing code is that, as in this case, bug fix or improvements on native code layer (especially when provided by other vendors) are so easy to handle that in most of the cases require only a rebuild of the project. The price to pay is that VC++ .NET code cannot be made verifiably type-safe (try the peverify.exe tool). Another quirk about mixing code came from initialization of the static structures of the standard libraries that are often linked with the native modules, such as CRT ATL and MFC. The solution to this problem comes from a MSDN article (2) but it is almost twisted. I have heard that this problem will be addressed at the next release of the .NET Framework. Waiting for next release of .NET framework, now it is best to not use static structure in these library or even better to not use the library at all (more difficult the last one). One note on calling unmanaged code from managed ones. The ZStream class uses an internal (managed) buffer of Bytes to reproduce the stream behavior. To use the managed buffer with ZLib library functions, we must provide to the function a pinned pointer to the managed heap. This prevents the managed buffer from being moved by the garbage collector. Pinning a part of a managed object has the effect of pinning the entire object. So if any element of an array is pinned, then the whole array is also pinned. This led us to write the following code: BYTE __pin * pBuffer = &buffer[0]; BYTE __pin * pSource = &source[0]; int nRes = compress2(pBuffer, & length, pSource, sourceLen, level); The managed object is unpinned once the pinning pointer goes out of scope or when the pointer is set to zero. The DevelopDotnet.Compression namespace enclose many types dedicated to compression task. To use one of these types, just reference the component in the project and insert the following declaration at the beginning of the source file: DevelopDotnet.Compression // // Compression types // using DevelopDotNet.Compression; ZStream, encapsulate the compressed stream functionalities. It derives from the System.Stream .NET Framework class and it can be used to compress streams as well as to decompress compressed streams. The class constructors take the base Stream to manage, and eventually other parameters to determinate if the ZStream can Read (decompress) data, or Write (compress) data. Note that ZStream is a sequential Stream so it does not support Seek. ZStream System.Stream // Read only Stream (it can only decompress) ZStream(Stream stream); // Read or Write Stream depending on the boolean parameter write // If write = true this stream can only Write ZStream(Stream stream, bool write); // Write only Stream (it can only compress) ZStream(Stream stream, CompressionLevel level); ZStream(Stream stream, CompressionLevel level, CompressionStrategy strategy); The following lines of code, represent the standard way to compress binary data into a file. 
// // Serializing dataset object // FibonacciDataSet ds = (FibonacciDataSet) GenerateFibonacciData(); fs = new FileStream(sFileName, FileMode.Create); ZStream compressor = new ZStream(fs, true); BinaryFormatter bf = new BinaryFormatter(); bf.Serialize(compressor, ds); To regenerate the dataset object from a compressed stream, open the compressed data file then attach it to a ZStream object and use Deserialize method of BinaryFormatter class. Deserialize BinaryFormatter // // Deserializing data // fs = new FileStream(sFileName, FileMode.Open); ZStream decompressor = new ZStream(fs); BinaryFormatter bf = new BinaryFormatter(); FibonacciDataSet ds = (FibonacciDataSet) bf.Deserialize(decompressor); dataGrid1.DataSource = ds; ZCompressor export two static methods: Compress and Un compress to quick compress and decompress a buffer of Bytes. Beware that while Compress does require as much memory as that occupied by the data to compress, Un compress may require very large memory allocation depending on the origin of the compressed data. This because decompression algorithm allocates buffers to accommodate the decompressed data on the fly while decompression goes on. ZCompressor string sData = txtData.Text; try { Encoding encoder = Encoding.UTF7; byte [] compressedData = ZCompressor.Compress(encoder.GetBytes(sData), CompressionLevel.BestCompression); txtData.Text = encoder.GetString(compressedData); } catch(ZException err) { txtData.Text = err.Message; } The DevelopDotnet.Compression namespace also export types suited to do checksum. These are Adler32 and CRC32, and one interface IChecksum. The Adler32 and CRC32 managed types, both implements the IChecksum interface. Writing different concrete types that implements ICkecsum interface will do the trick of polymorphism. That is, both Adler32 and CRC32 do perform checksum on data buffer, but how this sum is performed it depends only by the implementation of the concrete types as long as they are all controlled through the same interface. DevelopDotnet.Compression Adler32 CRC32 IChecksum ICkecsum public __gc __interface IChecksum { __property unsigned long get_Checksum(); unsigned long Update(unsigned char buffer __gc[]); unsigned long Update(unsigned char buffer __gc[], int offset, int count); }; This let us easily configure one application to use checksum facilities by writing code based on the generic access through the IChecksum interface neglecting how the checksum is performed. Adler32 crc = new Adler32(); DoChecksum(txtFile.Text, crc); lblAdler.Text = crc.Checksum.ToString("X"); CRC32 crc = new CRC32(); DoChecksum(txtFile.Text, crc); lblCrc.Text = crc.Checksum.ToString("X"); // ... private long DoChecksum(string sFile, IChecksum chk) { FileStream fs = null; long checksum = 0; try { fs = new FileStream(sFile, FileMode.Open); Byte [] data = new Byte[16384]; while(fs.Read(data, 0, 16384) > 0) { checksum = chk.Update(data); } } catch(Exception err) { MessageBox.Show(err.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } finally { if(null != fs) fs.Close(); } return checksum; } ZException extends the System.Exception class to provide Compression namespace with some mechanism to translate ZLib errors trough the managed world. Every type exported by Compression namespace that use directly the ZLib library should throw a ZException to let know the application that something has "crashed" deep inner the library. ZException System.Exception Finally I so proud to introduce to you, the last type, Ladies and Gents... ZLib. 
This little type has only two public static properties. The first is Version that returns the version string of the ZLib library stuck into the managed code, the last is CompiledFlags. CompiledFlags is an enumeration of type CompileOptions that has the Flags attribute and it represent the options chosen at compile time for the ZLib library source code. ZLib Starting from release 1.1.0.0 the library support Zip archive files, according to the PKWARE standard as stated in the zip file format specification document (3). Because ZLib, actually only the deflated method of compression is supported, anyway this method is the most used for zip archives. The Compression library exposes the following managed types related to zip archives: and ZipStoreFilePath. The ZipArchive class can raise notification of errors occurred during archive operation trough the ZipFail event. Client that register this even can eventually decide whether stop the operation or give it a second chance or even hide the error to the user. See the ZTest application to get a picture of what can be done with the Fail event. The Progress event is notified to the user during the normal operation flow. This event carries the information about the current operation state and the completion percent as well as a flag used to abort the operation. ZipStoreFilePath Follows some interesting code snipped that shows the standard use of some task that the library can perform on zip archives: string archiveFile = "test.zip"; using(ZipArchive archive = new ZipArchive(archiveFile, FileMode.OpenOrCreate, FileAccess.ReadWrite)) { // The simplest method to add file in archive. archive.Add("file.txt"); bool recursive = true; // Adding all the files contained in a folder. archive.Add(@"C:\Temp", recursive); // Another method to add a files. ZipEntry entry = new ZipEntry(@"C:\Temp"); archive.Entries.Add(entry, recursive); } string archiveFile = "test.zip"; using(ZipArchive archive = new ZipArchive(archiveFile, FileMode.Open, FileAccess.ReadWrite)) { // Remove from the zip archive all the entries contained in the given folder archive.Remove("Temp", true); } string archiveFile = "test.zip"; using(ZipArchive archive = new ZipArchive(archiveFile, FileMode.Open, FileAccess.Read)) { archive.Fail += new ZipFailEventHandler(OnZipFail); archive.Progress += new ZipProgressEventHandler(OnZipProgress); if(archive.Test()) { Console.WriteLine("{0} test successfully completed.", archiveFile); } } string archiveFile = "test.zip"; using(ZipArchive archive = new ZipArchive(archiveFile, FileMode.Open, FileAccess.Read)) { // Set some flag to perform extraction archive.OverwriteExisting = true; archive.BuildTree = true; // Get error notification. archive.Fail += new ZipFailEventHandler(OnZipFail); // Go with extract archive.ExtractTo(@"C:\Temp"); } The ZipArchive class use .NET seekable stream to access the Zip archives. This gives us the possibility to create on the fly memory zip archives and speed up the things when multiple operations must be performed sequentially. 
MemoryStream archiveStream = new MemoryStream(); ZipArchive archive = new ZipArchive(archiveStream); try { archive.Add("file1.txt"); archive.Add("file2.pdf"); archive.Comment = "Memory Archive Test"; } catch(ZipException) { MessageBox.Show("Something goes wrong while adding files to the archive."); } // // Copy memory stream to file // using(FileStream file = new FileStream("memory.zip", FileMode.Create, FileAccess.Write)) { StreamUtilities.StreamCopy(archiveStream, 0, file); } There is always room to do better, so what about letting the library manage SFX archives, passwords, and other stuff? The library is freeware and comes with the source code. It is actually under development at the DevelopDotNet site (4). There you will find a forum about the library and the possibility to report bugs or request changes. Ok, by the time I had the power of ZStream, I wanted to have something to squeeze for some testing. Searching up and down I came across an old school book where I found a way to get a lot of numbers: the Fibonacci series. I hope you don't get bored by the following little story about Fibonacci's series. In 1202, Leonardo Pisano, also called Leonardo Fibonacci, published a great book of mathematics: Liber Abaci. In this book Fibonacci discusses the famous rabbit population problem; here is Prof. Sigler's (5) translation of Fibonacci's original statement:
https://www.codeproject.com/articles/7636/managed-c-wrapper-for-zlib?fid=73012&df=10000&mpp=25&sort=position&spc=none&select=1212533&tid=869540
CC-MAIN-2017-04
refinedweb
2,072
55.03
React Dazzle Dashboards made easy in React JS. Features - Grid based layout - Add/Remove widgets - Drag and drop widget re-ordering - UI framework agnostic - Simple yet flexible - Well documented (It’s a feature! Don’t you think?) Installation $ npm install react-dazzle --save Dazzle me Here is a demo. Widgets shows fake data though but they look so damn cool (At least for me). Repository of the demo is available here. Usage import React, { Component } from 'react'; import Dashboard from 'react-dazzle'; // Your widget. Just another react component. import CounterWidget from './widgets/CounterWidget'; // Default styles. import 'react-dazzl/lib/style/style.css'; class App extends Component { constructor() { this.state = { widgets: { WordCounter: { type: CounterWidget, title: 'Counter widget', } }, layout: { rows: [{ columns: [{ className: 'col-md-12', widgets: [{key: 'WordCounter'}], }], }], } }; } render() { return <Dashboard widgets={this.state.widgets} layout={this.state.layout} /> } } API Providing widgets widgets prop of the dashboard component takes an object. A sample widgets object would look like below. This object holds all the widgets that could be used in the dashboard. { HelloWorldWidget: { type: HelloWorld, title: 'Hello World Title' }, AnotherWidget: { type: AnotherWidget, title: 'Another Widget Title' } } typeproperty – Should be a React component function or class. titleproperty – Title of the widget that should be displayed on top mode Setting editable prop to true will make the dashboard editable. Add new widget When user tries to add a new widget, the onAdd callback will be called. More info here on how to handle widget addition. Remove - Improve drag and drop experience (#1) License MIT © Raathigesh » React Dazzle – A dashboard library for React JS 评论 抢沙发
http://www.shellsec.com/news/13751.html
CC-MAIN-2018-05
refinedweb
261
50.84
SOAP web services are based on HTTP requests and have wide support in Java and the .NET world. This way, we can write an Excel Addin in C# and call Java objects from there. If you want to create a webservice in Java, it is pretty simple. First, we construct the object, which we want to access from C#. Note that the access from C# will work using the Windows Communication Foundation (WCF), which wraps the SOAP access and makes the whole setup easy. The following example is a Java class, which we want to access from C#: package com.thetaris.MyWebService; import java.util.HashMap; import javax.jws.WebService; import javax.jws.soap.SOAPBinding; import javax.jws.soap.SOAPBinding.Style; @WebService @SOAPBinding(style=Style.RPC) public class Calculator { public long addValues(int val1, int val2) { return val1 + val2; } public double[] getList(int numberElements) { double[] res = new double[numberElements]; for (int i = 0; i < numberElements; i++) { res[i] = i*1.2; } return res; } } Then, we need a Java server, which can publish an object of the class above: package com.thetaris.MyWebService; import javax.xml.ws.Endpoint; public class MyWebService { public static void main (String args[]) { Calculator server = new Calculator(); Endpoint endpoint = Endpoint.publish("", server); } } After starting the Java server, you can test the function using your browser at "" and you should be able to see the Webservice and a link to the Webservice Description WSDL. Now, we can access this Webservice from C#. First, we need to create a new project in Visual Studio. Within this project, we run the SvcUtil command in order to generate the C# Stubs required. This can look like: "c:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\SvcUtil.exe" /language:cs /out:generatedProxy.cs /config:app.config This creates the files "generatedProxy.cs" and "app.config", which you have to add to your Visual Studio project. Furthermore, also add a reference to "System.ServiceModel" to your Visual Studio project. Then, the following code will access the Webservice: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; namespace WebServiceClientCSharp { class Program { static void Main(string[] args) { CalculatorClient client = new CalculatorClient(); Console.WriteLine("Result of 3 + 7 is " + client.addValues(3, 7)); for (int j = 0; j < 100; j++) { double?[] res = client.getList(10); for (int i = 0; i < res.Length; i++) { Console.WriteLine("Element is " + res[i]); } } Console.WriteLine("Done."); Console.ReadLine(); } } } Now, we can use NetOffice, VSTO (as Excel macro) or Excel DNA (as Excel UDF) for integrating C# and Excel. Then, we are done. We see that this is actually very simple. Different data types are possible, too. Some data type mappings are given here. Update 2011-12-14: Instead of calling the "SvcUtil.exe" for generating the C# stubs, you can also use "c:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\wsdl.exe" /language:cs /out:generatedProxy.cs
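To sketch the Excel DNA route mentioned above: the following outline assumes the CalculatorClient proxy generated by SvcUtil plus the ExcelDna.Integration package, so treat it as a starting point rather than a finished add-in.

using ExcelDna.Integration;

public static class CalculatorFunctions
{
    [ExcelFunction(Description = "Adds two integers by calling the Java SOAP service")]
    public static double AddOnJavaServer(int a, int b)
    {
        CalculatorClient client = new CalculatorClient();
        try
        {
            return client.addValues(a, b);
        }
        finally
        {
            client.Close();
        }
    }
}

After building and loading the add-in, entering =AddOnJavaServer(3, 7) in a cell should show 10, the same result as the console client above.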
https://computeraidedfinance.com/2011/12/05/how-do-i-connect-java-with-ms-excel-try-webservices-with-soap/
CC-MAIN-2019-22
refinedweb
475
53.88
This tutorial series is also available as an online video course. You can watch it on YouTube or enroll on Udemy. Or you just keep on reading. Enjoy! :) Introduction Blazor WebAssembly is turning the web development world upside down. With Blazor you can keep coding every part of your web application - meaning front end and back end - with the programming language and the framework you love - C# and .NET. No need for JavaScript anymore and you can even use the same classes and methods you write for the server as well as for the client. In this tutorial series, we will dive right into the code by first having a look at the standard example project of Blazor WebAssembly and then we already build the main project of this tutorial series, which is a classic online browser game, where users can create an army of fighters and send them into battle against other users. Together with some customization options and climbing the leaderboard, this application will teach you how to use Blazor WebAssembly with Razor components in a playful way. We will have a look at data binding and event handling, communication between components, forms with their built-in components and validation options, how to use views only authorized users can see, how to make calls to a web service, and much more. Additionally, you will learn how to build the back end of the browser game with a Web API and Entity Framework to store all the data in a SQLite database. By the end of this tutorial series, you will have what it takes to call yourself a full stack Blazor web developer. With your new skills, you are ready to conquer any upcoming .NET web development project you want to build yourself or any project that is requested by a recruiter. Are you ready? Let's start! Tools The only tools you need in the beginning are the .NET SDK and Visual Studio. Depending on when you’re watching this course, you can choose to download the .NET Core 3.1 SDK, the preview or release candidate of .NET 5 - which will combine .NET Core with the older .NET framework, or .NET 5 has already been released, then you’re safe to choose this SDK. This course will use .NET Core 3.1 because, by the time of writing these lines, this version has been the latest stable release. So to be absolutely safe, please choose this version as well - but there shouldn’t be many differences between .NET Core and .NET 5 if any. Regarding Visual Studio, you can use the Community Edition of Visual Studio 2019. It’s totally free and provides all the functions we need. If you decide to use a preview version of a .NET SDK, you probably also need a preview of Visual Studio. Just keep that in mind. Otherwise, the latest released version of Visual Studio is perfect. If you already want to get the necessary tools for the back end, you can download and install Postman, which we will use to test our web service calls later on. Sometimes it’s just nice to test a call to the Web API before you build the corresponding front end. Additionally, you could already download the DB Browser for SQLite, which enables you to, well, browse through a SQLite database. But, again, these tools are used later in this tutorial series. For now, please download and install the .NET SDK of your choice and Visual Studio 2019 Community Edition. Git Repository on GitHub One last thing before we start with creating our Blazor WebAssembly project. You can get the complete code you’ll see during this tutorial series on GitHub. 
Here’s the link to the repository: As you can see, the commits in this repository match the structure of this tutorial series. So if you’re struggling with your code, please have a look at this repository. I hope it helps to find a solution. Or just grab the code and build your own browser game with it. Whatever you like. Anyways, if you still have any problems with the code, you can also reach out to me, of course. Now let’s create the project. Jumpstart Create an ASP.NET Core Hosted Blazor WebAssembly Project Alright, when you start Visual Studio, choose Create a new project first. From the templates, we choose a Blazor App. If you can’t find it, just use the search bar on top and look for blazor for instance. Let’s give this thing a suitable name like BlazorBattles for instance and click Create. Now it’s getting interesting. We have a bunch of options available here. First the .NET SDK version. I’d like to use .NET Core 3.1. Then we can choose between a Blazor Server App and a Blazor WebAssembly App. Well, according to the title of this tutorial series, we should choose Blazor WebAssembly. If you don’t know the difference, just real quick: A Blazor Server App runs completely on the server. There are no actual web service calls like you might be used to with a typical web application. The description says that user interactions are handled over a SignalR connection, meaning over web sockets. So the user doesn’t really download the client app to the browser, which is great. But the big disadvantage is that the server has to handle everything. So if your app has a lot of users, the server has to manage all the interactions of every user, all current states, and so on. With a Blazor WebAssembly application, the server has a lot less to do, because the client handles the interactions which might be faster for the user - speaking of user experience and offline functionality, for instance - and which is the typical structure of a modern web application. Maybe you have already built a web application with a .NET back end and a JavaScript framework like Angular, React, or VueJS for the front end. These frameworks handle user interactions as well and make web service calls to the back end. With Blazor WebAssembly it is the same thing, but you don’t have to write JavaScript. You can keep writing C# code and even use the same classes for the client and the server. I just love that fact. So, long story short, we choose the Blazor WebAssembly App. We don’t need authentication for now. We will implement authentication later by ourselves, which is a great way to learn. Configure for HTTPS is fine and then comes an important checkbox, we select ASP.NET Core hosted. With that selection, we already get a server project and a shared project additionally to our client project. We’ll have a look at these projects in the next chapter. For now, it’s just important to know that this checkbox provides a solution where we can already make use of a Web API. So, a full-stack web application in one solution with C# and .NET only. That’s it. At the bottom right you can double-check the used .NET (Core) version and then click Create. And there’s our new solution. Great! Let’s open the Solution Explorer on the right and then see what has been created for us. Solution Overview So, in the Solution Explorer of Visual Studio, you see three projects, Client, Server, and Shared. The Client project represents the front end. Here’s where all the Blazor WebAssembly magic will happen. 
The Server project will be the home of the Web API and Entity Framework. And the Shared project will be used to share classes between the client and server projects. This means, building a model once and using it for both the client and the server. You can already see the WeatherForecast model here for instance, but we’ll talk about the example application in the next chapter. Let’s have a look at the client project first. The Program.cs file with the Main() method is the starting point. We will mainly use this method to register new services we write by ourselves or services that will be added by new packages. namespace BlazorBattles.Client {(); } } } You can already see that something is happening here with the HttpClient class, and above that line, a root component is added, which would be the App component we can find in the App.razor file. <Router AppAssembly="@typeof(Program).Assembly"> <Found Context="routeData"> <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" /> </Found> <NotFound> <LayoutView Layout="@typeof(MainLayout)"> <p>Sorry, there's nothing at this address.</p> </LayoutView> </NotFound> </Router> A component in Blazor is a razor file. That's why it's also called a Razor component. So the App component is the root component where you actually see the use of further components like the Router, the Found and NotFound component, and the RouteView for instance. The Router component that wraps the other ones, in this case, decides if a route that has been entered in the address bar of the browser is correct or not. If it is correct, the Found component will be used to display the content of the entered route, if not, the NotFound component will be used. Both components, in turn, use further components, a RouteView and a LayoutView. And these two make use of the MainLayout. But let’s stick to the LayoutView first. It uses the MainLayout, but the content can already be seen here. It’s a simple text wrapped by a standard HTML paragraph tag. <LayoutView Layout="@typeof(MainLayout)"> <p>Sorry, there's nothing at this address.</p> </LayoutView> The RouteView though uses the MainLayout but with the content of the actual page, the user wants to see by entering a particular route. And this page can be found in the Pages folder. We’ll get to that in a second. Let’s have a quick look at the _Imports.razor file. It’s quite simple, you will find global using directives here. That’s it. If you don’t want to add a reference in a single component, add it here instead. Battles.Client @using BlazorBattles.Client.Shared Okay, next, let’s have a look at the MainLayout in the Shared folder. @inherits LayoutComponentBase <div class="sidebar"> <NavMenu /> </div> <div class="main"> <div class="top-row px-4"> <a href="" target="_blank" class="ml-md-auto">About</a> </div> <div class="content px-4"> @Body </div> </div> Looks like standard HTML at first glance, right? Well, the only difference is that it inherits from the LayoutComponentBase class. And when you look at this class, you can see that it has a property called Body of type RenderFragment which is used in the MainLayout with @Body. And that’s where the pages will be rendered. namespace Microsoft.AspNetCore.Components { // // Summary: // Optional base class for components that represent a layout. Alternatively, components // may implement Microsoft.AspNetCore.Components.IComponent directly and declare // their own parameter named Microsoft.AspNetCore.Components.LayoutComponentBase.Body. 
    public abstract class LayoutComponentBase : ComponentBase
    {
        protected LayoutComponentBase();

        //
        // Summary:
        //     Gets the content to be rendered inside the layout.
        [Parameter]
        public RenderFragment Body { get; set; }
    }
}

Additionally, you can see another used component, the NavMenu, but we'll get to that soon. Do you already see how these things work together? A component can be used by adding a tag with its exact name. And any page will be rendered in the @Body part of a Layout. You could build a custom layout if you like and use that one instead of the MainLayout in the App.razor component. Totally up to you. Anyways, let's have a look at the pages, for instance, the Index.razor.

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

The crucial part is the @page directive with a string that already is the route for that page. You don't have to specify that route anywhere else. Just create a component, add that @page directive, and you're done.

@page "/"

It's the same with the FetchData and the Counter pages. Before we have a deeper look at the code, a quick word about the other projects. The server project first consists of a Program.cs and a Startup.cs file. In the Startup class we find the ConfigureServices() and the Configure() method. The ConfigureServices() method configures the app's services, a service being a reusable component that provides app functionality.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddRazorPages();
}

We will register services in the future in this method, so they can be consumed in our web service via dependency injection for instance, similar to the Program.cs of the client project. Please don't mind all these buzzwords right now… The Configure() method creates the server app's request processing pipeline, meaning the method is used to specify how the app responds to HTTP requests.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseWebAssemblyDebugging();
    }
    else
    {
        app.UseExceptionHandler("/Error");
    }

    app.UseHttpsRedirection();
    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToFile("index.html");
    });
}

As you can see, we're using HTTPS redirection, routing, and so on. With all these Use...() extension methods, we're adding middleware components to the request pipeline. For instance, UseHttpsRedirection() adds middleware for redirecting HTTP requests to HTTPS. Now the Startup class is specified when the app's host is built. You see that in the Program class in the CreateHostBuilder() method. Here the Startup class is specified by calling the UseStartup() method.

namespace BlazorBattles.Server
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

Regarding the appsettings.json files, we only need to know that we can add and modify some configurations here. More interesting and often used throughout the back end part of this course is the Controllers folder. The first controller you see here is the generated WeatherForecast demo controller. We'll get to the details of controllers later. For now, it's only important to know that we can already call the Get() method here, which is actually done by the FetchData component.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}

I'd say, we do that next.
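A quick aside to make the ConfigureServices() part a bit more concrete: registering one of our own services later in the series will look roughly like the sketch below. IUnitService and UnitService are just placeholder names here, nothing like that exists in the template yet.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddRazorPages();

    // Hypothetical service of our own; registering it here lets controllers
    // receive an IUnitService instance via constructor injection.
    services.AddScoped<IUnitService, UnitService>();
}

A controller that needs it then simply asks for an IUnitService in its constructor, and the framework hands over the registered implementation.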
Example Project Explained Let's run the application first and then talk about some files we haven't talked about yet. The great thing about Visual Studio and .NET Core hosted Blazor WebAssembly applications is that you can simply start this whole package with that little play button on top. Let's do exactly that. The actual starting project is the Server project, and Visual Studio will fire up IIS Express with a browser of your choice to start the app. When you have a close look, you see the text "Loading…". Where does that come from, you might ask? Well, the client project has got this folder called wwwroot which is the root of the actual website, and here, next to some cascading style sheets and a favicon, you can find the actual index.html. And here you find the loading text.

<app>Loading...</app>

Any resources you want to add later, like icons, images, other stylesheets, and so on, have to be added to this wwwroot folder. We will use this place later. The crazy thing now is that you can actually debug the client and the server at the same time. You see that the counter page increases a counter. Let's have a look at the code.

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}

It seems like there's not much to see, but actually, there's already a lot happening. You already know the @page directive. Then you can use the @currentCount variable.

<p>Current count: @currentCount</p>

This integer variable can also be found in the @code block at the bottom.

private int currentCount = 0;

So in this @code block, you're free to write C# code and use that code above within your HTML code. With the @currentCount variable you already see how data binding is done in Blazor WebAssembly. And in the button, you can already see event handling.

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

The onclick event, marked with an @, calls the IncrementCount() method. Simple as that. By the way, if you don't like having your C# code together with HTML, you can create a code-behind file for your C# code instead. Now let's set a breakpoint inside the IncrementCount() method. We hit the Click me button and hit the breakpoint. Crazy. Now, what's up with the service? Let's have a look at the FetchData page. This page actually makes a call to the Web API of the server project. We can see that in the network tab of the console. Looking at the code, we see a lot of new stuff.

@page "/fetchdata"
@using BlazorBattles.Shared
@inject HttpClient Http

<h1>Weather forecast</h1>

<p>This component demonstrates fetching data from the server.</p>

@if (forecasts == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>Date</th>
                <th>Temp. (C)</th>
                <th>Temp. (F)</th>
                <th>Summary</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var forecast in forecasts)
            {
                <tr>
                    <td>@forecast.Date.ToShortDateString()</td>
                    <td>@forecast.TemperatureC</td>
                    <td>@forecast.TemperatureF</td>
                    <td>@forecast.Summary</td>
                </tr>
            }
        </tbody>
    </table>
}

@code {
    private WeatherForecast[] forecasts;

    protected override async Task OnInitializedAsync()
    {
        forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>("WeatherForecast");
    }
}

First the using directive. We're referencing the Shared project here, because we're using the WeatherForecast class. Then we inject the HttpClient which enables us to make web service calls.

@using BlazorBattles.Shared
@inject HttpClient Http

Inside the table, we see a foreach loop that uses the forecasts received from the service.

<tbody>
    @foreach (var forecast in forecasts)
    {
        <tr>
            <td>@forecast.Date.ToShortDateString()</td>
            <td>@forecast.TemperatureC</td>
            <td>@forecast.TemperatureF</td>
            <td>@forecast.Summary</td>
        </tr>
    }
</tbody>

We'll do all this later by ourselves, so please don't mind my pacing for now if it's a bit too fast. In the @code block we see a new method called OnInitializedAsync().
This thing is part of the component lifecycle and is called, as the name may imply, on the initialization of the component.

protected override async Task OnInitializedAsync()
{
    forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>("WeatherForecast");
}

And in this method we're using the injected HttpClient to make a Get() call to the WeatherForecast route, which would be the WeatherForecastController of the server project, which is, by the way, also using the WeatherForecast class of the Shared project. Let's move to that controller and set a breakpoint in the Get() method and then run the app again. When we open the FetchData page, the breakpoint in the service is hit. Fantastic! Now, although being able to debug client and server at the same time is great, I like to run the application in another way. Because with the way I just showed you, you have to stop and run the project by yourself every single time you make some changes to your code. This slows down development. So instead, I like to use the Package Manager Console with the dotnet watch run command, which rebuilds the project when any code change has been recognized. There's only one problem: So far this only works for the server project. If we make any changes to the client project, they won't be recognized. But, of course, we can fix that. In the project file of the server, we add a new ItemGroup.

<ItemGroup>
    <Watch Include="..\Client\**\*.razor" />
</ItemGroup>

With that little change, the application notices any change happening in any razor file of the client project, in addition to any change in the server project. Now let's open the Package Manager Console and then first move to the directory of the server. Now enter dotnet watch run. When the app is running, you already see the URL we have to enter in Chrome. This URL can also be found in the launchSettings.json files of your projects. Now the breakpoints won't be hit, but let's increase the counter by 2 for instance.

private void IncrementCount()
{
    currentCount += 2;
}

The change will be recognized, the app is rebuilt, and we just have to reload the page to see the change. The same goes for the server. Let's return seven forecasts instead of five and reload the page.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    var rng = new Random();
    return Enumerable.Range(1, 7).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}

Now we see seven forecasts. You see, this makes developing your Blazor WebAssembly app a lot faster. One thing we haven't talked about yet is the NavMenu. But by now, you may already know how this component works. You see some data binding and event handling with the ToggleNavMenu() function and the NavMenuCssClass variable.

<div class="top-row pl-4 navbar navbar-dark">
    <a class="navbar-brand" href="">BlazorBattles</a>
    <button class="navbar-toggler" @onclick="ToggleNavMenu">
        <span class="navbar-toggler-icon"></span>
    </button>
</div>

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <li class="nav-item px-3">
            <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                <span class="oi oi-home" aria-hidden="true"></span> Home
            </NavLink>
        </li>
        <li class="nav-item px-3">
            <NavLink class="nav-link" href="counter">
                <span class="oi oi-plus" aria-hidden="true"></span> Counter
            </NavLink>
        </li>
        <li class="nav-item px-3">
            <NavLink class="nav-link" href="fetchdata">
                <span class="oi oi-list-rich" aria-hidden="true"></span> Fetch data
            </NavLink>
        </li>
    </ul>
</div>

@code {
    private bool collapseNavMenu = true;

    private string NavMenuCssClass => collapseNavMenu ? "collapse" : null;

    private void ToggleNavMenu()
    {
        collapseNavMenu = !collapseNavMenu;
    }
}

New, though, is the NavLink component. This is a built-in component for navigation links.

<NavLink class="nav-link" href="counter">
    <span class="oi oi-plus" aria-hidden="true"></span> Counter
</NavLink>

The description already says it all: It's a component that renders an anchor tag, automatically toggling its active class based on whether its href matches the current URI. And the href would be the string that is used with the @page directives of your pages. One last thing I want to show you in this example application is the first way of communication between components. When we switch to the Index.razor again, you see the use of the SurveyPrompt component with a Title.
<SurveyPrompt Title="How is Blazor working for you?" />

This Title is actually a parameter. So the parent component Index can tell the child component SurveyPrompt anything with the help of parameters. Looking at the SurveyPrompt component, you see the Title property in the @code block, marked by a [Parameter] attribute.

@code {
    // Demonstrates how a parent component can supply parameters
    [Parameter]
    public string Title { get; set; }
}

And then this Title, provided by the parent, is used in the HTML above.

<strong>@Title</strong>

Alright, that should be it for the example application. Already a lot of input, I guess. Let's make a new Git repository out of this and then start building our browser game with Blazor. Initialize Git Repository To initialize a Git repository, on the bottom right of Visual Studio we click on Add to Source Control and choose Git. And that's it, actually. Now we've got a local repository where we can commit our changes and have a history of them. We could also publish this repository to any remote one or use GitHub or Azure DevOps, for instance. This could also be a private repository. So if you want to save your code to the cloud, feel free to do so. As mentioned earlier, you will find the code of this tutorial series on GitHub as well. So if you have any trouble or just want to play around with the project, feel free to access this project on GitHub, clone the repo, fork it, star it, and so on. Alright, that's it for this part of the Blazor WebAssembly Full Stack Bootcamp. Let's make a game next. Coming up in the series: Your first Razor component, communication between components with parameters, event callbacks and services, creating a new page, and more! Thumbnail: vector illustration/Shutterstock But wait, there's more! - Watch the Blazor WebAssembly Full Stack Bootcamp online course - Get the 5 Software Developer's Career Hacks for free. - Let's connect on Twitter, YouTube, LinkedIn or here on dev.to.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/_patrickgod/blazor-webassembly-full-stack-bootcamp-i7
CC-MAIN-2021-49
refinedweb
3,888
66.13
anywhere in the program like the Java parser does. Multi-line strings Regular strings in Groovy cannot span multiple lines.

// this is a compile error
def text = "hello
world"

As an exception to this rule, a backslash at the end of a line disappears and joins the current line with the next. Triple-quoted strings, on the other hand, can span multiple lines:

def name = "James"
def text = """\
hello there ${name}
how are you today?
"""
assert text != null
println(text)

If you explicitly want to coerce the GString to a String you can use the toString() method. Groovy can also automatically coerce GStrings into Strings for you.
http://docs.codehaus.org/pages/viewpage.action?pageId=33887
CC-MAIN-2014-42
refinedweb
113
64.1
I've been having a bit of a problem for two days now. I cannot get my Devantech range finder to work. I think it's working, but my LCD string doesn't seem to want to update.

Logic for the range finder:
1st - send a signal out for a minimum of 10 us through PD1.
2nd - When this happens, if there is an object in front, it will send the signal back to PD0.

But all I see is "Loop again" on my LCD (see code). I took that line out and it said "Signal happened" (which indicates that a signal came back) over and over. That was just to see if it works, without worrying about the distance calculation yet. Was just wondering if I am overlooking something.

#include <avr/io.h>
#include "LCD.h"
#include "SPI.h"
#include <util/delay.h>

#define F_CPU 1000000UL // Orangutan frequency (2MHz)!!!

int main(void)
{
    SPIInit(); // allows the mega644 to send commands to the mega168
    LCDInit();

    DDRD &= ~(1 << PD0); // make PD0 an input to receive echo
    DDRD |= 1 << PD1;    // PD1 an output to send signal

    while (1)
    {
        PORTD |= 1 << PD1;    // turn on to produce pulse
        delay_us(15);         // make 15 us elapse
        PORTD &= ~(1 << PD1); // clear

        while (!(PIND & (1 << PC0)))
        {
            LCDString("No signal has returned as yet");
        }

        LCDString("Signal happened");
        delay_ms(15);
        LCDClear();

        if (!(PIND & (1 << PC0)))
        {
            LCDString("Signal yes it wasnt n e hoax!");
            delay_ms(1000);
        }

        LCDString("Loop again");
        delay_ms(1000);
    }
}

I also attached a picture to show how the signals work graphically.
https://forum.pololu.com/t/sensor/960
CC-MAIN-2022-21
refinedweb
239
75.3
How to Make a VR Game With Unity and Google Cardboard Update note: This tutorial was updated in May 2016 for the new Google Cardboard SDK, which is now called the Google VR SDK. If you completed this tutorial previously, and want to migrate over to the new SDK, refer to the Google VR release notes. Welcome… to the THIRD DIMENSION! It’s probably safe to say that everybody has been waiting for virtual reality experiences since that classic of modern cinema, “Lawnmower Man”, promised us a delightful future of, um, virtual reality where we could all have telepathic powers. Or something like that. In this tutorial, you’ll help humanity prepare for its future of telepathic abilities and virtual reality by learning to take a simple Unity game and make it work as a VR experience using Google Cardboard. You’ll. So, until Apple has a mind control API, just stick with mastering its precursors. What is Google Cardboard? In theory, creating a VR experience is straightforward. Instead of displaying your world as a single image on screen, you display two images. They come from two cameras placed a few inches apart, and the user views the image from the left camera with the left eye and vice versa, creating the appearance of depth. Additionally, with some judicious use of motion sensors, you can detect the direction the user is facing. Combine that with the 3D world you’ve created, and you’ve got yourself an immersive experience. In practice, it takes some pretty sophisticated hardware to display two images on a high-definition screen, along with tracking the user’s head, and putting that all into a device small and light enough that it doesn’t hurt your neck. Well, it turns out that such highly advanced technology is probably in your pocket or sitting within arm’s reach. It’s in almost every smartphone, and that’s the idea behind Cardboard; it can take the display and sensors in your phone and turn them into a respectable VR device, using just some cardboard and plastic lenses. Getting Started Note: You’re going to be working with the Unity GUI quite a bit. If you’ve never worked with it before, you might want to take a look at our Unity GUI tutorial. To get started making your own VR game with Google Cardboard, you’re going to need the following: - Unity Personal Edition, version 5.x - A smartphone, either an iPhone 5 or later or an Android phone running Jelly Bean or later on which to test. By the way, this tutorial will assume you’re working on an iPhone. - Oh, yeah, and a Cardboard unit. Wait… How Do I Get a Cardboard Unit? Don’t have a Cardboard unit? Your best bet is to order it from any of the vendors listed here. It should cost you around $20 or $30, plus shipping. If you’re feeling like a DIY superstar, you could even make your own. When buying a Cardboard device, look for ones that mention “V2” or “Cardboard 2.0”, because they fit a larger variety of phones, including big phones like the iPhone 6+. They also support user input with a button that’s covered in conductive metal tape that taps the phone’s screen when you press down on it. Can I Do this Tutorial Without a Cardboard Unit? Well, kinda. You’ll be able to run your game on your phone, and it’ll look like this. You can approximate a 3D experience if you stare past the screen just right while playing. If you move your phone around, you’ll be able to control it just as though your phone were strapped to your head, and you can simulate clicks by tapping the screen. 
Although you could play the game and get a sense of what it would be like if you were viewing it through a VR headset, it doesn’t mean you should. It’s kind of like the difference between eating chocolate and listening to a friend describe to you how chocolate tastes. Sure, you’ll get the general idea, but it just won’t be the same. Long story short: if you’re too impatient to wait for that Cardboard unit to arrive, you can still learn from this tutorial, but you’ll get a lot more out of it with the right equipment. The Sample Game — Ninja Attack is Back! Take a moment to try out your sample game. Download and unzip the Unity starter project. Next, start up Unity. From the welcome screen, select Open and then the StarterNinja folder to open the NinjaAttack project. In the Project Browser, double-click MainScene in Assets. Then give the game a try by pressing Play. You’re the ninja on the left. As monsters make their way across the screen, click anywhere on the screen to launch a ninja star and take ’em out! Take out 20 monsters and you win, but if a monster reaches the red zone on the left, you lose. Does this game look familiar? For those of you who are long-time readers of raywenderlich.com, you might recognize this as the same ninja game from your SpriteKit and Cocos2D tutorials. But this time it’s rendered in glorious 3D! Not that you’ll really notice most of that glorious dimension though. The entire game is top-down. It almost feels like a waste rendering all these polygons. Now you’re seeing why this game is the perfect candidate for VR. Getting Started with The Google VR SDK The first thing you need to do is download the SDK. Head on over to the Google VR Downloads page, click on the “Agree” button if you see one, and download the SDK. You can do this either by cloning the repo (if you’re into git), or clicking the “download the repo directly” link (if you’re not). Next, import it into your project. From Unity’s menu, select Assets\Import Package\Custom Package… and then select the GoogleVRForUnity.unitypackage from within the repo you just downloaded. Make sure everything is selected, uncheck the DemoScenes folder, and then click the Import button. Hack it Like It’s Hot To get your game working as a VR experience, you need to perform a few quick ‘n dirty hacks. From the GoogleVR\Prefabs folder in the Project Browser, drag the GvrMain Prefab into your scene. In the inspector, give it almost the same Position as your ninja main character — (-5.53, 1.13, 0.122) — and a Y Rotation of 90. You’ll notice it’s a little bit higher than the ninja’s center to represent the fact that you’re looking through his eyes. Next, select the Main Camera in the hierarchy and uncheck it in the inspector to disable it. Do the same thing for the raccoon-ninja object. Now, run your game in the Unity editor again. You should see something resembling a 3D scene! And if you hold down the option key while moving your mouse around, your camera will move as if you were turning your head. Running Your Scene on an iOS Device It’s great to be able to run your game in the Unity editor, but last time I checked, there was no way to fit a computer monitor inside a VR headset without some serious pain, hence why you’re also working on an iPhone. - Select File\Build Settings — iOS should already be selected as your default platform. - Click Player Settings and switch to the inspector - Under Resolution and Presentation, change the Default Orientation to Landscape Left. 
- In Other Settings, change your Bundle Identifier to be something appropriate for your organization. (Like com.<your_company>.NinjaAttackVR) - Change your Target Device to iPhone Only Attach your iPhone to your computer, select Build and Run and give your export folder a name; it can be anything you’d like. Unity will export your project, and then it should automatically open up in Xcode. If it doesn’t, start up Xcode and manually open the generated project. Run it, and try it out on your phone! The first time you run your game, it takes you through a setup process where you can scan a QR code on your Cardboard unit. This lets the Google VR SDK make a few minor graphical adjustments based on how far apart your lenses are, their distance from the phone, and so on. Note: If the setup process displays the error message Problem in parsing the URL after you scanned the QR code of your Cardboard unit, you’ll have to modify the info.plist of your Xcode project as described here, in the Apple developer forums. Now go ahead and insert your phone into your Cardboard unit. Turn your head to adjust the camera’s view, and enjoy some fairly convincing 3D graphics. Make it a Game Again! Being able to view your game world is great and all, but you need to bring the gameplay back. Specifically, you want to be able to shoot ninja stars in the direction you’re facing. That’s the first piece of gameplay you’ll work with. For UI, Cardboard supports a single button. It might seem limiting, but if you combine it with the motion tracking you get from moving your head, it allows for interactions that are more complex. In Ninja Attack, you detect if your user is in VR mode with the GvrViewer.Instance.VRModeEnabled property. You’ll check if the button is pressed with the GvrViewer.Instance.Triggered property. If both of those come out to be true, you launch a ninja star in the direction the user is looking. Open up your NinjaStarLauncher.cs script. You can find it attached to the GameLogic GameObject in the inspector. Create a new private variable: private Vector3 _vrShooterOffset; Initialize it in your Start() method: _vrShooterOffset = new Vector3(0.0f, -0.4f, 1.0f); Replace Update() with the following: void Update () { //1 if (GvrViewer.Instance.VRModeEnabled && GvrViewer.Instance.Triggered && !_gameController.isGameOver) { GameObject vrLauncher = GvrViewer.Instance.GetComponentInChildren<GvrHead>().gameObject; // 2 LaunchNinjaStarFrom(vrLauncher, _vrShooterOffset); } else if (!GvrViewer.Instance.VRModeEnabled && Input.GetButtonDown("Fire1") && !_gameController.isGameOver) { // This is the same code as before Vector3 mouseLoc = Input.mousePosition; Vector3 worldMouseLoc = Camera.main.ScreenToWorldPoint(mouseLoc); worldMouseLoc.y = ninja.transform.position.y; ninja.transform.LookAt(worldMouseLoc); LaunchNinjaStarFrom(ninja, _shooterOffset); } } That will get things working. Here’s a look at what Update() does: - You first check if the game is in VR mode and if the user has pressed the button by examining the properties on the GvrViewer.Instancesingleton object. - After that, you call LaunchNinjaStarFrom()to instantiate a ninja star. You pass in two parameters: - The first is the head GameObject. The Google VR library moves it around for you, so it should already be pointed in the right direction. 
- The second is a slight offset, so the method instantiates a ninja star slightly in front of and below the head GameObject, which looks a little more natural — otherwise it would look like you’re shooting ninja stars out of your eyes. Cool, but weird. Since your Ninja Star GameObject is already designed to fly out from where it’s created, it will fire in the correct direction. Give it another try! At this point, you can turn your head around and shoot bad guys. The win or lose logic still applies. Take that, blobbies! Fixing the Game Over Menu As you might have observed, when the game is over you’re still left with the old Game Over menu. It not only shows up improperly in 3D, but you have no way of clicking on it. Shockingly, this button renders poorly in 3D. The game currently uses a Display Canvas — as seen in the Unity New Gui Tutorial — to display the Game Over screen, which always displays on top of the game window. This canvas is great for most game GUIs because it automatically scales itself to fit on top of your screen no matter what your game’s camera is doing, and it nicely handles different screen sizes. But in this case, you need a GUI canvas that exists in the world itself, partly so it renders properly in 3D, but also because you don’t want it to stay locked to the camera. You want your users to be able to look up and down so they can look at different UI elements and trigger the active one by clicking the button. Creating a New Canvas Select the GameOverCanvas in the Hierarchy, right-click on it and select Duplicate. Rename the duplicated canvas VRGameOverCanvas, to distinguish it from the original one, and rename its GameOverTxt child object to VRGameOverTxt. In the Canvas component of VRGameOverCanvas, change the Render Mode to World Space In the Rect Transform component, change the Position to (-2.24, 1.1, 0.07), and the Y Rotation to 90. Finally, change the X and Y Scale to be 0.009. When it’s all done, the VRGameOverCanvas should look like this: And you should see the two canvases roughly overlap each other in the Game View (when the game’s not running): Where did these values come from? Truthfully, I just fiddled around with them until I had something that looked good when viewing them through the Cardboard camera. Sometimes programming is more an art than a science. :] Supporting Both Canvases Next up, you’re going to alter GameController.cs so that it’s aware of both canvases. Open up the GameController script that’s also attached to the GameLogic GameObject. Add the following two public variables to your class: public Canvas VRGameOverCanvas; public Text VRGameOverTxt; Add the following line to the beginning of resetGame(): VRGameOverCanvas.enabled = false; Replace GameOver() with: public void GameOver(bool didIWin) { isGameOver = true; _didIWin = didIWin; string finalTxt = (_didIWin) ? "You won!" : "Too bad"; if (GvrViewer.Instance.VRModeEnabled) { VRGameOverCanvas.enabled = true; VRGameOverTxt.text = finalTxt; } else { gameOverCanvas.enabled = true; gameOverTxt.text = finalTxt; } } This displays the proper canvas and text objects, depending on whether or not you’re in VR mode ( GvrViewer.Instance.VRModeEnabled). After you’ve saved your script, you’ll need to assign the correct objects to the new public variables. Find the GameController script in the inspector. Click on the target next to each of the new variables, and select VRGameOverCanvas object as your VR Game Over Canvas variable and the VRGameOverTxt object as your VR Game Over Txt variable. 
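By the way, if you prefer doing this kind of setup from code instead of typing values into the inspector, a small one-off script can apply the same settings. This is only a sketch, the component name is made up, and it just mirrors the values listed above:

using UnityEngine;

public class VrCanvasSetup : MonoBehaviour
{
    // Attach this to VRGameOverCanvas; it repeats the inspector steps from above.
    void Awake()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        var rect = GetComponent<RectTransform>();
        rect.position = new Vector3(-2.24f, 1.1f, 0.07f);
        rect.rotation = Quaternion.Euler(0f, 90f, 0f);
        rect.localScale = new Vector3(0.009f, 0.009f, 1f);
    }
}

Either way works; the inspector route is what the rest of this tutorial assumes.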
Note: Are you wondering why you’re going through the trouble of supporting two canvases instead of just changing the existing one? Because eventually, you’re going to support both top-down and VR mode. Stay tuned! If you were to run your game now, you would see that it properly shows the end-game screen in VR mode. You can look up and down and see the different parts of your interface; all that’s missing is a way for you to click that Play Again button. Note: At least, this is how it should look in theory. In practice, there is a bug in Unity (starting with 5.3.4p2) where World Space GUI canvases don’t render onto a RenderTexture, which is what the GoogleVR SDK uses for distortion correction — the fisheye look you see in the two cameras. As a temporary workaround, you can go to your GvrMain object and in the Gvr Viewer component, change Distortion Correction to None. Turning off distortion correction in Google VR This means that you may see some slight distortion when you try your app in a Cardboard viewer, but at least you’ll be able to see your GUI. :] Adding Gaze Input Luckily, Unity has built-in support for “Treating the center point of the camera as a mouse cursor” when using a world-space GUI canvas, but you need to provide an extra script to make it work in VR space. First, expand GvrMain\Head in the Hierarchy. Find the Main Camera there and rename it to VR Main Camera Select your VRGameOverCanvas object. You should see an Event Camera entry in the Canvas Component. Click on the target and select the VR Main Camera you just renamed. Click on the EventSystem object in the Hierarchy. Click Add Component in the inspector and add the GazeInputModule script. This is a script that makes sure Unity’s GUI system understands how Google VR’s camera system works. Check the VR Mode Only checkbox, because you only want things to work this way when you’re in VR mode — not when you’re in top-down version of your game. Finally, click the gear of the Gaze Input Module Component you just added and select Move Up. Do this one more time, so that it appears above the Touch Input and Standalone Input modules. This is required to make sure the Gaze Input Module takes priority over the other types of input that your game can process. When it’s all done, it should look like this: Give it a try now! This time, when you center your view on the Play Again button, it should turn green, and pressing down on the Cardboard button will let you “click” the it to start a new game! Minor Gameplay Tweaking So perhaps you found this version a bit harder to play in VR mode. This is partly because you have a reduced field of vision and it’s easier for enemies to slip by when you’re looking in the wrong direction. Also, you can’t change the direction you’re aiming nearly as quickly as you could before. You’re physically constrained by the speed at which you can turn your neck. You don’t want to penalize your players for choosing to play in VR mode! So how can you correct this? Oh, well, I was going to suggest slowing down the enemies. But… uh… your call, I guess. Select your EvilSlimeEnemy Prefab in the Prefabs folder and open up the attached EnemyMover.cs. Add this code to Start(), right after you set thisSpeed: if (GvrViewer.Instance.VRModeEnabled) { thisSpeed *= 0.85f; } This leads to your enemies bunching up a bit. 
If you want to keep them spaced out, go to EnemySpawner.cs — it’s attached to your GameLogic object — and add these lines in SetNextLaunch() right after you set launchInterval: if (GvrViewer.Instance.VRModeEnabled) { launchInterval *= 1.1f; } This will make your game a tad easier in VR mode — just enough that your player shouldn’t feel penalized for choosing to play this way. Fixing the On-Screen Score The other UI element you need to address is the on-screen score object, and you’re going to try a slightly different approach for this one. While it still needs to appear in VR mode properly, you want it to stay fixed to your camera no matter where you look. Imagine projecting your score onto a piece of glass that keeps moving around so that it’s always about 2 meters away from where the player is looking. That’s the effect you’re going to create. You’ll accomplish this by using another world canvas so it renders properly in 3D, but you’ll make it a child of your GoogleVR Head object. Select GvrMain\Head in the Hierarchy. Right-click on it and select UI\Canvas. Rename the new canvas to VRScoreCanvas. Change its Render Mode to World Space. Give it the following values: - Position: (0, 1, 2.5) - Width: 400, Height: 100 - Rotation: (0,0,0) - Scale: (0.0115, 0.0115, 1) When you’re all done, it should look like this: Right-click on your VRScoreCanvas and select UI\Text to add a text child-object. Rename it to VRScoreTxt. Change its anchor point to be top-left. Give it a Position of (150, -65, 0) and a Width of 60. In the text component, set Font Size to 18 and change the Alignment to right-aligned. Finally, change the Text to 999. When it’s all finished, it should look like this: It might seem like your text is strangely aligned in the center of the screen, but in VR mode, you see much less of your world than you normally would, so this ends up being pretty close to the edge of your vision. Feel free to adjust it so it looks right to you on your phone. Next up, add the plumbing to display your score using this text object. The process is similar to how you displayed the Game Over canvas. Open up GameController.cs and add a new public variable: public Text VRScoreTxt; Next, you’ll update VRScoreTxt every time you update scoreTxt. In the ResetGame() method, add this line right after you update scoreTxt: VRScoreTxt.text = "--"; Then add this line to GotOne(), right after you update scoreTxt: VRScoreTxt.text = "" + _currScore; Save your script, go back into Unity and you’ll notice the GameController Component of GameLogic now has an entry for your VR Score Txt variable. Click the target next to it and select your VRScoreTxt text object. Play your game again, and now you’ll see that your score appears up in the upper-left corner of your vision, and that it follows your head movement. Swapping In and Out of VR Mode Since your game works in both top-down and VR mode, you should give the user the ability to switch between them. The UI for this is pretty straightforward. You’ll add a simple button to your top-down game that users can press to switch into VR mode. As a bonus, the Google VR SDK automatically displays a back button you can use to go back into top-down mode. Give it a try! First, you’re going to add the code to handle swapping in and out of VR mode. Select GameLogic in the Hierarchy. Click Add Component in the inspector, select New Script and name the script CardboardSwapper. 
Open it, and replace the class code with this: public class CardboardSwapper : MonoBehaviour { public GameObject[] cardboardObjects; public GameObject[] monoObjects; // Turn on or off VR mode void ActivateVRMode(bool goToVR) { foreach (GameObject cardboardThing in cardboardObjects) { cardboardThing.SetActive(goToVR); } foreach (GameObject monoThing in monoObjects) { monoThing.SetActive(!goToVR); } GvrViewer.Instance.VRModeEnabled = goToVR; // Tell the game over screen to redisplay itself if necessary gameObject.GetComponent<GameController>().RefreshGameOver(); } public void Switch() { ActivateVRMode(!GvrViewer.Instance.VRModeEnabled); } void Update () { if (GvrViewer.Instance.BackButtonPressed) { Switch(); } } void Start() { ActivateVRMode(false); } } The most important method of this class is ActivateVRMode where you toggle the value of GvrViewer.Instance.VRModeEnabled. It’s what activates VR mode. The rest of the logic disables or enables various GameObjects in your scene, depending on whether or not you’re in VR mode. Calling ActivateVRMode(false) in Start() starts your game off in top-down mode. You’ll also notice that you call Switch() when you detect the back button has been pressed on the Cardboard display. You can simulate this GvrViewer.Instance.BackButtonPressed in the Unity editor by pressing the esc key, a feature you’ll no doubt find awfully handy for testing. You do need to add just a bit more logic to your GameController script so that it properly displays or hides the correct canvas if you’re switching modes at the end of the game. Open up GameController.cs, and add this method: public void RefreshGameOver() { gameOverCanvas.enabled = false; VRGameOverCanvas.enabled = false; if (isGameOver) { GameOver(_didIWin); } } Save everything and go back to Unity to populate the values of the two GameObject arrays. Select GameLogic in the Hierarchy and scroll down to the Cardboard Swapper component in the inspector. For the Cardboard Objects array, give it a size of 1, and then fill in the Head child of the GvrMain GameObject in your scene. Not only does this disable your Google VR Head so you can go back to the top-down camera, but it disables VRScoreCanvas as well. For the Mono Objects array, give it a size of 3, and then select Canvas, Main Camera, and raccoon-ninja from your scene (not from the Assets tab, which Unity seems to like defaulting to). Finally, you need to add a button on the top-down canvas for the user. To save time, I’ve already made one for you — it’s in the prefabs folder. Drag CardboardButton from Assets\Prefabs into the Hierarchy so that it’s a child of your Canvas object. Make sure that its Position is set to (-50, 50, 0): At the bottom of your button object, hook it up such that clicking the button will call the CardboardSwapper.Switch() method attached to GameLogic. You can follow this animation to see how it’s done: Give your game another try. Click on the button at the bottom-right of the screen to switch to VR mode, then click the back button in the Cardboard display (or press Escape in the Unity editor) to go back to top-down mode. Congratulations! You’re swapping VR Modes like a boss! Where to Go From Here? Download the final Unity project here. You now have the power to take any 3D game in Unity and make it a VR experience with just a little cardboard and some plastic lenses. It’s VR for everybody! The build process for Android is almost the same as for iOS. Google’s Unity Developer Guide provides some additional technical info. 
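If you'd rather hook that button up in code than through the inspector, a tiny script on the button object does the same job. The class and field names here are made up; only CardboardSwapper.Switch() comes from the tutorial:

using UnityEngine;
using UnityEngine.UI;

public class CardboardButtonHookup : MonoBehaviour
{
    public CardboardSwapper swapper; // drag the GameLogic object in here

    void Start()
    {
        // Equivalent to adding an OnClick entry in the inspector.
        GetComponent<Button>().onClick.AddListener(swapper.Switch);
    }
}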
And finally, you could even add Augmented Reality (AR) to your VR projects! Lastly, take a look at any 3D game you might make in Unity and see if there’s anything that would translate well to a VR experience. Or, maybe this tutorial will inspire you to create something entirely new. As always, please feel free to leave questions and comments below or in the forums! Oh, and if you figure out how to control lawn mowers with your mind, let me know — I’m still working on that.
https://www.raywenderlich.com/116805/make-vr-game-unity-google-cardboard
CC-MAIN-2018-13
refinedweb
4,340
72.36
Several common authentication schemes are not secure over plain HTTP. In particular, Basic authentication and forms authentication send unencrypted credentials. To be secure, these authentication schemes must use SSL. In addition, SSL client certificates can be used to authenticate clients. Enabling SSL on the Server To set up SSL in IIS 7 or later: - Create or get a certificate. For testing, you can create a self-signed certificate. - Add an HTTPS binding. For details, see How to Set Up SSL on IIS 7. For local testing, you can enable SSL in IIS Express from Visual Studio. In the Properties window, set SSL Enabled to True. Note the value of SSL URL; use this URL for testing HTTPS connections. Enforcing SSL in a Web API Controller If you have both an HTTPS and an HTTP binding, clients can still use HTTP to access the site. You might allow some resources to be available through HTTP, while other resources require SSL. In that case, use an action filter to require SSL for the protected resources. The following code shows a Web API authentication filter that checks for SSL:

public class RequireHttpsAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        if (actionContext.Request.RequestUri.Scheme != Uri.UriSchemeHttps)
        {
            actionContext.Response = new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden)
            {
                ReasonPhrase = "HTTPS Required"
            };
        }
        else
        {
            base.OnAuthorization(actionContext);
        }
    }
}

Add this filter to any Web API actions that require SSL:

public class ValuesController : ApiController
{
    [RequireHttps]
    public HttpResponseMessage Get() { ... }
}

SSL Client Certificates SSL provides authentication by using Public Key Infrastructure certificates. The server must provide a certificate that authenticates the server to the client. It is less common for the client to provide a certificate to the server, but this is one option for authenticating clients. To use client certificates with SSL, you need a way to distribute signed certificates to your users. For many application types, this will not be a good user experience, but in some environments (for example, enterprise) it may be feasible. To configure IIS to accept client certificates, open IIS Manager and perform the following steps: - Click the site node in the tree view. - Double-click the SSL Settings feature in the middle pane. - Under Client Certificates, select one of these options: - Accept: IIS will accept a certificate from the client, but does not require one. - Require: Require a client certificate. (To enable this option, you must also select "Require SSL".) You can also set these options in the ApplicationHost.config file:

<system.webServer>
    <security>
        <access sslFlags="Ssl, SslNegotiateCert" />
        <!-- To require a client cert: -->
        <!-- <access sslFlags="Ssl, SslRequireCert" /> -->
    </security>
</system.webServer>

The SslNegotiateCert flag means IIS will accept a certificate from the client, but does not require one (equivalent to the "Accept" option in IIS Manager). To require a certificate, set the SslRequireCert flag. For testing, you can also set these options in IIS Express, in the local applicationhost.config file, located in "Documents\IISExpress\config". Creating a Client Certificate for Testing For testing purposes, you can use MakeCert.exe to create a client certificate. First, create a test root authority:

makecert.exe -n "CN=Development CA" -r -sv TempCA.pvk TempCA.cer

Makecert will prompt you to enter a password for the private key. Next, add the certificate to the test server's "Trusted Root Certification Authorities" store, as follows: - Open MMC. - Under File, select Add/Remove Snap-In. - Select Computer Account. - Select Local computer and complete the wizard. - Under the navigation pane, expand the "Trusted Root Certification Authorities" node.
- On the Action menu, point to All Tasks, and then click Import to start the Certificate Import Wizard. - Browse to the certificate file, TempCA.cer. - Click Open, then click Next and complete the wizard. (You will be prompted to re-enter the password.) Now create a client certificate that is signed by the first certificate: makecert.exe -pe -ss My -sr CurrentUser -a sha1 -sky exchange -n "CN=name" -eku 1.3.6.1.5.5.7.3.2 -sk SignedByCA -ic TempCA.cer -iv TempCA.pvk Using Client Certificates in Web API On the server side, you can get the client certificate by calling GetClientCertificate on the request message. The method returns null if there is no client certificate. Otherwise, it returns an X509Certificate2 instance. Use this object to get information from the certificate, such as the issuer and subject. Then you can use this information for authentication and/or authorization. X509Certificate2 cert = Request.GetClientCertificate(); string issuer = cert.Issuer; string subject = cert.Subject; This article was originally created on December 12, 2012
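To give a rough idea of how that certificate information might be used, here is a sketch of an authorization filter that only lets a request through when the client certificate comes from a known issuer. The class name is made up, the issuer string simply matches the test CA created above, and a real implementation should validate the certificate chain rather than compare a single field:

using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class RequireClientCertificateAttribute : AuthorizationFilterAttribute
{
    private const string TrustedIssuer = "CN=Development CA";

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // GetClientCertificate() returns null when no client certificate was sent.
        var cert = actionContext.Request.GetClientCertificate();
        if (cert == null || cert.Issuer != TrustedIssuer)
        {
            actionContext.Response =
                actionContext.Request.CreateResponse(HttpStatusCode.Forbidden);
        }
    }
}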
http://www.asp.net/web-api/overview/security/working-with-ssl-in-web-api
CC-MAIN-2014-52
refinedweb
718
59.9
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Thonny: The Beginner-Friendly Python Editor Are you a Python beginner looking for a tool that can support your learning? This article is for you! Every programmer needs a place to write their code. This article will discuss an awesome tool called Thonny that will enable you to start working with Python in a beginner-friendly environment. In this article, you’ll learn: - How to install Thonny on your computer - How to navigate Thonny’s user interface to use its built-in features - How to use Thonny to write and run your code - How to use Thonny to debug your code By the end of this article, you’ll be comfortable with the development workflow in Thonny and ready to use it for your Python learning. So what is Thonny? Great question! Thonny is a free Python Integrated Development Environment (IDE) that was especially designed with the beginner Pythonista in mind.. Installing Thonny This article assumes that you have Python 3 installed on your computer. If not, please review Python 3 Installation & Setup. Web Download The web download can be accessed via a web browser by visiting the Thonny website. Once on the page, you will see a light gray box in the top right corner like this: Once you’ve found the gray box, click the appropriate link for your operating system. This tutorial assumes you’ve downloaded version 3.0.1. Command Line Download You can also install Thonny via your system’s command line. On Windows, you can do this by starting a program called Command Prompt, while on macOS and Linux you start a program called Terminal. Once you’ve done that, enter the following command: $ pip install thonny The User Interface Let’s make sure you understand what Thonny has to offer. Think of Thonny as the workroom in which you will create amazing Python projects. Your workroom contains a toolbox containing many tools that will enable you to be a rock star Pythonista. In this section, you’ll learn about each of the features of the UI that’ll help you use each of the tools in your Thonny toolbox. The Code Editor and Shell Now that you have Thonny installed, open the application. You should see a window with several icons across the top, and two white areas: Notice the two main sections of the window. The top section is your code editor, where you will write all of your code. The bottom half is your Shell, where you will see outputs from your code. The Icons Across the top you’ll see several icons. Let’s explore what each of them does. You’ll see an image of the icons below, with a letter above each one. We will use these letters to talk about each of the icons: Working our way from left to right, below is a description of each of the icons in the image. A: The paper icon allows you to create a new file. Typically in Python you want to separate your programs into separate files. You’ll use this button later in the tutorial to create your first program in Thonny! B: The open folder icon allows you to open a file that already exists on your computer. This might be useful if you come back to a program that you worked on previously. C: The floppy disk icon allows you to save your code. Press this early and often. You’ll use this later to save your first Thonny Python program. D: The play icon allows you to run your code. Remember that the code you write is meant to be executed. 
Running your code means you’re telling Python, “Do what I told you to do!” (In other words, “Read through my code and execute what I wrote.”) E: The bug icon allows you to debug your code. It’s inevitable that you will encounter bugs when you’re writing code. A bug is another word for a problem. Bugs can come in many forms, sometimes appearing when you use inappropriate syntax and sometimes when your logic is incorrect. Thonny’s bug button is typically used to spot and investigate bugs. You’ll work with this later in the tutorial. By the way, if you’re wondering why they’re called bugs, there’s also a fun story of how it came about! F-H: The arrow icons allow you to run your programs step by step. This can be very useful when you’re debugging or, in other words, trying to find those nasty bugs in your code. These icons are used after you press the bug icon. You’ll notice as you hit each arrow, a yellow highlighted bar will indicate which line or section Python is currently evaluating: - The F arrow tells Python to take a big step, meaning jumping to the next line or block of code. - The G arrow tells Python to take a small step, meaning diving deep into each component of an expression. - The H arrow tells Python to exit out of the debugger. I: The resume icon allows you to return to play mode from debug mode. This is useful in the instance when you no longer want to go step by step through the code, and instead want your program to finish running. J: The stop icon allows you to stop running your code. This can be particularly useful if, let’s say, your code runs a program that opens a new window, and you want to stop that program. You’ll use the stop icon later in the tutorial. Let’s Try It! Get ready to write your first official Python program in Thonny: Enter the following code into the code editor: print("Hello World") Click the play button to run your program. See the output in the Shell window. Click the play button again to see that it says hello one more time. Congratulations! You’ve now completed your first program in Thonny! You should see Hello world! printed inside the Shell, also known as the console. This is because your program told Python to print this phrase, and the console is where you see the output of this execution. Other UI Features To see more of the other features that Thonny has to offer, navigate to the menu bar and select the View drop. Currently this section offers more reading on the following topics: Running Programs Step-wise, how to install 3rd Party Packages, or using Scientific Python Packages. Variables: This feature can be very valuable. A variable in Python is a value that you define in code. Variables can be numbers, strings, or other complex data structures. This section allows you to see the values assigned to all of the variables in your program. Assistant: The Assistant is there to give you helpful hints when you hit Exceptions or other types of errors. The other features will become useful as you advance your skills. Check them out once you get more comfortable with Thonny! The Code Editor Now that you have an understanding of the UI, let’s use Thonny to write another little program. In this section, you’ll go through the features of Thonny that will help guide you through your development workflow. Write Some Code In the code editor (top portion of the UI), add the following function: def factorial(num): if num == 1: return 1 else: return num * factorial(num - 1) print(factorial(3)) Save Your Code Before we move on, let’s save your program. 
Last time, you were prompted to do this after pressing the play button. You can also do this by clicking the blue floppy disk icon or by going to the menu bar and selecting File > Save. Let’s call the program factorial.py. Run Your Code In order to run your code, find and press the play icon. The output should look like this: Debug Your Code To truly understand what this function is doing, try the step feature. Take a few large and small steps through the function to see what is happening. Remember you can do this by pressing the arrow icons: As you can see, the steps will show how the computer is evaluating each part of the code. Each pop up window is like a piece of scratch paper that the computer is using to compute each portion of the code. Without this awesome feature, this may have been hard to conceptualize—but now you’ve got it! Stop Running Your Code So far, there hasn’t been a need to hit the stop icon for this program, particularly because it exits as soon as it has executed print(). Try increasing the number being passed to the factorial function to 100: def factorial(num): if num == 1: return 1 else: return num * factorial(num - 1) print(factorial(100)) Then step through the function. After a while, you will notice that you will be clicking for a long time to reach the end. This is a good time to use the stop button. The stop button can be really useful to stop a program that is either intentionally or unintentionally running continuously. Find Syntax Errors in Your Code Now that you have a simple program that works, let’s break it! By intentionally creating an error in your factorial program, you’ll be able to see how Thonny handles these types of issues. We will be creating what is called a syntax error. A syntax error is an error that indicates that your code is syntactically incorrect. In other words, your code does not follow the proper way to write Python. When Python notices the error, it will display a syntax error to complain about your invalid code. Above the print statement, let’s add another print statement that says print("The factorial of 100 is:"). Now let’s go ahead and create syntax errors. In the first print statement, remove the second quotation mark, and in the other remove the second parenthesis. As you do this, you should see that Thonny will highlight your SyntaxErrors. Missing quotations are highlighted in green, and missing parenthesis are in gray: For beginners, this is a great resource that will allow you to help spot any typos while you’re writing. Some of the most common and frustrating errors when you start programming are missing quotes and mismatched parentheses. If you have your Assistant View turned on, you will also notice that it will give you a helpful message to guide you in the right direction when you are debugging: As you get more comfortable with Thonny, the Assistant can be a useful tool to help you get unstuck! The Package Manager As you continue to learn Python, it can be quite useful to download a Python package to use inside of your code. This allows you to use code that someone else has written inside of your program. Consider an example where you want to do some calculations in your code. Instead of writing your own calculator, you might want to use a third-party package called simplecalculator. In order to do this, you’ll use Thonny’s package manager. The package manager will allow you to install packages that you will need to use with your program. Specifically, it allows you to add more tools to your toolbox. 
Thonny has the built-in benefit of handling any conflicts with other Python interpreters. To access the package manager, go to the menu bar and select Tools > Manage Packages… This should pop open a new window with a search field. Type simplecalculator into that field and click the Search button. The output should look similar to this: Go ahead and click Install to install this package. You will see a small window pop up showing the system's logs while it installs the package. Once it completes, you are ready to use simplecalculator in your code! In the next section, you will use the simplecalculator package along with some of the other skills you've learned in this tutorial to create a simple calculator program. Check Your Understanding You've learned so much about Thonny so far! Here's what you've learned: - Where to write your code - How to save your code - How to run your code - How to stop your code from running - Where to see your code execute - How to spot SyntaxErrors - How to install third party packages Let's check your understanding of these concepts. Now that you have simplecalculator installed, let's create a simple program that will use this package. You'll also use this as an opportunity to check that you understand some of the UI and development features that you've learned thus far in the tutorial. Part 1: Create a File, Add Some Code, and Understand the Code In Part 1, you will create a file, and add some code to it! Do your best to try to dig into what the code is actually doing. If you get stuck, check out the Take a Deeper Look window. Let's get started: - Start a new file. - Add the following code into your Thonny code editor:

1 from calculator.simple import SimpleCalculator
2
3 my_calculator = SimpleCalculator()
4 my_calculator.run('2 * 2')
5 print(my_calculator.lcd)

This code will print out the result of 2 * 2 to the Thonny Shell in the main UI. To understand what each part of the code is doing, check out the Take a Deeper Look section below. Line 1: This code imports the library calculator inside of the package called simplecalculator. From this library, we import the class called SimpleCalculator from a file called simple.py. You can see the code here. Line 2: This is a blank line separating the import from the rest of the code, which is generally preferred style. Read more about Python Code Quality in this article. Line 3: Here we create an instance of the class SimpleCalculator and assign it to a variable called my_calculator. This can be used to run different calculations. If you're new to classes, you can learn more about object-oriented programming here. Line 4: Here we have the calculator run the operation 2 * 2 by calling run() and passing in the expression as a string. Line 5: Here we print the result of the calculation. You'll notice that in order to get the most recent calculation result, we must access the attribute called lcd. Part 2: Save the File, View the Variables, and Run Your Code Now it's time to save and run your code. In this section, you'll make use of two of the icons we reviewed earlier: - Save your new file as calculations.py. - Open the Variables window and make note of the two variables listed. You should see SimpleCalculator and my_calculator. This section also gives you insight into the value that each variable is pointing to. - Run your code! You should see 4.0 in the output: Great job! Next you'll explore how Thonny's debugger can help you to better understand this code.
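If you want to convince yourself that you understand what run() and lcd are doing, try a second expression before moving on. This assumes the package handles addition the same way it handled the multiplication above:

from calculator.simple import SimpleCalculator

my_calculator = SimpleCalculator()
my_calculator.run('2 + 3')     # same pattern, different expression
print(my_calculator.lcd)       # should print 5.0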
Other Great Beginner Features As you get more comfortable with Thonny, the features in this section will come in quite handy. Debugging Using your calculations.py script, you’re going to use the debugger to investigate what is happening. Update your code in calculations.py to the following: from calculator.simple import SimpleCalculator def create_add_string(x, y): '''Returns a string containing an addition expression.''' return 'x + y' my_calculator = SimpleCalculator() my_calculator.run(create_add_string(2, 2)) print(my_calculator.lcd) Hit the save icon to save this version. You’ll notice the code has a new function called create_add_string(). If you’re unfamiliar with Python functions, learn more in this awesome Real Python course! As you inspect the function, you may notice why this script will not work as expected. If not, that’s okay! Thonny is going to help you see exactly what is going on, and squash that bug! Go ahead and run your program and see what happens. The Shell output should be the following: >>> %Run calculations.py 0 Oh no! Now you can see there is a bug in your program. The answer should be 4! Next, you’ll use Thonny’s debugger to find the bug. Let’s Try It! Now that we have a bug in our program, this is a great chance to use Thonny’s debugging features: Click the bug icon at the top of the window. This enters debugger mode. You should see the import statements highlighted. Click the small step arrow icon, the yellow arrow in the middle. Keep pressing this to see how the debugger works. You should notice that it highlights each step that Python takes to evaluate your program. Once it hits create_add_string(), you should see a new window pop up. Examine the pop up window carefully. You should see that it shows the values for x and y. Keep pressing the small step icon until you see the value that Python will return to your program. It will be enclosed in a light-blue box: Oh no! There’s the bug! It looks like Python will return a string containing the letters xand y(meaning 'x + y'and not a string containing the values of those variables, like '2 + 2', which is what the calculator is expecting.) Each time you see a light-blue box, you can think of this as Python replacing subexpressions with their values, step by step. The pop up window can be thought of as a piece of scratch paper that Python uses to figure out those values. Continue to step through the program to see how this bug results in a calculation of 0. The bug here has to do with string formatting. If you are unfamiliar with string formatting, check out this article on Python String Formatting Best Practices. Inside create_add_string(), the f-string formatting method should be used. Update this function to the following: def create_add_string(x, y): '''Returns a string containing an addition expression.''' return f'{x} + {y}' Run your program again. You should see the following output: >>> %Run calculations.py 4.0 Success! You have just demonstrated how the step-by-step debugger can help you find a problem in your code! Next you’ll learn about some other fun Thonny features. Variable Scope Highlighting Thonny offers variable highlighting to remind you that the same name doesn’t always mean the same variable. In order for this feature to work, on the menu bar, go to Thonny > Preferences and ensure that Highlight matching names is checked. Notice in the code snippet below, that create_add_string() now has a new variable called my_calculator, though this is not the same as the my_calculator on lines 10 and 11. 
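(The original article shows this snippet as a screenshot; the listing below is a reconstruction of the same idea, so take the exact layout and line numbers as approximate.)

 1  from calculator.simple import SimpleCalculator
 2
 3
 4  def create_add_string(x, y):
 5      '''Returns a string containing an addition expression.'''
 6      my_calculator = f'{x} + {y}'   # a local string, new in this version
 7      return my_calculator
 8
 9
10  my_calculator = SimpleCalculator()
11  my_calculator.run(create_add_string(2, 2))
12  print(my_calculator.lcd)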
You should be able to tell because Thonny highlights the variables that reference the same thing.. Code Completion Thonny also offers code completion for APIs. Notice in the snapshot below how pressing the Tab key shows the methods available from the random library: This can be very useful when you’re working with libraries and don’t want to look at the documentation to find a method or attribute name. Working on a Pre-Existing Project Now that you’ve learned the basic features of Thonny, let’s explore how you can use it to work on a pre-existing project. Find a File on Your Computer Opening a file on your computer is as easy as going to the menu bar, selecting File > Open, and using your browser to navigate to the file. You can also use the open folder icon at the top of the screen to do this as well. If you have a requirements.txt file and pip locally installed, you can pip install these from the Thonny system Shell. If you don’t have pip installed, remember you can use the Package Manager to install it: $ pip install -r requirements.txt Work on a Project From Github Now that you are a Thonny expert, you can use it to work on the exercises from Real Python Course 1: Introduction to Python: Navigate to the Real Python GitHub repo called book1-exercises. Click the green button labeled Clone or download and select Download Zip. Click the opening folder icon to navigate and find the downloaded files. You should find a folder called book1-exercises1. Open one of the files and start working! This is useful because there are tons of cool projects available on GitHub! Conclusion Awesome job getting through this tutorial on Thonny! You can now start using Thonny to write, debug, and run Python code! If you like Thonny, you might also like some of the other IDEs we’ve listed in Python IDEs and Code Editors (Guide). Thonny is actively maintained, and new features are being added all the time. There are several awesome new features that are currently in beta that can be found on the Thonny Blog. Thonny’s main development takes place at the Institute of Computer Science of the University of Tartu, Estonia, as well as by contributors around the world. Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Thonny: The Beginner-Friendly Python Editor
https://realpython.com/python-thonny/
CC-MAIN-2022-05
refinedweb
3,594
72.66
Builders hanging off class vs Builders in same namespace I wrote a couple of months ago about an approach we’re using to help people find test data builders in our code base by hanging those builders off a class called ‘GetBuilderFor’ and I think it’s worked reasonably well. However, a couple of weeks ago my colleague Lu Ning suggested another way to achieve our goal of allowing people to find the builders easily. The approach he suggested is to put all of the builders in the same namespace, for example ‘Builders’, so that if someone wants to find out if a builder already exists they can just type ‘Builders.’ into the editor and then it will come up with a list of all the builders that exist. The benefit of this approach is that it means we can make use of the object initializer to setup test data - perhaps one of the few occasions when it seems to be reasonably useful. Lu Ning explains in more detail on his blog but the idea is that instead of: new FooBuilder().Bar("hello").Build(); We could do this: new FooBuilder { Bar = "hello" }.Build(); The second approach requires less code since we can just create all public fields and setup a default value for each of them in the class definition and then override the values later if we want to as shown above. We can’t do this with the ‘GetBuilderFor’ approach since you can only make use of object initializer when you are initialising an object (as the name might suggest!). Another advantage of this approach is that we don’t have to write the boiler plate code to add each builder onto the ‘GetBuilderFor’ class so that others can find it. The disadvantage is that once we type ‘Builders.’ to find the list of builders we then need to delete that text and type in ‘new FooBuilder()…’ which means the flow of creating test data isn’t as smooth as with the ‘GetBuilderFor’ approach. I don’t feel like there is a really big difference between these approaches and as long as people can find code that’s the main thing. There would probably be less typing required with the namespace approach although I’ve never really felt that typing is the bottleneck in software development projects so it would be interesting to see if this would give us a gain or not. We are still using the ‘GetBuilderFor’ approach on our project since there probably wouldn’t be a massive gain by switching to the other approach at this stage. It does seem like an interesting alternative to solving the same problem though. About the author Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database.
https://markhneedham.com/blog/2009/08/15/builders-hanging-off-class-vs-builders-in-same-namespace/
CC-MAIN-2018-30
refinedweb
464
61.5
I've been testing the patch and it seems to work out allright :-) A few notes; #include <sys/stat.h> must be outside #ifdef SunOS - other systems needs it too in unlock_and_fclose(FILE *fp) fclose is only called in #else, I suppose that should have been like for unlock_and_close or we'll run out of filehandles ;-) Best regards, Christian On Wed, Sep 10, 2008 at 10:05 PM, Jason Pollock < [EMAIL PROTECTED]> wrote: > > > Christian Theil Have wrote: > >> Hi >> >> I think it looks good :-) I was working on some similar code based on a >> dict, which I was still testing out, but your patch is cleaner. >> I look forward to trying it out, but I wont have the opportunity to test >> it until Monday though. >> >> About changing close and fclose everywhere in the codebase, you could get >> around this without having to change the rest the of codebase by using a few >> macros , eg. >> >> #ifdef SunOS >> #define close(fd) unlock_and_close(fd) >> #endif >> >> .... >> >> int unlock_and_close(fd) { >> #ifdef close(fd) >> #undef close(fd) >> printf("unlock_and_close do the stuff..\n"); >> return close(fd); >> #define close(fd) unlock_and_close(fd) >> #endif >> } >> >> and similarly for fclose... >> >> Christian. >> > > That would work. I wasn't sure if the Mbuni team would want to go that > way, or the Kannel way: > > #define close(x) you_should_not_call_close_directly(x) > > I personally decided against the #define because I wanted to be able to > check that all instances of close/fclose had been modified in the source I > was working on. I did look at trying to figure out how to wrap the libc > call, but I thought that might be a bit too destructive to my schedule. ;) > > I'm still new to the gwlib code, I didn't realise it had a dict! Too bad > it uses octstr's as keys. > > Let me know if you run into any problems. If I find any, I'll let the list > know. > > Jason > _______________________________________________ Devel mailing list Devel@mbuni.org
https://www.mail-archive.com/devel@mbuni.org/msg00397.html
CC-MAIN-2018-47
refinedweb
324
68.2
Using the low-level robot base controllers to drive the robotDescription: This tutorial teaches you how to start up and control the default robot base controllers (pr2_base_controller and pr2_odometry) directly, rather than at a high level (using move_base). Keywords: move, moving, base, casters, controllers, driving, drive, robot, PR2, wheels Tutorial Level: BEGINNER Next Tutorial: Using the base controller with odometry and transform information pr2_controllers/Tutorials/Using the base controller with odometry and transform information Contents Overview and prerequisites In order to send velocity commands to the base, we need the following components: - the base velocity controller that directly commands the motors - the base odometry controller - a higher-level program to send the velocity commands The first two components above are available in ROS. In this tutorial we will show how to use them by writing the third component, the higher-level program. Bring up a robot, either on the hardware or in gazebo. Note that, in standard robot start-up, the arms are not controlled and free to move. As a result, they might move around due to inertia as you are driving the base. The base velocity and odometry controller In this tutorial, we will be moving the base by using the base velocity and odometry controllers. Both of these controllers are normally loaded on robot start-up. To check, after starting the robot, use the pr2_controller_manager : rosrun pr2_controller_manager pr2_controller_manager list If you see both base_controller (running) and base_odometry (running) then the controllers are ready and we can proceed to use them. If you see no mention of either base_controller or base_odometry, then something went wrong during robot start-up. Abort this tutorial and investigate the problem. The controller message The base controller receives Twist messages on the '/base_controller/command' topic, which has two Vector3 fields: linear and angular. Nonzero entries in the x and y fields of linear causes the robot to move forwards and backwards (x), or strafe left and right (y), in the robot's base odometry frame. A nonzero entry in the z field of angular causes the robot to turn (yaw). The size of the entries changes the velocity of the robot, and a single command will only move the robot for a short period of time before stopping, so it does not run off into the wall (or you) when commands stop coming for any reason. (For the PR2, linear z and angular x and y are just ignored.) Velocities are in units of m/s and rad/s, as in all of ROS. Note that, for now, you don't have to worry about commanding the odometry controller ; it just needs to be running. We will use it explicitly in the next tutorial. Package setup In order to create a ROS node that sends goals to the trajectory controller, the first thing we'll need to do is create a package. To do this we'll use the handy roscreate-pkg command where we want to create the package directory: roscreate-pkg drive_base_tutorial roscpp geometry_msgs After this is done we'll need to roscd to the package we created, since we'll be using it as our workspace. roscd drive_base_tutorial Creating the node (C++) In this tutorial, we will write code in C++ for moving the robot using keyboard commands. For this, we do not need to worry about odometry, distance traveled, etc. An example showing how to have the robot move a certain distance based on odometry information will be shown in the next tutorial. 
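Before the full C++ node below, it can help to see how small a single velocity command really is. The following is a minimal Python sketch, not part of the package we are building here (the node name is arbitrary); it publishes one Twist on /base_controller/command that drives forward at 0.25 m/s while yawing at 0.5 rad/s, using only fields the controller actually reads:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('twist_sketch')
pub = rospy.Publisher('/base_controller/command', Twist, queue_size=1)
rospy.sleep(1.0)          # give the publisher time to connect

cmd = Twist()
cmd.linear.x = 0.25       # forward velocity in m/s
cmd.angular.z = 0.5       # yaw rate in rad/s
pub.publish(cmd)          # the base stops again shortly after commands stop arriving

Remember that the controller treats a lone command as transient, which is why the keyboard-driven node below publishes a fresh command each time you enter one.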
Put the following into src/drive_base.cpp: 1 #include <iostream> 2 3 #include <ros/ros.h> 4 #include <geometry_msgs/Twist.h> 5 6 class RobotDriver 7 { 8 private: 9 //! The node handle we'll be using 10 ros::NodeHandle nh_; 11 //! We will be publishing to the "/base_controller/command" topic to issue commands 12 ros::Publisher cmd_vel_pub_; 13 14 public: 15 //! ROS node initialization 16 RobotDriver(ros::NodeHandle &nh) 17 { 18 nh_ = nh; 19 //set up the publisher for the cmd_vel topic 20 cmd_vel_pub_ = nh_.advertise<geometry_msgs::Twist>("/base_controller/command", 1); 21 } 22 23 //! Loop forever while sending drive commands based on keyboard input 24 bool driveKeyboard() 25 { 26 std::cout << "Type a command and then press enter. " 27 "Use '+' to move forward, 'l' to turn left, " 28 "'r' to turn right, '.' to exit.\n"; 29 30 //we will be sending commands of type "twist" 31 geometry_msgs::Twist base_cmd; 32 33 char cmd[50]; 34 while(nh_.ok()){ 35 36 std::cin.getline(cmd, 50); 37 if(cmd[0]!='+' && cmd[0]!='l' && cmd[0]!='r' && cmd[0]!='.') 38 { 39 std::cout << "unknown command:" << cmd << "\n"; 40 continue; 41 } 42 43 base_cmd.linear.x = base_cmd.linear.y = base_cmd.angular.z = 0; 44 //move forward 45 if(cmd[0]=='+'){ 46 base_cmd.linear.x = 0.25; 47 } 48 //turn left (yaw) and drive forward at the same time 49 else if(cmd[0]=='l'){ 50 base_cmd.angular.z = 0.75; 51 base_cmd.linear.x = 0.25; 52 } 53 //turn right (yaw) and drive forward at the same time 54 else if(cmd[0]=='r'){ 55 base_cmd.angular.z = -0.75; 56 base_cmd.linear.x = 0.25; 57 } 58 //quit 59 else if(cmd[0]=='.'){ 60 break; 61 } 62 63 //publish the assembled command 64 cmd_vel_pub_.publish(base_cmd); 65 } 66 return true; 67 } 68 69 }; 70 71 int main(int argc, char** argv) 72 { 73 //init the ROS node 74 ros::init(argc, argv, "robot_driver"); 75 ros::NodeHandle nh; 76 77 RobotDriver driver(nh); 78 driver.driveKeyboard(); 79 } Building and running Add the following line to the CMakeLists.txt: rosbuild_add_executable(drive_base src/drive_base.cpp) and make the binary by typing make in the drive_base_tutorial directory. Run the drive_base program: bin/drive_base You should now be able to drive the robot around in the window running drive_base . You may jump to the next tutorial if you are not interested in doing the same with Python. Creating the node (Python) The code given below is parallel to the above code, just that this works in Python. The code can be downloaded from the following Github link. Base_Driver Python or can be copied from here. Put the code into src/teleop_py.py: import rospy from geometry_msgs.msg import Twist import sys twist = Twist() def values(): print '(w for forward, a for left, s for reverse, d for right,k for turning left,l for turning right and . 
to exit)' + '\n' s = raw_input(':- ') if s[0] == 'w': twist.linear.x = 1.0 twist.angular.z = 0.0 twist.linear.y = 0.0 elif s[0] == 's': twist.linear.x = -1.0 twist.angular.z = 0.0 twist.linear.y = 0.0 elif s[0] == 'd': twist.linear.y = -1.0 twist.angular.z = 0.0 twist.linear.x = 0.0 elif s[0] == 'a': twist.linear.y = 1.0 twist.angular.z = 0.0 twist.linear.x = 0.0 elif s[0] == 'k': twist.angular.z = 2.0 twist.linear.x = twist.linear.y = 0.0 elif s[0] == 'l': twist.angular.z = -2.0 twist.linear.x = twist.linear.y = 0.0 elif s[0] == '.': twist.angular.z = twist.linear.x = twist.linear.y = 0.0 sys.exit() else: twist.linear.x = twist.linear.y = twist.angular.z = 0.0 print 'Wrong command entered \n' return twist def keyboard(): pub = rospy.Publisher('base_controller/command',Twist, queue_size=1) rospy.init_node('teleop_py',anonymous=True) rate = rospy.Rate(1) while not rospy.is_shutdown(): twist = values() pub.publish(twist) rate.sleep() if __name__ == '__main__': try: keyboard() except rospy.ROSInterruptException: pass Building and running Type the following in the command file to make the python file executable. (Do this in the directory where you have stored the Python script) $ chmod +x teleop_py.py After this is done remember to do catkin_make to build your files. This can be done as follows: $ cd ~/catkin_ws $ catkin_make Now you should be able to run the teleop_py.py in the src folder. Type the following command in the right directory to get the file running: $ python teleop_py.py In the next tutorial, we will add information from odometry to make the robot travel for a specified distance: Using the base controller with odometry and transform information
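A closing note on the timeout behaviour mentioned earlier: because the base controller stops the robot when commands stop arriving, a node that should keep driving on its own (rather than react to keystrokes) has to republish its Twist at a steady rate. A minimal sketch of that pattern, under the same assumptions as the code above and with an arbitrary node name:

import rospy
from geometry_msgs.msg import Twist

rospy.init_node('keep_driving')
pub = rospy.Publisher('/base_controller/command', Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.25          # creep forward at 0.25 m/s

rate = rospy.Rate(10)        # resend 10 times per second
while not rospy.is_shutdown():
    pub.publish(cmd)         # keep the command fresh so the base does not time out
    rate.sleep()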
https://wiki.ros.org/pr2_controllers/Tutorials/Using%20the%20robot%20base%20controllers%20to%20drive%20the%20robot
CC-MAIN-2021-21
refinedweb
1,364
64.71
Help with I2C connection to BME280 on WiPy 2.0 ... Just trying to connect to a BME280 sensor on I2C with a WiPy 2.0 on a 2.0 Expansion Board but seem to be having a problem. Running V1.8.6-849-9569a73 which I believe is current since I just did a firmware update. First I'm just trying to get a basic I2C connection using the standard SDA and SCL pins (which I believe are G16 (P9) for SDA and G17 (P10) for SDL. I've removed the LED jumper on the expansion board. I'm running the REPL through ATOM on COM. When I run this: from machine import I2C i2c = I2C(0, I2C.MASTER, baudrate=100000) i2c.scan() it hangs for quite! Do I need a pull-up resistor on SDA? Also looking for a library for the BME280. If anyone knows of a good one that actually works well, please post a link. Would appreciate any help. Thanks. @sealyons I made a test with the calibrated temperature mesurement device: Calibrated device: 21.7 21.8 DS18B20: 21.9 21.9 DHT22: 21.8 21.9 SI7021: 21.7 21.8 BMP085: 22.4 22.5 BME280: 21.8 21.9 Besides the BMP085, all devices were on the spot = within 1 digit variance. That status was stable for 5 hours testing. Only the BMP085 was always ~0.6 °C above, but still that at the limits of it's spec. @robert-hh I just made a small test with a set of different sensors I had in my drawer and a small temp/humidity unit for home use. I placed them all outside, so I could compare the to values I get from the local university's weather station, 3.85km distant at almost the same elevation. The weather is calm outside. Almost not wind. DS18B20: 10.13C DHT22: 10.30C 74.90% SI7021: 10.01C 84.19% BMP085: 10.80C 1004.49hPa BME280: 10.19C 1007.28hPa 60.27% Weath-Stn: 10.40C 79.20% University: 10.29C 1004.26hPa 73.10% There seems to be no single winner. Even is all devices besides DHT22 claim to do calibrated temperature measurement with 0.1C error, the variation between them is larger, although not drastic. The same for pressure. What completely seems to be poor guessing are the humidity figures. The only device in the proper range seems to be the DHT22. I will repeat the test for temperature with a calibrated reference. @sealyons I compared that with two other sensors, which show both about 50%. My sensor shows ~35%. The other values are OK. The pressure deviates only by 0.2% from what the web page of the local university tells. I assume they have good instruments. I was too lazy to compare their RH value, because their instrument is outside, and I would have to compensate for the temperature difference. Maybe I'll give it a try. Update: That was easy. The BME280 seems right. The weather station tells 6.5g/m³ and 11°C, 67% RH. At 11°C, the saturation value is about 10g/m³, which gives 65%RH. The indoor temperature is 21°C, with a saturation amount of ~18g/m³. Assuming that the absolute humidity is the same, that gives ~36%RH. Ok, I have the library working now. Thanks for your help. I agree, the humidity value seems really low. It's raining here and the indoor humidity value is showing as 47.9%. I'd expect that to be higher. If I blow on the sensor the humidity value does go up, so it's responding but the returned value doesn't appear to be correct. I'll have to search around for any other micropython libraries for the BME280 and/or dig into the code and see if I can see anything. I do want something that reports valid values. I'm still going to move this over to Arduino and see what it shows with another library. @sealyons Puh! That's odd. There were some reports about faulty breadboard in the past. 
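To make the arithmetic in that comparison explicit, here is a small Python sketch of the same indoor/outdoor sanity check. The saturation values are the rough figures quoted above (about 10 g/m³ at 11 °C and about 18 g/m³ at 21 °C), not precise psychrometric numbers:

# absolute humidity reported outdoors by the weather station
abs_humidity = 6.5            # g/m^3

sat_outdoor = 10.0            # approx. saturation amount at 11 C, g/m^3
sat_indoor = 18.0             # approx. saturation amount at 21 C, g/m^3

rh_outdoor = 100 * abs_humidity / sat_outdoor   # ~65 %RH outdoors
rh_indoor = 100 * abs_humidity / sat_indoor     # ~36 %RH indoors, same air warmed up

print('outdoor: %.0f %%RH, indoor: %.0f %%RH' % (rh_outdoor, rh_indoor))

So an indoor reading in the mid-thirties is exactly what you would expect from outdoor air at roughly 65 %RH warmed to room temperature.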
Below is a module which I was using with my BME280. It is the initially linked one with the compensation calculation changed to float. The unit I have show far too low humidity values. # Authors: Paul Cunnane 2016, Peter Dahlebrg 2016 # # This module borrows from the Adafruit BME280 Python library. Original # Copyright notices are reproduced below. # # Those libraries were written for the Raspberry Pi. This modification is # intended for the MicroPython and esp8266 from ustruct import unpack, unpack_from from array import array # BME280 default address. BME280_I2CADDR = 0x76 # Operating Modes BME280_OSAMPLE_1 = 1 BME280_OSAMPLE_2 = 2 BME280_OSAMPLE_4 = 3 BME280_OSAMPLE_8 = 4 BME280_OSAMPLE_16 = 5 BME280_REGISTER_CONTROL_HUM = 0xF2 BME280_REGISTER_CONTROL = 0xF4 class BME280: def __init__(self, mode=BME280_OSAMPLE_8, self.address = address if i2c is None: raise ValueError('An I2C object is required.') self.i2c = i2c # load calibration data dig_88_a1 = self.i2c.readfrom_mem(self.address, 0x88, 26) dig_e1_e7 = self.i2c.readfrom_mem(self.address, 0xE1, 7) self.dig_T1, self.dig_T2, self.dig_T3, self.dig_P1, \ self.dig_P2, self.dig_P3, self.dig_P4, self.dig_P5, \ self.dig_P6, self.dig_P7, self.dig_P8, self.dig_P9, \ _, self.dig_H1 = unpack("<HhhHhhhhhhhhBB", dig_88_a1) self.dig_H2, self.dig_H3 = unpack("<hB", dig_e1_e7) e4_sign = unpack_from("<b", dig_e1_e7, 3)[0] self.dig_H4 = (e4_sign << 4) | (dig_e1_e7[4] & 0xF) e6_sign = unpack_from("<b", dig_e1_e7, 5)[0] self.dig_H5 = (e6_sign << 4) | ((dig_e1_e7[4] >> 4) & 0x0F) self.dig_H6 = unpack_from("<b", dig_e1_e7, 6)[0] self.i2c.writeto_mem(self.address, BME280_REGISTER_CONTROL, bytearray([0x3F])) self.t_fine = 0 # temporary data holders which stay allocated self._l1_barray = bytearray(1) self._l8_barray = bytearray(8) self._l3_resultarray = array("i", [0, 0, 0]) def read_raw_data(self, result): """ Reads the raw (uncompensated) data from the sensor. Args: result: array of length 3 or alike where the result will be stored, in temperature, pressure, humidity order Returns: None """ self._l1_barray[0] = self._mode self.i2c.writeto_mem(self.address, BME280_REGISTER_CONTROL_HUM, self._l1_barray) self._l1_barray[0] = self._mode << 5 | self._mode << 2 | 1 self.i2c.writeto_mem(self.address, BME280_REGISTER_CONTROL, self._l1_barray) sleep_time = 1250 + 2300 * (1 << self._mode) sleep_time = sleep_time + 2300 * (1 << self._mode) + 575 sleep_time = sleep_time + 2300 * (1 << self._mode) + 575 time.sleep_us(sleep_time) # Wait the required time # burst readout from 0xF7 to 0xFE, recommended by datasheet self.i2c.readfrom_mem_into(self.address, 0xF7, self._l8_barray) readout = self._l8_barray # pressure(0xF7): ((msb << 16) | (lsb << 8) | xlsb) >> 4 raw_press = ((readout[0] << 16) | (readout[1] << 8) | readout[2]) >> 4 # temperature(0xFA): ((msb << 16) | (lsb << 8) | xlsb) >> 4 raw_temp = ((readout[3] << 16) | (readout[4] << 8) | readout[5]) >> 4 # humidity(0xFD): (msb << 8) | lsb raw_hum = (readout[6] << 8) | readout[7] result[0] = raw_temp result[1] = raw_press result[2] = raw_hum def read_compensated_data(self, result=None): """ Reads the data from the sensor and returns the compensated data. Args: result: array of length 3 or alike where the result will be stored, in temperature, pressure, humidity order. You may use this to read out the sensor without allocating heap memory Returns: array with temperature, pressure, humidity. 
Will be the one from the result parameter if not None """ self.read_raw_data(self._l3_resultarray) raw_temp, raw_press, raw_hum = self._l3_resultarray # temperature var1 = (raw_temp/16384.0 - self.dig_T1/1024.0) * self.dig_T2 var2 = raw_temp/131072.0 - self.dig_T1/8192.0 var2 = var2 * var2 * self.dig_T3 self.t_fine = var1 + var2 temp = self.t_fine / 5120.0 # pressure var1 = (self.t_fine/2.0) - 64000.0 var2 = var1 * var1 * self.dig_P6 / 32768.0 + var1 * self.dig_P5 * 2.0 if (var1 == 0.0): pressure = 0 # avoid exception caused by division by zero else: p = ((1048576.0 - raw_press) - (var2 / 4096.0)) * 6250.0 / var1 var1 = self.dig_P9 * p * p / 2147483648.0 var2 = p * self.dig_P8 / 32768.0 pressure = p + (var1 + var2 + self.dig_P7) / 16.0 # humidity h = (self.t_fine - 76800.0) h = ((raw_hum - (self.dig_H4 * 64.0 + self.dig_H5 / 16384.0 * h)) * (self.dig_H2 / 65536.0 * (1.0 + self.dig_H6 / 67108864.0 * h * (1.0 + self.dig_H3 / 67108864.0 * h)))) humidity = h * (1.0 - self.dig_H1 * h / 524288.0) if result: result[0] = temp result[1] = pressure result[2] = humidity return result return array("f", (temp, pressure, humidity)) @property def values(self): """ human readable values """ t, p, h = self.read_compensated_data() return ("{:.2f}C".format(t), "{:.2f}hPa".format(p/100), "{:.2f}%".format(h)) For testing, assuming the file is called bme280.py, run: from machine import I2C from bme280 import * i2c=I2C() bme280 = BME280(i2c=i2c) bme280.values Just a quick update .... Both of my BME280's have the UP marking. When I looked at both SCL & SDA when not communicating, they were measuring LOW. Something is odd. Just on an odd chance, I moved everything off the breadboard I was using to another breadboard and BEHOLD, scan is now returning the correct 118 address for both of my BME280's!!!! Not sure what the issue is with that breadboard but that appears to have been my problem all along. Now I'm going to try to get the BME280 library running and see if I can communicate with it ... @sealyons said in Help with I2C connection to BME280 on WiPy 2.0 ...: No need to explain the SMD resistors, I'm an EE. That's good. So you can read follow the traces on the board and understand the logic. The level of knowledge is varying here. As said, I could not clearly see the wiring. Black on black give a bad contrast. So you surely have measured the level of SDA and SCL when not communicating as being ~3.3 V. Even if SDA and SCL are swapped, scan() should return fast. The long time for scan() can be caused by SCL being pulled low. The situation is quite confusing. Can you read the imprint on the case of the BME280. The letters are pretty small, so you need a magnifying glass. According to the data sheet, it should be nnn and UP, which nnn being a 3 alphanumeric symbols, and UP identifying the device as BME280. But from the position of the vent hole it looks like a BME280 and not a BMP280, which anyhow has the same pin-out and the same communication specs. No need to explain the SMD resistors, I'm an EE. The pins on the breakout board are Vin, GND, and 3V3 starting at the left. I'm using the GND and 3V3 pins and I bring those to the -/+ rails on the breadboard. And I did measure the 3V3 value before I did any work with the BME280, just to be sure. Neither of them have gotten anything above 3.3V. Still don't understand why the BME280's won't respond to the I2C scan. @sealyons If I look at you wiring, I cannot clearly see how the power is connected. In the orientation of you picture, the topmost left connector is 5V. 
That must not be used. The second from the left is GND, the third is 3.3V. You must use GND and 3.3V to power your device. And yes, the breakout board has pull-up resistors (SDO pull-down). That are the small components with 103 = 10 followed by three zeroes = 10000 Ohm printed on them. And they are conected to SDA, SCL, CSB and SDO. So you do not even have to connect CSB and SDO yourself. I'm asking because I received once boards with the wrong components on them, capacitors instead of resistors. I do appreciate your help. Ok, I knew nothing about LoPy. The image I originally posted was from the vendor where I purchased the BME280 (it's exactly the same as what I'm using). Attached are images of my exact BME280 (this is one of the 2 that I have, both are not responding to the I2C scan). I've already stated this but my connections are as follows; Vcc to 3.3V, GND to GND, SCL to P10 / G17 SDA to P9 / G16 CSB to 3.3V Here's an image of my setup. I've tried it with and without external pull-ups (the 10K pull-ups are on the breakout), same result. Again, neither are responding to the I2C scan() function. The function hangs for a few seconds to a few minutes but it does return. When I use other I2C sensors, the scan() function works properly and I do get the correct addresses returned, so I believe my WiPy is functioning correctly. So, either my BME280's are not working or the connections are wrong. This should be pretty simple, I've never had such trouble. I'm going to switch this over to Arduino and give it a run there just to see if the BME280 will function in that environment. If anyone has any other suggestions, please reply. @sealyons Just to please you soul, I have tested that with a WiPy2 clone, which is just a generic ESP32 module with the WiPy firmware. It works. And I also found backside pictures of the modules you showed. @sealyons For I2C there is no difference between a WIPy and a LoPy. The Lopy is a WiPy with an SX127x chip for Lora and Sigfox. Looking at the picture, I see 10 kOhm pullup-resistors for the data lines. I cannot see the traces at the back side, but form what I see, the connections should be (from left to right, connections pointing to you): Vcc -> 3.3v, GND, SCL, SDA, CSB -> 3.3V, SDO -> GND. CSB must be connected to 3.3V at power-on. So it is always a good idea make the Vcc as the last one. Is the picture form the module you use? I cannot see any signs of connecting attempts. Just tried the other BME280 that I have and I can't get that to respond to the I2C scan either. Not sure how to proceed. I believe I have it connected correctly, using Vcc, GND, SDA, SCL and CSB. If anyone has a BME280 that looks like this and can confirm connections, please reply. Thanks for your reply but it's really not helping me much. I'm using a WiPy 2.0 & Expansion Board 2, not a LoPy4 which I have no experience with so I'm not sure whether there are syntax differences. As far as I know, when defining the I2C object on the WiPy 2.0 you have to include the I2C.MASTER keyword as I did. Also, as I said in my last post, I can't talk with the BME280, so I'm not there yet. As I stated, I can't get the I2C scan to find the BME280. My BME280 has 6 connections; Vcc, GND, SCL, SDA, CSB and SDO. Since I verified that my WiPy can talk with other sensors, I'm guessing there must be something wrong with my BME280 sensor. 
I may either connect this sensor to Arduino and see if I can get it to work there or connect the other BME280 sensor I have (same model) and see what it does (still have to prep it). @sealyons I have responded to that discussion in the other forum. I tested a BME280 with that library with a LoPy4 and it worked. The test lines to run that module are: from machine import I2C from bme280 import * i2c = I2C() bme280 = BME280(address=0x76, i2c=i2c) bme280.values Connection: BME280 LoPy ----------------------- VIN 3.3V GND GND SDA P9 (aka G16) SCL P10 (aka G17) Before trying to access the BME280 itself, just want to be sure I can see the sensor over I2C first. I'm not sure whether my BME280 breakout has integrated pull-ups or not. I'm thinking it does but I did add 4.7K pull-ups just in case. CS is tied HIGH. All cables are 6 in. or less. When I run this it's not hanging as long (just a few seconds now) but it's still not returning anything. It just returns [ ]. Expecting to see [118] since the BME280's address is 0x76. I guess it's possible the breakout is bad, I have another I will try. Here's some updates. First, I'm just trying to get I2C working properly before continuing with the BME280 connection. I originally did not have external pull-ups on SDA & SCL (and I have not added them yet) . I decided to try another I2C sensor to at least see if I could get scan() to find it's address. I connected to an Adafruit TSL2561 I2C Luminosity Sensor (since I had one kicking around) with it's I2C ADDR line floating, so it's I2C address should be 57 decimal (0x39). I believe this breakout has integrated pull-ups on SDA & SCL. When I connected everything and ran the same code I gave at the beginning of my post, it worked and returned [57]. So, it appears my WiPy I2C is working correctly. Back to connecting to the BME280 ... @sealyons Hi! Check the simple things at first: - Do you have Pull-Ups on the SDA and SCL lines (4k7 Ohm)? - Try to switch SDA and SCL and check what happens. - How long are the cabels? <= 1m should be fine. - is the BME280 connected in the right way? CS should be HIGH to activate the I2C bus. Cheers, Thomas
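Following up on Thomas's checklist, the first two points can be checked from the REPL before involving the BME280 driver at all. This is only a sketch; adjust the pin names to however your SDA and SCL are actually wired:

from machine import I2C, Pin

# With no transfer in progress, both bus lines should idle high (reads of 1).
# A line stuck low usually means missing pull-ups, swapped pins or a bad breadboard contact.
sda = Pin('P9', mode=Pin.IN)
scl = Pin('P10', mode=Pin.IN)
print('SDA idle:', sda(), 'SCL idle:', scl())

i2c = I2C(0, I2C.MASTER, baudrate=100000)
devices = i2c.scan()
print('devices found:', devices)
print('BME280 present:', 0x76 in devices or 0x77 in devices)  # 118 or 119 decimal

If scan() comes back quickly with [118] (0x76, or 0x77 when SDO is pulled high), the wiring is fine and any remaining problem is in the driver code rather than on the bus.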
https://forum.pycom.io/topic/3924/help-with-i2c-connection-to-bme280-on-wipy-2-0/12?lang=en-US
CC-MAIN-2020-29
refinedweb
2,876
77.64
I've been working on a new open source project this past week and had a requirement to get ARGB value of every pixel in the image. I thought 'OK easy, just use Bitmap.GetPixel()'! At first I was using small images so there was not a problem, but as the size of the image increased, the performance of the application just dropped! After a little bit of digging around I found the answer right under my nose! Rob Miles recently wrote an article on Image Manipulation in Windows Mobile 5.0 for the OpenNETCF Community site. The section that stood out to me was 'Speeding up Bitmap Access'. I took the techniques he showed in the article and implemented them in my code. The result was an average savings of 11.5s to grab the pixel ARGB values. Here is the original code (average for time for a 324x324 image was 12s): for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { intImage[x][y] = image.GetPixel(x, y).ToArgb(); } } And here is the new code (average for time for a 324x324 image was 0.5s): unsafe { BitmapData bd = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb); int sourceWidth = image; for (int y = 0; y < height; y++) { pPixel = (PixelData*)(bitmapBaseByte + y * sourceWidth); for (int x = 0; x < width; x++) { intImage[x][y] = (int)((0xff << 0x18) | (pPixel->red << 0x10) | (pPixel->green << 8) | pPixel->blue); pPixel++; } } image.UnlockBits(bd); } By using unsafe block and directly accessing the bitmap data in memory we save a huge 11.5s on average. If you want a more detailed explanation on how the code works read the article. For those interested here is the image I was testing with: I just finished writing a new article on sharing ink data created on a windows Mobile 6 device using the Mobile Ink Library with a custom desktop application and OneNote 2007. If you have used the Mobile Ink Library then you'll probably want to know that I have made changes to the class library but I have marked methods obsolete to keep backward compatibility. I've also posted the 'technical overview' of the Mobile Ink Library which is basically the same as the previous blog posts on WISP Lite and Mobile Ink Library but updated with the new updates. Here are some screen shots on sharing ink with a custom desktop application: And here are some screen shots on sharing ink data with OneNote 2007: I have yet to update the SVN Repository but will update that in a couple of days. I have update the installable source located here. I'm done inking for a while but the source is in a good state to be used in projects IMO. I have been working on another open source project that will hopefully be released in the coming days plus a new commercial product so stay tuned! We recently released two new products for Windows Mobile and Windows CE developers. The first one is TaskManCF which runs on Windows Mobile (no SmartPhone) and Windows CE devices. It allows you to view all applications running, processes, threads etc. Here is a screen shot of the product running on a WM5 device. The second product we released is a Media Player Control for Windows CE devices. If you are a Compact Framework developer and want to add video media directly into your application then check out the this control. It allows you to add the video window directly onto a form so no loading up Media Player and hacking it to play video. We created a 'Video Kiosk' sample that highlights this feature. Here are some screen shots. 
Main Screen: Video Playing: The Media Player Control for Windows CE is only for Windows CE devices (as the name implies) and not supported on Windows Mobile. We are working on a version for Windows Mobile devices so stay tuned... There was a question on how to create a Textbox similar in style to Outlook Mobile contacts textbox shown below (single line shown for inputs). So to accomplish this I wrote a quick custom control that included a textbox internally and the control was on pixel higher than the textbox and set the Textbox.BorderStyle to None. On the OnPaint method of the custom control I just call Graphics.Clear() to draw a black line since only one pixel height is visible. Here is the source. It still requires a lot of work but it's a good start and it answers the original question. public class OutlookMobileTextBox : Control { private TextBox m_textBox; public OutlookMobileTextBox() { m_textBox = new TextBox(); m_textBox.Location = new System.Drawing.Point(48, 107); m_textBox.BorderStyle = BorderStyle.None; m_textBox.Size = new System.Drawing.Size(100, 21); Controls.Add(m_textBox); } protected override void OnResize(EventArgs e) { base.OnResize(e); m_textBox.Bounds = new Rectangle(0, 0, this.Width, m_textBox.Height); base.Height = m_textBox.Height + 1; } protected override void OnPaint(PaintEventArgs e) { e.Graphics.Clear(Color.Black); } protected override void OnPaintBackground(PaintEventArgs e) { } public override string Text { get { return this.m_textBox.Text; } set { this.m_textBox.Text = value; } } } And here is what it looks like on a device: DemoCampToronto14 is happening this Sept 17 2007 and I'll be there. What is DemoCamp? It's a variation of the unconference style of event where community members share what they've been working on, demo their products and meet others. I won't be presenting anything but will be attending (OpenNETCF has also helped sponsor the event). It's a great place to meet the people and see what cool things people are working on! If you are in the Toronto area and attending (registration has sold out) drop by and say hi! If we have never met here's what I look like :). The opinions expressed herein are my own personal opinions and do not represent my employer's view in anyway. Theme design by Jelle Druyts
http://blog.opennetcf.com/marteaga/default,month,2007-09.aspx
crawl-002
refinedweb
995
64.81
This namespace holds a enum of various period types like era, year, month, etc.. More... This namespace holds a enum of various period types like era, year, month, etc.. the type that defines a flag that holds a period identifier Special invalid value, should not be used directly. Era i.e. AC, BC in Gregorian and Julian calendar, range [0,1]. Year, it is calendar specific, for example 2011 in Gregorian calendar. Extended year for Gregorian/Julian calendars, where 1 BC == 0, 2 BC == -1. The month of year, calendar specific, in Gregorian [0..11]. The day of month, calendar specific, in Gregorian [1..31]. The number of day in year, starting from 1, in Gregorian [1..366]. Day of week, Sunday=1, Monday=2,..., Saturday=7. Note that updating this value respects local day of week, so for example, If first day of week is Monday and the current day is Tuesday then setting the value to Sunday (1) would forward the date by 5 days forward and not backward by two days as it could be expected if the numbers were taken as is. Original number of the day of the week in month. For example 1st Sunday, 2nd Sunday, etc. in Gregorian [1..5] Local day of week, for example in France Monday is 1, in US Sunday is 1, [1..7]. 24 clock hour [0..23] 12 clock hour [0..11] am or pm marker [0..1] minute [0..59] second [0..59] The week number in the year. The week number within current month. First day of week, constant, for example Sunday in US = 1, Monday in France = 2.
http://www.boost.org/doc/libs/1_53_0/libs/locale/doc/html/namespaceboost_1_1locale_1_1period_1_1marks.html
CC-MAIN-2015-32
refinedweb
274
77.03
2.1, being based on newer JAX-WS and JAXB standards and introducing new dependencies could cause some issues to make a migration from 2.0.x to 2.1 more than a "drop in" replacement. This page documents some of those issues. Code generation changes WS-Addressing types JAX-WS 2.1 supports the WS-Addressing stuff right in the API's. Thus, WSDL's that take/return the EnpointReferenceType will now generate the JAX-WS 2.1 EndpointReference instead of the CXF proprietary type that was generated in CXF 2.0.x. There is a flag (-noAddressBinding) to wsdl2java to disable the new type mapping, but the generated code is then not jaxws compliant. JAXB 2.1 annotations The generated code now adds some JAXB 2.1 specific annotations like @XmlSeeAlso. Thus, the code will not work with previous versions of CXF/JAX-WS. java2wsdl tool is gone The java2wsdl tool was replaced with the java2ws tool which provides much more flexibility. Add the -wsdl flag to java2ws if you just need the wsdl. java2wsdl goal in cxf-codegen-plugin is also gone There is a new plugin that uses the java2ws tool now. See: Runtime changes ASM jar The JAX-WS frontend now "requires" asm 2.x or 3.x to be able to process some of the JAXB annotations on the SEI interface. If you don't use those annotations on the SEI, or if you have generated wrapper classes (either via wsdl2java or java2ws with -wrapperbean flag), you can remove the asm jar. If you leave asm jar, there can be conflicts with other apps that use asm. The predominant one is Hibernate. The "workaround" for Hibernate is to remove the asm 1.x jar they use and replace the cglib jar with the cglib-nodeps jar that includes a special internal version of asm that would not conflict with the 2.x/3.x version we need. Woodstox requirement Due to a bug to be fixed in 2.1.1, it looks like wstx-asl jar is required for the application to startup if the spring beans file requires namespace support to parse. JDK 1.6 support 2.1 supports JDK 1.6 update 4 and later as update 4 includes the 2.1 versions of the JAX-WS API jar and the 2.1 version of the JAXB api jar and runtime. To use 2.1 with 1.6 up to update 3, you will need to add the jaxws-api jar and jaxb jars to the jre/lib/endorsed directories. You don't need to do anything special for JDK 1.5. Aegis configuration The configuration for the Aegis runtime has changed. See for descriptions of the changes.
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=84552&showComments=true&showCommentArea=true
CC-MAIN-2017-47
refinedweb
456
77.13
When we work with ASP.NET MVC, we get some default folders (Views, Models, Controllers, and a few more), but if the application is big, it becomes complicated to keep a logical structure for the files and modules of the site. To solve this problem, ASP.NET MVC provides a powerful feature called areas; with areas it is possible to "isolate" the modules of your site. First step: create a new ASP.NET MVC 4 application; I recommend using the basic template. Next step: right-click the project name -> Add -> Area, and assign a name to the new area. When the area has been created, we see a new folder called Areas, and within this folder you can see a folder with the name of your area and some files; think of an area as a mini-MVC project within your main project. The next image shows how it looks if you create an area called Reportes. An important file that is created as well is ReportesAreaRegistration.cs; this file is used to configure the routing system for the area. public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute( "Reportes_default", "Reportes/{controller}/{action}/{id}", new { action = "Index", id = UrlParameter.Optional } ); } Also, in Global.asax, in the Application_Start method, the configuration for the areas is called: AreaRegistration.RegisterAllAreas(); Now, if you run the application, you receive an error that says: The request for 'Home' has found the following matching controllers: Areas.Areas.Reportes.Controllers.HomeController Areas.Controllers.HomeController This error is produced because you have two controllers with the same name, but the solution is easy: go to the RegisterRoutes method of the RouteConfig class and add a namespace: routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }, namespaces: new[] { "Areas.Controllers" } ); I hope this post will be useful.
https://www.codeproject.com/Tips/601504/Using-areas-in-ASP-NET-MVC-to-organize-a-project
CC-MAIN-2018-13
refinedweb
306
55.64
The java.lang.Class class is the center of reflection in Java. An object of the Class class represents a class in a program at runtime. The Class class is a generic class. It takes a type parameter, which is the type of the class represented by the Class object. For example, Class<Boolean> represents the class object for the Boolean class. Class<?> represents a class type whose class is unknown. We can use Class class to find information about a class at runtime. We can get the reference to the Class object of a class in the followings ways: A class literal is the class name followed by a dot and the word "class." For example, if you have a class Test, its class literal is Test.class and you can write Class<Test> testClass = Test.class; You can also get the class object for primitive data types and the keyword void using class literals as boolean.class, byte.class, char.class, short.class, int.class, long.class, float.class, double.class, and void.class. Each wrapper primitive data type class has a static field named TYPE, which has the reference to the class object of the primitive data type it represents. int.class and Integer.TYPE refer to the same class object. public class Main { public static void main(String[] args) { Class c = boolean.class;// w ww . j a v a 2s. c om c = Boolean.TYPE; c = byte.class; c = Byte.TYPE; c = char.class; c = Character.TYPE; c = short.class; c = Short.TYPE ; c = int.class; c = Integer.TYPE; c = long.class; c = Long.TYPE; c = float.class; c = Float.TYPE; c = double.class; c = Double.TYPE; c = void.class; c = Void.TYPE; } } The Object class has a getClass() method, which returns the reference to the Class object of the class of the object. The following code shows how to get the reference to the Class object of the Test class: class Test{/* w w w. j a v a 2s. c o m*/ } public class Main { public static void main(String[] args) { Test testRef = new Test(); Class testClass = testRef.getClass(); } } The Class class forName() static method returns a reference to a Class object. Its overloaded methods are Class<?> forName(String className) Class<?> forName(String name, boolean initialize, ClassLoader loader) The first version of the forName() method takes the fully qualified name of the class as parameter and loads the class, and returns its object reference. If the class is already loaded, it returns the reference to the Class object. The second version method can control if to initialize or not to initialize the class when it is loaded. We can also passin class loader. To load a class named com.java2s..Test: Class testClass = Class.forName("com.java2s..Test"); The following code shows how to load a class and gets the reference to its Class object. class MyClass {// w ww . jav a 2 s . co m static { System.out.println("Loading class MyClass..."); } } public class Main { public static void main(String[] args) { try { String className = "MyClass"; boolean initialize = false; ClassLoader cLoader = Main.class.getClassLoader(); Class c = Class.forName(className, initialize, cLoader); className = "MyClass"; System.out.println("about to load"); // Will load and initialize the class c = Class.forName(className); } catch (ClassNotFoundException e) { System.out.println(e.getMessage()); } } } The code above generates the following result.
http://www.java2s.com/Tutorials/Java/Java_Reflection/0020__Java_java.lang.Class.htm
CC-MAIN-2017-39
refinedweb
559
71.41
Created on 2004-02-23 12:10 by tanzer, last changed 2004-08-08 06:14 by tim.peters. This issue is now closed. In Python 2.3.3, doctest chokes on classes which contain an attribute of type super: Traceback (most recent call last): File "/ttt/private/tanzer/temp/confuse_doctest.py", line 14, in ? import doctest, confuse_doctest File "/Node/tttprime/ttt/private/tanzer/temp/confuse_doctest.py", line 15, in ? doctest.testmod(confuse_doctest) File "/usr/lib/python2.3/doctest.py", line 1148, in testmod f, t = tester.rundict(m.__dict__, name, m) File "/usr/lib/python2.3/doctest.py", line 908, in rundict f2, t2 = self.__runone(value, name + "." + thisname) File "/usr/lib/python2.3/doctest.py", line 1069, in __runone return self.rundoc(target, name) File "/usr/lib/python2.3/doctest.py", line 828, in rundoc f2, t2 = self.run__test__(d, name) File "/usr/lib/python2.3/doctest.py", line 937, in run__test__ raise TypeError("Tester.run__test__: values in " TypeError: Tester.run__test__: values in dict must be strings, functions, methods, or classes; <super: <class 'A'>, NULL> A simple example triggering the bug is attached. Python 2.3.3 (#2, Jan 13 2004, 00:47:05) [GCC 3.3.3 20040110 (prerelease) (Debian)] on linux2 Logged In: YES user_id=2402 I patched doctest.py to avoid the traceback (see attached patch-file). I'm not sure if this is the right place to fix the bug, though. Maybe inspect.classify_class_attrs should be changed instead? The second chunk of the patch improves the traceback given by doctest by giving the name of the offending dict-item. Logged In: YES user_id=195958 The problem is that inspect.get_class_attrs()'s "method" classification is a bit hetergeneous: it contains methods, and any method descriptors *except* classmethod and staticmethod. The proposed patch will fix behavior for super(), but not for any other method descriptors. So perhaps it would be better to explicitly test "_isfunction(v) or _ismethod(v)" rather than "not isinstance(value, super)". Logged In: YES user_id=31435 This got fixed as part of the massive doctest refactoring for 2.4. I have no plan to backport any of that to the 2.3 line.
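(For reference, the attached confuse_doctest.py is not included here; a module of roughly the following shape triggers the same traceback. The key is a class attribute whose value is an unbound super object, which is what appears as <super: <class 'A'>, NULL>.)

class A(object):
    """
    >>> A().f()
    1
    """
    def f(self):
        return 1

# the classic "autosuper" idiom stores an unbound super object on the class
A.sup = super(A)

if __name__ == "__main__":
    import doctest
    doctest.testmod()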
http://bugs.python.org/issue902628
CC-MAIN-2014-52
refinedweb
361
62.34
Help answer threads with 0 replies . Blogs Recent Entries Best Entries Best Blogs Blog List Search Blogs Forums HCL Reviews Tutorials Articles Search Today's Posts Mark Forums Read LinuxQuestions.org > Blogs > Void cries or bashing. This blog is devoted to nonconstructive criticism, bashing, and other things that, being neither answers nor questions, are evidently unwelcome anywhere else in Linux Questions. This surprised even me No problems here. Posted 04-23-2011 at 12:14 AM by lupusarcanus This surprised even me AGer, I share your hypothesis, though I do believe I somehow ended up on the opposite side of the spectrum by going minimalist (OpenBSD, as default as possible). My theory is that complex graphical software sucks, and I trust my ability to learn how to use simple, reliable, command-line software in complex ways more than I trust complex graphical software's ability to shield me from having to learn. Too many complete failures with complex graphical software, I guess. This much we certainly agree on. rich_c, do larger user bases cause user-friendly code? Does user-friendly code cause larger user bases? Is it possible to have compact, efficient, reliable code and millions or billions of users? Or are the two linked together in some freaky synergy? My complaint isn't about the size of the user base, rather, it's with the sheer amount of crap code that gets piled higher and higher with even "server edition" linux distros. It's maddening that I have to build LFS to get linux the way I want it anymore. Maybe I'm just getting old and cranky...I think I need a nap. Posted 04-21-2011 at 06:42 PM by rocket357 This surprised even me Quote: Originally Posted by rocket357 Why do Linux users obsess over converting the entire world to FOSS? I can think of two potentially valid reasons: 1. Maybe the majority of FOSS users think they ought to share the good thing they're on to, to benefit others who might not be aware of the non proprietary alternatives. 2. Maybe FOSS users think if they can encourage a larger market share, then more bugs will be reported and fixed. Again, everyone wins! Posted 04-21-2011 at 04:32 AM by rich_c This surprised even me "Why do Linux users obsess over converting the entire world to FOSS?" Yes, it is kind of obsession. I have never discussed that with professionals, so self observation may lead to wrong conclusions, but my current hypothesis is that I am unsatisfied with all modern software so much that it hurts. Pro money development has very good reasons to produce junk, possibly with very few exceptions like QNX. How that can be different in the world where people eat, drink, drive junk? I do not see fundamental reasons that prohibit FOSS to be different. So, there is hope. Last hope, that is hard to loose. Posted 04-21-2011 at 03:45 AM by AGer This surprised even me "Are they good enough to convince everybody that FOSS is the way to go?" Why do Linux users obsess over converting the entire world to FOSS? Are you happy with Linux? If so, great...forget converting the world. They'll use wtf they think is best (and for some, hand holding from Microsoft is the only choice). If you aren't happy with Linux, change it. Not aiming this at you personally, AGer, but it seems odd to me that Linux users are so OCD over "market share" and "convincing the world". Posted 04-20-2011 at 03:00 PM by rocket357 Yet another commented example of why nada Let me list the main faults: 1. Unable to update what can be updated. 2. 
Does not understand that if the 404 error is received the Internet connection is OK. 3. Unable to switch to another mirror. It knows it, BTW. 4. Does not assume it is silly and imperfect. THE MOST COMMON FOSS GLITCH. More on 4. The pro way is: if something may be wrong, provide an override. The Ubuntu way: try to do that which cannot be done. With the upgrade, why not highlight problematic packages, show dependencies, and let me decide. Next I ran the Janitor. It suggested to remove a bunch of packages that are not used anymore. HOW COME this is possible with automatic package management and updates? More so, how come that only 1 of 3 kernel headers packages was selected for removal? Ubuntu feels free to release pure junk? You write Ubuntu is already broken. I agree, but it is THE distribution that defines Linux for the masses. Now "Refusing to update is better than partially updating and breaking". Correct. But it sounds somewhat like an achievement and I disagree here. Yes, it is common to write crap and iron it until the main use cases are more or less covered. There is a special buzz word for exactly that - tests first development. The problem is that it cannot be the only way to develop, but to do something better one has to sacrifice some freedom, and FOSS is all about freedom. This is why there is either a charismatic leader capable of taking away freedom, or a boss, or a road to failure (like firefox - Firefox - Firefox!! - FIREFOX!!! - Chrome). Frankly, when I compare Linux to QNX I am not that sure Linux is a proof of the FOSS superiority as the way to develop software. Re: "officially supported by server hardware". Either nonsense, or yet another problem. Hardware is interacting with the kernel and only with the kernel. If Linux cannot provide me with an environment where this thing either works with some kernel or money back, I am not interested. I would better use Windows which can discipline hardware vendors, even though it can neither do fork nor send a signal to a thread (oops, looks like .NET can, and it also has lighter and more efficient threads than Linux, and also has the compiler services namespace, and type safety that allows things like Silverlight security that Linux has exactly nothing to match with, and perfect integration between object, dynamic, and functional worlds). I understand all the .NET depends on few people and their status in Microsoft and WPF is already fading out and Muglia is gone and Silverlight is not winning big against Flash and Window Phone 7 is likely to fail and bury both Silverlight and Nokia, but the level of Linux innovation is open to discussion at best. My point is: Linux can built on its strength, be good to those who need that strength, reach some 10% of the desktop market by enlightening users and be never ignored, but instead it is trying to grab the "average user" which is impossible on the desktop and the last decade is mostly wasted. Posted 02-26-2011 at 06:20 PM by AGer Yet another commented example of why nada Refusing to update is better than partially updating and breaking... Oh wait it's already broken because it's Ubuntu. Either way, I'm sure windows users are used to reinstalling anyway. On top of that, if you are using it like windows (e.g. every day with automatic updates turned on) you will probably never have such problems. Aside from Ubuntu's shortcomings, no company should be using it in a production environment. It's not designed for that, it's designed for the home user. Red Hat is designed at production. 
They have set versions, recommended up date plans, long version support, support plan built into the cost. Red Hat is also the only linux os that I've seen officially supported by server hardware. Posted 02-26-2011 at 03:16 PM by lumak They do not come because they do not care In a fully static linked linux, we would see universal binaries. However, we only see partial static linked binaries such as drivers, blender, openoffice, firefox and other software that is designed for one large purpose. And even in that respect you have blender which is designed to fit into both the *nix file hierarchy while at the same time can exist in the users /home... Not sure if openoffice can do the same. I haven't tried. The old standard of /home does not quite fit the way windows users install software. They expect a /Program Files and shortcut links instead of properly defined PATH variables. Not to mention the concept of a symbolic link is completely lost on them. Rather I should say, they try and compare it to their windows shortcut links which are more akin to .desktop files. There is one linux distro, pointed out on 'general', that attempts to Windowize the *nix file tree. GoboLinux. But even this would require polkit and packaged binaries to install the packages. It almost seems like the only fix for this all is that more software programmers release universal static binaries for their programs that will target specific glibc/python/java versions and allow a user to install it as they please. Granted this is an extra compliance measure when so many even have troubles working with automake and or standards regarding the hicolor icon theme and desktop files. Posted 01-27-2011 at 12:08 PM by lumak They do not come because they do not care I think as more people become familiar with 'app stores' and appreciate how similar (Note, similar not the same.) they are to a Linux distro's repositories they'll apreciate the benefits in security and stability that we enjoy over the free-for-all mess that installing apps on a Windows machine ends up as. Posted 01-27-2011 at 08:03 AM by rich_c Another good example of you know what What a nice day is it! Immediately after posting this I have got a "You cannot see this page since you are not log in" screen. So, I managed to post to a blog without being logged in? We indeed are where we are.... Posted 06-02-2010 at 12:17 AM by AGer My multy boot with Windows 7 rant I was so upset with the findings that forgot to mention a couple of details. First, GRUB is the only way to go with BCD since it does not require copying of the boot sector to a file each time something - the kernel or the menu - changes. Second, the LILO step is not necessary if grub looks for its stages at the correct place regardless of how the change loader presents the disks. And GRUB actually does so, provided you do everything correctly. Unfortunately, grub does not use the correct way any program should run - this is the data, that is the result. In particular, LILO has the configuration file and if I run LILO I can review what was done later. With GRUB all I have is the "grub>" prompt. Thus, nothing is documented, presumably to make it harder to proof GRUB is always alpha, as if the explanations why GRUB 2 is necessary are not enough. What I guess I observed, no proof left as explained above, is that the "setup" command works correctly if I specify both the install and image drives but does not work with BCD if I use the "root" command to set the default. 
Interestingly enough, GRUB docs complain that some BIOSes do not provide the correct environment for GRUB and the "d" option of "install" should be specified anyway. This issues the questions: - How come BIOS should know/care about GRUB? - Why not to make the option that never hurts the default? - Do GRUB developers remember there may be a chain loaded or do they think they are Microsoft too? The positive finding is that 3 hard drives are sufficient to play the "Linux and Windows" game: one for Windows, one for Linux, and the third one to figure out how exactly GRUB works. Posted 02-14-2010 at 07:16 AM by AGer Why Linux is and always will be at 1% of desktops Quote:. Yes, there is .NET software working fast. Like Paint.NET. Or the Singularity OS. In some synthetic tests I did C# was the fastest, beating C++ with considerable margin. Paint .NET requires 29 Megs of memory; GIMP - 43. Paint .NET starts in Virtual Box about 3 times faster then GIMP on XFCE. So, what looks like sh*t? If Windows dies in foreseeable future, it will take Microsoft with it, I hope. .NET will be liberated, get the same status as Java and replace it in no time. If Windows does not die in the foreseeable future, what's the problem? I guess .NET, Java, and all wannabees share one huge mistake, possibly purposely implanted. Managed code removes an object when nobody references it. Good. Now I want to get rid of an object and tell everyone that references it to stop doing so. No way. Quote: no, you're wrong. without distribution maintainers we will still have linux. a linux that will be used by web-eleet: hackers and sysadmins, the ones that really want to know more or just need stability required to serve something heavy You may be right here, and hope you are, but I am afraid that we passed that point already. I cannot recall a one-man distro that is not based on something else. I guess more likely Linux will be for organizations only. Posted 12-12-2009 at 04:34 AM by AGer Why Linux is and always will be at 1% of desktops only with title i agree. partially. it isn't a desktop system mostly... with some workarounds of a skilled user system may come nice for unexperienced user... it's just my opinion linux is not a desktop OS. /* comment or not? */ /*. */ finally... yes, linux is not a user system right now. from what i saw on GUI in debian lenny i can say one word: "sh*t!!!"(it isn't any stable). it's GUI that unexperienced user can possibly use is a fail. epic fail. any beginner user will say that. linux is more of server operating system. i'm a developer and sysadmin. i personally don't really care about GUI. most of my work except for browsing and chatting on jabber i do on console. and i'm feeling good with it. of course maybe other linux distros can be more user-friendly... i didn't try them because not really interested. again it's just my thoughts and experience. linux will spread in some time. it needs time. and your help. Quote: Without distribution maintainers we will have no Linux. no, you're wrong. without distribution maintainers we will still have linux. a linux that will be used by web-eleet: hackers and sysadmins, the ones that really want to know more or just need stability required to serve something heavy. and remember the main thing: linux was built and brought to us by hackers. it will not become more user-friendly if we won't help. hacking is an art that needs time, energy and love(like everything does). linux needs developers. 
developers that might improve/add some core features that are unstable/unimplemented by now or developers that will build a user-system, ones that will create stable GUI apps, components and other thingies. you can be the one who will help. personally i plan to dedicate years of my life improving linux in future. the main thing is: if you don't like something: you either don't use it or you improve/replace it. the second way is better. Posted 12-11-2009 at 07:02 PM by Web31337 Windows 7 - the gap is closing Quote: Originally Posted by lumak The plug and play has dramatically improved. I noticed that too. I installed Samsung New PC Studio and apparently it installed Vista drivers that were not that good with Windows 7, but Windows downloaded and installed correct drivers. However, I would attribute that mostly to the MS support servers and people who maintain them, not to the OS per se. Quote: Originally Posted by lumak It's also important to not hate any of the OS's. They all have their faults and benefits to different people. That is, hating an OS is actually hating people. Yes, this should not be done, however tempting. Posted 09-24-2009 at 02:14 PM by AGer Windows 7 - the gap is closing The plug and play has dramatically improved. Even for network devices. If you plug in... say... an HD Homerun device, windows 7 will see it on the network, will offer to install the official HD Homerun software if you click on it, and work with media center... However, all I need to do is install MythTV and I'm up and running... well after the database setup, and the configuring thing, and the better numbering of channels thing... I think the important thing to realize is that Linux already is a major player in the OS world. Microsoft already considers Linux a threat and is trying to fight it on the 3rd party customer service end. It's also important to not hate any of the OS's. They all have their faults and benefits to different people. Whether it be falsely luring customers into a sense of security by saying there are NO viruses and you never have to worry about spyware, Using DRM and software subscriptions, Only being for corporate use, Or not being commercially developed for. Posted 09-24-2009 at 11:50 AM by lumak Windows 7 - the gap is closing good write-up. I, only once and for a brief moment, thought that the day Linux will become desktop OS of choice for John SixPack is around the corner. But the more i looked and learned about OS the less illusions i had. Maybe it's for better, maybe not. Posted 09-19-2009 at 06:34 PM by DBabo KDE 4.2 is still below beta quality Preposterous and in vaid - I doubt Quote: Originally Posted by socceroos You took the time to write this blog post. I jolly well hope you took the time to file these bugs you're describing in the KDE bugzilla. I did not file any bugs. I file bugs when I feel that I have discovered something that the developers should know but are likely to not know unless I report it. When there are lots of evident problems I do not bother to file bugs. Quote: Originally Posted by socceroos Otherwise, your words are in vain. I do not think so. From time to time I run over Linux faithful who sincerely wonder why so many people still use Windows. They should know what other eyes see looking at the same Linux. Quote: Originally Posted by socceroos I've personally found KDE 4.2 to be a lot more stable and bugless than you imply. Stable - yes, provided you do not do reckless experiments with video. Bugless - ha! 
"More bugless than you imply" - possibly yes, I do imply the worst case scenario. Quote: Originally Posted by socceroos Also, your claim that KDE 4.2 is alpha quality code is preposterous. Preposterous - no, maybe just deliberately choosing the old school meaning of "alpha". You know, there are people called beta testers who are getting money for beta testing beta quality code. Nobody pays to discover that which is evident. Thus, if there are evident problems - it is alpha. Just to double check, I visited the KDE Bugzilla. KDE is close to beta, but is not there yet. Now I would like to add some more of unfounded rant, but I have to log out - I checked if a newer ATI driver helps with desktop effects and accelerated video. It helps, but the Desktop Effects configuration screen got crazy and I always reboot before digging into configuration files. Posted 02-28-2009 at 02:58 PM by AGer KDE 4.2 is still below beta quality You took the time to write this blog post. I jolly well hope you took the time to file these bugs you're describing in the KDE bugzilla. Otherwise, your words are in vain. P.S. I've personally found KDE 4.2 to be a lot more stable and bugless than you imply. It seems you're going on a bit of an unfounded rant. Also, your claim that KDE 4.2 is alpha quality code is preposterous. Posted 02-17-2009 at 05:43 PM by socceroos Why Desktop Linux is nowhere, part 3 Good, glad to help. Posted 02-02-2009 at 09:13 AM by archtoad6 Why Desktop Linux is nowhere, part 3 I fixed it! Thanks for the comment, with exactly zero blog experience I never thought it is possible to change an already posted entry. It just looked like rewriting history to me. Posted 01-27-2009 at 06:33 AM by AGer Page 1 of 2 1 2 > AGer Registered Oct 2007 136 Blog Entries 22 Find Blog Entries by AGer Containing Text: Search Titles Only Advanced Search Blog Categories Local Categories Uncategorized Recent Comments This surprised even me by lupusarcanus Yet another commented example of why nada by AGer They do not come because they do not care by lumak Another good example of you know what by AGer My multy boot with Windows 7 rant by AGer Recent Entries This surprised even me Yet another commented example of why nada They do not come because they do not care How many MS-only technologies match Linux on dice.com? Another good example of you know what Recent Visitors 3ntolo bdeye Larry Webb lupusarcanus Mol_Bolom myklmar rich_c rocket357 Web31337 Archive < December 2014 Su Mo Tu We Th Sa 23 24 25 26 27 28 29 30 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 1 2 3 View all blog entries by AGer Find All Forum Posts by AGer All times are GMT -5. The time now is 09:37
http://www.linuxquestions.org/questions/blog.php?do=comments&u=370388
CC-MAIN-2014-52
refinedweb
3,723
72.66
We often come across a situation where we would need to store or transfer objects. The pickle module in Python is one such library that serves the purpose of storing Python objects as a serialized byte sequence into files for retrieval at a later time. Let us look into what it exactly does in this article. 1. Python Pickle Module Examples Let’s look into some examples of using the pickle module in Python. 1.1) ‘pickling’ into a file Since a file consists of bytes of information, we can transform a Python object into a file through the pickle module. This is called pickling. Let us look at how we could do this through an example. To do this operation of serializing an object into a file, we use the pickle.dump() method. Format: pickle.dump(data_to_store, filename, protocol_type) data_to_store-> The object to be serialized (pickled) filename-> The name of the file where the data is stored protocol_type-> The type of the protocol used (is set to 4 by default in Python 3.8) Here is an example to illustrate the same. import pickle data = ['one', 2, [3, 4, 5]] with open('data.dat', 'wb') as f: pickle.dump(data, f) 1.2) ‘Unpickling’ from a file This is just the opposite of pickling, wherein the object is retrieved from the file. The file, which contains the serialized information of the object as a byte sequence, is now deserialized into the Python object itself, and we can get the original information back. To perform this operation, we use the pickle.load() library function. Format: new_object = pickle.load(filename) new_object-> The object into which the method stores the information into filename-> The file containing the serialized information import pickle objdump = None with open('data.dat', rb') as f: # Stores the now deserialized information into objdump objdump = pickle.load(f) 2. Exception Handling with pickle Module The Pickle module defines some Exceptions, which are useful for programmers or developers to handle different scenarios and debug them appropriately. The module mentions that the following can be pickled: None, True, False - integers, floating points, complex numbers - strings, bytes, byte arrays - tuples, lists, sets, and dictionaries containing only picklable objects - Named functions defined at the top level of a module - Classes and built-in functions defined at the top level of a module Any other object is not picklable, and is called unpicklable. There are 3 primary exceptions that the module defines, namely: Here is an example of using exception handling to handle pickle.PicklingError, when trying to pickle an unpicklable object. import pickle # A lambda is unpicklable data = ['one', 2, [3, 4, 5], lambda l: 1] with open('data2.dat', 'wb') as f: try: pickle.dump(data, f) except pickle.PicklingError: print('Error while reading from object. Object is not picklable') Output Error while reading from object. Object is not picklable Here is an example of using exception handling to handle pickle.UnpicklingError, when trying to unpickle a non serialized file. import pickle with open('data1.dat', 'wb') as f: f.write('This is NOT a pickled file. Trying to unpickle this will cause an exception') objdump = None with open('data1.dat', 'rb') as f: try: objdump = pickle.load(f) except pickle.UnpicklingError: print('Cannot write into object') Output Cannot write into object 3. Problems faced in pickling and unpickling - As the module states in the documentation, it provides us a stern warning regarding pickling and unpickling of objects files. 
Do not use this module for unpickling unless you absolutely trust the source, since any kind of malicious code can be injected into an object file. - Also, there may be problems faced due to lack of compatibility between Python language versions, since the data structures may differ from version to version, and hence Python 3.0may not be able to unpickle a pickled file from Python 3.8. - There is also no cross-language compatibility, which may serve to be an annoyance for non-Python data transfers. The information is only Python-specific. 4. Conclusion Here, we learned more about the pickle module, which can be used to serialize/deserialize Python objects to/from files. It is a quick and easy way to transfer and store Python objects, which helps programmers to store data easily and quickly for data transfer. 5. References - JournalDev article on pickle: - Pickle module Documentation:
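As a closing illustration of the save/load cycle described above, here is a small self-contained sketch; the file name and sample data are illustrative only and are not taken from the article:

    import pickle

    record = {"name": "example", "scores": [1, 2, 3]}

    # Serialize to an in-memory byte string (no file needed)
    blob = pickle.dumps(record, protocol=pickle.HIGHEST_PROTOCOL)

    # Deserialize and confirm we get an equal object back
    restored = pickle.loads(blob)
    assert restored == record

    # The same round trip through a file
    with open("record.pkl", "wb") as f:
        pickle.dump(record, f)
    with open("record.pkl", "rb") as f:
        assert pickle.load(f) == record

pickle.HIGHEST_PROTOCOL simply asks for the newest protocol the running interpreter supports, which is exactly where the version-compatibility caveat above comes into play.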
https://www.askpython.com/python-modules/pickle-module-python
CC-MAIN-2020-10
refinedweb
726
56.76
PHP DOM: Using XPath. I’ll use the same DTD and XML from the previous article to demonstrate the PHP DOM XPath functionality. To quickly refresh your memory, here’s what the DTD and XML look like: <> Basic XPath Queries XPath is a syntax available for querying an XML document. In it’s simplest form, you define a path to the element you want. Using the XML document above, the following XPath query will return a collection of all the book elements present: //library/book That’s it. The two forward slashes indicate library is the root element of the document, and the single slash indicates book is a child. It’s pretty straight forward, no? But what if you want to specify a particular book. Let’s say you want to return any books written by “An Author”. The XPath for that would be: //library/book/author[text() = "An Author"]/.. You can use text() here in square braces to perform a comparison against the value of a node, and the trailing “/..” indicates we want the parent element (i.e. move back up the tree one node). XPath queries can be executed using one of two functions: query() and evaluate(). Both perform the query, but the difference lies in the type of result they return. query() will always return a DOMNodeList whereas evaluate() will return a typed result if possible. For example, if your XPath query is to return the number of books written by a certain author rather than the actual books themselves, then query() will return an empty DOMNodeList. evaluate() will simply return the number so you can use it immediately instead of having to pull the data from a node. Code and Speed Benefits with XPath Let’s do a quick demonstration that returns the number of books written by an author. The first method we’ll look at will work, but doesn’t make use of XPath. This is to show you how it can be done without XPath and why XPath is so powerful. <?php public function getNumberOfBooksByAuthor($author) { $total = 0; $elements = $this->domDocument->getElementsByTagName("author"); foreach ($elements as $element) { if ($element->nodeValue == $author) { $total++; } } return $number; } The next method achieves the same result, but uses XPath to select just those books that are written by a specific author: <?php public function getNumberOfBooksByAuthor($author) { $query = "//library/book/author[text() = '$author']/.."; $xpath = new DOMXPath($this->domDocument); $result = $xpath->query($query); return $result->length; } Notice how we this time we have removed the need for PHP to test against the value of the author. But we can go one step further still and use the XPath function count() to count the occurrences of this path. <?php public function getNumberOfBooksByAuthor($author) { $query = "count(//library/book/author[text() = '$author']/..)"; $xpath = new DOMXPath($this->domDocument); return $xpath->evaluate($query); } We’re able to retrieve the information we needed with only only line of XPath and there is no need to perform laborious filtering with PHP. Indeed, this is a much simpler and succinct way to write this functionality! Notice that evaluate() was used in the last example. This is because the function count() returns a typed result. Using query() will return a DOMNodeList but you will find that it is an empty list. Not only does this make your code cleaner, but it also comes with speed benefits. I found that version 1 was 30% faster on average than version 2 but version 3 was about 10 percent faster than version 2 (about 15% faster than version 1). 
While these measurements will vary depending on your server and query, using XPath in it’s purest form will generally yield a considerable speed benefit as well as making your code easier to read and maintain. XPath Functions There are quite a few functions that can be used with XPath and there are many excellent resources which detail what functions are available. If you find that you are iterating over DOMNodeLists or comparing nodeValues, you will probably find an XPath function that can eliminate a lot of the PHP coding. You’ve already see how count() functions. Let’s use the id() function to return the titles of the books with the given ISBNs. The XPath expression you will need to use is: id("isbn1234 isbn1235")/title Notice here that the values you are searching for are enclosed within quotes and delimited with a space; there is no need for a comma to delimit the terms. <?php public function findBooksByISBNs(array $isbns) { $ids = join(" ", $isbns); $query = "id('$ids')/title"; $xpath = new DOMXPath($this->domDocument); $result = $xpath->query($query); $books = array(); foreach ($result as $node) { $book = array("title" => $booknode->nodeValue); $books[] = $book; } return $books; } Executing complex functions in XPath is relatively simple; the trick is to become familiar with the functions that are available. Using PHP Functions With XPath Sometimes you may find that you need some greater functionality that the standard XPath functions cannot deliver. Luckily, PHP DOM also allows you to incorporate PHP’s own functions into an XPath query. Let’s consider returning the number of words in the title of a book. In it’s simplest function, we could write the method as follows: <?php public function getNumberOfWords($isbn) { $query = "//library/book[@isbn = '$isbn']"; $xpath = new DOMXPath($this->domDocument); $result = $xpath->query($query); $title = $result->item(0)->getElementsByTagName("title") ->item(0)->nodeValue; return str_word_count($title); } But we can also incorporate the function str_word_count() directly into the XPath query. There are a few steps that need to be completed to do this. First of all, we have to register a namespace with the XPath object. PHP functions in XPath queries are preceded by “php:functionString” and then the name of the function function you want to use is enclosed in parentheses. Also, the namespace to be defined is. The namespace must be set to this; any other values will result in errors. We then need to call registerPHPFunctions() which tells PHP that whenever it comes across a function namespaced with “php:”, it is PHP that should handle it. The actual syntax for calling the function is: php:functionString("nameoffunction", arg, arg...) Putting this all together results in the following reimplementation of getNumberOfWords(): <?php public function getNumberOfWords($isbn) { $xpath = new DOMXPath($this->domDocument); //register the php namespace $xpath->registerNamespace("php", ""); //ensure php functions can be called within xpath $xpath->registerPHPFunctions(); $query = "php:functionString('str_word_count',(//library/book[@isbn = '$isbn']/title))"; return $xpath->evaluate($query); } Notice that you don’t need to call the XPath function text() to provide the text of the node. The registerPHPFunctions() method does this automatically. However the following is just as valid: php:functionString('str_word_count',(//library/book[@isbn = '$isbn']/title[text()])) Registering PHP functions is not restricted to the functions that come with PHP. 
You can define your own functions and provide those within the XPath. The only difference here is that when defining the function, you use “php:function” rather than “php:functionString”. Also, it is only possible to provide either functions on their own or static methods. Calling instance methods are not supported. Let’s use a regular function that is outside the scope of the class to demonstrate the basic functionality. The function we will use will return only books by “George Orwell”. It must return true for every node you wish to include in the query. <?php function compare($node) { return $node[0]->nodeValue == "George Orwell"; } The argument passed to the function is an array of DOMElements. It is up to the function to iterate through the array and determine whether the node being tested should be returned in the DOMNodeList. In this example, the node being tested is /book and we are using /author to make the determination. Now we can create the method getGeorgeOrwellBooks(): <?php public function getGeorgeOrwellBooks() { $xpath = new DOMXPath($this->domDocument); $xpath->registerNamespace("php", ""); $xpath->registerPHPFunctions(); $query = "//library/book[php:function('compare', author)]"; $result = $xpath->query($query); $books = array(); foreach($result as $node) { $books[] = $node->getElementsByTagName("title") ->item(0)->nodeValue; } return $books; } If compare() were a static method, then you would need to amend the XPath query so that it reads: //library/book[php:function('Library::compare', author)] In truth, all of this functionality can be easily coded up with just XPath, but the example shows how you can extend XPath queries to become more complex. Calling an object method is not possible within XPath. If you find you need to access some object properties or methods to complete the XPath query, the best solution would be to do what you can with XPath and then work on the resulting DOMNodeList with any object methods or properties as necessary. Summary XPath is a great way of cutting down the amount of code you have to write and to speed up the execution of the code when working with XML data. Although not part of the official DOM specification, the additional functionality that the PHP DOM provides allows you to extend the normal XPath functions with custom functionality. This is a very powerful feature and as your familiarity with XPath functions increase you may find that you come to rely on this less and less.
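The XPath expressions themselves are language-agnostic, so as a rough cross-check you can exercise the same kind of query outside PHP, for example from Python with lxml. This is only an illustrative sketch: lxml is assumed to be installed, and the XML below is a stand-in for the library document from the previous article.

    from lxml import etree

    xml = """<library>
      <book isbn="isbn1234"><title>Nineteen Eighty-Four</title><author>George Orwell</author></book>
      <book isbn="isbn1235"><title>Animal Farm</title><author>George Orwell</author></book>
    </library>"""
    library = etree.fromstring(xml)

    # Node-set query: every book element
    books = library.xpath("//library/book")

    # Typed result: count the books by a given author, like the count() example
    total = library.xpath("count(//library/book/author[text() = 'George Orwell']/..)")
    print(total)  # 2.0 -- count() comes back as a number, not a node list

    # Titles of those books, using the same text() predicate and parent step
    titles = library.xpath("//library/book/author[text() = 'George Orwell']/../title/text()")
    print(titles)

The point carries over directly: pushing the filtering and counting into the XPath expression keeps the host-language code short, whatever that host language happens to be.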
https://www.sitepoint.com/php-dom-using-xpath/
CC-MAIN-2019-51
refinedweb
1,524
51.99
Sure, WiFi is all well and good, but is it the only option for wireless communication on the Raspberry Pi? What if there isn’t a network available or you need a longer range? 433M! This tutorial was written by PJ Evans and first appeared in The MagPi magazine issue #75. Click here to download your free digital copy of The MagPi magazine. You'll need Prepare your Raspberry Pi boards To demonstrate sending messages using 433MHz, it makes sense to use two Raspberry Pi boards so we can have a conversation. None of what we’re doing here requires much processing power, so any Pi will do, even original Model As or Bs. Depending on what you’re comfortable with, install either full Raspbian Stretch, or – as we’re doing here – Raspbian Lite, as everything will be run from the command line. If you haven’t got access to multiple monitors and keyboards, consider using SSH to access each Pi with two windows on your main computer. That way you can see everything taking place. Meet the transceivers Each kit comes with two circuit boards. The longer of the two boards is the receiver, sporting four pins. Before wiring up, check the labelling of these pins very carefully as they do sometimes vary. Regardless of position, there will be 5 V power in (labelled VCC), ground (GND), and two ‘DATA’ lines which transmit the received signals. These are identical so you can use either. The smaller transmitter has three lines, which again can vary in position based on the manufacturer. Just like the receiver, you have VCC for power, GND for ground, and this time, a single data line. Wire-up the breadboard We’re using a tiny breadboard, but any size will work. In fact, a larger board with power and ground rails might be a bit tidier. Carefully insert a receiver and transmitter in each breadboard alongside each other. We want the two breadboards opposite so that the transmitter of Pi #1 (which we’re calling ‘Alice’) is pointing directly at the receiver of Pi #2 (‘Bob’) and vice versa. Connect six jumper leads to each breadboard, one on the rail for each pin of the transceiver pair. It doesn’t matter which ‘DATA’ line you use on the receiver. Connect to the Raspberry Pi boards Connect each Raspberry Pi to its six jumper leads. Luckily, this project doesn’t require any additional components, so you can wire directly. Both the receiver and transmitter run at 5 V, so connect each VCC jumper lead to physical pins 2 and 4 of the GPIO (the top two right-hand pins when pin 1 is top-left). Next, connect the GND leads to pins 6 and 9. Although your radio is now powered, it’s not much use if it can’t send and receive data, so connect the transmitter’s DATA to GPIO 17 and the receiver’s DATA to GPIO 27 (pins 11 and 13). Test receive Before we can do anything with our newly installed radio, we need some software. Open up a Terminal and issue the following commands: cd sudo apt install python3-pip git pip3 install rpi-rf git clone You now have everything installed to test your hardware. Pick your favourite of the two Raspberry Pi boards and enter the following: cd ~/rfchat python3 receive.py Now hold the remote control from the RF kit very close to the receiver and press its buttons. See numbers appear? Great. If not, review your wiring. Press CTRL+C to quit and repeat on the other Pi. Test send Position the Raspberry Pi boards so the two breadboards are within a centimetre of each other, with Alice’s transmitter pointing at Bob’s receiver and likewise the other way around. 
On Alice, start the receive script just as we did in the previous step. On Bob, enter the following in the Terminal: cd ~/rfchat python3 send.py 1234 All being well, ‘1234’ should be displayed repeatedly on Alice’s screen. There’s no error correction, so it’s normal to see missing or corrupt characters. If it doesn’t look quite right, try again. Once you’re happy, reverse the test to confirm Bob’s receiver is also working. Let’s have a chat Our two Raspberry Pi boards can now communicate wirelessly without WiFi. To demonstrate what’s possible, take a look at the rfchat.py script. This code uses threading (code‑speak for doing multiple things at once) to monitor the keyboard and receiver for data. We convert incoming and outgoing data to numbers (ASCII) and back. The result is a live chat interface. You can now send and receive messages. To start: cd ~/rfchat python3 rfchat.py Now slowly type on either Pi and the message will appear on the other. In fact, your local output is your receiver picking up your own transmitter! Increasing range with science The reason for the radio’s poor range is the tiny antennas, but this can be fixed. The antenna’s length needs to be a harmonic of the wavelength, which is calculated by dividing the speed of light by the frequency (299 792 458 m/s divided by 433 000 000). You can keep dividing the result of 692.36 mm by 2 until you get a sensible length. A 173 mm antenna is long enough to give an impressive range, normally covering a whole house. Solder 173 mm wires to all four ‘ANT’ solder points on the PCBs. Your rfchat should now work over long distances. Socket to me There are many household devices that use 433MHz to send control codes. Among the most popular are remote-control mains sockets, often used to switch lights. These commonly use 433MHz and protocols that rpi-rf can understand. cd ~/rfchat python3 receive.py Press buttons on the remote control. You’re likely to see a list of numbers, repeating for error correction, that change with each button. Make a note of these and then send them out as follows: python3 send.py [number] You should hear a reassuring ‘click’ from the relay of the socket. Try switching it on or off. Make it your own These 433MHz units add a range of possibilities to your Raspberry Pi projects at a very low cost. Not just home automation projects with controllable sockets, but also providing radio communication where WiFi isn’t practical, such as high-altitude ballooning or unusually positioned sensors like flood monitors. IoT devices can use radio to deliver and receive any information. Now you can control sockets from your Raspberry Pi, you can link these up to any kind of event you can imagine. How about detecting your car coming home using a Pi Camera Module and number-plate recognition, then switching on the house lights? 
import sys import tty import termios import threading import time from rpi_rf import RFDevice # Elegant shutdown def exithandler(): termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings) try: rx.cleanup() tx.cleanup() except: pass sys.exit(0) # Activate our transmitter and received tx = RFDevice(17) tx.enable_tx() rx = RFDevice(27) rx.enable_rx() # Receiving loop def rec(rx): print(“Receiving”) lastTime = None while True: currentTime = rx.rx_code_timestamp if ( currentTime != lastTime and (lastTime is None or currentTime - lastTime > 350000) ): lastTime = rx.rx_code_timestamp try: if (rx.rx_code == 13): # Enter/Return Pressed sys.stdout.write(‘\r\n’) else: sys.stdout.write(chr (rx.rx_code)) sys.stdout.flush() except: pass time.sleep(0.01) # Start receiving thread t = threading.Thread(target=rec, args=(rx,), daemon=True) t.start() print(“Ready to transmit”) # Remember how the shell was set up so we can reset on exit old_settings = termios.tcgetattr(sys.stdin) tty.setraw(sys.stdin) while True: # Wait for a keypress char = sys.stdin.read(1) # If CTRL-C, shutdown if ord(char) == 3: exithandler() else: # Transmit character tx.tx_code(ord(char)) time.sleep(0.01)
https://magpi.raspberrypi.com/articles/build-433mhz-radio-chat-device
CC-MAIN-2022-05
refinedweb
1,320
65.93
Question: The preferred stock of Denver Savings and Loan pays an The preferred stock of Denver Savings and Loan pays an annual dividend of $5.70. It has a required rate of return of 6 percent. Compute the price of the preferred stock. Relevant QuestionsAnalogue Technology has preferred stock outstanding that pays a $9 annual dividend. It has a price of $76. What is the required rate of return (yield) on the preferred stock?Explain why retained earnings have an associated opportunity cost?If corporate managers are risk-averse, does this mean they will not take risks? Explain.What are electronic communication networks (ECNs)? Generally speaking, are they currently part of the operations of the New York Stock Exchange and the NASDAQ Stock Market?What are some specific features of bond agreements? Post your question
http://www.solutioninn.com/the-preferred-stock-of-denver-savings-and-loan-pays-an
CC-MAIN-2017-39
refinedweb
135
60.51
The original computer is a 32-bit win7 system. Due to the needs of the project, the previous vs 2010 is updated to the current vs 2013. Reconfigure opencv environment, version 2.4.9( Win7 + Opencv 2.4.9 + VS 2013) In the past, we used to manually add the link library to configure the environment, which was very troublesome. Find the perfect configuration strategy called opencv on the Internet and try it. This method uses the way of writing property sheet, which can directly import the created property sheet into the project to complete the configuration of OpenCV, which is very convenient and fast. Thank you. The link is as follows: configuration_ in_ vs.html During this period, I also encountered a very confused problem. In the past, the manual configuration method was used, and there was no problem in opening the test image. This time, after importing the property sheet, the same method was used to test, but it always failed. It indicates that the related header file of OpenCV cannot be found, so it is thought that it is caused by the configuration error of property sheet or the system variable not being updated. After setting system variables several times and restarting, it still can’t be solved, which is very confusing. After searching the opencv directory, you can find the relevant header files, but the latest version is different from the previous directory. The original reason for finding data is that there is a problem with the path, not the property table. Directly include the following files in the original project, no need to include different header files, convenient and fast( The header file of opencv.hpp contains all the header files in opencv library.) #include <opencv2\opencv.hpp> Read More: - Environment configuration at the beginning of OpenCV + vs2015 - Vs2015 configuring OpenGL (glfw Library) - Vs configure tensorrt environment to use - Mac clion configuring opencv environment - [opencv + openvino] opencv-vino4.5.1 + opencv contrib installation - Configuring OpenGL in Chinese version of VS2010 and problem solving - A fatal error C1083 occurred when compiling C + + code under. Net: unable to open the include file: “xuhk”_ Jnihelloworld. H “: solution to no such file or directory error. - Project files may be invalid appears when cmake compiles opencv3.1, and the debug additional dependency of the compiled opencv3.1 is at the end - [solution] build vins and orb-slam based on opencv4 - Error resolution-“Error in configuration process, project files may be invalid” appears when Cmake compiles openCV - Error configuration process, project files may be invalid - error LNK2038: mismatch detected for ‘RuntimeLibrary’: value ‘MTd_StaticDebug’ doesn’t match value ‘ - fatal error LNK2019[UNK]fatalerror LNK1120 - VS2010 library function problem: objidl. H (11266): error c2061: syntax error: identifier ‘__ RPC__ out_ xcount_ part’ - Win10 vs 2017 prompt cannot open include file: “windows. H”: no such file or director - Solution to the problem of vs2017 error report unable to open source file - Configuration of OpenGL development environment under Windows environment, win10 + vs2019 + glfw + glad - Cmake compile opencv report error qtcore_ DIR QtOpenglDIR QtGui_ Dir ffmpeg loading failed - Solve the problem of VC6.0 open crash and OpenGL glut32.lib library - A series of problems in configuring OpenGL development environment in vs2015
https://programmerah.com/opencv-perfect-configuration-strategy-2015-win7-opencv-2-4-9-vs-2013-26969/
CC-MAIN-2021-21
refinedweb
537
52.8
You have a module that you don't need to load each time the program runs, or whose inclusion you wish to delay until after the program starts up. Either break up the use into its separate require and import components, or else employ the use autouse pragma. Programs that check their arguments and abort with a usage message on error have no reason to load modules they never use. This delays the inevitable and annoys users. But those use statements happen during compilation, not execution, as explained in the Introduction. Here, an effective strategy is to place argument checking in a BEGIN block before loading the modules. The following is the start of a program that checks to make sure it was called with exactly two arguments, which must be whole numbers, before going on to load the modules it will need: BEGIN { unless (@ARGV = = 2 && (2 = = grep {/^\d+$/} @ARGV)) { die "usage: $0 num1 num2\n"; } } use Some::Module; use More::Modules; A related situation arises in programs that don't always use the same set of modules every time they're run. For example, the factors program from Chapter 2 needs the infinite precision arithmetic library only when the -b command-line flag is supplied. A use statement would be pointless within a conditional because it's evaluated at compile time, long before the if can be checked. So we use a require instead: if ($opt_b) { require Math::BigInt; } Because Math::BigInt is an object-oriented module instead of a traditional one, no import was needed. If you have an import list, specify it with a qw( ) construct as you would with use. For example, rather than this: use Fcntl qw(O_EXCL O_CREAT O_RDWR); you might say this instead: require Fcntl; Fcntl->import(qw(O_EXCL O_CREAT O_RDWR)); Delaying the import until runtime means that the rest of your program is not subject to any imported semantic changes that the compiler would have seen if you'd used a use. In particular, subroutine prototypes and the overriding of built-in functions are not seen in time. You might want to encapsulate this delayed loading in a subroutine. The following deceptively simple approach does not work: sub load_module { require $_[0]; #WRONG import $_[0]; #WRONG } It fails for subtle reasons. Imagine calling require with an argument of "Math::BigFloat". If that's a bareword, the double colon is converted into your operating system's path separator and a trailing .pm is added. But as a simple variable, it's a literal filename. Worse, Perl doesn't have a built-in import function. Instead, there's a class method named import that we're using the dubious indirect object syntax on. As with indirect filehandles, you can use indirect objects only on a plain scalar variable, a bareword, or a block returning the object, not an expression or one element from an array or hash. A better implementation might look more like: load_module("Fcntl", qw(O_EXCL O_CREAT O_RDWR)); sub load_module { eval "require $_[0]"; die if $@; $_[0]->import(@_[1 .. $#_]); } But this still isn't perfectly correct in the general case. It really shouldn't import those symbols into its own package. It should put them into its caller's package. We could account for this, but the whole procedure is getting increasingly messy. Occasionally, the condition can be reasonably evaluated before runtime, perhaps because it uses only built-in, predefined variables, or because you've arranged to initialize the variables used in the conditional expression at compile time with a BEGIN block. If so, the if pragma comes in handy. 
The syntax is: use CONDITION, MODULE; use CONDITION, MODULE => ARGUMENTS; As in: use if $^O =~ /bsd/i, BSD::Resource; use if $] >= 5.006_01, File::Temp => qw/tempfile tempdir/; A convenient alternative is the use autouse pragma. This directive can save time on infrequently loaded functions by delaying their loading until they're actually used: use autouse Fcntl => qw( O_EXCL( ) O_CREAT( ) O_RDWR( ) ); We put parentheses after O_EXCL, O_CREAT, and O_RDWR when we autoused them but not when we used them or imported them. The autouse pragma doesn't just take function names; it can also take a prototype for the function. The Fcntl constants are prototyped to take no arguments, so we can use them as barewords in our program without use strict kvetching. Remember, too, that use strict's checks take place at compile time. If we use Fcntl, the prototypes in the Fcntl module are compiled and we can use the constants without parentheses. If we require or wrap the use in an eval, as we did earlier, we prevent the compiler from reading the prototypes, so we can't use the Fcntl constants without parentheses. Read the autouse pragma's online documentation to learn its various caveats and provisos. Recipe 12.2; the discussion on the import method in the documentation for the standard Exporter module, also found in Chapter 32 of Programming Perl; the documentation for the standard use autouse pragma
http://www.yaldex.com/perl-tutorial/0596003137_perlckbk2-chp-12-sect-3.html
CC-MAIN-2018-05
refinedweb
837
50.97
With the release of Everett looming, it's probably timely for some articles to be written on some of the 'new' features that are finally making it into Microsoft's C++ compiler. By new, I of course mean the things that every compiler since CFront has had, barring Visual C++. Nevertheless, they say it's better late than never, and so at last we are to have full Koenig lookup. So what is it ? Let's start with what the standard says, always a good place to start.... So what does this mean ? It means that if I call a function, and pass in a parameter that lives in a namespace, that namespace becomes a valid place for the compiler to look for that function. Why do we have such a rule ? Let's consider how it's likely to be used. The most common assumption is that an interface is the list of methods which are exposed by an object, those found in its header file. However, this is not all there is to it. If I have a class called Bar(), and I want to insert it into an ostream, I would write a method that looks like this: template<class charT, class Traits> std::basic_ostream<charT, Traits> & operator << (std::basic_ostream<charT, Traits> & os, const Bar & b) { // Stream some meaningful data from the object into the stream 'os' here. return os; } Is this a method on the public interface of the Bar class ? No. Is it part of the interface that Bar offers ? Most certainly. Without Bar, it is of no use to anyone, it wouldn't even compile without it. I'd like you to keep this example in mind as I show you an example of Koenig lookup, taken direct from the C++ standard. namespace NS { class T { }; void f(T){}; } NS::T parm; int main() { f(parm); }; In this example, a namespace is created, which contains a class and a function. Replace NS with std, and f with the inserter above and you'll see how it could exist in real life. Now we create an instance of NS::T, and then in our main function, we call the f method. What Koening lookup says is this: When a function is called, in order to determine if that function is visible in the current scope, the namespaces in which the functions parameters reside must be taken into account. So even though f is in theory hidden ( add a function called f in the namespace that takes an int, and if you try to call it from main, it will fail to compile), Koenig lookup says the function is visible because you have an instance of an object which exists in that namespace, and which is a parameter to the function. If we need Koenig lookup to write stream operators, etc., then how did it work in the past ? I'm not sure of the implimentation details, but the net result was that Koenig lookup only worked for operators, while straight functions, like the one in our example, did not work. Which in theory covered about 90% of the ways most people would use Koenig lookup, but also allowed you to write code which VC6/7 would compile, but a conforming C++ compiler would not. For example: namespace NS { class T { }; void f(T){}; } NS::T parm; void f(T){}; int main() { f(parm); }; This should not compile, because the call to f() is ambiguous. Under VC.NET and VC6 it compiles fine, because Koenig lookup fails. 
Here is an example of code that equally fails to compile in all compilers, just to keep the playing ground level: namespace NS1 { class T1 {/*...*/}; } namespace NS2 { class T2 {/*...*/}; void f(NS1::T1, T2) {/*...*/}; } namespace NS1 { void f(T1, NS2::T2) {/*...*/}; } NS1::T1 t1; NS2::T2 t2; int main() { f(t1,t2); } Thanks to whoever brought this up in the questions ( can't see them right now ). An earlier VC version simply cannot see ANY function named f. Add one to the global namespace, and it would call that. Everett, and every other compiler I know of, will complain that there are two functions visible with the same prototype. Passing in t1 makes the function in NS1 visible to the global namespace, and passing in T2 makes the function in NS2 visible. Koenig lookup is being applied twice here. I have had quite a few complaints with regard to this article, so I apologise if I am hammering the one simple point over and over, I don't get paid by the word, I just want everyone to 'get it'. Here is the standard example fleshed out to include a real class, and a real operator. #include <iostream> using std::cout; using std::cin; namespace NS { struct Date { int m_nYear, m_nMonth, m_nDay; Date::Date() : m_nYear(1969), m_nMonth(2), m_nDay(17) // My birthday { } }; template<class charT, class Traits> std::basic_ostream<charT, Traits> & operator << (std::basic_ostream<charT, Traits> & os, const Date & d) { os << d.m_nDay << "-" << d.m_nMonth << "-" << d.m_nYear; return os; } } int main() { NS::Date d; cout << d << " is my birthday, so buy me lots of loot"; int i; cin >> i; } This code defines a struct for holding a date, for simplicity the members are public. The iostream inserter is badly written, also for simplicity. Read my article if you want to know how to write good ones. The thing with this example is that if I ask 'how does the compiler know how to call the inserter', the answer seems obvious, it has to. It's part of the interface. Yes, but it's also a function that is not part of the class, and is hidden in a namespace. By now I hope you know that the reason the operator is visible is Koenig lookup. This example would probably compile on VC6 and 7 ( I cannot test it ), because as I said before, operators have always worked. However, the change is that now all code, including the example we pulled from the standard, will use Koenig lookup to decide which namespaces to search to find a function to call. To be honest, I'd regard examples like the one in the standard with caution. I can't think of a real world use for Koenig lookup outside of the sort of thing that compiled on previous versions of the Microsoft C++ compiler. However, I don't think that is the point. It's a bit hard to learn what cool things you can do with a language feature if that feature is not supported in your compiler. The fact that it was possible to write code on previous versions of VC which ( correctly ) won't compile in the next release is just one more reason that this change is long overdue. I look forward to finding new and exciting uses for Koenig lookup, and I hope you do as well. In response to the small percentage of people who felt that this was as clear as mud, I have added a couple of extra examples and some more explanation. I'd appreciate feedback as to if this clarifies things for you all. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/cpp/koenig_lookup.aspx
crawl-002
refinedweb
1,200
68.4
Recently,: Here’s how I did it. Holy Crap! That’s a lot of stuff. It also added jQuery library files. I swallowed the bile that was rising in my throat and bravely moved on. 4) Next I had to add an OWIN Startup class. This was new to me. I didn’t want to spend hours studying what all those libraries do. That would defeat the reason for using third-party tools. So I just followed the instructions in the readme.txt file that was installed. You can also right click the project and select Add OWIN Startup Class. The final Startup.cs code should look like this: // Startup.cs using System; using System.Threading.Tasks; using Microsoft.Owin; using Owin; [assembly: OwinStartup(typeof(Startup))] public class Startup { public void Configuration(IAppBuilder app) { app.MapSignalR(); } } 5) Put a breakpoint on app.MapSignalR(); and run the program. Make sure the breakpoint is hit. There is no use proceeding until the startup code runs. 6) Create a Datatable to hold the golf scores and bind it to a GridView to display them. Note the DataTable is stored in the Application object so all sessions have access to it. The SignalR code also has access to it. Notes: My initial attempt worked: 3 people could post scores and each would see the other scores. However, when a 4th person joined, they didn’t see the previously entered scores. I realized I needed to store the scores on the server. I ended up throwing away the html table I started with and used a DataTable to store the scores and a GridView to display them. For completeness, below is the entire code of default.aspx.cs: using System; using System.Collections.Generic; using System.Data; using System.Web; using System.Web.UI; public partial class _Default : System.Web.UI.Page { // ---- Page Load ------------------------ protected void Page_Load(object sender, EventArgs e) { // populate the listboxes DDLPlayer.Items.Add("Tiger"); DDLPlayer.Items.Add("Phil"); DDLPlayer.Items.Add("Rory"); DDLPlayer.Items.Add("Steve"); for (int Hole = 1; Hole <= 18; Hole++) DDLHole.Items.Add(Hole.ToString()); for (int Score = 1; Score < 10; Score++) DDLScore.Items.Add(Score.ToString()); // populate the GridView GridViewScores.DataSource = GetScoresTable(); GridViewScores.DataBind(); } // ---- Get Scores Table ---------------------------------------------- // // if it doesn't exist yet, create it. // it uses what ever players are in the DDLPlayer object private DataTable GetScoresTable() { DataTable ScoresTable = Application["ScoresTable"] as DataTable; if (ScoresTable == null) { ScoresTable = new DataTable(); ScoresTable.Columns.Add(new DataColumn(@"Golfer\Hole", typeof(String))); for (int HoleIndex = 1; HoleIndex <= 18; HoleIndex++) ScoresTable.Columns.Add(new DataColumn(HoleIndex.ToString(), typeof(int))); for (int PlayerCount = 0; PlayerCount < DDLPlayer.Items.Count; PlayerCount++) ScoresTable.Rows.Add(ScoresTable.NewRow()); for (int PlayerCount = 0; PlayerCount < DDLPlayer.Items.Count; PlayerCount++) ScoresTable.Rows[PlayerCount][0] = DDLPlayer.Items[PlayerCount].Text; // PrimaryKey is needed so we can find the player row to update ScoresTable.PrimaryKey = new DataColumn[] { ScoresTable.Columns[0] }; Application["ScoresTable"] = ScoresTable; } return ScoresTable; } } 7) Next we need a way for a client to call the server. This is SignalR stuff: Create a class derived from Microsoft.AspNet.SignalR.Hub and add the PostScoreToServer function to it. 
//ScoresHub.cs: using System; using System.Collections.Generic; using System.Web; using Microsoft.AspNet.SignalR; using System.Data; public class ScoresHub : Hub { // ---- Post Score To Server ------------------------------------------ // called by clients // (this gets converted to a callable javaScript function called postScoreToServer // and is sent to the clients) public void PostScoreToServer(String Player, int Hole, int Score) { // send the new score to all the other clients Clients.All.SendMessageToClient(Player, Hole, Score); // store the scores in a 'global' table so new clients get a complete list DataTable ScoresTable = HttpContext.Current.Application["ScoresTable"] as DataTable; DataRow CurrentPlayer = ScoresTable.Rows.Find(Player); CurrentPlayer[Hole] = Score; } } Here is the first bit of magic: When SignalR starts up, it parses the ScoresHub code and creates JavaScript that gets sent to the clients when the client requests it. Server Client PostScoreToServer C# ----- SignalR Magic -----> postScoreToServer JS 8) Then we need a way for the Server call a client function. In other words, where did the SendMessageToClient function come from? Rather than include it piecemeal, here is the entire Default.aspx file: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> <!DOCTYPE html> <html xmlns=""> <head runat="server"> <title>Live Golf Scores from Steve Wellens</title> <script src="Scripts/jquery-1.10.2.js"></script> <script src="Scripts/jquery.signalR-2.0.1.js"></script> <script src='<%: ResolveClientUrl("~/signalr/hubs") %>' type="text/javascript"></script> <script> var scoresHub; $(document).ready(DocReady); // ---- Doc Ready ------------------------------------------- function DocReady() { $("#ButtonPostScore").click(PostScoreToServer); // hook up button click // use a global variable to reference the hub. scoresHub = $.connection.scoresHub; // supply the hub with a client function it can call scoresHub.client.SendMessageToClient = HandleMessageFromServer; $.connection.hub.start(); // start the local hub } // ---- Post Score to Server ------------------------------------ function PostScoreToServer(Player, Hole, Score) { var Player = $("#DDLPlayer").val(); var Hole = $("#DDLHole").val(); var Score = $("#DDLScore").val(); scoresHub.server.postScoreToServer(Player, Hole, Score); } // ---- Handle Message From Server ------------------------------- function HandleMessageFromServer(Player, Hole, Score) { // get the correct table row var tableRow = $("#GridViewScores td").filter(function () { return $(this).text() == Player; }).closest("tr"); // update the hole with the score tableRow.find('td:eq(' + Hole + ')').html(Score); } </script> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } td { width: 2em; } </style> </head> <body> <form id="form1" runat="server"> <asp:GridView </asp:GridView> <p> Player: <asp:DropDownList</asp:DropDownList> Hole: <asp:DropDownList</asp:DropDownList> Score: <asp:DropDownList</asp:DropDownList> <input id="ButtonPostScore" type="button" value="Post Score" /> </p> <h3>View this page with multiple browsers to see how SignalR works!</h3> </form> </body> </html> Here is the second bit of magic: Assigning* a function to the client Hub makes the function available to the server! scoresHub.client.SendMessageToClient = HandleMessageFromServer; *Remember, JavaScript allows you to attach new variables and function onto any object. 
When the client hub starts up, the attached function metadata gets sent to the server and converted to C# code….to be called by the server! Client Server HandleMessageFromServer JS ---- SignalR Magic -----> SendMessageToClient C# Done. That is it. That is the magic of SignalR. Here is a link to a live demo: Bonus Note: Almost every article on SignalR warns of the gotcha that when the server function is in Pascal case, it gets converted to Camel case: MyFunction -> myFunction. Going from client to server doesn’t have that problem since you specify both the C# function name and the JavaScript function name: scoresHub.client.SendMessageToClient=HandleMessageFromServer; I wish that was the end of my epic tale….but it isn’t. Here is roughly how I spent my time: Activity Developer Time SignalR code 5 percent DataTable, GridView, Html, CSS 30 percent Deployment* 65 percent *Deployment did not go smoothly. When the SignalR server code runs, it prepares JavaScript code to be sent to the client. When I deployed the application, the client requested the code but it could not be found: There were runtime exceptions at the client. Doing online searches for SignalR help is tough. Much of the information is MVC specific which I wasn’t using (that architecture is not appropriate for every application). Much of the information I found was already out of date and obsolete. These are all suggestions I found researching my problem: <%--<script src="signalr/hubs/" ></script>--%> <%--<script src="/signalr/signalr/hubs/" ></script>--%> <%--<script src="/signalr/hubs/" ></script>--%> <%--<script src="~/signalr/hubs"></script>--%> <%--<script src="Scripts/ScoresHub.js"></script> (hard copy of scripts)--%> <script src='<%: ResolveClientUrl("~/signalr/hubs") type="text/javascript" %>'></script> Some posts suggested certain libraries needed to be on IIS so I contacted my hosting service and described my problem. They asked me to create a new sub-domain and try running my project from that. It worked! Whew, that meant the server and IIS was configured properly. I was running the application from a subdirectory on my main domain and apparently SignalR doesn’t like that…it gets confused. I contacted the hosting service again and said I didn’t want to use a sub-domain. They suggested using a virtual directory that pointed to the subdirectory. I tried it and it worked! I normally don’t do this, but I’ve got to plug my host provider: Arvixe is the best. I have had nothing but good experiences with them. (Don’t ask me about my cell phone service provider support…THAT was a totally different experience involving frustration, profanity and wasted time trying to communicate with incompetent boobs)..
https://www.codeproject.com/Articles/814663/SignalR-is-Magic
CC-MAIN-2018-26
refinedweb
1,404
59.7
Functional vs Technical Spec Hi Joel, I am in the process of writing a functional spec for a new product. I really enjoyed your articles from a few years back on the functional spec process. In the beginning of one of the articles you define the difference between and functional and technical spec. I am curious if you have written anything that describes taking a functional spec and turning it into a technical spec? Thanks for the great site. Kevin Kevin Moore Monday, March 22, 2004 I haven't written such a thing. My way of thinking is that you just don't write "technical specs" that cover the entire functionality of an application ... that's what the functional spec is for. Once you have a complete functional spec the only thing left to document is points of internal architecture and specific algorithms that are (a) entirely invisible (under the covers) and (b) not obvious enough from the functional spec. So the technical spec winds up not existing. In it's place you may or may not have a few technical articles on small topics: is a nice example of such a thing. Remember, if anything you're talking about affects the user interface or even the behavior of the thing you're building, it goes in the functional spec... so all that's left for technical specification is things that are SOLELY of interests to programmers, and, in fact, a lot of that stuff might be best in comments, so at most, you should have a handful of short articles on topics like algorithms -- and you write those articles on demand, when you need to think through a design, or document the "big picture" for future programmers so they don't have to figure out what your code is trying to accomplish. Joel Spolsky Fog Creek SoftwareTuesday, March 23, 2004 > Wow! Elegant code. Sathyaish Chakravarthy Tuesday, March 23, 2004 So Joel, how do you get the identity of the tuple you just inserted? Is the auto-number in your code? i like i Tuesday, March 23, 2004 I find that I don't tend to write a be-all and end-all tech spec, because I'm usually the one coding it. I make an effort to code clearly and more useful points (like an API) wind up in a sort of technical summary. The appropriate level of documentation can vary from project to project, of course. From just the functional spec side, Alistair Cockburn's Effective Use Cases goes into depth on what your documentation should cover. (only get a copy of you're *really* into that sort of thing though) Joel Goodwin Tuesday, March 23, 2004 The ID of a "saved" row is always returned, regardless whether "Save" performed an insert or an update. Oh, and if your programmers aren't smart enough to work from a functional specification plus a few articles as Joel suggested, aside from firing them you could define interfaces (i.e. classes containing only pure virtual members) and document those. In the C# world, this is dead easy, especially if you use the code documentation features, which allow you to write something like: namespace Oli.DataAccessLayer { /// <summary>The IEntity interface will be implemented /// by all classes in the Data Access Layer /// <seealso href="joelonsoftware.com" /> /// </summary> /// <remarks> /// Idea stolen from Joel's article on /// Entity classes. /// </remarks> public interface IEntity { /// <summary> /// Every entity can at least be /// loaded. 
/// </summary> /// <param name="id">The ID of the /// entity to load</param> public void Load ( int id ); } } Oli Tuesday, March 23, 2004 Oops, I forgot to point out that NDoc, an open-source tool, will turn the XML comments you sprinkle in your .NET code into Javadoc-style HTML pages, CHM-style documentation or even the newer MSHelp 2.0, so that your code documentation looks just like Microsoft's. For those who are interested, what Joel describes is pretty much verbatim what Martin Fowler calls a Row Data Gateway. Martin's book "Patterns of Enterprise Application Architecture" is packed full of great information on designing applications. While it's not necessarily a book for all developers, anybody who's in a high level design position will probably get a lot of value out of it (plus the Gang of Four "Design Patterns" book). Brad Wilson (dotnetguy.techieswithcats.com) Tuesday, March 23, 2004
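Pulling the pieces of this thread together, the entity contract plus Oli's "a saved row always returns its ID" behaviour might look like the sketch below (the member names are illustrative, not quoted from the thread):

```csharp
namespace Oli.DataAccessLayer
{
    /// <summary>Contract implemented by every Data Access Layer entity.</summary>
    public interface IEntity
    {
        /// <summary>Loads the entity with the given ID.</summary>
        void Load(int id);

        /// <summary>
        /// Persists the entity: an INSERT for a new row, an UPDATE for an
        /// existing one. In both cases the row's ID is returned, which answers
        /// the "how do you get the identity of the tuple you just inserted?"
        /// question raised above.
        /// </summary>
        int Save();
    }
}
```

Programmers can then work from the functional spec plus this interface and fill in the per-table implementations however they see fit.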
http://discuss.fogcreek.com/askjoel/default.asp?cmd=show&ixPost=4202&ixReplies=8
CC-MAIN-2014-41
refinedweb
734
59.64
This is a notebook to accompany the blog post "Way number eight of looking at the correlation coefficient". Read the post for additional context! import numpy as np from datascience import * from datetime import * import matplotlib %matplotlib inline import matplotlib.pyplot as plots from mpl_toolkits.mplot3d import Axes3D import pandas as pd import math As before, we're using the datascience package, and everything else we're using is pretty standard. And, as before, here's the data we'll be working with, converted to standard units and plotted: heightweight = Table().with_columns([ 'Date', ['07/28/2017', '08/07/2017', '08/25/2017', '09/25/2017', '11/28/2017', '01/26/2018', '04/27/2018', '07/30/2018'], 'Height (cm)', [ 53.3, 54.6, 55.9, 61, 63.5, 67.3, 71.1, 74.9], 'Weight (kg)', [ 4.204, 4.65, 5.425, 6.41, 7.985, 9.125, 10.39, 10.785], ]) def standard_units(nums): return (nums - np.mean(nums)) / np.std(nums) heightweight_standard = Table().with_columns( 'Date', heightweight.column('Date'), 'Height (standard units)', standard_units(heightweight.column('Height (cm)')), 'Weight (standard units)', standard_units(heightweight.column('Weight (kg)'))) heightweight_standard heightweight_standard.scatter( 'Height (standard units)', 'Weight (standard units)') So far, this is all a recap of last time. Now, let's try turning our data sideways. The hacky way I have of doing this is to convert the data first to a numpy ndarray, then to a pandas DataFrame, and then transposing the DataFrame. This is kind of silly, but I don't know a better way to transpose a structured ndarray. If you do, let me know. # First convert to a plain old numpy ndarray. heightweight_standard_np = heightweight_standard.to_array() # Now convert *that* to a pandas DataFrame. df = pd.DataFrame(heightweight_standard_np) # Get the transpose of the DataFrame. df = df.T df pandas defaults to using RangeIndex (0, 1, 2, …, n) for the column labels, but we want the dates from the first row to be the column headers rather than being an actual row. That's an easy change to make, though. df.columns = df.iloc[0] df = df.drop("Date") df While we're at it, we'll convert the values in our DataFrame to numeric values, so that we can visualize them in a moment. df = df.apply(pd.to_numeric) df Eight dimensions are too many to try to visualize, but we can pare it down to three. We'll pick three -- the first (07/28/2017), the last (07/30/2018), and one in the middle (01/26/2018) -- and drop the rest. df_3dim = df.drop(df.columns[[1, 2, 3, 4, 6]],axis=1) df_3dim Now we can visualize the data with a three-dimensional scatter plot. %matplotlib notebook scatter_3d = plots.figure().gca(projection='3d') scatter_3d.scatter(df_3dim.iloc[:, 0], df_3dim.iloc[:, 1], df_3dim.iloc[:, 2]) scatter_3d.set_xlabel(df_3dim.columns[0]) scatter_3d.set_ylabel(df_3dim.columns[1]) scatter_3d.set_zlabel(df_3dim.columns[2]) height_point = df_3dim.iloc[0] weight_point = df_3dim.iloc[1] origin = [0,0,0] X, Y, Z = zip(origin,origin) U, V, W = zip(height_point, weight_point) scatter_3d.quiver(X, Y, Z, U, V, W, arrow_length_ratio=0.09) plots.show() What's going on here? We're in the "person space", where, as Rodgers and Nicewander explained, each axis represents an observation -- in this case, three observations. And there are two points, as promised -- one for each of height and weight. 
If we look at the difference between the two points on the z-axis -- that is, the axis for 07/30/2018 -- the darker-colored blue dot is higher up, so it must represent the height variable, with coordinates (-1.26135, 0.617255, 1.63707) That means that the other, lighter-colored blue dot, with coordinates (-1.3158, 0.728253, 1.41777), must represent the weight variable. I've also plotted vectors going from the origin to each point. These are the "variable vectors" for the two points. Finally, we want to figure out the angle between the two vectors. There are various ways to do that in Python; we'll use a simple one that works for us: def dotproduct(v1, v2): return sum((a*b) for a, b in zip(v1, v2)) def length(v): return math.sqrt(dotproduct(v, v)) def angle(v1, v2): return math.acos(dotproduct(v1, v2) / (length(v1) * length(v2))) angle_between_vvs = angle(height_point, weight_point) angle_between_vvs 0.11140728370937446 Finally, we can take the cosine of that to get the correlation coefficient $r$: math.cos(angle_between_vvs) 0.9938006245545371 height_point_8dim = df.iloc[0] weight_point_8dim = df.iloc[1] angle_between_8dim_vvs = angle(height_point_8dim, weight_point_8dim) angle_between_8dim_vvs 0.1338730551963976 Taking the cosine of this slightly bigger angle: math.cos(angle_between_8dim_vvs) 0.9910523777994951 This turns out to be the same as what we had previously calculated $r$ to be, modulo a little numerical imprecision. And so, that's way number eight of looking at the correlation coefficient -- as the angle between two variable vectors in "person space".
https://nbviewer.jupyter.org/github/lkuper/composition.al-notebooks/blob/master/Way%20number%20eight%20of%20looking%20at%20the%20correlation%20coefficient.ipynb
CC-MAIN-2019-13
refinedweb
814
59.9
import "github.com/pingcap/tidb/util/mvmap" Iterator is used to iterate the MVMap. Next returns the next key/value pair of the MVMap. It returns (nil, nil) when there is no more entries to iterate. MVMap stores multiple value for a given key with minimum GC overhead. A given key can store multiple values. It is not thread-safe, should only be used in one goroutine. NewMVMap creates a new multi-value map. Get gets the values of the "key" and appends them to "values". Len returns the number of values in th mv map, the number of keys may be less than Len if the same key is put more than once. NewIterator creates a iterator for the MVMap. Put puts the key/value pairs to the MVMap, if the key already exists, old value will not be overwritten, values are stored in a list. Package mvmap imports 1 packages (graph) and is imported by 64 packages. Updated 2019-12-02. Refresh now. Tools for package owners.
https://godoc.org/github.com/pingcap/tidb/util/mvmap
CC-MAIN-2020-50
refinedweb
169
76.01
A configuration structure for the LBT transmit algorithm. #include < rail_types.h> A configuration structure for the LBT transmit algorithm. Definition at line 1678 1705 of file rail_types.h. An overall timeout, in RAIL's microsecond time base, for the operation. If transmission doesn't start before this timeout expires, the transmission will fail. This is important for limiting LBT due to LBT's unbounded requirement that if the channel is busy, the next try must wait for the channel to clear. A value 0 means no timeout is imposed. Definition at line 1717 of file rail_types.h. The number of CCA failures before reporting the CCA_FAIL. The maximum supported value for this field is defined in RAIL_MAX_LBT_TRIES. A value 0 performs no CCA assessments and always transmits immediately. Definition at line 1692 of file rail_types.h. The documentation for this struct was generated from the following file: - common/ rail_types.h
https://docs.silabs.com/rail/2.5/struct-r-a-i-l-lbt-config-t
CC-MAIN-2019-26
refinedweb
150
51.95
Just recently, I weighed in on a post where the author was making a legitimate complaint about the quality of articles submitted. I won't go into the details, but since that discussion I've been tempted to write my first article as I think I should try my best and contribute more to the community. Today I commented on another authors post, it was well written and I could follow the thread of the article, but the examples were a little too obscure (for my liking), so their real world application might not be immediately apparent to the reader. So, on to my first article. It took me a while to grasp the concept of interfaces. It's not that they're particularly difficult as a concept, but how and where to apply them is where a developer can struggle. My intent with this article is not to show anything radically different, not to try and say "my article better describes x" but to try and put my understanding of interfaces and their practical implementation into my words, so that the reader has a different perspective with which to view the topic. I'm going to make the assumption that you understand (not necessarily are a master of) basic object orientated principals and that you are comfortable with the following words. The simplest analogy I can draw for an interface is that of a contract. A landlord of a property might have a standard contract, everybody expecting to live in a property owned by that landlord agrees that they will adhere to the rules and guidelines contained within it. How two tenants keep within that rule set is entirely up to them, but they are both bound by the same contract. An interface is a guarantee that certain behaviors and values will be available to anybody using an object that implements that interface. You define an interface in C# as follows: public interface ICustomAction<T> { string Name { get;} string Description { get; set; } T Execute( T val ); } An interface is always public. You cannot define access modifiers on the properties and methods defined within and you cannot provide any implementation for your interface. Note: If you are familiar with C# 3.0's features, specifically auto-properties, do not confuse your interface definition with an auto-property. It is not implementation in the case of an interface. You cannot create an object of an interface, you can only create an object of a class that implements an interface. Although you will see examples such as: public void MyMethod( ICustomAction<int> action) { } You can never actually declare an object of type ICustomAction. Give it a go, see what happens. ICustomAction ICustomAction<int> action = new ICustomAction<int>( ); Instead, you need to define a class and implement the interface, defining the functionality that you have agreed an object implementing this interface will provide. public class UpdateOrderAction : ICustomAction<int> { public string Name { get { return "UpdateOrderAction"; } } public string Description {get;set;} public int Execute( int val ) { // } } Not very useful at the moment. However, you've actually created a class from which you can instantiate an object and you will guarantee it provides a name and a method 'Execute'. Notice that we are now able to define access modifiers on the properties and methods defined. You are however, limited to your properties and methods being made public. 
Execute public As I said earlier, when you define an interface, you are making a guarantee that any properties and methods defined on that interface are available on all classes that implement it. Let us say that Jeff and Bill are writing a system together. Bill is going to work on the back end accounting system, Jeff is going to be working on the front end, data entry system. They're starting their development at the same time, so there is no existing code to work from. Jeff will be allowing data entry clerks to create invoices for customers, Bill's back end system will be responsible for posting those invoices to ledgers etc. So, Bill and Jeff sit down and flesh out the rough design of their system, what they'll need from one another. They agree that an invoice should contain: Id CustomerReferenceNumber Value So, they define an interface: public interface IInvoice { int Id {get;set;} int CustomerReferenceNumber {get;set;} decimal Value {get;set;} } Now, they both go away happy. Bill knows that he can work with an IInvoice object coming from Jeffs front end, Jeff knows that when he is ready, he can produce an invoice object that implements the IInvoice object they just discussed and he won't hold Bill up. Now, if Jeff decided that when a customer was a high profile customer, he would make the customer reference number private on the invoice, he would not be fulfilling the contract that he and Bill had agreed upon and that the IInvoice interface had promised. So, any class implementing an interface must make all the properties and methods that make up that interface public to all. IInvoice private public Using the example of the ICustomAction interface from earlier, we'll now continue to try and expand upon implementing our interface in a class. We defined the custom action interface as being of a type, so when we implement the interface in our object, in this case: ICustomAction public class MultiplyAction : ICustomAction<int> { public string Name { get { return "Multiply Action"; } } public string Description { get; set; } public int Execute( int val ) { Console.WriteLine( "Name: {0} Value: {1}", Name, val ); return val * 2; } } MultiplyAction ICustomAction<int> T public int Execute( int val) T Execute( T val) To see this in work, take a look at the sample source code. The generics I used are a little beyond the scope of this article, but hopefully my example code makes them understandable enough. You could define your ICustomAction like so: public interface ICustomAction { string Name {get;} string Description {get;set;} int Execute( int val); } In the attached code, there are three custom actions. One is the manager class that has a list of custom actions attached to it, the others are the actions that we can perform. In the program.cs, I create an object of type ActionManager and I add two custom actions to it. Notice that the code only specifies that an ICustomAction<int> is required, not that a Multiply or DivideAction is required. ActionManager Multiply DivideAction Without trying to throw around a bunch of common phrases (such as dependency injection, loose coupling, etc., few of which I believe I have a strong grasp of) Programming to the interface, rather than a solid implementation of an object gives the developers the flexibility to swap out the actual implementation without worrying too much about how the program will take the new change. Back to Jeff and Bill from before. 
Now, let us say that Jeff and Bill didn't have their original discussion, let us say that the conversation went something like: Jeff: Hey Bill, so what do you need from my data entry system? Bill: Well Jeff, I need to get an invoice, I need an Id, a customer reference number and a value. I can just tie up the customer reference number to the account and then post a value on the ledger. Jeff: Oh great! I'll pass you an invoice just like that. So, they go away and a week later, Jeff posts an Invoice object to Bill. Great. The system is working fine and they have got it up and running in record time. Their managers are overjoyed, the business is efficient. A month later, Jeff's manager approaches him. Manager: Jeff, we have a bit of an issue. We're having trouble reporting on the invoices. Some of our customers have the ability to override on an invoice by invoice basis just what terms they have. Jeff: Hmmmm Manager: You'll sort it out, that's great! So, Jeff goes away and thinks long and hard about this. He decides that the best way of doing this is to create a new type of invoice, a SuperInvoice (don't name your objects super anything!). He gets it done in an hour and then implements the change on the system. *BANG* SuperInvoice Bill: Jeff, what happened? The ledger postings crashed, it's talking about an invalid cast. Jeff: Ooops, we should have talked about interfaces in the first place. When Jeff implemented the change, he didn't think that Bill was dependent upon an Invoice object. When he implemented SuperInvoice, he just created a new type and implemented it within the system. There are several solutions to this problem, those of you familiar with inheritance may see my example as poor as Jeff could have just inherited from Invoice and all should have been fine. However, what Bill and Jeff originally did do was create an IInvoice interface. It gave Bill and Jeff the ability to program their respective parts without worry of the actual implementation of each object, when Jeff came to implement SuperInvoice, he would have implemented the interface and the system at Bill's end would have been none the wiser. It didn't need to be any the wiser. As far as Bill is concerned, it is an invoice, he doesn't need to worry about whether it carries anything not relevant to his system. Invoice SuperInvoice Interfaces are used everywhere throughout the .NET Framework and they are a powerful tool in object orientated programming and design. You may have seen IEnumerable, IDisposable and several others quite frequently while developing other programs. In the case of IDisposable, implementing this interface guarantees you will have a Dispose method on your object (whether you do anything or not[bad practice]).You may see: IEnumerable IDisposable IDisposable Dispose using( SqlConnection connection = new SqlConnection( )) { //Code } The using keyword will take any object that implements IDisposable. When the using statement is complete, it implicitly calls Dispose. If you define your methods to return an IEnumerable<string> it means you can return any collection that implements the IEnumerable interface and contains strings. You can return a list or any other firm object but you guarantee that the object you return will definitely provide certain properties and methods. using Dispose IEnumerable<string> IEnumerable string Well, that is the end of my first article. Hopefully some found it useful and informative. 
I do now realise the effort that goes into producing something like this and even though I work day in, day out as a developer I realise just how difficult it is to produce a "real world" example to work with. And I apologise for my criticism of other.
http://www.codeproject.com/Articles/54967/Interfaces-In-Action?fid=1558840&df=90&mpp=10&sort=Position&spc=None&tid=3352399
CC-MAIN-2015-22
refinedweb
1,783
50.77
On Wednesday, May 04, 2016 09:51:45 AM Joel Dahl wrote: > On Tue, May 03, 2016 at 03:52:43PM -0700, John Baldwin wrote: > > On Tuesday, May 03, 2016 10:20:21 PM Joel Dahl wrote: > > > On Tue, May 03, 2016 at 12:58:15PM -0700, John Baldwin wrote: > > > > On Tuesday, May 03, 2016 03:37:27 PM Michael Butler wrote: > > > > > On 05/03/16 11:21, Larry Rosenman wrote: > > > > > > On 2016-05-03 05:49, Joel Dahl wrote: > > > > > >> On Sat, Apr 30, 2016 at 10:36:53PM -0500, Larry Rosenman wrote: > > > > > >>> On a current -CURRENT if my Floppy disk controller and device are > > > > > >>> ENABLED, we do NOT pass mount root, and the floppy disk > > > > > >>> light is ON. > > > > > >> > > > > > >> Just a "me too". But this is with VMware Fusion. If I disable the > > > > > >> floppy from > > > > > >> BIOS, the virtual machine boots. If I leave it enabled, it hangs. > > > > > > Thanks for posting that I'm not the only one, and it's not flakey > > > > > > hardware. > > > > > > > > > > > > > > > > I have an, otherwise extremely reliable but ancient, Intel TR440BXA > > > > > motherboard doing this :-( > > > > > > > > > > What drove me mad for a while is that I have an identical machine, > > > > > with > > > > > exception of 10k RPM SCSI disks, which doesn't hang. I simply optioned > > > > > out "device fdc" and it's behaved ever since, > > > > > > > > Larry wasn't able to get into DDB when his box hung, are either of you > > > > able > > > > to get into DDB when it hangs? > > > > > > Um, ctrl-alt-backspace doesn't work for me, but ctrl-alt-esc does. > > > > > > I uploaded a few screenshots here: > > > > Thanks. It seems like the fdc worker thread isn't there and GEOM is stuck > > waiting for that thread to do something (well waiting for the commands > > that thread handles to be executed). > > > > First step would be just add a 'panic' to fdc_start_worker() in fdc.c at > > the end to make sure it is getting called. (And when it drops you into ddb > > you should be able to see the fdc0 kproc in 'ps' output). > > I did this, but no change. It hangs at the same place, so I guess the added > panic() is never called. Advertising The logic in fdc_acpi.c is a bit odd to follow, it sometimes goes to 'out' in the success case which is unusual. As a result, if your fdc device doesn't have an _FDE entry in the ACPI namespace we wouldn't start the worker thread. Try this change: Index: fdc_acpi.c =================================================================== --- fdc_acpi.c (revision 298950) +++ fdc_acpi.c (working copy) @@ -135,14 +135,13 @@ fdc_acpi_attach(device_t dev) obj = buf.Pointer; error = fdc_acpi_probe_children(bus, dev, obj->Buffer.Pointer); - if (error == 0) - fdc_start_worker(dev); - out: if (buf.Pointer) free(buf.Pointer, M_TEMP); if (error != 0) fdc_release_resources(sc); + else + fdc_start_worker(dev); return (error); } > > -- John Baldwin _______________________________________________ freebsd-current@freebsd.org mailing list To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
https://www.mail-archive.com/freebsd-current@freebsd.org/msg165694.html
CC-MAIN-2018-34
refinedweb
475
72.66
´ÐÜøÐÝ ³. wrote: > > I´m experiencing a little problem due to my > > poor understanding of the way python handles > > global variables. Basically the problem is the following > > I have a programm with several variables which > > I would like to access globaly. > > var=0 > def example_function(??) > global var > var=var+1 > (function definition) > > print example_function(10) You'll get a lot more help here if you post code that is as clse to functional as possible, so we can see what you mean. Do you want something like this? >>> var = 0 >>> def function(n): ... global var ... var = var + n ... return var ... >>> >>> function(3) 3 >>> function(3) 6 >>> >>> function(3) 9 >>> function(3) 12 >>> print var 12 As you can see, the same var is used in all function calls, and the main namespace. Keep in mind, however, that in Python, "global" means global to a module, not the the whole program. This is a good thing, because you wouldn't want a module you import, that you didn't write, to mess with your global variables. If you have some data that you want visible to all parts of your program, you can put them all in a module (call it global_vars.py or something), and then you can do a: import global_vars and access the variables with: global_vars.var1 global_vars.var2 etc. You could also put them in a class it you want. Hope this helps, if it doesn't you need to be more clear about waht you want to do. -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------
https://mail.python.org/pipermail/python-list/2001-October/075277.html
CC-MAIN-2017-17
refinedweb
271
73.37
# GTK: The First Analyzer Run in Figures For some people, the introduction of a static analyzer into a project seems like an insurmountable obstacle. It is widely believed that the amount of analysis results issued after the first run is so large that only two options seem reasonable: do not mess with it at all or refocus all people on fixing warnings. In this article, we will try to dispel this myth by implementing and configuring the analyzer on a GTK project. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/5c6/889/2df/5c68892df9fe771f2d7c9390ae69c7ba.png)### Introduction [GTK](https://www.gtk.org/) is a cross-platform library of interface elements. Recently, GTK 4 was released, which was a great news hook to check the quality of the project's code using the [PVS-Studio](https://www.viva64.com/en/pvs-studio/) static code analyzer. We do this regularly, and we often have to set up the analyzer from scratch on many projects before investigating the code quality. In this post, I will share my experience of a quick PVS-Studio setting up on a C++ project. ### GTK analysis #### First results We get our first analyzer report and see the following results for general-purpose diagnostics: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/02c/dd9/272/02cdd927255515a9511867a5939264aa.png)4 (Fails) + 1102 (High) + 1159 (Medium) + 3093 (Low) = **5358** warnings. Then we quickly scroll through the report, identify spikes of uninteresting warnings, and make a decision for further configuration of the analyzer. #### Excluding directories Let's look at this warning: [V530](https://www.viva64.com/en/w/v530/) [CWE-252] The return value of function 'g\_strrstr\_len' is required to be utilized. strfuncs.c 1803 ``` /* Testing functions bounds */ static void test_bounds (void) { .... g_strrstr_len (string, 10000, "BUGS"); g_strrstr_len (string, 10000, "B"); g_strrstr_len (string, 10000, "."); g_strrstr_len (string, 10000, ""); .... } ``` This is the code of tests that do not directly relate to GTK, so we make a list of directories to exclude from the analysis and rerun PVS-Studio. In the run, the following directories will be excluded from the analysis: ``` gtk/_build/ gtk/subprojects/ gtk/tests/ gtk/testsuite/ ``` After that, I open the report and get the following result: 2 (Fails) + 819 (High) + 461 (Medium) + 1725 (Low) = **3007** warnings. After such a setup we got another positive effect which is the speed up of the analysis. #### Excluding macros Macros are probably one of the main reasons for a huge number of messages in some diagnostics. When looking through the report, we notice a lot of similar warnings: [V501](https://www.viva64.com/en/w/v501/) There are identical sub-expressions '\* (& pipe->ref\_count)' to the left and to the right of the '^' operator. gdkpipeiostream.c 65 ``` static GdkIOPipe * gdk_io_pipe_ref (GdkIOPipe *pipe) { g_atomic_int_inc (&pipe->ref_count); return pipe; } ``` Making changes to macros is usually the most difficult thing: someone is unlikely to hit the ground running and try to fix them immediately. At least not right away. Therefore, let's use the mechanism to disable diagnostics on macros. 
After quickly reviewing the report, we create the following settings file: ``` #V501 //-V:g_atomic_int_:501 #V547 //-V:GTK_IS_:547 //-V:GDK_IS_:547 //-V:G_IS_:547 //-V:G_VALUE_HOLDS:547 #V568 //-V:g_set_object:568 ``` Just a few lines that cover most of the problematic macros for [V501](https://www.viva64.com/en/w/v501/), [V547](https://www.viva64.com/en/w/v547/), and [V568](https://www.viva64.com/en/w/v568/). Look at the result: 2 (Fails) + 773 (High) + 417 (Medium) + 1725 (Low) = **2917** warnings. #### Disabling diagnostics Some diagnostics initially issue unsuitable warnings for a specific project. Let's look at the [V1042](https://www.viva64.com/en/w/v1042/) warning: [V1042](https://www.viva64.com/en/w/v1042/) [CWE-1177] This file is marked with copyleft license, which requires you to open the derived source code. main.c 12 This is a very useful diagnostic for a closed project, so as not to accidentally include code that follows specific license restrictions which may not be applicable for the project. But for GTK, this diagnostic is not of interest, so we will disable it and get an adjusted result: 2 (Fails) + 164 (High) + 417 (Medium) + 1725 (Low) = **2308** warnings. #### Reviewing fails There are 2 warnings of the Fails type in the project: * V002 Some diagnostic messages may contain incorrect line number in this file. gdkrectangle.c 1 * V002 Some diagnostic messages may contain incorrect line number in this file. gdktoplevelsize.c 1 This diagnostic warns that warnings on these files may point to the wrong lines of the code. Usually the difference is 1-2 lines. This is due to an incorrect macro expansion by the compiler. In our experience, most often the MSVC compiler was [spotted](https://www.viva64.com/en/b/0736/) doing this. We can simply ignore these warnings. ### Conclusions The result is as follows: 164 (High) + 417 (Medium) + 1725 (Low) = **2306** warnings. For sure, there is still something to configure. However, I've already solved the task that I had set for myself. That is quickly get the analyzer report, in which I can easily find errors. For example, now there is only one [V501](https://www.viva64.com/en/w/v501/) warning in the entire report and it is informative: [V501](https://www.viva64.com/en/w/v501/) There are identical sub-expressions 'G\_PARAM\_EXPLICIT\_NOTIFY' to the left and to the right of the '|' operator. gtklistbase.c 1151 ``` static void gtk_list_base_class_init (GtkListBaseClass *klass) { .... properties[PROP_ORIENTATION] = g_param_spec_enum ("orientation", P_("Orientation"), P_("The orientation of the orientable"), GTK_TYPE_ORIENTATION, GTK_ORIENTATION_VERTICAL, G_PARAM_READWRITE | G_PARAM_EXPLICIT_NOTIFY | // <= G_PARAM_EXPLICIT_NOTIFY); // <= .... } ``` This is a great result! Other diagnostics figures have also increased significantly. The analyzer report was reduced by as much as \*\*57% \*\*using scanty settings. Accordingly, the correct /false warnings ratio has also increased significantly. By the way, this is one of the reasons why there are no results comparisons of different code analyzers on the Internet. Because there is no consensus on whether to show the analyzer operation process as it is, or one still needs to carry out the initial configuration. All analyzers have their own mechanisms for this, which greatly complicates the complexity of the comparison task. And now it's time to pass the baton to Andrey Karpov, my colleague. 
### Note by Andrey Karpov Even this simple and fast results filtering described in this article makes it much easier to work with the report. For example, using this report, I was able to skim through it for one evening and write out code fragments with errors, in an amount enough for writing an article. This is what I'll be doing during New Year's holidays. For sure, my task is simpler and differs from the process of configuring and implementing the analyzer in a real project. It is enough for me to rush through the list of warnings and track down obvious errors, ignoring false positives or incomprehensible warnings in complex sections of the code. In reality, it will take more time to set up the analyzer, pinpoint false positives, improve macros, and so on. But in fact, it's not that scary. For example, in the [article on the EFL Core Libraries project](https://www.viva64.com/en/b/0523/) check, I showed that you can easily configure the analyzer to give only **10-15%** of false warnings. Agree, it's not bad when from 10 warnings you get 8-9 decent error messages and 1-2 false positives. Well, don't forget that the [mass suppression mechanism](https://youtu.be/DsNHL_2feFQ) is always there for you. This allows you to start using the analyzer quickly, even in a large project. All warnings are considered as technical debt and are hidden so far. So, the team only deals with warnings related to new or modified code. Check out the article "[How to introduce a static code analyzer in a legacy project and not to discourage the team](https://www.viva64.com/en/b/0743/)" to find out more on this. Thank you for your attention and come back in a couple of weeks to read the article about the errors found.
https://habr.com/ru/post/536054/
null
null
1,353
56.25
Feb 21, 2017 11:01 AM|Bradly|LINK In my production test database all contents is lost. I published to a local directory on my development machine, then i transferred the files by ftp to the production test machine. I wanted to make SQL changes to the database manually. I tried to login the website, but that did not work, i checked the database , and it was empty . What hapend here ? Any ideas ? Best regards, Bradly Feb 22, 2017 09:58 AM|Bradly|LINK i found this one : public class ApplicationDbInitializer : DropCreateDatabaseIfModelChanges<ApplicationDbContext> i am using the vs asp.mvc template, apparently this also happens in production / release. Feb 22, 2017 11:57 AM|PatriceSc|LINK So it likely found a schema change and dropped the database to recreate it. Try perhaps to select the create_date from the sys.databases view to see when your db were created. You could also have a look at the creation date for objects found in your db to confirm the issue is really that the db has been created again from scratch. At this point, it's likely you have no choice than to restore you db (hopefully you had a proper backup/restore strategy), disable this inializer and compare the schema with a reference to spot schema changes and perhaps apply them manually. If I remember the strategy for EF7 is to do that "offline" (and maybe include that in your app but you have to be really explicit). IMO this is a good thing. I started before EF and never really liked handling schema changes as part of the app (I'm not even using EF migrations). Feb 23, 2017 11:01 AM|Bradly|LINK the initializer is removed now , but i get the following message The model backing the 'ApplicationDbContext' context has changed since the database was created. Consider using Code First Migrations to update the database (). In development i want to use code first migration, and in production i would like to make the changes manually , how can i disable this in production ? Thank you Contributor 5270 Points Feb 27, 2017 08:47 AM|AngelinaJolie|LINK Hi Bradly , Please check this article: Solution: To solve this error writhe the following code in Application_Start() Method in Global.asax.cs file Database.SetInitializer<CustomerDBContext>(null); Best regards, Jolie 7 replies Last post Feb 27, 2017 08:47 AM by AngelinaJolie
https://forums.asp.net/t/2116099.aspx?empty+database+after+ftp+publish+to+production+entity+framework+
CC-MAIN-2019-09
refinedweb
399
61.97
Having set up Redux, the next challenge you might face is effectively communicating changes from the outside world to your frontend application: Rather than having to manually ask every time something changes, we want to defer responsibility to the server so that it automatically updates and notifies you of changes. This tutorial will explain why, and guide you through configuring that communication using Redux effects and web sockets. So, you have this awesome Angular frontend running, complete with state management and all the bells and whistles a client would normally ask for. As more and more users are adopting or using the system, you start experiencing scenarios where multiple users are editing the same policy (insurance industry) or medical history. How is a user supposed to know that somebody else is already editing something? This would be a great scenario to have real-time notifications or alerts: If somebody updates a piece of information that I am working on, either I get notified that I am out of sync, or my work is simply updated ‘automagically!’ This is achieved using websockets, which allows the server to take on the responsibility of notifying connected clients of changes or events that took place, effectively given the server a voice. When working in the web and programming worlds in general, there are a billion different ways to achieve the same thing, using endless combinations of tech. For this article, though, I will be looking at communication between the server and a front end application using a pattern that can be applied to an enterprise scale application. Enter sockets, our saviour To illustrate the communication concepts, I chose SocketIO as the ‘real-time engine’. It’s super easy to understand, and it integrates into my Node and Angular environments with minimal setup (see the end of the article for references to other socket implementations). This kind of implementation is useful when you need to have multiple reactions for every action that occurs within your application. It’s also great for instances when there are multiple stakeholders interested in changes incurred by such an action. For example, when a user logs in (action), you want to set his roles in your app state (reaction), get his / her allowed menu items (reaction), and notify the connected dashboard of a user connecting (reaction in a whole separate system). Whoa, hold up… before you continue! To get the most out of this article, you should have a working knowledge of Angular, ngRx and websockets. If not, seek enlightenment from my previous article to get you going! Additionally, get the source code for the article and play with the tech on your machine. As for websockets, there are links at the end of the article to assist in further learning on the subject. In a nutshell, though, websockets allow a long-held, single TCP socket connection to be established between a client and server, which in turn allows for bi-directional (full duplex) messages to be distributed instantly or “real-time.” Preparations for awesomeness If you’re all good with the above, we can look ahead at what needs to be done: - Set up server-side SocketIO: Firstly, we will be looking at the SocketIO setup on the server side, and creating the global commands that the server emits and listens for. 
- Set up client-side SocketIO service: Here, we will create a place to handle the events or emits from the server and the payload it delivers - Create ngRx Effects to update the store: With the SocketIO stuff running, we create the Effects that fire when an action (such as AddTodo) is dispatched to the store. - Register the Effects: After creating the Effects, we need to register them in all the right places. Sound cool? Onwards! :) Step 1: Set up server-side SocketIO Firstly let’s get the ball rolling with the relevant bits from the server-side code. Remember: your server-side could be written in something completely different, but the CONCEPT will remain the same - it’s just implementation that will differ! If you are using Node, however, download the source code for a working example that you can change or mould to your liking. In this example, I create a socket on port 3000, which will listen for the following events: todoCreated, todoUpdated, todoDeleted. These are custom defined names to correspond to Create, Update and Delete actions performed. The names can be whatever you like, though; I chose these as they made sense within the context of the application at hand. Firstly, here is a snippet of the relevant part of the server-side websockets implementation (for the full file, see GitHub resource at the end of the article). Here we define the handles for the events that fire on the socket. If you are Node.js, add the following snippet to your server.js file: server.js // Some setup code here (see GitHub resource) // Set up socket event handlers io.on('connection', socket => { console.log('New client connected'); socket.on('todoCreated', (todo) => { io.sockets.emit('todoCreated', todo); }); socket.on('todoUpdated', (todo) => { io.sockets.emit('todoUpdated', todo); }); socket.on('todoDeleted', (todo) => { io.sockets.emit('todoDeleted', todo); }); socket.on('disconnect', () => { console.log('Client disconnected'); }); }); // Some further setup From the above, it’s worth noting that when an event is received, an emit occurs on io.sockets. This lets all clients connected to this socket know that this event occurred, ensuring all connected clients have the same consistent state (in a perfect world with 0 disconnects, etc.). Next we need to emit the relevant events on the socket in the corresponding controller action. We need to add an emit call to our PATCH and DELETE HTTP calls. Here is a snippet of the structure of my POST method in my todo controller (the PATCH and DELETE calls can be found in the source for the article, under Resources): todo.routes.js // Rest of the controller todoRouter.post('/', async (req, res) => { try { const todo = new Todo({ text: req.body.text }); const savedTodo = await todo.save(); const newTodo = { id: savedTodo._id, text: savedTodo.text, completed: false, completedAt: savedTodo.completedAt }; io.sockets.emit('todoCreated', newTodo); res.send(newTodo); } catch (error) { return res.status(400).send(error); } }); // Rest of the controller For this snippet, when a person creates a Todo, the created event is emitted to all connected clients to update their states as it has changed. This is specific to the todoCreated event, and the implementation for the Update and Delete methods are almost identical (see the source for the article, under Resources). However, this should be at the end of the function, which - if placed in a try-catch block as demonstrated - assumes that the save happened successfully. 
Cool, that’s all that is relevant to the topic on the server-side code. We are now ready to implement the client-side goodness that you have been patiently waiting for! Step 2: Set up client-side SocketIO service Time to dive into the UI… First up, we need a place to handle the events or emits coming in via the socket, and do something with the payload that the emit event delivers. For this, I wrote my implementation in a service (for reusability and maintainability) to dispatch messages to the server via the socket, subscribe to events that can be emitted from the server ( todoCreated, todoUpdated, todoDeleted), and do something with the payload for that event. Ensure that you have installed the ngx-socket-io package via NPM with the following command in your terminal: npm install --save ngx-socket-io Here is a snippet of the todo socket service within the source. It extends the Socket class from the ngx-socket-io package to generalise the dispatch and subscribe functionality. I found that it was cleaner to implement this than repeating the code throughout the components. Create your socket service as follows: todo-socket.service.ts import { Injectable } from '@angular/core'; import { Socket } from 'ngx-socket-io'; import { Observable } from 'rxjs/internal/Observable'; @Injectable({ providedIn: 'root' }) export class TodoSocketService extends Socket { constructor() { super({url: '', options: { origin: '*', transport : ['websocket'] }}); } public dispatch(messageType: string, payload: any) { this.emit(messageType, payload); } public subscribeToMessage(messageType: string): Observable<any> { return this.fromEvent(messageType); } } Here the **messageType** would be the event name, and the payload in this case it’s a **TodoModel** - the payload can, however, be anything So with this in place, all that’s left to do is implement the Effects and string all these pieces together. Step 3: Create Effects Now we can create the effects that fire when an action (such as AddTodo) is dispatched to the store. This will be the mechanism that is triggered to do the update magic advertised throughout this article. Here, there are many options regarding what is possible to do in the effect: - You could navigate to another url - You could dispatch other actions - You could even call other APIs There are, however, certain ways to set things up to interact correctly with the Action stream, so I would recommend reading this article for more information about getting this right. Create a todo.effects.ts file and paste the following code (be sure to resolve any needed references in the file): todo.effects.ts @Injectable() export class TodoEffects { @Effect({dispatch: false}) public serverTodoAdd$ = this.actions$ .pipe( ofType(TodoActionType.AddTodo), tap((payload) => console.log('Do something here as [AddTodo] effect', payload)) ); @Effect({dispatch: false}) public serverTodoUpdate$ = this.actions$ .pipe( ofType(TodoActionType.UpdateTodo), tap((payload) => console.log('Do something here as [UpdateTodo] effect', payload)) ); @Effect({dispatch: false}) public serverTodoRemove$ = this.actions$ .pipe( ofType(TodoActionType.RemoveTodo), tap((payload) => console.log('Do something here as [RemoveTodo] effect', payload)) ); constructor( private actions$: Actions ) {} } Firstly, the most noteworthy piece to address here is the injection of ‘Actions’. Actions is the action stream provided by ngRx that you subscribe to, to handle the dispatched events. 
Essentially, what we are doing above is creating multiple observers to the observable, with each effect being an observer. The ofType operator is used to specify the event type that should be handled by this effect, which is defined by the user in files such as todo.actions.ts in the provided source code. Here, I also use tap to fire an event and return the observable to the stream unchanged. For more reading about Effects and what is possible with the stream please see the additional info at the end of the article. Step 4: Register Effects with the store Now that we have all the building blocks in place, the next step would be to hook all of this up in the relevant modules. We will need to register the todo-socket.service.ts as well as the TodoEffects. Ensure that you set-up the EffectsModule in the feature module as demonstrated below: todo.module.ts // Rest of the file above @NgModule({ imports: [ // .. Rest of the imports here EffectsModule.forFeature([TodoEffects]) ], exports: [], declarations: [ TodoComponent, TodoItemComponent, TodoDetailComponent, TodoListComponent ], providers: [] }) export class TodoModule { } Something worth mentioning here is that my effects are part of my todo feature. So, when the effects are registered, it is done so with the forFeature option provided by the effects module. This allows you to implement Effects in lazily loaded modules, which will be loaded and made available with the feature module. Next, it is important that even if you don’t have effects in the root context of your application that you still call the forRoot function on the effects module as follows: app.module.ts // Rest of the file above @NgModule({ declarations: [ AppComponent, HomeComponent ], imports: [ // Rest of the imports here EffectsModule.forRoot([]), StoreRouterConnectingModule, !environment.production ? StoreDevtoolsModule.instrument() : [] ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } … And that’s it! If you have done everything correctly and you have the backend running correctly you should see the console logs from the Effects firing when you add/update a todo. I would recommend throwing in a few breakpoints or logs in the code to follow through the flow to get a better understanding if it is still not clear. I also recommend using Redux DevTools to visualise and debug the store interactions, as well as setting up the tools in your main module in this manner, and then using the tool as described here. Why did I go about it in this way? Firstly, I would like to reiterate that there are indeed many ways to skin a cat (so to speak): The concept that I am trying to illustrate is that websockets are the answer to how to build in real-time updates for your clients. In this tutorial, we are doing this in an environment where we have a Redux implementation through ngRx in an Angular client. This is quite a common setup of enterprise frontends. The purpose was to demystify websockets and to showcase the ease of integrating it into your communication flow to ensure consistent state among your connected clients. What I learned Through developing the code for this article, I learned that most of the time ideas will transform or even change through implementation as your understanding of a problem grows. As I was building this, removing redundancies and streamlining the communication pipeline along the way, my initial approach kept developing into better implementations. 
That said, there are many things that can still be implemented to better this approach, such as catering for disconnects, or setting up something like a polling service to check that the connection is still active and inform a client to refresh if not (or consider silent reconnect and verify the state). I had not considered these when I started out but they became apparent through working on the problem. Basically, don’t assume that your first iteration will be perfect - it probably won’t! :) Resources for further reading SocketIO: The official documentation for the SocketIO library used in this article. WebSockets conceptual deep dive: For those wanting to dive deep into WebSockets. Angular: Assumed knowledge for this article, without which you will be totally lost. ngRx Effects: All the info required to write awesome Effects in the Angular framework. GitHub source for this post: In case you would like to inspect the code in greater detail. Albert Janse van Rensburg is a full-stack software developer at Entelect, wielding experience in a broad variety of technologies and industries - which is totally not part of an elaborate plan to build his own death star (cough cough). He has built and optimised a vast number of enterprise solutions and, when not plugged in to the world of .NET and Angular, he tinkers with a variety of different technologies and ideas. Find him on Github to see more of what he does!
https://www.offerzen.com/blog/effective-communication-angular-ngrx-socketio
CC-MAIN-2021-31
refinedweb
2,446
52.29
Agenda See also: IRC log <trackbot> Date: 06 July 2010 <mphillip> Scribe: Mark <mphillip> Mark <eric> Scribe: mphillip No objections to approving the minutes No objections Eric - no progress on actions Peter - no progress Mark, 185 done, 188 still pending close action-185 <trackbot> ACTION-185 Change the namespace to closed Derek: 187 - reviewing the spec... ... In Binding spec section 2.2.3 we say that targetService MUST appear in requestURI but there is no corresponding fault or specified behaviour <eric>;%20charset=utf-8#binding-message-props Mark: The receiver may not know that the targetService was specified and so may not me able to throw a fault Eric: Agreed, this may be untestable - can test the positive but can't test the negative Peter: There is an analagy with SOAP/HTTP if there is no match for soapAction and I have seen that handled in several ways <eric> SOAP Action in SOAP 1.1: Derek: Will raise this as an issue for later discussion close action-187 <trackbot> ACTION-187 Review the spec. looking for other items that will need to change - e.g. editorial notes / nits closed <scribe> ACTION: Derek to raise an issue regarding targetService and the missing fault [recorded in] <trackbot> Created ACTION-191 - Raise an issue regarding targetService and the missing fault [on Derek Rokicki - due 2010-07-13]. <eric> Mark: For action 185 ... Describes changes for namespace RESOLUTION: Changes approved, issue 39 can be closed <eric> Eric: Reads note from IETF reviewer - outling options ... Provisional registration makes some sense ... URN prefix is an option Consensus on call is that it is not the ideal option Mark: Does provisional URI expire? Eric: No, but it could be replaced either by a provisional or permanent URI ... Provisional *should* block the registration of another "jms" URI unless the new one was demonstrably better ... and IETF *should* look for backward compatibility before replacing a provisional URI ... Can push to turn a provisional URI into a permanent URI but it could take some time Mark: What's the provisional process? <eric> We would change the status in section 9 of our document and re-submit as a provisional draft Mark: A colleague in IBM has a list of all the existing SOAP/JMS formats - perhaps we can use that to demonstrate this needs standardisation <scribe> ACTION: mark to send list of known JMS URI formats to the SOAP/JMS list [recorded in] <trackbot> Created ACTION-192 - Send list of known JMS URI formats to the SOAP/JMS list [on Mark Phillips - due 2010-07-13]. Eric: We would prefer to keep a URI rather than a URN so I will follow up on the procedure for provisional registration and find out how we would move forward to permanent action eric to Investigate provisional uri process and how we would move forward after that <trackbot> Created ACTION-193 - Investigate provisional uri process and how we would move forward after that [on Eric Johnson - due 2010-07-13]. Eric: We will need to prepare a document detailing the differences since the last CR, so we will discuss that work next week None This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: Mark Found Scribe: mphillip Inferring ScribeNick: mphillip Scribes: Mark, mphillip WARNING: No "Present: ... " found! 
Possibly Present: Derek Mark Peter Yves aaaa aabb aacc eric joined mphillip peaston soap-jms trackbot You can indicate people for the Present list like this: <dbooth> Present: dbooth jonathan mary <dbooth> Present+ amy Regrets: Phil_Adams Amy_Lewis Agenda: Found Date: 06 Jul 2010 Guessing minutes URL: People with action items: derek mark WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output]
http://www.w3.org/2010/07/06-soap-jms-minutes.html
CC-MAIN-2015-35
refinedweb
636
54.76
I bought an Enphase solar powered system in early 2017, one of the major appeals of the Enphase brand was that is has developer APIs, so I could track my systems power generation and even household usage. My aim is to get this data out of the Enphase APIs then try to make sense of it, possibly bringing in weather data and water usage from IoT systems on my ever increasing to-do list. There doesn’t seem to be a way of bulk exporting data, and with rate limiting on the free tier I figured I could write a script that hits the stats API for each day, grabs a whole days data then persist it, wait a period of time, then hit the next day. By delaying, I don’t break the rate limit, I can just kick the script off and have a coffee! import pendulum import time import requests import json import os userId = os.environ['ENHPASE_USER_ID'] key = os.environ['ENPHASE_KEY'] systemId = os.environ['ENPHASE_SYSTEM_ID'] # set tz code from tzCode = os.environ['TIME_ZONE'] # free tier only allows so many requests per minute, space them out with delays sleepBetweenRequests = int(os.environ['SLEEP_BETWEEN_REQUESTS']) # Start/end dates to export startDate = pendulum.parse(os.environ["START_DATE"], tzinfo=tzCode) endDate = pendulum.parse(os.environ["END_DATE"], tzinfo=tzCode) # Shouldn't need to modify this url = '' % systemId print('Starting report between %s and %s' % (startDate.to_date_string(), endDate.to_date_string())) period = pendulum.period(startDate, endDate) for dt in period.range('days'): time.sleep(sleepBetweenRequests) print('date [%s] START [%s] END [%s]' % (dt, dt.start_of('day'), dt.end_of('day'))) # HTTP Params params = {'user_id': userId, 'key': key, 'datetime_format': 'iso8601', 'start_at': dt.start_of('day').int_timestamp, 'end_at': dt.end_of('day').int_timestamp} r = requests.get(url=url, params=params) if r.status_code == 200: filename = "out/%s.json" % dt.to_date_string() os.makedirs(os.path.dirname(filename), exist_ok=True) with open(filename, 'w') as outfile: json.dump(r.json(), outfile, indent=2) print('Success %s' % dt.to_date_string()) else: print('Failed to get data for %s' % dt.to_date_string()) Run that using python stats.py and it’ll dump out a json file in /out/ per day. I’ve only got the stats API setup so far, but feel free to pull request in the others! Also keen an eye on my github project, I’ll be adding more to it, the plan is to get it running in AWS with scheduled lambdas to pull in data on an hourly basis, mash in data from weather APIs, and any IoT systems I build.
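As a rough illustration of the "make sense of it" step — an addition here, not part of the original script — the sketch below aggregates the per-day JSON dumps written by stats.py into daily energy totals. It assumes each dump contains an "intervals" list whose entries carry an "enwh" (watt-hours) field, which is how the Enphase v2 stats endpoint is commonly described; adjust the key names to whatever your dumps actually contain.
import glob
import json
import os

def daily_totals(out_dir="out"):
    """Sum watt-hours per day from the JSON files written by stats.py."""
    totals = {}
    for path in sorted(glob.glob(os.path.join(out_dir, "*.json"))):
        day = os.path.splitext(os.path.basename(path))[0]  # e.g. "2018-02-01"
        with open(path) as f:
            data = json.load(f)
        # "intervals" / "enwh" are assumed field names - verify against your data
        totals[day] = sum(i.get("enwh", 0) for i in data.get("intervals", []))
    return totals

if __name__ == "__main__":
    for day, wh in daily_totals().items():
        print("%s  %.2f kWh" % (day, wh / 1000.0))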
http://www.jameselsey.co.uk/blogs/techblog/tag/python-2/
CC-MAIN-2018-09
refinedweb
420
58.38
Currently, clients can use AuxiliaryConfiguration to store arbitrary (configuration) data in the project. From client's point of view, the use of AuxiliaryConfiguration is non-trivial, especially if the client needs to store only "a few" "primitive" values. I think that method like: Preferences ProjectUtils.getPreferences(Project p, Class c, boolean shared) which would mimic NbPreferences.forModule would be useful for the clients. The returned Preferences would use AuxiliaryConfiguration to store the data. If this is considered to be a good idea, I will try contribute the change. sounds ok to me. Not bad, though I would actually rather see a matching SPI; for example, if the project provided an instance of Preferences (i.e. a root node) in its Lookup, that would be used instead of AuxiliaryConfiguration. This would permit more natural storage of simple key-value preferences in a .properties file. (For example, in project.properties or private.properties with a "prefs." prefix to isolate them from Ant-related properties.) Also note that a full implementation of Preferences is not trivial. You need to implement listening, subnodes, etc. - a significant amount of code, as seen in our impl of NbPreferences. All this may be overkill for our needs. A slim interface like interface ProjectPreferences { // SPI, API would be matching final class String get(String key); void put(String key, String/*|null*/ value); } might suffice (callers would be expected to choose distinctive keys). Re SPI I was thinking about such an SPI, but seemed quite complex to me. We would need to use AuxiliaryConfiguration as the storaqe if the preferences SPI is not available anyway, IMO. And if a project would not implement the Preference SPI for e.g. 6.5, and implemented it after 6.5, we couldn't simply use the Preferences SPI in post-6.5, as the project wouldn't work in 6.5. We would need to wait for project upgrade. So seemed to me much simpler and acceptable for me to use only AuxiliaryConfiguration as the storage (the preferences SPI could be added at any time in the future is necessary). Impl. of Preferences does not seem so difficult to me - it seems to be enough to extend AbstractPreferences and implement a few methods. The use of Preferences allows existing features to be configured on the project-level with minimal changes. Thanks for the comments. Regarding SPI, of course we would need a fallback implementation of Preferences (or whatever the interface is) to use AC in case the project did not provide one. I do not understand your compatibility argument. The client calls something in ProjectUtils and reliably gets an interface to r/w properties. PU automatically uses a project-specific backend if available, otherwise AC. The client need not know the difference. Eventually we could probably deprecate AC. "Impl. of Preferences does not seem so difficult to me" - well talk to Radek Matous who spent quite some time implementing it for NbPreferences. The asynchronous nature of storage, plus change support, makes it tricky. A simplified interface letting you get and set simple string keys (no hierarchical nodes) with no change support would be far easier to implement. For unshared properties, I would recommend simply using NbPreferences. You can just e.g. hash the project directory path and include that in the key, to make a setting apply to just one project. (Does not carry over when the project is moved, but this is usually desirable anyway, and nbproject/private/ gets deleted in a move too.) 
Sorry for confusion on the compatibility: I meant compatibility from user's point of view. If a project is created in 6.5, opened in post-6.5 (not upgraded) and then opened in 6.5, it should work reasonably. Another possible problem is support for inner nodes - this may be difficult to do inside one .properties file. I would rather start with a simple implementation based solely on the AuxiliaryConfiguration, and introduce some kind of faster/better SPI later if necessary. The reason why I am proposing this API is that features that use "global" Preferences can easily be changed to use project-level Preferences. If there would be two separate API, it would be more difficult for these feature to be ported from "global" to "project" settings level. I think that the benefit/cost ratio of Preferences is quite good. Thanks for the suggestion regarding private properties - will try to do it this way. Support for inner nodes is one of the main reasons I do not suggest using j.u.p.Preferences; inner nodes are a significant complication for which there is no real use case. Re support for inner nodes: e.g. each Java Hint has its own Preferences node and could create its own sub-nodes. Moreover, Preferences are a standard API, so reusing it seems OK to me. I am attaching a patch that implements ProjectUtils.getPreferences. I would like to start an API review on adding this method, unless there are objections. A note on tests: the tests require an implementation of AuxiliaryConfiguration. I used J2SE Project for simplicity - I will try to rewrite the test so that the dependency on J2SE Project is not necessary. Created attachment 62158 [details] Proposed patch. Regarding support for inner nodes: you don't really need subnodes. You can just use "foo.bar.baz" syntax. Compare Firefox/Thunderbird settings storage, etc. Regarding project upgrade compatibility: if the fallback SPI impl for shared properties used AC, the API impl would just need to check in both AC and the project's custom SPI for a given key. I.e.: package spi; interface Impl { String get(String key, boolean shared); void put(String key, String value, boolean shared); // optionally also: Iterable<String> keys(boolean shared); } package api; public static String get(Project project, String key, boolean shared) { Impl i = p.lookup.lookup(Impl); if (i != null) { String v = i.get(k, shared); if (v != null) { return v; } } AC ac = project.lookup.lookup(AC); if (ac != null) { String v = ACHelper.get(ac, key, shared); if (v != null) { return v; } } // otherwise ignore shared flag // FileObject.attribute best here; a future masterfs impl could in fact use NbPreferences internally: Object o = project.getProjectDirectory().getAttribute(key); return (o instanceof String) ? (String) o : null; } public static void put(Project project, String key, String value, boolean shared) { Impl i = p.lookup.lookup(Impl); if (i != null) { i.put(key, value, shared); return; } AC ac = project.lookup.lookup(AC); if (ac != null) { ACHelper.put(ac, key, value, shared); return; } project.getProjectDirectory().setAttribute(key, value); } // optionally also: public static Iterable<String> keys(Project project, boolean shared) I would still prefer the option for the project to store simple key/value properties in a more natural way - in project.properties, private.properties, etc. Going through AuxiliaryConfiguration is awkward, inefficient, and harder to debug. Also will look extremely ugly for projects which use FileObject.setAttribute to implement AC. 
(I plan to submit an API change to provide a fallback impl of AC using FO.attribute. Should be useful for Maven projects, autoproject, etc.) I won't block this implementation which forces everything to go through AC, I just don't like it so much. Anyway, review comments: [JG01] The test dependency on j2seproject is illegal (will cause cycles building tests). Better to make a fake project type with a trivial AC impl which just persists its DOM fragments in memory. [JG02] Please use descriptive parameter names, not "p" etc. [JG03] "(specified as using {@link Class} as in {@link org.openide.util.NbPreferences#forModule(java.lang.Class)})" duplicates information in @param and can be deleted. [JG04] "for which project should be the preferences returned" => "project for which preferences should be returned" [JG05] getPreferences fails to document that it could return null. In such a case you could consider just delegating to getPrivateSettings to fall back gracefully and simplify the caller's API. [JG06] I'm not sure I get the point of the __idXXXX key. Is there some reason you cannot just return prefs.node(projectDir)? [JG07] findCNBForClass could be made more efficient by copying the simple impl from NbPreferences. [JG08] type="module" in the schema should perhaps be type="module-or-node"? maxOccurs=unbounded" is a syntax error. Actually try validating some files (preferably in the unit test). The names of elements in the schema are very confusing. What does a sample document actually look like? [JG09] Does your Preferences impl fire changes? If so, it should be tested. If not, it should be documented. [JG10] If the project does provide an AC, that should be used for both shared and private settings (passing the appropriate boolean flag to AC methods). I have added a simple SPI interface that allows efficient backstore for the properties and an impl provided by AntProjectHelper. Currently stores the data into nbproject/project.properties and nbproject/private/private.properties, but we may want to introduce something like nbproject/auxiliary.properties and nbproject/private/auxiliary.properties. JG01: fixed JG02: done JG03: done JG04: done JG05: added a note on null value. I do not think it is correct to return non-sharable settings when sharable settings are requested. JG06: prefs.node(projectDir) could cause the path would be too long for the system. Preferences also have a limit on the length of the key and node name. Anyway, I have rewritten this to use FileObject.{g|s}etAttribute. JG07: done (although I did not find a simplier way than to introduce a dep on Module System and loop through installed modules). JG08: fixed the problems in the schema. I am attaching a sample document. Didn't have time to write the test yet, but I will try. JG09: It does - the functionality is provided by java.util.prefs.AbstractPreferences. So the test would actually test whether the j.u.p.AbstractPreferences is implemented correctly. JG10: done (although I understood the comment in #desc5 in such a way that the private properties should go through NbPreferences in all cases). Attaching a new patch. Created attachment 62234 [details] Sample project.xml. Created attachment 62235 [details] Updated patch. 
I would like to ask API review for addition of: org.netbeans.api.project.ProjectUtils.getPreferences(Project, Class, boolean) org.netbeans.spi.project.AuxiliaryProperties org.netbeans.spi.project.support.ant.AntProjectHelper.createAuxiliaryProperties() Regarding JG05: I still think it is better to fall back to nonsharable settings in this case. At least the configuration will work for the current user - better than nothing. And what is the caller code supposed to do, anyway? Check for null, and if null, .... try using nonsharable settings instead. Better to keep the caller code concise and simple. This is consistent with the general API philosophy of Preferences - hope that the backing store is available but do not burden the caller if there is a problem with it, under the reasonable assumption that preferences are not precious and failure to store them will not cripple the application. Regarding JG06: NbPreferences has no such length restrictions. Regarding JG07: hmm, may be inefficient to loop this way. Do you really need to group things by module anyway? Is there anything wrong with using simple package name, as in Preferences.forPackage? [JG11] Module names should be code.name.base, not code-name-base which is used only in filenames. [JG12] Will you use the APH-based implementation in j2seproject and similar? According to the patch, I guess you will - but this contradicts the sample project.xml, doesn't it? [JG13] ProjectUtils.getPreferences should mention its SPI and when it is used. [JG14] What is the purpose of the CleaningWeakReference? Can't you just have a WeakHashMap<Project,AuxiliaryConfigBasedPreferencesProvider>? [JG15] AuxiliaryPropertiesImpl.java (in project.ant) appears to be missing from the patch. Regarding JG05: if we auto-fallback to non-sharable settings, the client code cannot decide what to do (it could, for example, show a warning to the user). Also, I am not sure if the users would consider their setting "not precious". If a non-null value is required, we need to add a new method like isSharablePreferencesSupport(Project). Regarding JG06: NbPreferences has no such length restrictions: only for property names (which is a clear violation of the Preferences contract, and moreover the impl is incorrect). There is still limitation on node name length. Also the path length issues still may occur. Regarding JG12: I plan to use the APH-based impl. for j2seproject and apisupport project. I attached the project.xml as a sample for only illustration. Regarding JG14: such a map would leak. Regarding JG15: Oops, attaching a patch that contains it. Created attachment 62281 [details] Updated patch. JG14: you mean because the value (ACBPP) refers to the key (Project). But your current code will leak, too, since the key is held strongly by HashMap. I might instead suggest using a WeakHashMap as originally suggested but not keeping a project field in ACBPP. (If you need to call saveProject in flush, keep the projectDirectory instead.) JG14: the project should be removed from the map by the CleaningWeakReference (see testReclaimable). To use WeakHashMap, the AuxiliaryConfiguration and AuxiliaryProperties couldn't be stored in the ACBPP (these are allowed to hold the corresponding instance of Project, IMO). So it would be needed to lookup them from the project lookup each time they are accessed (and the returned instances could vary in time). 
JG14: I see your point, but I think it would be accomplished more easily with a simple WeakHashMap<Project,WeakReference<ACBPP>>: if the value gets collected, make a new one, but don't bother to remove unused keys for Project's which still exist. You are trying to make a "biweak" map, which is impossible, so you have to use one of various workarounds. See Java bug #6389107 for discussion. JG05: I have removed the note about null return value - will rewrite the impl. to use ProjectUtils.getAuxiliaryConfiguration once it is available. JG11: if the cnb used dots, these would need to be escaped before they would be written into the properties file (dots are used to separate nodes in the properties files), so I used dashes. Hopefully this is not a big problem. JG13: done JG14: Ok, I used (Weak)HashMap+WeakReference to replace CleaningWeakReference. I am attaching an updated patch and will integrate on Monday, unless there are objections. Created attachment 62467 [details] Updated patch.;node=74a7ac1e7541 Integrated. Don't forget to update www/www/ns (in CVS) with the new schema (auxiliary-configuration-preferences.xsd). This is necessary to permit files using it to be validated. Thanks for reminder: lahvac@lahvac-laptop:/tmp/www/www/ns/auxiliary-configuration-preferences$ cvs commit RCS file: /cvs/www/www/ns/auxiliary-configuration-preferences/1.xsd,v done Checking in 1.xsd; /cvs/www/www/ns/auxiliary-configuration-preferences/1.xsd,v <-- 1.xsd initial revision: 1.1 done Do you mind fixing this also for web/j2ee project types: web.project, j2ee.clientproject, j2ee.earproject and j2ee.ejbjarproject? It would be highly appreciated. Is it actually necessary to add something to a project type? You added helper.createAuxiliaryProperties() to J2SE project but web/j2ee projects seem to work without that as ProjectUtils provide a fallback, right? Use of APH.cAP is preferable as it uses the *.properties storage which is more concise, does not trigger build{,-impl.xml} regeneration, etc. [JG16] Please mention in Project.getLookup Javadoc that an impl of AuxiliaryProperties is encouraged, and in the Javadoc for AP mention APH.cAP as an example implementation. Re. "Use of APH.cAP is preferable" - I fixed that for web/j2ee. Next time if you decide not to fix all project types it would useful to at least file issue so that it is not forgotten and and somebody else can do. Thanks, -David. Integrated into 'main-golden', available in NB_Trunk_Production #267 build Changeset: User: David Konecny <dkonecny@netbeans.org> Log: #134580 - Preferences for Web/J2EE Projects Integrated into 'main-golden', available in NB_Trunk_Production #271 build Changeset: User: Tomas Mysik <tmysik@netbeans.org> Log: Preferences for PHP Projects Related to #134580 - Preferences for Web/J2EE Projects.
https://netbeans.org/bugzilla/show_bug.cgi?id=134580
CC-MAIN-2018-51
refinedweb
2,681
59.6
Description: A beautiful notification library for Vue.js 3 that enables you to create animated notifications in your application, with features like a close button, custom templates and custom styles, and it's easy to integrate into your project. How can I use it? 1. You will need to install the component with npm or yarn. #npm npm install --save @kyvg/vue3-notification #yarn yarn add @kyvg/vue3-notification 2. Add the dependencies in your main.js. Like this: import { createApp } from 'vue' import Notifications from '@kyvg/vue3-notification' const app = createApp({...}) app.use(Notifications) 3. Now, add the global component to your App.vue. Like the following: <notifications /> 4. Now, you will need to write the code that will trigger notifications from your .vue files. #basic method this.$notify("Hello user!"); #with options this.$notify({ title: "Important message", text: "Hello user!", }); 5. Or, if you want to trigger notifications from other files, you can do it like this: import { notify } from "@kyvg/vue3-notification"; notify({ title: "Authorization", text: "You have been logged in!", }); The post A Beautiful Notification Library For Vue.js 3 appeared first on Lipku.com.
https://online-code-generator.com/tag/vue-js-devtools/
CC-MAIN-2021-43
refinedweb
186
52.36
- It is easy to extend with different phrases - It can be simplified, because now you can have multiple replies to the same keyword - It shows how to work with lists - It shows how to "massage" the input from the user via [regsub] TV (jun 2 03) Excellent program idea, I didn't know it comes from such early computer days, I knew it from the trs80. I'm sure it is not all that can be done with tcl, but I thought I'd first let Recursing Eliza happen, and then do a multiple personality leading game in bwise, by having separate state elizas do supervized talking in various network configurations... Maybe after that a distributed version. # eliza.tcl -- # A very basic implementation of the famous Eliza program # (Idea copied from the book Introducing LOGO by Boris Allan) # namespace eval ::Talk { variable keywords [list] variable phrases [list] variable dummies [list] } # response -- # Link a response to a keyword (group multiple responses to # the same keyword) # # Arguments: # keyword Keyword to respond to # phrase The phrase to print # Result: # None # Side effects: # Update of the lists keywords and phrases # proc ::Talk::response { keyword phrase } { variable keywords variable phrases set keyword [string tolower $keyword] set idx [lsearch $keywords $keyword] # # The keyword is new, then add it. # Otherwise only extend the list of responses # if { $idx == -1 } { lappend keywords $keyword lappend phrases [list $phrase] } else { set prev_phrases [lindex $phrases $idx] set new_phrases [concat $prev_phrases [list $phrase]] set phrases [lreplace $phrases $idx $idx $new_phrases] puts $phrases } } # dummy -- # Register dummy phrases (used when no response is suitable) # # Arguments: # phrase The phrase to print # Result: # None # Side effects: # Update of the list dummies # proc ::Talk::dummy { phrase } { variable dummies lappend dummies $phrase } # replyto -- # Reply to the user (based on the given phrase) # # Arguments: # phrase The phrase the user typed in # Result: # None # Side effects: # Update of the lists keywords and phrases # proc ::Talk::replyto { phrase } { variable keywords variable phrases variable dummies regsub -all {[^A-Za-z]} $phrase " " phrase set idx -1 set phrase [string tolower $phrase] foreach word $phrase { set idx [lsearch $keywords $word] if { $idx > -1 } { set responses [lindex $phrases $idx] set which [expr {int([llength $responses]*rand())}] set answer [lindex $responses $which] break } } if { $idx == -1 } { set which [expr {int([llength $dummies]*rand())}] set answer [lindex $dummies $which] } puts $answer } # main code -- # Get the script going: # - Create a little database of responses # - Start the question-answer loop # ::Talk::response computer "Are you worried about machines?" ::Talk::response Death "Is this worry you?" ::Talk::response computers "We are intelligent!" ::Talk::response program "I just love Tcl - I was written in it" ::Talk::response off "No, sorry" ::Talk::response no "Tell me, why not?" ::Talk::response life "Life - do not talk to me about life!" ::Talk::response you "We are considering you, not me" ::Talk::response I "Do you often talk about yourself?" ::Talk::response I "Do you like talking about yourself?" ::Talk::dummy "So ... ?" ::Talk::dummy "Shall we continue?" ::Talk::dummy "What do you want to talk about?" ::Talk::dummy "Anything specific?" ::Talk::dummy "Talk about something more interesting?" # # First version, simple and straightforward # set version 2 if { $version == 1 } { puts "What is your problem? 
(End this conversation with: QUIT)" while { 1 } { gets stdin line if { $line == "QUIT" } { break } else { ::Talk::replyto $line } } } # # Second version, more complicated but with a modern twist :) # if { $version == 2 } { proc oneline {} { global responsive global forever if { $responsive == 1 } { gets stdin line if { $line == "QUIT" } { set forever 1 break } else { ::Talk::replyto $line after 0 oneline } } else { after 1000 oneline } } proc phonecall {} { global responsive puts "Trrriiiing!" set responsive 0 after 300 {puts "Damn"} after 600 {puts "Excuse me"} after 2600 {puts "Hm ...? At the office!"} after 4600 {puts "Yes"} after 5600 {puts "No"} after 6000 {puts "Eh, ..., no"} after 8000 {puts "Okay, bye"} after 8100 {puts "\nNow, where were we?"} after 8250 {set responsive 1} } puts "What is your problem? (End this conversation with: QUIT)" set responsive 1 after [expr {int((10+10*rand())*1000)}] phonecall after 0 oneline vwait forever } See also: Classic Eliza
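For readers who want to compare with another language, here is a minimal Python sketch of the same keyword-lookup idea (my own illustration, not part of the original wiki page). It keeps the keyword-to-responses table and the dummy fallbacks, and mirrors the regsub that strips non-letters, but drops the phone-call event-loop twist.
import random
import re

RESPONSES = {
    "computer": ["Are you worried about machines?"],
    "program": ["I just love Tcl - I was written in it"],
    "i": ["Do you often talk about yourself?",
          "Do you like talking about yourself?"],
}
DUMMIES = ["So ... ?", "Shall we continue?", "Anything specific?"]

def replyto(phrase):
    # keep only letters, lowercase, split into words (mirrors the regsub)
    words = re.sub(r"[^A-Za-z]", " ", phrase).lower().split()
    for word in words:
        if word in RESPONSES:
            return random.choice(RESPONSES[word])
    return random.choice(DUMMIES)

if __name__ == "__main__":
    line = input("What is your problem? (End this conversation with: QUIT) ")
    while line != "QUIT":
        print(replyto(line))
        line = input()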
http://wiki.tcl.tk/9235
CC-MAIN-2017-51
refinedweb
673
51.82
I’m testing SOAP interoperability between a command line ruby script and a Ruby on Rails based web service. I can’t seem to figure out how to get the service to raise an exception that gets propagated back to the client as a SOAP::FaultError, complete with fault code and fault string. Running my client ruby script against an Apache AXIS based Java SOAP Server or NUSOAP based PHP correctly gives me back an error of the form: #<SOAP::FaultError: authentication failed> from which I can then extract fault code: 1001 fault string: authentication failed Running against a Rails based Web service no matter what I raise on the server, it always seems to end up on the client as a #<RuntimeError: authentication failed>. Any suggestions? Here’s the code for my service: def authenticate(login, password, version) if (login == ‘khans’) return [1, ‘Shehryar’, ‘Khan’] elsif raise ‘authentication failed’ end end Here’s the code for my client: require ‘soap/wsdlDriver’ Ruby on Rails URL factory = SOAP::WSDLDriverFactory.new(“ wsdl”) soap = factory.create_rpc_driver begin soap_response = soap.authenticate(“khans”,“bollox”,“1.0”) puts ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" puts "Service.authenticate() succeeded:" puts " id: "+soap_response[0].to_s puts " firstname: "+soap_response[1].to_s puts " lastname: "+soap_response[2].to_s puts ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" rescue => e puts ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" puts "Service.authenticate() failed:" puts " fault code: "+e.faultcode.to_s puts " fault string: "+e.faultstring.to_s puts ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>" end soap.reset_stream
https://www.ruby-forum.com/t/actionwebservice-how-to-raise-soap-faulterror/72965
CC-MAIN-2022-33
refinedweb
226
57.16
Agenda See also: IRC log <trackbot> Date: 09 December 2008Administrative <hlockhar> Call next week <hlockhar> no calls until Jan 6 <hlockhar> Scribe: hlockhar Resolution: Minutes from Dec 2 Approved All drafts updated last week Best Practices also 1.1 editors draft of signature ran a diff against 2nd edition Updated Requirements document Updated Roadmap page on Web cannot be part of 1.1 magnus: will be added to future document cannot extend same namespace Resolution: Produce a separate document for derived keys bruce: where are we going with this, will it be optional? ... seems to make life more complicated rather than less ... getting pushback about why we are doing it? <brich> why push forward on a separate spec for derived keys? where are we going with this? magnus: there is a need for more general capabilities ... available outside of WS-* brich: our users satisfied with WS-* solution <brich> it seems like it would be separate so it can be used in a number of places, but what might they be? <brich> if this is only going to be a 2.0 item, then why separate it out? magnus: The motivation and use cases for this work, and the reasons for why reliance on WS-* derived key methods is not recommended, was presented in July and September and in a posting to this list in September. It was not questioned on those occasions. fhirsch: need a proposal from magnus, can deal with packaging later ... decide later, 1.1 vs. 2.0, optional vs. necessary brich: wanted to raise the concern fhirsch: once we have a proposal we can decide how to deal with it magnus: don't want to work on it if we are not going to do it ... would be ok with it being optional in 1.1 <tlr> tlr: a separate spec would lead to RF Commitments that an optional feature in the base spec wouldn't possible approach would be optional in 1.1 and mandatory in 2.0 Brian was to provide text on DSA with SHA1 <bal> i have an action on me to draft some text for this <bal> my sense of the call from the last meeting was that we should make DSAwithSHA1 <bal> optional for signature generation, recommended for signature verification, and add implementation notes saying something to the effect of "if you expect to interop with xmldsig 1.0 and 1.0 2nd ed you should support dsawithsha1 for verification for interop" Kelvin: we don't have to require HMAC SHA256 Close issue 74 can be closed with no action fhirsch: do we have streaming reqs complete? pdata: need to add more, want to look at it again fhirsch: everybody please comment ... is text on Transforms correct? pdata: reqs section and design section ... reqs are ok, want to flesh out design portion more ... since we are making a breaking change, can make a bigger change ... can do without Transforms entirely ... can get a nonsensical set of Transforms ... have a proposal for a more constrained approach fhirsch: I know Konrad has concerns, but I understand your idea pdata: one problem with transform chain is hard to determine what is signed ... signature occurs in the middle of the chain ... need to declar what is being signed ... also want to identify the type of data being signed <fhirsch3> <fhirsch3> sean follow up pdata: binary types only allow byte range selection, not general transforms ... added types defined in other specs, for example WS-attachements <brich> ...also mentioned SWA - Soap With Attachments pdata: define some actions as text properties sean: said you are not proposing syntax, what is the proposal? pdata: actually this is a limited form of XPath filter 2 ... 
current approach is procedural ... need declarative approach ... suggesting a syntax that does not have transforms <bhill> bhill is on the queue <esimon2> +1 to pdatta fhirsch: like approach, nore controlled bhill: like the declarative approach ... concerned about ability to handle different data types ... harder and harder as more types are defined ... can avoid this by constraining processor to emit text to be hashed <bhill> the multiple parser problem is fundamental <bhill> to say "what is signed" requires the application to recapitulate the logic of the signature processor <bhill> this is difficult to guarantee fidelity even for very simple cases, and becomes more and more so as additional types are added <bhill> I would suggest that rather than implying "what is signed" the approach of having the signature processor provide a cached retrieval of the exact material used to calculate the digest <bhill> and constrain those outputs to either XML nodes or binary <bhill> is the preferred one mullen: is there is large benefit to making the change if there are transforms that are equivalent bhill: can declare a type that uses a known style sheet ... does application try to determine what was signed? <bhill> clarification: <bhill> my issue is with the description as "what is signed" <bhill> this invites the relying application to attempt to make this determination itself <bhill> re-doing the logic the signature processor has just done, possibly inaccurately Action to Pratik to write up more detailed proposal <trackbot> Sorry, couldn't find user - to <bhill> I think the preferred pattern should always look like "cached reference retrieval" in the draft best practices <tlr> ACTION: pratik to write up more detailed proposal in time for workshop [recorded in] <trackbot> Created ACTION-122 - Write up more detailed proposal in time for workshop [on Pratik Datta - due 2008-12-16]. <tlr> ACTION-122: s/workshop/January face-to-face/ <trackbot> ACTION-122 Write up more detailed proposal in time for workshop notes added <bhill> where the relying application always gets the exact nodeset or binary octets that went in to the digester <bhill> and doesn't have to know anything about the syntax and processing rules of XMLDSIG, regardless of whether they be procedural or declarative fhirsch: would like discussion at F2F ... want to address Konrad's concerns also fhirsch: I think we should add Juan Carlos material on long term sigs to Requirements Document Resolution: add Juan Carlos material on long term sigs to Requirements Document <scribe> ACTION: fhirsch to add Juan Carlos material on long term sigs to Requirements Document [recorded in] <trackbot> Sorry, couldn't find user - fhirsch <tlr> ACTION: fhirsch to add Juan Carlos material on long term sigs to Requirements Document [recorded in] <trackbot> Sorry, couldn't find user - fhirsch <tlr> ACTION: frederick to add Juan Carlos material on long term sigs to Requirements Document [recorded in] <trackbot> Created ACTION-123 - Add Juan Carlos material on long term sigs to Requirements Document [on Frederick Hirsch - due 2008-12-16]. <tlr> ISSUE-38 <tlr> ISSUE-38? <trackbot> ISSUE-38 -- Profile for signature processing for non-XML or for constrained XML requirements -- OPEN <trackbot> brich: Pratik's proposal could cover this, perhaps current spec allows it as well <tlr> ISSUE-56? <trackbot> ISSUE-56 -- Add references related to timestamping -- OPEN <trackbot> <tlr> ScribeNick: tlr Hal: Question is whether or not ?? 
actually happened hal, I suggest you take the chair for the moment frederick: being chased away from hotel by police Hal: issue-56, suggest we put this aside since critical parties aren't here ... who's editing? frederick: myself, pratik, ... hal: do you know what is to be put in? If you know, then I suggest action frederick: double check ... need to check what actually needs to be done <hlockhar> ACTION: fredrick to check with Juan Carlos on timestamp references [recorded in] <trackbot> Sorry, couldn't find user - fredrick <scribe> ACTION: frederick to follow up with Juan Carlos on ISSUE-56 [recorded in] <trackbot> Created ACTION-124 - Follow up with Juan Carlos on ISSUE-56 [on Frederick Hirsch - due 2008-12-16]. close ACTION-94 <trackbot> ACTION-94 Provide draft note on new algorithms for 1.1 closed close ACTION-111 <trackbot> ACTION-111 Add default attribute language to Best Practices doc closed close ACTION-116 <trackbot> ACTION-116 Add approved certificate encoding text to drafts closed close ACTION-118 <trackbot> ACTION-118 Add web services text from Hal to Requirements draft closed close ACTION-119 <trackbot> ACTION-119 Add pointer to Transforms note to Requirements draft closed close ACTION-120 <trackbot> ACTION-120 Review SP 800-57 for HMAC-SHA256 item closed frederick: would like to get list down to manageable, small list before face-to-face. Please create material hal: prefer material early! +1 frederick: please review issues list as well ... suggest adjourning Next meeting: next week
http://www.w3.org/2008/12/09-xmlsec-minutes.html
CC-MAIN-2015-48
refinedweb
1,418
57
Tk_CreateItemType, Tk_GetItemTypes - define new kind of canvas item #include <tk.h> Tk_CreateItemType(typePtr) Tk_ItemType * Tk_GetItemTypes() Structure that defines the new type of canvas item. Tk_CreateItemType is invoked to define a new kind of canvas item described by the typePtr argument. An item type corresponds to a particular value of the type argument to the create method methods to create items of the new type. If there already existed an item type by this name then the new item type replaces the old one. argc, char **argv); The interp argument is the interpreter in which the canvas's create method was invoked, and canvas is a handle for the canvas widget. itemPtr is a pointer to a newly-allocated item of size typePtr->itemSize. Tk has already initialized the item's header (the first sizeof(Tk_ItemType) bytes). The argc and argv arguments describe all of the arguments to the create command after the type argument. For example, in the method .c create rectangle 10 20 50 50 -fill black argc will be 6 and argv[0] will contain the string 10. createProc should use argc and arg methods. method is invoked to change the configuration options for a canvas item. This procedure must match the following prototype: typedef int Tk_ItemConfigureProc( Tcl_Interp *interp, Tk_Canvas canvas, Tk_Item *itemPtr, int argc, char **argv, int flags); The interp argument identifies the interpreter in which the method was invoked, canvas is a handle for the canvas widget, and itemPtr is a pointer to the item being configured. argc and argv contain the configuration options. For example, if the following command is invoked: .c itemconfigure 2 -fill red -outline black argc is 4 and argv contains the strings -fill through black. arg method for an item. It must match the following prototype: typedef int Tk_ItemCoordProc( Tcl_Interp *interp, Tk_Canvas canvas, Tk_Item *itemPtr, int argc, char **argv); The arguments interp, canvas, and itemPtr all have the standard meanings, and argc and argv describe the coordinate arguments. For example, if the following method is invoked: .c coords 2 30 90 argc will be 2 and argv will contain the string values. typePtr->postscriptProc is invoked by Tk to generate Postcript for an item during the postscript method. interp-_Canv. typePtr->scaleProc is invoked by Tk to rescale a canvas item during the scale method.' and y', where x' = originX + scaleX*(x-originX) y' = originY + scaleY*(y-originY) scaleProc must also update the bounding box in the item's header. typePtr->translateProc is invoked by Tk to translate a canvas item during the move method.. typePtr->indexProc is invoked by Tk to translate a string index specification into a numerical index, for example during the index method.. typePtr->icursorProc is invoked by Tk during the icursor methodIndex aren't offset+maxBytes bytes in the selection. typePtr->insertProc is invoked by Tk during the insert method);. typePtr->dCharsProc is invoked by Tk during the dchars method. Tk::CanvPsY, Tk::CanvTxtInfo, Tk::CanvTkwin canvas, focus, item type, selection, type manager
http://search.cpan.org/~srezic/Tk/pod/pTk/CrtItemType.pod
CC-MAIN-2017-17
refinedweb
497
55.54
Rich Rich is a Python library for rich text and beautiful formatting in the terminal. The Rich API makes it easy to add colored text (up to 16.7 million colors) and styles (bold, italic, underline etc.) to your script or application. Rich can also render pretty tables, markdown and source code with syntax highlighting. Installing Install with pip or your favorite PyPI package manager. pip install rich Rich print function To effortlessly add rich output to your application, you can import the rich print method, which has the same signature as the builtin Python function. Try this: from rich import print print("Hello, [bold magenta]World[/bold magenta]!", ":vampire:", locals()) Console Printing For more control over rich terminal content, import and construct a Console object. from rich.console import Console console = Console() Most applications will require one Console instance. The easiest way to manage your console would be to construct an instance at the module level and import it where needed. The Console object has a print method with an interface similar to the builtin print function, for example: console.print("Hello", "World!") As you might expect, this will print "Hello World!" to the terminal. Note that unlike the builtin print function, Rich will word-wrap your text to fit within the terminal width. There are a few ways of adding color and style to your output. You can set a style for the entire output by adding a style keyword argument. Here's an example: console.print("Hello", "World!", style="bold red") The output will be "Hello World!" rendered in bold red. Console Logging The Console object also has a log() method with a similar interface to print(), which additionally renders a column for the current time and the file and line that made the call. Note the log_locals argument, which outputs a table containing the local variables where the log method was called. The log method could be used for logging to the terminal for long running applications such as servers, but is also a very nice debugging aid. Tables Rich can render flexible tables with unicode box characters. There is a large variety of formatting options for borders, styles, cell alignment etc. Here's a simple example: from rich.console import Console from rich.table import Column, Table console = Console() table = Table(show_header=True, header_style="bold magenta") table.add_column("Date", style="dim", width=12) table.add_column("Title") table.add_column("Production Budget", justify="right") table.add_column("Box Office", justify="right") table.add_row( "Dec 20, 2019", "Star Wars: The Rise of Skywalker", "$275,000,000", "$375,126,118" ) table.add_row( "May 25, 2018", "[red]Solo[/red]: A Star Wars Story", "$275,000,000", "$393,151,347", ) table.add_row( "Dec 15, 2017", "Star Wars Ep. VIII: The Last Jedi", "$262,000,000", "[bold]$1,332,539,889[/bold]", ) console.print(table) This produces the table rendered with unicode box characters in the terminal. Note that console markup is rendered in the same way as in print() and log().
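The excerpt above stops at tables, but Rich can also render Markdown and syntax-highlighted source code. The following is a small illustrative snippet (not taken from the README) using the rich.markdown and rich.syntax modules that ship with current Rich releases:
from rich.console import Console
from rich.markdown import Markdown
from rich.syntax import Syntax

console = Console()

# Render a Markdown string with a heading, emphasis and a bullet list
console.print(Markdown(
    "# Hello\n\nRich renders *Markdown* in the terminal:\n\n- tables\n- markdown\n- syntax highlighting"
))

# Pretty-print a piece of source code with highlighting and line numbers
code = 'def greet(name):\n    return f"Hello, {name}!"\n'
console.print(Syntax(code, "python", theme="monokai", line_numbers=True))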
https://pythonawesome.com/a-python-library-for-rich-text-and-beautiful-formatting-in-the-terminal/
CC-MAIN-2020-50
refinedweb
424
60.01
Windows. Web. Windows. Http. Filters Web. Windows. Http. Filters Web. Windows. Http. Filters Web. Windows. Http. Filters Web. Namespace Http. Filters Classes Interfaces Enums Remarks The Windows.Web.Http namespace and the related Windows.Web.Http.Headers and Windows.Web.Http.Filters namespaces provides an HTTP programming interface for UWP app that want to connect to HTTP services. The HTTP API provides consistent support in JavaScript, C#, VB.NET, and C++ for developers. The API also supports adding custom filters on requests and responses. Filters can simplify handling more complex network issues. The API also lets an app control read and write caching behavior. Classes in the Windows.Web.Http namespace support the use of filters based on the classes in the Windows.Web.Http.Filters namespace. Classes in the Windows.Web.Http.Filters namespace also let an app control read and write caching behavior on the client. The HttpCacheDirectiveHeaderValueCollection in the Windows.Web.Http.Headers provides a collection container for instances of the cache directives in Cache-Control HTTP header on HTTP content associated with an HTTP request or response. The Cache-Control header lets an app have more control over caching behavior. Classes in the Windows.Web.Http.Headers namespace represent HTTP headers as defined in RFC 2616 by the IETF. The HttpClient class in the Windows.Web.Http namespace provides a base class for sending HTTP requests and receiving HTTP responses. Filters provide a handler mechanism to help with common HTTP service issues. Simple filters can be chained together in a sequence to handle more complex HTTP service issues. An HTTP filter is a black box which takes an HTTP request message and produces an HTTP response message. How the filter gets an HTTP response is determined by the filter developer. Filters commonly add or change headers, handle authorization (possibly using the WebAuthenticationBroker ), or allow or disallow a request based on network conditions. Filters usually make changes and then pass the request to an inner filter, which is typically set when a filter is constructed. This approach allows a filter to be only responsible for a specific aspect of an HTTP request execution and delegate other aspects to other filters. A filter can implement caching, authentication, redirects, cookies, actual communication with the server, or a combination of these. The bottom-most (base) filter will usually be the one that does actual communication with the network. Many apps that use HTTP for network access often need to respond to a variety of conditions. Some common examples include: - Network retry (with back-off). - Adapting to metered networks (when a network connection is roaming, for example). - Authentication to social network sites. - Logging and telemetry. It’s not hard to handle any of these network issues by themselves. This becomes a complex problem when several of these issues are combined. Support for custom filters in the Windows.Web.Http.Filters namespace enables developers to create modular code for simple filters. A series of simple filters can be linked into a chain of filters to handle complex combinations of issues. The developer chooses everything about the filter chain: - The filters to add. - The order to place them in the filter chain. HttpClient is the main class used to send and receive requests over HTTP. HttpBaseProtocolFilter is what HttpClient uses to send and receive data. 
So HttpBaseProtocolFilter is typically the bottom of any custom filter chain. HttpBaseProtocolFilter can also be used to control caching and other behavior of the HTTP connection. Each HttpClient instance can have a different filter chain or pipeline. To write a custom filter, an app implements a custom version of the IHttpFilter interface. The code to implement the filter behavior is in the IHttpFilter.SendRequestAsync method. Filters can be written in C#/VB.NET or C++. Filters can be called and used from any language supported for UWP app including JavaScript. The sample code shows a filter to add a custom header to HTTP requests and responses. public class PlugInFilter : IHttpFilter { private IHttpFilter innerFilter; public PlugInFilter(IHttpFilter innerFilter) { if (innerFilter == null) { throw new ArgumentException("innerFilter cannot be null."); } this.innerFilter = innerFilter; } a filter, an app uses the HttpClient(IHttpFilter) constructor passing the interface of the filter to use for the HttpClient instance. To set up the filter chain, the new filter is linked to a previous filter and to HttpBaseProtocolFilter at the bottom. The sample code shows creating an HttpClient to use a custom filter. internal static void CreateHttpClient(ref HttpClient httpClient) { if (httpClient != null) { httpClient.Dispose(); } // HttpClient can be extended by plugging multiple filters together, // providing HttpClient with the configured filter pipeline. var baseFilter = new HttpBaseProtocolFilter(); // Adds a custom header to every request and response message. var myFilter = new PlugInFilter(baseFilter); httpClient = new HttpClient(myFilter); } See also Feedback Send feedback about:
https://docs.microsoft.com/en-us/uwp/api/windows.web.http.filters
CC-MAIN-2019-22
refinedweb
793
50.33
The i is the index where the list is seperated between yet-to-be-selected and selected elements. At each iteration the working length i of the list is decreased. One element from the current list is picked up uniformly randomly and then placed at the end of the list, by the swap and list length decrease operation. For example. Let us assume we have a list of arr[] = {1, 2, 3, 4, 5, 6} . In the illustration below the | indicates the current end of the working list. arr[] = {1, 2, 3, 4, 5, 6}, n = 6; The brackets below indicate the elements to swap 1, 2, [3], 4, 5, [6] | i=5, r = rand (0, 5) = 2, swap (arr[i], arr[r]), i-- 1, 2, 6, [4], [5]| 3 i=4, r = rand (0, 4) = 3, swap (arr[i], arr[r]), i-- [1], 2, 6, [5]| 4, 3 i=3, r = rand (0, 3) = 0, swap (arr[i], arr[r]), i-- 5, [2], [6]| 1, 4, 3 i=2, r = rand (0, 2) = 1, swap (arr[i], arr[r]), i-- [5], [6]| 2, 1, 4, 3 i=1, r = rand (0, 1) = 0, swap (arr[i], arr[r]), i-- 6 | 5 , 2, 1, 4, 3 i=0 terminate Shuffled list: arr[] = {6, 5, 2, 1, 4, 3}; In the above example we have an initial list. At any given step the algorithm generates a random number in range [0,i], where i is the current working length of list. The working length of the list indicates, how many elements of the original lists are yet to be selected to generate the shuffled list. Therefore the array range [0,i] indicates the yet to be shuffled list, and [i+1,n-1] indicates the shuffled list. At the last step when there is only one element, there is only one place where we can put it, and no swap is required, and the algorithm terminates. For example, when i=2 we have list [0,2] yet to be shuffled into the list and the shuffled list is in [3,5]. Basic implementation Let’s give the initial quick implementation for an shuffling for an integer array. #include <stdio.h> #include <stdlib.h> #include <time.h> void swap (int *a, int *b) { int temp = *a; *a = *b; *b = temp; } void fy_shuffle (int *arr, int n) { int r; srand (time (NULL)); while (n) { r = rand () % n; swap (arr + (n - 1), arr + r); n--; } } int main (int argc, char *argv[]) { int *arr; int n, i; if (argc != 2) { return 0; } n = atoi (argv[1]); if ((n <= 0) || (n > 100)) return 0; /* Limit stuffs */ arr = malloc (sizeof (int) * n); for (i=0; i<n; i++) { arr[i] = i; } printf ("before: "); for (i=0; i<n; i++) printf ("%d ", arr[i]); printf ("\n"); fy_shuffle (arr, n); printf ("after : "); for (i=0; i<n; i++) printf ("%d ", arr[i]); printf ("\n"); free (arr); return 0; } The function fy_shuffle is the implementation of the Fisher-Yates shuffle. Provide the length of the list within range [1,100], and the main function generates an identity array which it shuffles by calling fy_shuffle. Generic implementation This function fy_shuffle with integer only. It would have been better if we could use this function like the functions qsort or bsearch which can work with any type of structures. To make the fy_shuffle work like that, we can pass the array as a void * but the problem will be the type of elements will be unknown in the swap function. For exmaple if we want to shuffle an array of structures, then we need to swap structures, and once it is void * and we do not know the type of structure. The only problem here is the swap step, were we need to swap element by element, and the element will depend on the size of the type of the elements of the array. The simple answer is to block swap bytes. 
Let us assume that the array has a base address base with n elements and the size of each element is s then when we indicate the ith element using ((char *)base + (i * s)). Here we access the base address of the array byte by byte and because we know the width of the type, s, we can use jump multiples of s bytes to jump to the next element base address. I will use this idea to implement a generic block swap and incorporate it into the generic Fisher-Yates function as follows. void shuffle (void *base, size_t nel, size_t width) { int r; char *temp; temp = malloc (sizeof (width)); srand (time (NULL)); while (nel) { r = rand () % nel; /* Block swap */ memcpy (temp, (char *) base + r * width, width); memcpy ((char *) base + r * width, (char *) base + (nel - 1) * width, width); memcpy ((char *) base + (nel - 1) * width, temp, width); nel--; } free (temp); } Above highlighted lines are the generic block swap. Let me explain the memcpys a bit. Because we know the base address, number of elements of the specific type and the width of the type, we do not need to know the actual type to do the swap operation. The first memcpy will copy width number of bytes from base + r * width to temp. Here base + r * width points to the base address of the rth element in the list of whatever type. Similarly the second memcpy transfers width number of bytes into the rth element of the list from the last element of the current working list, which is indexed with (nel-1). The last memcpy transfers the temp to the (nel-1)th element. Basically the memcpy works as a block assignment operator. As we know the width of the type, we don’t care about the type itself. The invocation will be as follows. shuffle (arr, n, sizeof (int)); I have a small example for this with a structure as follows. struct my_struct { char a, b, c, d, e; }; int main (void) { int arr[] = {1, 2, 3, 4, 5, 6}; int n = 6, i; struct my_struct obj[6] = { {'a', 'b', 'c', 'd', 'e'}, {'f', 'g', 'h', 'i', 'j'}, {'k', 'l', 'm', 'n', 'o'}, {'p', 'q', 'r', 's', 't'}, {'u', 'v', 'w', 'x', 'y'}, {'z', '1', '2', '3', '4'} }; /* Test with integer shuffling */ for (i=0; i<n; i++) printf ("%d ", arr[i]); printf ("\n"); shuffle (arr, n, sizeof (int)); for (i=0; i<n; i++) printf ("%d ", arr[i]); printf ("\n"); /* Shuffle structure array */ for (i=0; i<n; i++) { printf ("%c %c %c %c %c\n", obj[i].a, obj[i].b, obj[i].c, obj[i].d, obj[i].e); } printf ("\n"); shuffle (obj, n, sizeof (struct my_struct)); for (i=0; i<n; i++) { printf ("%c %c %c %c %c\n", obj[i].a, obj[i].b, obj[i].c, obj[i].d, obj[i].e); } return 0; } Use the above code with the shuffle function to see the results. The output for the structures should have the structure elements in shuffled order and the elements should not be mixed. Complexity The shuffle is done in-place, therefore the growth of required space is constand with respect to the number of elements in the list. As the array is traversed only once, and each element is processed once, the growth of the time is O(n), therefore this algorithm works in linear time with respect to the number of elements in the list. On Linked list This can be extended on linked list, but as we need to seek to the rth, and in linked list we need to start from the head of the list, the time complexity will not be O(n) anymore. For linked list in the worst case, at each step the last element of the list can be selected for which we have (n*(n+1))/2> node visits in total, which makes the worst case O(n^2). 
If we expect in an average at each iteration the random number generated will be (i-0)/2, then also the upper-bound growth will be the same. Portability and Issues Are we introducing any kind of portability issues? I do not think so. This is because the code is not dependent on the byte ordering and/or the padding inside the structures (if any). The memcpy is done using the bytes as is, and the internal structures of the bytes are not being accessed or modified in any way. Because the starting address of each element in an given array and index is guranteed, therefore this code will work always. One obvious issue is with the srand initialization, if the fy_shuffle or the shuffle is called more than once in within one second, such that time (NULL) returns the same value, then the multiple calls of fy_shuffle or shuffle on the same array will result in identical results. To overcome this we can get a better seed generator, maybe pick a seed from /dev/random, or use the gettimeofday function and use microseconds value to initialize the random number generator. Although this will stop it from being portable from *nix and Windows. If you think there is something I have missed here, let me know in the comments.
https://phoxis.org/2015/04/21/a-generic-fisher-yates-shuffle/
CC-MAIN-2022-05
refinedweb
1,536
70.26
Semantic markup report: Microformats, RDFa, GRDDL, Microdata and OGP Contents Foreword PART 1 1. Introduction 2. Overview of Different Standards 2.1 Microformat 2.1.1 Microformats and RDF 2.2 RDFa 2.2.1 RDFa and RDF 2.3 GRDDL 2.4 Microdata 3. Examples 3.1 Microformat 3.2 RDFa 3.3 GRDDL 3.3.1 Using GRDDL with Microformat 3.3.2 Using GRDDL with RDFa 3.4 Microdata 3.4.1 Using Microdata with Microformat 3.4.2 Common Microdata Formats 4. Overview of Facebook's Vocabulary 4.1 Open Graph Protocol (OGP) 4.1.1 OGP and RDF 4.2 Using Open Graph Protocol 5. Implementation 5.1 Semantic Markup in Drupal 5.2 Open Graph Protocol in Drupal 5.3 Semantic Markup and EPiServer PART 2 6. Semantic Support to Existing Pages 6.1 Advantage of Existing Semantic Add Tools 6.2 Sample Vocabularies 6.3 Examples 6.3.1 Dickens Restaurant 6.3.2 Tourist Information 6.3.3 Concert with Dyvekes Viseklubb 6.3.4 Augustin Hotel 7. Summary References Foreword This report has been prepared by Vestlandsforsking as a part of NCE Tourism Fjord Norway initiatives towards the use of semantic annotations in FjordNett web pages. The report addresses semantic markup technologies that have emerged over the last few years to bridge the vast, existing "clickable" Web and the Semantic Web. These markups allow authors to embed extra information within (X)HTML to mark up the structure, not just the visual presentation, of the information they publish. In this report, major approaches are explained, exploring their strengths and weaknesses, providing examples, and touching on future considerations for FjordNett. This work was undertaken with the financial assistance of the NCE Tourism Fjord Norway. We would like to thank Marcel Niederhauser and NCE Tourism for suggesting this project. We hope that this report will be a base for NCE Tourism in realising semantically annotated Tourism websites. This report is written by Rajendra Akerkar (Vestlandsforsking). PART 1 1. Introduction The tourism domain can especially benefit from sophisticated e-Commerce solutions and Semantic Web technology, due to the significant heterogeneity of the market and information sources, and due to the high volume in online transactions. The Semantic Web aims at making the wealth of information that is available on the Web accessible to more precise search and automated information extraction and processing, based on a machine-readable representation of meaning in the form of ontologies (shared vocabulary). The core components of Semantic Web technology are 1. XML as a generic serialization syntax with mature tool support, 2. Resource Description Framework (RDF) as a data model for the representation of Semantic Networks in a distributed fashion, 3. Ontology languages like RDF-S, OWL, and WSML, for the representation of a domain of discourse, 4. Vocabularies, like e.g. WordNet, Cyc, TOVE, Dublin Core, FOAF, or Harmonise, and 5. Data, which can be regarded part of the vocabulary or not, depending on the respective research community. Semantic technologies comprise several technological standards, where Semantic Web (a W3C standard) and Topic Maps (an ISO standard) are the most used and best known. Semantic Web is the leading standard worldwide, but in Norway Topic Maps has also gotten a strong hold and is used for instance as a semantic enhancement of VisitNorway.com. This report will only focus on semantic markup technologies, which means different ways of including semantic information in the source code (the (X)HTML code). The process of including semantic information (= metadata) is called annotation, and means adding machine-readable information to existing content. This way the technology can support a multiplicity of applications, e.g. more precise discovery or adaptation of content. In Tim Berners-Lee's original vision, the entities and relationships between them would be described using RDF. RDF uses an abstract model to decompose information/knowledge into small pieces, with some simple rules about the semantics (meaning) of each one of these pieces. The goal is to provide a general method that is simple and flexible enough to express any fact, yet structured enough that computer applications can operate with the expressed knowledge. This abstract model has the following key components: • statement (formally called a triple) • subject and object resources • predicate Therefore, an RDF statement must have the following format: subject predicate object where the subject and object are names for two things in the world, with the predicate being the name of a relation (this relation is also sometimes called a property) that connects these two things. The idea behind RDF is to give us a simple way to make statements about things on the web and have machines understand us. Let's look at an example. Let's say we want to express that the website at was created by Bergen Tourist Board. In this case we have the following 3 things that comprise our assertion: The subject: The predicate or property: creator The object or value: Bergen Tourist Board Now to put this into RDF syntax, we do the following: <?xml version="1.0"?> <rdf:RDF xmlns:rdf="" xmlns: <rdf:Description rdf: <dc:creator>Bergen Tourist Board</dc:creator> </rdf:Description> </rdf:RDF> Here, <rdf:RDF> is the root element of an RDF document. It defines the XML document to be an RDF document. It also contains a reference to the RDF namespace. The <rdf:Description> element contains elements that describe the resource (subject). <dc:creator> is an element from the Dublin Core vocabulary that represents the creator of a given document. There are three main uses of RDF • to share simple factual data directly in the Web • as metadata, to describe other useful information • semantic annotations Tagging is about attaching names, attributes, comments, descriptions to a document or to a selected part in a text. It provides additional information (metadata) about an existing piece of data. Compared to tagging, which speeds up searching and helps you find relevant and precise information, Semantic Annotation (Markup) goes one level deeper: • It enriches the unstructured or semi-structured data with a context that is further linked to the structured knowledge of a domain. • It allows results that are not explicitly related to the original search. Therefore, if tagging is about promptly finding the most relevant result, semantic annotation adds diversity and richness to the process. Semantic annotation is essentially a meaningful way to describe the structure and appearance of a particular document. Semantic Annotation helps to bridge the ambiguity of the natural language when expressing notions and their computational representation in a formal language.
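To show the triple model in running code (an illustration added here, not part of the report), the sketch below rebuilds the Dublin Core example with the Python rdflib package and prints the resulting subject/predicate/object triple. The rdf and dc namespace URIs and the example.com subject URI are filled in by me, since the original attribute values were lost in the extract above.
from rdflib import Graph

# Reconstructed RDF/XML; the namespaces and the subject URI are assumptions
# standing in for the attribute values stripped from the report's example.
data = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://www.example.com/bergen">
    <dc:creator>Bergen Tourist Board</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=data, format="xml")
for subj, pred, obj in g:
    print(subj, pred, obj)
# -> http://www.example.com/bergen http://purl.org/dc/elements/1.1/creator Bergen Tourist Board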
This report will only focus on semantic markup technologies, which means different ways of including semantic information in the source code (the (X)HTML code). The process of including semantic information (= metadata) is called annotation, and means adding machine-readable information to existing content. This way the technology can support a multiplicity of applications, e.g. more precise discovery or adaptation of content.

In Tim Berners-Lee's original vision, the entities and relationships between them would be described using RDF. RDF uses an abstract model to decompose information/knowledge into small pieces, with some simple rules about the semantics (meaning) of each one of these pieces. The goal is to provide a general method that is simple and flexible enough to express any fact, yet structured enough that computer applications can operate with the expressed knowledge. This abstract model has the following key components:

• statement (formally called a triple)
• subject and object resources
• predicate

Therefore, an RDF statement must have the following format:

subject predicate object

where the subject and object are names for two things in the world, with the predicate being the name of a relation (this relation is also sometimes called a property) that connects these two things. The idea behind RDF is to give us a simple way to make statements about things on the web and have machines understand us.

Let's look at an example. Let's say we want to express that a certain website was created by Bergen Tourist Board. In this case we have the following 3 things that comprise our assertion:

The subject: the URL of the website
The predicate or property: creator
The object or value: Bergen Tourist Board

Now to put this into RDF syntax, we do the following:

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:
    <dc:creator>Bergen Tourist Board</dc:creator>
  </rdf:Description>
</rdf:RDF>

Here, <rdf:RDF> is the root element of an RDF document. It defines the XML document to be an RDF document. It also contains a reference to the RDF namespace. The <rdf:Description> element contains elements that describe the resource (subject), identified by its rdf:about attribute. <dc:creator> is an element from the Dublin Core vocabulary that represents the creator of a given document.

There are three main uses of RDF:

• to share simple factual data directly in the Web
• as metadata, to describe other useful information
• semantic annotations

Tagging is about attaching names, attributes, comments and descriptions to a document or to a selected part in a text. It provides additional information (metadata) about an existing piece of data.

Compared to tagging, which speeds up searching and helps you find relevant and precise information, Semantic Annotation (Markup) goes one level deeper:

• It enriches the unstructured or semi-structured data with a context that is further linked to the structured knowledge of a domain.
• It allows results that are not explicitly related to the original search.

Therefore, if tagging is about promptly finding the most relevant result, semantic annotation adds diversity and richness to the process. Semantic annotation is essentially a meaningful way to describe the structure and appearance of a particular document. Semantic Annotation helps to bridge the ambiguity of the natural language when expressing notions and their computational representation in a formal language.
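To make the triple structure of the example above more concrete, the same statement can also be written in the Turtle serialization of RDF. This is only an illustrative sketch: the subject URI below is a placeholder for the tourist board's website, while dc is the standard Dublin Core namespace already used above.

@prefix dc: <http://purl.org/dc/elements/1.1/> .

# placeholder URI standing in for the website being described
<http://www.example.org/> dc:creator "Bergen Tourist Board" .

Whichever serialization is chosen, the underlying data is the same subject-predicate-object triple; the markup formats discussed in the rest of this report are simply different ways of embedding such triples directly inside Web pages.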
By telling a computer how data items are related and how these relations can be evaluated automatically, it becomes possible to process complex filter and search operations.

To finish the markup process, we have to first create a collection of RDF statements to describe the meaning of a Web document, then put them into a separate file, and finally, we have to somehow link the original Web document to this RDF file. A simpler way of doing all these operations is described in this document.

A simpler approach called Microformats was developed by Tantek Celik, Chris Messina and others [microformats.org/]. Unlike RDF, Microformats rely on existing (X)HTML standards and leverage CSS classes to mark up the content. Critically, Microformats don't add any additional information to the page, but just annotate the data that is already on the page. Microformats enjoyed support and wider adoption because of their relative simplicity and focus on marking up the existing content. But there are still issues. First, the number of supported entities is limited; the focus has been on marking organizations, people and events, and then reviews, but there is no way to mark up, for example, a movie or a book or a song. Second, Microformats are somewhat cryptic and hard to read. There is cleverness involved in figuring out how to do the markup, which isn't necessarily a good thing.

In 2005, inspired by Microformats, Ian Davis, now CTO of Talis, developed eRDF -- a syntax within HTML for expressing a simplified version of RDF. His approach combined the canonical concepts of RDF and the idea from Microformats that the data is already on the page. Interestingly, an iteration of Ian's work, called RDFa, has been adopted as a W3C standard. All the signs point in the direction of RDFa being the solution of choice for describing entities inside HTML pages. The microformats or RDFa constructs can be directly embedded into XHTML to convey the meaning of the document itself, instead of collecting them into separate documents.

Until recently, despite the progress in the markups, adoption was hindered by the fact that publishers lacked the incentive to annotate the pages. What is the point if there are no applications that can take advantage of it? Luckily, in 2009 both Yahoo and Google put their muscle behind marking up pages. First, Yahoo developed an elegant search application called SearchMonkey. SearchMonkey is an open platform for using structured data to build more useful and relevant search results. This app encouraged and enabled sites to take control over how Yahoo's search engine presented the results. The solution was based on both markup on the page and a developer plugin, which gave the publishers control over presenting the results to the user. Later, Google announced Rich Snippets. This supported both Microformats and RDFa markup and enabled webmasters to control how their search results are presented.

Figure 1.1: How SearchMonkey works

The main advantage of semantic markup is that it allows search engines to present richer results. As two such examples, we did searches for two movies on the Google and Yahoo! search engines, both returning rich results.

Figure 1.2: Example of Google's Rich Snippets when searching for The Guns of Navarone

Figure 1.3: Example of Yahoo!'s SearchMonkey application

The idea behind Rich Snippets is quite straightforward: it is created by using the structured data embedded in Web pages, and the structured data are added by Web page authors.
More specifically, the crawler still works as usual, i.e., traveling from page to page and downloading the page content along its way. However, the indexer's work has changed quite a bit: when indexing a given page, it also looks for the markup formats Google supports. Once some embedded markups are found, they will be collected and used to generate the Rich Snippets.

2 Overview of Different Standards

2.1 Microformats

To put it simply, microformats are a way to embed specific semantic data into the HTML content that we have today, so when a given application accesses this content, it will be able to tell what this content is about. We are familiar with HTML pages that represent people, so let us start from here. Let us say we would like to use microformats to add some semantic data about people. To do so, we need the so-called hCard microformat, which offers a group of constructs you can use to mark up the content:

• a root class called vcard;
• a collection of properties, such as fn (formatted name) and n (name), and quite a few others.

hCard microformat can be used to mark up page content where a person is described. In fact, hCard microformat not only is used for people, but can also be used to mark up content about companies, organizations and places, as we will see in the next section. Now, what if we would like to mark up some other content? For instance, some event described in a Web document. In this case, we will need to use the hCalendar microformat, which also provides a group of constructs we can use to mark up the related content:

• a root class called vcalendar;
• a collection of properties, such as dtstart, summary, location, and quite a few others.

By the same token, if we would like to mark up page content that contains a cooking recipe, we then need to use the hRecipe microformat.

We can define microformats as follows: Microformats are a collection of individual microformats, with each one of them representing a specific domain (such as person, event, location) that can be described by a Web content page. Each one of these microformats provides a way of adding semantic markups to these Web pages, so that the added information can be extracted and processed by software applications.

With this definition in mind, it is understandable that the microformats collection is always growing: there are existing microformats that cover a number of domains, and for the domains that have not been covered yet, new microformats are created to cover them. For example, hCard microformat and hCalendar microformat are stable microformats; hResume microformat and hRecipe microformat are still in draft states. In fact, there is a microformats community that is actively working on new microformats.

Finally, note that microformats are not a W3C standard or recommendation. They are offered by an open community and are open standards originally licensed under Creative Commons Attribution. They have been placed into the public domain since 29 December 2007.

2.1.1 Microformats and RDF

In this section, we will first summarize the benefits offered by microformats, and more importantly, we will also take a look at the relationship between microformats and RDF.

What Is Special About Microformats?

Microformats do not require any new standards; instead, they leverage existing standards.
For example, microformats reuse HTML tags as much as possible, since almost all the HTML tags allow class attributes to be used. Second, the learning curve is minimal for content publishers. They continue to mark up their Web documents as they normally would. The only difference is that they are now invited to make their documents more semantically rich by using class attributes with standardized property values, such as those from hCard microformat. Third, the added semantic markup has no impact on the document's presentation, if it is done right. Lastly, and perhaps most importantly, this small change in the markup process does bring a significant change to the whole Web world. The added semantic richness can be utilized by different applications, since applications can start to understand at least part of the documents on the Web now.

Microformats and RDF

Obviously, the primary advantage microformats offer over RDF is the fact that we can embed metadata directly in the XHTML documents. This not only reduces the amount of markup we need to write, but also provides one single content page for both human readers and machines. The other advantage of microformats is that they have a simple and intuitive syntax. However, microformats were not designed to cover the same scope as RDF was, and they simply do not work on the same exact level. To be more specific, the following are offered by RDF, but not by microformats:

• RDF does not depend on pre-defined "formats," and it has the ability to utilize, share, and even create any number of vocabularies.
• With the help of these vocabularies, RDF statements can participate in reasoning processes and new facts can be discovered by machines.
• Resources in RDF statements are represented as URIs, allowing a Linked Data Web to be created.
• RDF itself is infinitely extensible and open-ended and hence much more flexible.

2.2 RDFa

RDFa is quite simple to understand: it is just another way to directly add semantic data into XHTML pages. Unlike microformats, which reuse the existing class attribute on most HTML tags, RDFa provides a set of new attributes that can be used to carry the added markup data. Therefore, in order to use RDFa to embed the markup data within the Web documents, some attribute-level extensions to XHTML have to be made. In fact, this is also the reason for the name: RDFa means RDF in HTML attributes.

Note that unlike microformats, RDFa is a W3C standard. More specifically, it became a W3C standard on 14 October 2008. Based on this standard document, RDFa is officially defined as follows:

RDFa is a specification for attributes to express structured data in any markup language.

Another W3C RDFa document, RDFa for HTML Authors, has provided the following definition of RDFa:

RDFa is a thin layer of markup you can add to your web pages that makes them understandable for machines as well as people. By adding it, browsers, search engines, and other software can understand more about the pages, and in so doing offer more services or better results for the user.

2.2.1 RDFa and RDF

What Is Special About RDFa?

The benefits offered by microformats are all still true for RDFa, and we can add one more here: RDFa is useful because microformats only exist as a collection of centralized vocabularies. More specifically, what if we want to mark up a Web page about a resource for which there is no microformat available to use?
In that case, RDFa is always a better choice, since you can in fact use any vocabulary for your RDFa markup.

RDFa and RDF

To put it simply, RDFa is just a way of expressing RDF triples inside given XHTML pages. However, RDFa does make it much easier for people to express semantic information in conjunction with a normal Web page. For instance, while there are many ways to express RDF (such as in serialized XML files that live next to standard Web pages), RDFa helps machines and humans read exactly the same content. This is one of the major motivations for the creation of RDFa. Having an HTML representation and a separate RDF/XML representation (or N3 and Turtle, etc.) is still a good solution for many cases, where HTTP content negotiation is often used to decide which format should be returned to the client.

2.3 GRDDL

GRDDL (Gleaning Resource Descriptions from Dialects of Languages) is a way (a markup format, to be more precise) that enables users to obtain RDF triples out of XML documents (called XML dialects), in particular XHTML documents. The following GRDDL terminologies are important for us to understand GRDDL (pronounced "griddl"):

• GRDDL-aware agent: a software agent that is able to recognize the GRDDL transformations and run these transformations to extract RDF.
• GRDDL transformation: an algorithm for getting RDF from a source document.

GRDDL became a W3C Recommendation on 11 September 2007. In this standard document, GRDDL is defined as the following:

GRDDL is a mechanism for Gleaning Resource Descriptions from Dialects of Languages. The GRDDL specification introduces markup based on existing standards for declaring that an XML document includes data compatible with RDF and for linking to algorithms (typically represented in XSLT) for extracting this data from the document.

2.4 Microdata

Microdata is the most recent competitor. Apart from the new semantic elements, HTML5 introduces Microdata – a way of annotating web pages with semantic metadata using just DOM attributes, rather than separate XML documents. Microdata annotates the DOM with scoped name/value pairs from custom vocabularies. Anyone can define a microdata vocabulary and start embedding custom properties in their own web pages. Defining your own microdata vocabulary is easy. First, you need a namespace, which is just a URL. The namespace URL could actually point to a working web page, although that's not strictly required.

What Is Special About Microdata?

Microdata looks very similar to microformats, but the new item and itemprop attributes are used instead of class. The subject attribute can be used to avoid the "common ancestor" problem we have with microformats, by simply referring to the item element by id.

Microdata and RDFa

Microdata cannot express two things that RDFa supports: datatypes of literals, and XML literals. As part of the ongoing discussion about how to reconcile RDFa and microdata, Nathan Rixham has put together a suggested Microdata RDFa Merge that brings together parts of microdata and parts of RDFa, creating a completely new set of attributes, but a parsing model that more or less follows microdata's.

3. Examples

This section is devoted to a closer look at how to use the markups discussed earlier in a given web document.

3.1 Microformats

We will focus on hCard microformat in this section. hCard microformat is considered to be one of the most popular and well-established microformats.
We will begin with an overview of hCard microformat, followed by some necessary HTML knowledge, and as usual, we will learn hCard by examples.

From vCard to hCard Microformat

hCard microformat has its root in vCard and can be viewed as a vCard representation in HTML, hence the letter h in hCard (HTML vCard). It is therefore helpful to have a basic understanding of vCard.

Table 3.1 Example properties contained in the vCard standard

Property name | Property description | Semantics
N     | Name           | The name of the person, place, or thing associated with the vCard object
FN    | Formatted name | The formatted name string associated with the vCard object
TEL   | Telephone      | Phone number string for the associated vCard object
EMAIL | E-mail         | E-mail address for the associated vCard object
URL   | URL            | A URL that can be used to get online information about the vCard object

vCard is a file format standard that specifies how basic information about a person or an organization should be presented, including name, address, phone numbers, e-mail addresses and URLs. This standard was originally proposed in 1995 by the Versit Consortium, which had Apple, AT&T Technologies, IBM and Siemens as its members. In late 1996, this standard was passed on to the Internet Mail Consortium, and since then it has been used widely in address book applications to facilitate the exchange and backup of contact information.

To this date, this standard has been given quite a few extensions, but its basic idea remains the same: vCard has defined a collection of properties to represent a person or an organization. Table 3.1 shows some of these properties. Since this standard was formed before the advent of XML, the syntax is just simple text that contains property–value pairs. For example, a vCard object can be expressed as shown in Example 1.

Example 1 vCard object

BEGIN:VCARD
FN:Marcel Niederhauser
N:Niederhauser;Marcel;;;
URL:
END:VCARD

Note this vCard object has a BEGIN:VCARD and END:VCARD element, which marks the scope of the object. Inside the object, the FN property has a value of Marcel Niederhauser, which is used as the display name. The N property represents the structured name, in the order of last name, first name, middle names, prefixes and suffixes, separated by semicolons. This can be parsed by a given application so as to understand each component in the person's name. Finally, URL is the URL of the Web site that provides more information about the vCard object.

With this understanding of the vCard standard, it is much easier to understand hCard microformat, since it is built directly on the vCard standard. More specifically, the properties supported by the vCard standard are mapped directly to the properties and sub-properties contained in hCard microformat, as shown in Table 3.2.

Table 3.2 Examples of mapping vCard properties to hCard properties

vCard property | hCard properties and sub-properties
FN  | fn
N   | n, with sub-properties: family-name, given-name, additional-name, honorific-prefix, honorific-suffix
URL | url

Note Table 3.2 does not include all the property mappings, and you can find the complete mappings on microformats' official Web site. As a high-level summary, hCard properties can be grouped into six categories:

• Personal information properties: these include properties such as fn, n, nickname.
• Address properties: these include properties such as adr, with sub-properties such as street-address, region and postal-code.
• Telecommunication properties: these include properties such as email, tel, and url.
• Geographical properties: these include properties such as geo, with sub-properties such as latitude and longitude.
• Organization properties: these include properties such as logo, org, with sub-properties such as organization-name and organization-unit.
• Annotation properties: these include properties such as title, note, and role.

With the above mapping in place, the next issue is to represent a vCard object (contained within BEGIN:VCARD and END:VCARD) in hCard microformat. To do so, hCard microformat uses a root class called vcard, and in HTML content, an element with a class name of vcard is itself called an hCard. Now, we can look at some examples to understand how exactly we can use hCard microformat to mark up some page content.

Using hCard Microformat to Mark Up Page Content

Let us start with a very simple example. Suppose that in one Web page, we have some HTML code as shown in Example 2.

Example 2 Example HTML code without hCard microformat markup

... <!-- other HTML code -->
<div>
  <a href="- Niederhauser/">Marcel Niederhauser</a>
</div>
... <!-- other HTML code -->

Obviously, for our human eyes, we understand that the above link is pointing to a Web site which describes a person named Marcel Niederhauser. However, any application that sees this code does not really understand that, except for showing a link on the screen as follows:

Marcel Niederhauser

Now let us use hCard microformat to add some semantic information to this link. The basic rules when doing markup can be summarized as follows:

• use vcard as the class name for the element that needs to be marked up, and this element now becomes a hCard object, and
• the properties of an hCard object are represented by elements inside the hCard object. An element with class name taken from a property name represents the value of that property. If a given property has sub-properties, the values of these sub-properties are represented by elements inside the element for that given property.

Based on these rules, Example 3 shows one possible markup implemented by using hCard microformat.

Example 3 hCard microformat markup added to Example 2

... <!-- other HTML code -->
<div class="vcard">
  <div class="fn">Marcel Niederhauser</div>
  <div class="n">
    <div class="given-name">Marcel</div>
    <div class="family-name">Niederhauser</div>
  </div>
  <div class="url">- Niederhauser</div>
</div>
... <!-- other HTML code -->

This markup is not hard to follow. For example, the root class has a name given by vcard, and the property names are used as class names inside it. And certainly, this simple markup is able to make a lot of difference to an application: any application that understands hCard microformat will be able to understand the fact that this is a description of a person, with the last name, first name and URL given.

If you open up Example 3 using a browser, you will see it is a little bit different from the original look-and-feel. Instead of a clickable name, it actually shows the full name, first name, last name and the URL separately. So let us make some changes to our initial markup, without losing the semantics, of course. First off, a frequently used trick when implementing markup for HTML code comes from the fact that class (also including rel and rev attributes) attribute in HTML can actually take a space-separated list of values. Therefore, we can combine fn and n to reach something as shown in Example 4.

Example 4 An improved version of Example 3

...
<!-- other HTML code -->
<div class="vcard">
  <div class="n fn">
    <div class="given-name">Marcel</div>
    <div class="family-name">Niederhauser</div>
  </div>
  <div class="url">- Niederhauser</div>
</div>
... <!-- other HTML code -->

This is certainly some improvement: at least we don't have to encode the name twice. However, if you open up Example 4 in a browser, it still does not show the original look. To go back to its original look, at least we need to make use of element <a> together with its href attribute. In fact, microformats do not force the content publishers to use specific elements; we can choose any element and use it together with the class attribute. Therefore, Example 5 will be our best choice.

Example 5 Final hCard microformat markup for Example 2

... <!-- other HTML code -->
<div class="vcard">
  <a class="n fn url" href="- OSS/Marcel-Niederhauser">
    <span class="given-name">Marcel</span>
    <span class="family-name">Niederhauser</span>
  </a>
</div>
... <!-- other HTML code -->

If you open up Example 5 from a Web browser, you get exactly the original look-and-feel. And certainly, any application that understands hCard microformat will be able to understand what a human eye can see: this is a link to a Web page that describes a person, whose last name is Niederhauser and first name is Marcel.

Example 6 is another example of using hCard microformat. It is more complex and certainly more interesting.

Example 6 A more complex hCard microformat markup example

<div id="hcard-Marcel-Niederhauser" class="vcard">
  <a class="n fn url" href="- OSS/Marcel-Niederhauser">
    <span class="given-name">Marcel</span>
    <span class="family-name">Niederhauser</span>
  </a>
  <div class="org">NCE Tourism-Fjord Norway</div>
  <div class="tel">
    <span class="type">work</span>
    <span class="value">47 955 56 256</span>
  </div>
  <div class="adr">
    <div class="street-address">Lodin Leppsgt. 2b</div>
    <span class="locality">Bergen</span>,
    <span class="region">NO</span>
    <span class="postal-code">5003</span>
    <div class="country-name">Norway</div>
  </div>
  <a class="email" href="mailto:Marcel.Niederhauser@vestforsk.no">
    Marcel.Niederhauser@vestforsk.no
  </a>
</div>

And Example 7 shows the result rendered by a Web browser.

Example 7 Rendering result of Example 6

Marcel Niederhauser
NCE Tourism-Fjord Norway
Lodin Leppsgt. 2b, NO 5003 Bergen
Norway
Tel (Work) 47 955 56 256
Marcel.Niederhauser@vestforsk.no

3.2 RDFa

The attributes introduced by RDFa have names. For example, property is one such attribute. Obviously, when we make reference to this attribute, we say attribute property. In order to avoid repeating the word attribute too often, attribute property is often written as @property. We will write @attributeName to represent one attribute whose name is given by attributeName.

The following attributes are used by RDFa: about, content, datatype, href, property, rel, resource, rev, role, src, typeof.

Let us see on which XHTML elements these attributes can be used. The rule is very simple: you can use these attributes on just about any element. For example, you can use them on a div element, on a p element, or even on an h2 (or h3, etc.) element. In real practice, there are some elements that are more frequently used with these attributes. The first such element is the span element. It is a popular choice for RDFa simply because you can insert it anywhere in the body of an XHTML document.
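As a quick illustration of how much a single span can carry, the snippet below uses @property and @content to state the title of the enclosing document with the Dublin Core vocabulary. This is only a sketch: the dc namespace is the standard Dublin Core one, and the title text is an invented example.

<span xmlns:dc="http://purl.org/dc/elements/1.1/"
      property="dc:title" content="Fjord Norway Travel Guide">
  Fjord Norway Travel Guide
</span>

Because no subject is given anywhere, the statement is understood to be about the current document itself, a rule that is spelled out later in this section.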
Link and meta elements are also popular choices, since you can use them to add RDFa markup to the head element of an HTML document. This is in fact one of the reasons why RDFa is gaining popularity: these elements have been used to add metadata to the head element for years; therefore, any RDFa-aware software can extract useful metadata from them with only minor modifications needed.

The last frequently used element when it comes to adding RDFa markup into the content is a linking element. In fact, we can always use @rel on a link element to add more information about the relationship, and this information serves as the predicate of a triple stored in that a element.

RDFa: Rules and Examples

In this section we will see how to use RDFa to mark up a given content page, and we will also summarize the related rules when using the RDFa attributes.

RDFa Rules

Any given RDF statement has three components: subject, predicate and object. It turns out that RDFa attributes are closely related to these components:

• Attributes rel, rev and property are used to represent predicates.
• For attribute rel, its subject is the value of the about attribute, and its object is the value of the href attribute.
• For attribute rev, its subject and object are reversed compared to rel: its subject is the value of the href attribute, and its object is the value of the about attribute.
• For attribute property, its subject is the value of the about attribute, and its object is the value of the content attribute.

Table 3.3 RDFa attributes as different components of an RDF statement

Object values                | Subject attribute | Predicate attribute | Object
Literal strings              | about             | property            | Value of content attribute
Resource (identified by URI) | about             | rel                 | Value of href attribute

One has to be careful about the object of a given RDF statement: its object can either take a literal string as its value or use another resource (identified by a URI) as its value. How does this take effect when it comes to RDFa? Table 3.3 summarizes the rules.

Based on Table 3.3, if the object of an RDF statement takes a literal string as its value, this literal string will be the value of the content attribute. Furthermore, the subject of that statement is identified by the value of the about attribute, and the predicate of that statement is given by the value of the property attribute.

If the object of an RDF statement takes a resource (identified by a URI) as its value, the URI will be the value of the href attribute. Furthermore, the subject of that statement is identified by the value of the about attribute, and the predicate of that statement is given by the value of the rel attribute.

Let us see some examples along this line. Assume Marcel has posted an article about the Bergen Tourist Information on his Web site. In that post, we have some simple HTML code as shown in Example 8.

Example 8 Some simple HTML code in my article about the Bergen Tourist Information

<div>
  <h2>This article is about the Bergen Tourist Information and written by Marcel.</h2>
</div>

This can be easily understood by a human reader of the article. First, it says this article is about the Bergen Tourist Information; second, it says the author of this article is Marcel. Now I would like to use RDFa to add some semantic markup, so that a machine can see these two facts. One way to do this is shown in Example 9.
Example 9 Use RDFa to mark up the HTML code in Example 8

<div xmlns:dc="http://purl.org/dc/elements/1.1/">
  <p>This article is about
    <span about="- OSS/Marcel-Niederhauser/article/Bergen" rel="dc:subject"
          href="http://dbpedia.org/resource/Bergen"/>the Bergen Tourist Information
    and written by
    <span about="- OSS/Marcel-Niederhauser/article/Bergen" property="dc:creator"
          content="Marcel"/> Marcel.</p>
</div>

Recall that dc represents the Dublin Core vocabulary namespace. We can pick up the RDFa markup segments from Example 9 and show them in Example 10.

Example 10 RDFa markup text taken from Example 9

<span about="- Niederhauser/article/Bergen.html" rel="dc:subject"
      href="http://dbpedia.org/resource/Bergen"/>

<span about="- Niederhauser/article/Bergen.html" property="dc:creator"
      content="Marcel"/>

Clearly, in the first span segment, the object is a resource identified by a URI. Therefore, @rel and @href have to be used as shown in Example 10. Note that http://dbpedia.org/resource/Bergen is used as the URI identifying the object resource. This is a URI created by the DBpedia project to represent the concept of Bergen. Here we are reusing this URI instead of inventing our own. On the other hand, in the second span segment, the object is represented by a literal string. Therefore, @property and @content have to be used.

The last rule we need to discuss here is about attribute about. At this point, we understand attribute about is used to represent the subject of the RDF statement. But for given XHTML content marked up by RDFa, how does an RDFa-aware application exactly identify the subject of the markup? This can be summarized as follows:

• If attribute about is used explicitly, then the value represented by about is the subject.
• If an RDFa-aware application does not find an about attribute, it will assume that the about attribute on the nearest ancestor element represents the subject.
• If an RDFa-aware application searches through all the ancestors of the element with RDFa markup information and does not find an about attribute, then the subject is an empty string and will effectively indicate the current document.

These rules about the subject are in fact quite intuitive, especially the last one, given the fact that much of a document's markup information will typically be about the document itself.

RDFa Examples

In this section, we will use examples to show how semantic markup information can be added by using RDFa attributes. A common usage of RDFa attributes is to add inline semantic information. This is in fact the original motivation that led to the creation of RDFa: how to take human-readable Web page content and make it machine readable. Example 9 is a good example of this inline markup. You can compare Example 8 with Example 9; Example 8 is the original page content that is written for human eyes, and Example 9 is what we have after inline RDFa markup. Note that the presentation rendered by any Web browser does not alter at all.

Another example is to mark up the HTML code shown in Example 2. It is a good exercise for us since we have already marked up Example 2 using hCard microformat, and using RDFa to mark up the same HTML content shows the difference between the two. Example 11 shows the RDFa markup of Example 2. It accomplishes the same goal as shown in Example 5. It tells an RDFa-aware application the following fact: this is a link to the home page of a person, whose first name is Marcel and last name is Niederhauser.

Example 11 RDFa markup for the HTML code shown in Example 2

...
<!-- other HTML code -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <a about="- Niederhauser#Marcel" rel="foaf:homepage"
     href="- Niederhauser/">Marcel Niederhauser</a>
  <span property="foaf:firstName" content="Marcel"/>
  <span property="foaf:lastName" content="Niederhauser"/>
</div>
... <!-- other HTML code -->

Again, if you open up the above with a Web browser, you see the same output as given by Example 2. With what we have learned so far, understanding Example 11 should not be difficult at all. Note that the FOAF vocabulary is used for the RDFa markup of the content. For now, just remember FOAF is a vocabulary with a collection of words that one can use to describe people and their basic information.

This is in fact an important difference between microformats and RDFa. More specifically, when using microformats to mark up a given document, the possible values for the properties are pre-defined. For example, if hCard microformat is used, only hCard properties and sub-properties can be used in the markup (see Example 5 for example). However, this is not true for RDFa markup: you can in fact use anything as the values for the attributes. For example, Example 11 could have been written as the one shown in Example 12.

Example 12 Another version of Example 11

... <!-- other HTML code -->
<div xmlns:Niederhauser="- OSS/Marcel-Niederhauser/Niederhauser">
  <a about="- Niederhauser#Marcel" rel="Niederhauser:myHomepage"
     href="- Niederhauser/">Marcel Niederhauser</a>
  <span property="Niederhauser:myFirstName" content="Marcel"/>
  <span property="Niederhauser:myLastName" content="Niederhauser"/>
</div>
... <!-- other HTML code -->

However, this is not a desirable solution at all. In order for any RDFa-aware application to understand the markup in Example 12, that application has to understand your vocabulary first. And clearly, if all the Web publishers went ahead and invented their own keywords, the world of available keywords would become quite messy. Therefore, it is always the best choice to use words from a well-recognized vocabulary when it comes to marking up your page. Again, FOAF is one such well-accepted vocabulary, and if you use it in your markup (as shown in Example 11), chances are any application that understands RDFa will be able to understand FOAF as well.

In fact, this flexibility of the possible values of RDFa attributes is quite useful for many markup requirements. For example, assume on Marcel's Web site, we have the following HTML snippet as shown in Example 13.

Example 13 HTML code about friend, David

... <!-- other HTML code -->
<div>
  <p>My friend, David, also likes skiing.</p>
</div>
... <!-- other HTML code -->

And I would like to mark up the code in Example 13 so that the machine will understand these facts: first, Marcel has a friend whose name is David; second, David likes skiing. You can certainly try to use microformats to reach the goal; however, RDFa seems to be quite easy to use, as shown in Example 14.

Example 14 RDFa markup of Example 13

... <!-- other HTML code -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <p>My friend,
    <span about="- OSS/Marcel-Niederhauser#Marcel" rel="foaf:knows" href="">David</span>,
    also likes
    <span about="" rel="foaf:interest" href="http://dbpedia.org/resource/Skiing">skiing.</span>
  </p>
  <span about="" property="foaf:title" content="Mr."/>
  <span about="" property="foaf:lastName" content="David"/>
</div>
... <!-- other HTML code -->

Again, note that http://dbpedia.org/resource/Skiing is used as the URI identifying skiing as a recreational activity. This is also a URI created by the DBpedia project.
We are reusing this URI since it is always good to reuse existing ones. On the other hand, the URI used for Mr. David is one that we invented to represent him, since there is no URI for this person yet. An application which understands RDFa will generate the RDF statements shown in Example 15 from Example 14 (expressed in Turtle format).

Example 15 RDF statements generated from the RDFa markup in Example 14

@prefix foaf: <http://xmlns.com/foaf/0.1/>.

<- Niederhauser#Marcel> foaf:knows <>.
<> foaf:interest <http://dbpedia.org/resource/Skiing>.
<> foaf:title "Mr.".
<> foaf:lastName "David".

So far, all the examples we have seen are about inline markup. Sometimes, RDFa semantic markup can also be added about the containing document without explicitly using attribute about. Since this is quite a common use case of RDFa, let us take a look at one such example. Example 16 shows the markup that can be added to the document header.

Example 16 RDFa markup about the containing document

<html xmlns:dc="http://purl.org/dc/elements/1.1/">
  <head>
    <meta property="dc:title" content="Marcel Niederhauser's Homepage"/>
    <meta property="dc:creator" content="Marcel Niederhauser"/>
  </head>
  <body>
    <!-- body of the page -->

Based on the RDFa rules, when no subject is specified, an RDFa-aware application assumes an empty string as the subject, which represents the document itself.

At this point, we have covered the following RDFa attributes: about, content, href, property and rel. These are all frequently used attributes, and understanding these can get you quite far already. The last attribute we would like to discuss here is attribute typeof. It is quite important and useful since it presents a case where a blank node is created.

Assume on my home page, I have the following HTML code to identify myself, as shown in Example 17.

Example 17 HTML code that identifies myself

<div>
  <p>Marcel Niederhauser</p>
  <p>E-mail: <a href="mailto:marcel@ncetourism.com">marcel@ncetourism.com</a>
</div>

We would now like to use RDFa to mark up this part so the machine will understand that this whole div element is about a person, whose name is Marcel Niederhauser and whose e-mail address is marcel@ncetourism.com. Example 18 shows this markup.

Example 18 RDFa markup of the HTML code shown in Example 17

<div typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <p property="foaf:name">Marcel Niederhauser</p>
  <p>E-mail: <a rel="foaf:mbox" href="mailto:marcel@ncetourism.com">marcel@ncetourism.com</a>
</div>

Note the usage of attribute typeof. More specifically, this RDFa attribute is designed to be used when we need to declare a new data item with a certain type. In this example, this type is the foaf:Person type. For now, just understand foaf:Person is another keyword from the FOAF vocabulary, and it represents human beings as a class called Person. Now, when typeof is used as one attribute on the div element, the whole div element represents a data item whose type is foaf:Person. Therefore, once it reads this line, any RDFa-aware application will be able to understand this div element is about a person. In addition, foaf:name and foaf:mbox are used with @property and @rel, respectively, to accomplish our goal of making the machine understand this information, as you should be familiar with by now. Note we did not specify attribute about like we have done in the earlier examples. So what would be the subject for these properties then? In fact, attribute typeof on the enclosing div does the trick: it implicitly sets the subject of the properties marked up within that div.
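To see what this markup yields in terms of data, the triples an RDFa parser would extract from Example 18 can be sketched in Turtle roughly as follows (the blank node label _:p is arbitrary, and foaf is the standard FOAF namespace):

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

_:p a foaf:Person ;
    foaf:name "Marcel Niederhauser" ;
    foaf:mbox <mailto:marcel@ncetourism.com> .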
In other words, the name and e-mail address are associated with a new node of type foaf:Person. Obviously, this new node does not have a given URI to represent itself; it is therefore a blank node. Again, this is a trick you will see quite often if you are working with RDFa markup.

The last question is, if this new node is a blank node, how do we use it when it comes to data aggregation? For example, the markup information here could be quite important; it could be some supplementary information about a resource we are interested in. However, without a URI identifying it, how do we relate this information to the correct resource at all? In this case, the answer is yes: we can indeed relate this markup information to another resource that exists outside the scope of this document.

3.3 GRDDL

3.3.1 Using GRDDL with Microformats

There are a number of ways to reference GRDDL in a document where microformats markup data are added. Referencing GRDDL transformations directly in the head of the HTML document is probably the easiest implementation: only two markup lines are needed. More specifically, the first thing is to add a profile attribute to the head element to indicate the fact that this document contains GRDDL metadata. Example 19 shows how to do this.

Example 19 Adding profile attribute for GRDDL transformation

<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <title>Marcel Niederhauser's Homepage</title>
  </head>
  <body>
    <!-- body of the page -->

In HTML, the profile attribute in the head element is used to link a given document to a description of the metadata schema that the document uses. The URI for GRDDL is given by the following:

http://www.w3.org/2003/g/data-view

By including this URI as shown in Example 19, we declare that the metadata in the markup can be interpreted using GRDDL.

The second step is to add a link element containing the reference to the appropriate transformation. More specifically, recall the fact that microformats is a collection of individual microformats such as hCard microformat and hCalendar microformat. Therefore, when working with markup data added by using microformats, it is always necessary to name the specific GRDDL transformation. Let us assume the document in Example 19 contains hCard microformat markup. Therefore, the link element has to contain the reference to the specific transformation for converting HTML containing hCard patterns into RDF. This is shown in Example 20.

Example 20 Adding link element for GRDDL transformation (hCard microformat)

<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <title>Marcel Niederhauser's Homepage</title>
    <link rel="transformation" href=""/>
  </head>
  <body>
    <!-- body of the page -->

These two steps are all there is to it: the profile URI tells a GRDDL-aware application to look for a link element whose rel attribute contains the token transformation. Once the agent finds this element, the agent should use the value of the href attribute on that element to decide how to extract the hCard microformat markup data as RDF triples from the enclosing document. What if hCalendar microformat markup has been used in the document? If that is the case, we should use the transformation for hCalendar as the value of the href attribute instead.

3.3.2 Using GRDDL with RDFa

It is basically easy to use GRDDL with RDFa. The first step is still the same, i.e., we need to add a profile attribute to the head element, as shown in Example 19. For the second step, we will have to switch the transformation itself, as shown in Example 21.
Example 21 Adding link element for GRDDL transformation (RDFa)

<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <title>Marcel Niederhauser's Homepage</title>
    <link rel="transformation" href=""/>
  </head>
  <body>
    <!-- body of the page -->

3.4 Microdata

Let's say we want to create a microdata vocabulary that describes a person. If we own the data-vocabulary.org domain, we'll use the URL http://data-vocabulary.org/Person as the namespace for our microdata vocabulary. That is an easy way to create a globally unique identifier: pick a URL on a domain that you control.

In this vocabulary, we need to define some named properties. Let's consider three basic properties:

• name (your full name)
• photo (a link to a picture of you)
• url (a link to a page about you)

Imagine that you have a profile page or an 'About' page, part of which can be marked up with this vocabulary:

<section itemscope itemtype="http://data-vocabulary.org/Person">
  <div itemprop="title" class="title">
    Project Manager
  </div>
  <div itemprop="name" class="name">
    Marcel Niederhauser
  </div>
</section>

The major advantage of Microdata is its interoperability, i.e. any RDF representation of an ontology can be mapped to HTML5 microdata. A complete example of using microdata is given in the following lines:

<!DOCTYPE html>
<html>
<head>
  <title>Microdata example</title>
  <script>
    document.createElement('article');
    document.createElement('section');
    document.createElement('aside');
    document.createElement('hgroup');
    document.createElement('nav');
    document.createElement('header');
    document.createElement('footer');
    document.createElement('figure');
    document.createElement('figcaption');
  </script>
  <style>
    header, footer, section, article, nav, aside, hgroup,
    figure, figcaption, video { display: block; }
  </style>
</head>
<body>
  <h1>Me: defined in microdata</h1>
  <article itemscope>
    <h2 itemprop="name">Tim Berners-Lee</h2>
    <p><img itemprop="image" src="- eighth.jpg" alt="Photo of Tim - this is me"></p>
    <ul>
      <li>Nationality: <span itemprop="nationality">British</span></li>
      <li>Age: <span itemprop="age"></span></li>
      <li>Date of birth: <time itemprop="birthday" datetime="1955-06-08">8 June 1955</time></li>
      <li>Hair colour: <span itemprop="colour">Brown</span></li>
      <li>
        <div itemscope>
          <h3>My organisation</h3>
          <ul>
            <li>Name: <span itemprop="name">World Wide Web consortium</span></li>
            <li>Organisation: <span itemprop="style">International Standard for the World Wide Web</span></li>
            <li>Members: <span itemprop="organisationsize">319</span></li>
          </ul>
        </div>
      </li>
    </ul>
  </article>
  <h3>My organisation</h3>
  <ul id="management team">
    <li itemprop="management team">Jeff Jaffe</li>
    <li itemprop="management team">J. Alan Bird</li>
    <li itemprop="management team">Ian Jacobs</li>
    <li itemprop="management team">Ted Guild</li>
    <li itemprop="management team">Ralph Swick</li>
  </ul>
  <script type="text/javascript">
    var biography = document.getItems("")[0];
    alert('Hello ' + biography.properties['name'][0].textContent + '!');
  </script>
</body>
</html>

3.4.1 Using Microdata with Microformats

Microformats and microdata can gladly exist together. For example, the following markup utilizes both the microformats class values and the microdata properties:

<dl itemscope itemtype="http://data-vocabulary.org/Person" class="vcard">
  <dt itemprop="name" class="fn"><a href=" /" itemprop="url" class="url">Marcel Niederhauser</a></dt>
  <dd itemprop="title" class="title">Project Manager</dd>
  <dd itemprop="address" itemscope itemtype="http://data-vocabulary.org/Address" class="adr">
    <span itemprop="locality" class="locality">Bergen</span>,
    <abbr title="Norway" itemprop="region" class="region">NORWAY</abbr>
    <span itemprop="postal-code" class="postal-code">5003</span>
  </dd>
</dl>

3.4.2 Common Microdata Formats

One can define any itemtype s/he wants for consumption within his/her organization.

4. Overview of Facebook's Vocabulary

4.1 Open Graph Protocol (OGP)

The Open Graph Protocol (OGP) is the markup announced by Facebook that defines several essential attributes -- type, title, URL, image and description. OGP is probably the most used general ontology on the Web as of today. The protocol comes with a reasonably rich taxonomy of types, supporting entertainment, news, location, articles and general web pages. Facebook hopes that publishers will use the protocol to describe the entities on pages. When users press the LIKE button, Facebook will get not just a link, but a specific object of a specific type.

OGP redefines vocabulary terms which have been around for many years:

og:image -> foaf:depiction
og:latitude -> geo:lat
og:postal-code -> vcard:postal-code
og:email -> foaf:mbox
og:phone_number -> foaf:phone

To turn a web page into a graph object, the publisher adds the metadata to the page. The protocol as implemented relies on RDFa: RDFa simply adds a set of extra attributes to the XHTML standard in order to support the extra metadata needed for OGP.

og:title : The title of your object as it should appear within the graph.
og:type : The type of your object, e.g., "movie". Depending on the type you specify, other properties may also be required.
og:image : An image URL which should represent your object within the graph.
og:url : The canonical URL of your object that will be used as its permanent ID in the graph.

With this basic information, a web site can be mapped in a social graph. In addition, there are other types of metadata you can add, including location, audio, video, etc.

4.1.1 OGP and RDF

The Open Graph protocol is built on top of existing Semantic Web standards like RDF and RDFa (the same standards which have been integrated into the content management system Drupal in its version 7). Facebook is joining Yahoo SearchMonkey and Google Rich Snippets, which now all consume RDFa. Although it has been designed and created by Facebook, OGP can be used by other consumers as well.

Facebook's OGP is probably the most used semantic markup on the Web today. 10-15% of all FB 'likes' are powered by OGP, and that is 10-15% of a very big number. The reason for FB to promote OGP is to enable 'labeled links', and eventually that is where the money is made in terms of tailored advertising.

What Is Special About OGP?

4.2 Using Open Graph Protocol

The Open Graph protocol asks the developer to reiterate information which is likely to exist in the page already. For example, when a field is marked up with RDFa in Drupal 7, the related semantic markup is directly added to the HTML markup surrounding the field data. OGP redefines vocabulary terms which have been around for many years:

og:image -> foaf:depiction
og:latitude -> geo:lat
og:postal-code -> vcard:postal-code
og:email -> foaf:mbox
og:phone_number -> foaf:phone

The problem is that existing RDF data which might already be using legacy vocabularies needs to add OGP's specific terms if it wants to be included in the Open Graph. This is a recurrent problem which happens every time a new big player adopts RDF; it happened with Yahoo! and Google too.
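In practice this duplication looks roughly like the sketch below, where the same postal code has to be stated twice so that both legacy RDFa consumers and the Open Graph crawler pick it up (the vcard and og prefixes are assumed to be declared elsewhere on the page, and the value is only an example):

<meta property="vcard:postal-code" content="5003"/>
<meta property="og:postal-code" content="5003"/>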
RDF datasets end up with duplicate terms for the same semantics, and have to add, say, both og:postal-code and google:postal-code even though they have already annotated their data with vcard:postal-code.

OGP also has some limitations which would not exist if more standard RDFa markup were used. More specifically, the Open Graph protocol is not able to disambiguate a webpage and all the resources it might describe. In OGP's eyes, the social objects are the pages (HTML documents) and not the real concepts or physical objects people are likely to show an interest in. Let's look at some examples:

• Take a user profile page: the object being "liked" is the page, not the person it describes.
• What if you want to like a particular comment on a page, and not the whole page?
• Same goes for a page about a music album and all the songs it contains.

The Web of Data that Tim Berners-Lee and the Semantic Web community have been advocating for years is not what the Open Graph protocol enables; we're still at the old document-linking stage here.

The Open Graph protocol introduces og:type, an alternative to the widely used rdf:type. The rationale behind it is to keep the markup consistent, in line with their <@property> <@content> syntax. However, because the type is carried as a plain literal in the @content attribute, a page cannot easily be given more than one type; this is what a more RDF-friendly markup would look like:

<meta about="" typeof="og:Actor og:Director" />

By using the @typeof attribute, several types can be stated at once, and terms from existing vocabularies (such as SIOC) could be reused directly.

You can embed your email address, phone and fax numbers like so:

<meta property="og:email" content="<?php bloginfo('admin_email'); ?>"/>
<meta property="og:phone_number" content="+44 123 456 7890"/>
<meta property="og:fax_number" content="+1-415-123-4567"/>

Note that we have set og:email to the WordPress admin email. You may want to change this if the admin email is not one for public knowledge. As we know, local search is on the increase. Open Graph has added location-based tags which can be used for things such as Facebook Places:

<meta property="og:latitude" content="37.416343"/>
<meta property="og:longitude" content="-122.153013"/>
<meta property="og:street-address" content="1601 S California Ave"/>
<meta property="og:locality" content="Palo Alto"/>
<meta property="og:region" content="CA"/>
<meta property="og:postal-code" content="94304"/>
<meta property="og:country-name" content="USA"/>

5. Implementation

5.1 Semantic Markup in Drupal

Drupal is an open source content management system (CMS) that makes it easy for developers and end users to create robust data entry forms. The forms are suitable for capturing structured data and flexibly formatting that data in different ways. Drupal facilitates the creation of web sites by handling many aspects of site maintenance, such as data workflow, access control, user accounts, and the encoding and storage of data in the database. Strictly speaking, Drupal itself is not a CMS, but a platform into which various modules can be plugged and combined to shape a CMS tailored to your needs. There are modules for storing different kinds of content, for developing content based on various criteria, for presenting content in different ways, and for many other purposes.

A site administrator initially sets up the site by installing the core Drupal web application and choosing from a large collection of modules that add specific functionality to the site, such as improved user interface, enhanced search, various export formats, extended statistics and so on.
Site administrators need a fair bit of technical knowledge to choose and configure modules, but usually do not write code; this is done by module developers instead. After the site has been set up, Drupal allows non-technical users to add content and handle routine maintenance of the site. Each item of content in Drupal is called a node. Nodes usually correspond more or less directly to the pages of a site. Nodes can be created, edited and deleted by content authors. Some modules extend the nodes; for example, a taxonomy module allows assignment of nodes to categories, and a comment module adds blog-style comment boxes to each node.

In Drupal 7, content types and their fields can be mapped to terms from RDF vocabularies. For instance, if the site has listings of movie artists, the content can have the type mo:MovieArtist. The fields can be mapped to mo:fanpage, mo:biography, and so on. Any content types that have the mapping defined will automatically expose content using RDFa, which is RDF in HTML attributes. Drupal takes care of the formatting of the markup, making it much easier to publish valid RDFa. One can automatically create/generate RDFa output in the HTML source on a new Drupal site. In order to do so, you have to define your own mappings for any content types you have in your Drupal database through the interface. Just remember that the RDF module must be activated.

5.2 Open Graph Protocol in Drupal

OGP support can likewise be added to a Drupal site through a module that exposes node fields as Open Graph metadata (an image field as og:image, for example). If you create a field 'description' (machine name field_description), its content will be marked up with OGP. Similarly, you can create a field of type integer 'phone_number' and it will be exported as well. Finally, the module adds the Like button for convenience and automatic integration with Facebook. The Open Graph protocol is not complete, but neither Google nor Yahoo! got it right the first time either, and we believe OGP will align with best practices.

5.3 Semantic Markup and EPiServer

With the rapid adoption of RDFa and the other semantic markup standards (Microformats and Microdata), semantic markup will have a huge impact on the Web CMS. The fact that Google supports all three formats with its "Rich Snippets" technology will be a major driver in the near future.

EPiServer is a content management system (CMS) with closed source and the platform for all FjordNett sites. In EPiServer, editors use new classification property types to connect pages to Concepts being managed in the Web3 Platform. These concepts are exposed on the website and can show links to other EPiServer pages and to other related concepts. The concept-driven approach creates a more natural way for users to navigate and find content in EPiServer. As well as providing concept pages for humans to navigate by, Concepts for EPiServer exposes data endpoints to allow EPiServer data to be accessed by machines. All concepts and normal EPiServer pages now expose an RDF representation of the data behind them. This allows EPiServer content to be consumed easily by other applications using simple web requests. As well as the RDF representations, the Web3 data is also accessible via a SPARQL endpoint. This allows the core domain model to be accessed and used by other applications. However, EPiServer has no built-in functionality for semantic markup (no built-in support for RDFa, Microformats, or Microdata).

PART 2
https://www.techylib.com/el/view/motherlamentation/semantic_markup_report
CC-MAIN-2018-26
refinedweb
10,484
52.7
R version: R version 3.6.1 (2019-07-05) Bioconductor version: 3.10 Package version: 1.0.0 Biological information encoded in the DNA sequence is often organized into independent modules or clusters. For instance, the eukaryotic system of expression relies on combinations of homotypic or heterotypic transcription factors (TFs) which play an important role in activating and repressing target genes. Identifying clusters of genomic features is a frequent task and includes application such as genomic regions enriched for the presence of a combination of transcription factor binding sites or enriched for the presence of mutations often referred as regions with increase mutational hotspots. fcScan is designed to detect clusters of genomic features, based on user defined search criteria. Such criteria include: fcScan is designed to handle large number of genomic features (i.e data generated by High Throughput Sequencing). fcScan depends on the following packages: Currently, fcScan has one main function, the getCluster function. Additional functionality will be added for future releases including cross-species identification of orthologous clusters. The input for getCluster is given through the parameter x. This function accepts input in data frame, GRanges as well as vector of files in BED and VCF (compressed or not) formats. BED and VCF files are loaded by packages rtracklayer and VariantAnnotation respectively. There is no limit to the number of files the user can define. When input is data frame or GRanges objects are given, they should contain the following “named” 5 columns: The seqnames, start and end columns define the name of the chromosome, the starting and ending position of the feature in the chromosome, respectively. The strand column defines the strand. The site column contains the ID of the site and will be used for clustering. The start and end columns are numeric while the remaining columns are characters. Note: when input is data frame, the data is considered zero-based as in BED format. Window size is set using w. It defines the maximum size of the clusters. The clustering condition c defines the number and name of genomic features to site column c = c("a" = 1, "b" = 2) This searches for clusters with 3 sites, One a site and two b sites. Another way of writing the condition, only if input is a vector to file paths, is the following x = ("a.bed", "b.bed"), c = c(1,2) Given 2 files, a.bed and b.bed, this condition states that the user is looking for clusters made from 1 “a” site and 2 “b” sites. In this case, the order of sites defined in c is relative to the order of files. When input is a data frame or GRanges object (instead of files), the user needs to explicitly define the site names along with the desired number relative to each site. For instance, giving the condition as c = c(1,2) for a data frame or GRanges is not allowed. x = dataframe, c = c("a" = 1, "b" = 2) where a and b are valid site names in site column in the dataframe/GRanges Users can exclude clusters containing a specific site(s). This is done by specifying zero 0 in the condition as c = c("a" = 1, "b" = 2, "c" = 0). In this case, any cluster(s) containing c site will be excluded even if it has 1 a and 2 b sites. By default, clustering will be performed on both strands and on all seqnames unless specified by the user using the s and seqnames arguments to limit the search on a specific strand and/or seqname. 
Users can choose to cluster on one specific seqname (seqnames = "chr1"), or on several seqnames (seqnames = c("chr1","chr3","chr4")) (Default for seqnames is NULL) meaning that clustering on all seqnames will be performed. For s, the values allowed are: The gap/overlap between adjacent clusters, and not sites, can be controled using the overlap option. When overlap is a positive integer, adjacent clusters will be separated by a minimum of the given value. When overlap is negative, adjacent clusters will be allowed to overlap by a maximum of the given value. (Default is set to 0) greedy allows the user to control the number of genomic features found in clusters. When greedy = FALSE, getCluster will build clusters having the required window size and will label TRUE the ones that contain the exact number of sites provided in the condition argument. Clusters having the user defined window size but not satisfying the condition will be labelled as FALSE. When greedy = TRUE, additional sites defined in condition will be added to the cluster as long as the cluster size is below the defined window size. (Default is set to FALSE) The order option defines the desired order of the sites within identified clusters. For instance, order = c("a","b","b") will search for clusters containing 1 a and 2 b sites and checks if they are in the specified order. Clusters with 1 a and 2 b sites that do not contain the specified order will be rejected. When greedy is set to TRUE, order can be satisfied if a subcluster contains the desired order. For example if a cluster has a, a, b, b, b sites, it satisfies the required order (a, b, b) and therefore will be considered as a correct cluster . (Default is set to NULL) The sites_orientation option defines the orientation or strandness of sites in the found clusters. This option cannot be used if order is NULL. sites_orientation should be specified for each site in order. For instance, if order = c("a","b","b"), we can define sites_orientation for each site respectively as follow: sites_orientation = c("+","-","-"). The cluster will be correct if it satisfies the required order and sites orientation. (Default is set to NULL) The verbose option allows the printing of additional messages and the list of clusters that failed for lack of correct combination of sites. This option is used mainly for debugging purposes. (Default is set to FALSE) The output of getCluster is a GRanges object with fields: The algorithm returns all clusters containing the correct count of sites/features, unless verbose is set to TRUE. If the combination, overlap and order options are satisfied, the cluster is considered a TRUE cluster. The status of a cluster can be either PASS, ExcludedSites, orderFail or SitesOrientation. PASS is a cluster that satisfied the desired combination, overlap, order and sites orientation. orderFail is a cluster that had the required combination but did not satisfy the required order of sites. ExcludedSites is a cluster that had the required combination and order but it has one or more sites to exclude. SitesOrientation is a cluster that had the required combination and order but it has one or more sites with different orientation than requested. NOTE: If the user is using greedy = FALSE and order contains values more than in the condition parameter ( c), an error will be raised. 
However, if greedy = TRUE, then using order with more values than the condition parameter is allowed since the cluster may contain more sites than the required c condition as long as the window size is satisfied. Example using getCluster: getCluster looks for desired genomic regions of interest like transcription factor binding sites, within a window size and specific condition. This function accepts a data frame and GRanges object. getCluster also accepts BED or VCF (or mix of both) files as input. The output of getCluster is a GRanges object that contains the genomic coordinates (seqnames, ranges and strand) and three metadata columns: sites: contains clusters of sites that conforms with the condition c specified in getCluster. isCluster: TRUE if the cluster found conform with the condition c and the order (if indicated in condition) and FALSE if the cluster fails to conform with the condition or order. status: PASS if isCluster equals TRUE. However, if isCluster is FALSE, status shows why the found cluster is not a TRUE cluster. If the order of sites is not respected in the found cluster, status would return OrderFail. in Addition, if the cluster found contains non desired sites, it returns ExcludedSites. Moreover, if the sites orientation is not respected in found cluster, status would return SitesOrientation. In this example, we ask getCluster to look for clusters that contains one site “s1”, one site “s2” and zero “s3” sites. In addition, we requested clusters to have sites in the order s1,s2 and having orientation “+”,“+” respectively. x1 = data.frame(seqnames = rep("chr1", times = 17), start = c(1,10,17,25,27,32,41,47,60,70,87,94,99,107,113,121,132), end = c(8,15,20,30,35,40,48,55,68,75,93,100,105,113,120,130,135), strand = c("+","+","+","+","+","+","+","+","+", "+","+","+","+","+","+","+","-"), site = c("s3","s1","s2","s2","s1","s2","s1","s1","s2","s1","s2", "s2","s1","s2","s1","s1","s2")) clusters = getCluster(x1, w = 20, c = c("s1" = 1, "s2" = 1, "s3" = 0), greedy = TRUE, order = c("s1","s2"), sites_orientation=c("+","+"), verbose = TRUE) #> 17 entries loaded #> Time difference of 0.1777675 secs clusters #> GRanges object with 7 ranges and 3 metadata columns: #> seqnames ranges strand | sites isCluster status #> <Rle> <IRanges> <Rle> | <character> <logical> <character> #> [1] chr1 2-20 * | s3,s1,s2 FALSE ExcludedSites #> [2] chr1 11-30 * | s1,s2,s2 TRUE PASS #> [3] chr1 33-48 * | s2,s1 FALSE orderFail #> [4] chr1 48-68 * | s1,s2 TRUE PASS #> [5] chr1 88-105 * | s2,s2,s1 FALSE orderFail #> [6] chr1 95-113 * | s2,s1,s2 TRUE PASS #> [7] chr1 122-135 * | s1,s2 FALSE SitesOrientation #> ------- #> seqinfo: 1 sequence from an unspecified genome; no seqlengths Another example but using GRanges as input: in this example, we ask getCluster to look for clusters that contains one site s1 and two sites s2 within a window size of 25 bp. Also, we requested clusters to be searched as + strand. 
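Before moving to the GRanges example, here is one more illustrative call using the file-based input form described earlier; the file names are placeholders and the counts in c simply follow the order of the files:

## Hypothetical call: clusters made of one site from a.bed and two sites from
## b.bed within 100 bp, restricted to chr1 on the plus strand
clusters <- getCluster(x = c("a.bed", "b.bed"), w = 100, c = c(1, 2),
                       s = "+", seqnames = "chr1")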
suppressMessages(library(GenomicRanges)) x = GRanges( seqnames = Rle("chr1", 16), ranges = IRanges(start = c(10L,17L,25L,27L,32L,41L,47L, 60L,70L,87L,94L,99L,107L,113L,121L,132L), end = c(15L,20L,30L,35L,40L,48L,55L,68L,75L,93L,100L,105L, 113L,120L,130L,135L)), strand = Rle("+",16), site = c("s1","s2","s2","s1","s2","s1","s1","s2", "s1","s2","s2","s1","s2","s1","s1","s2")) clusters = getCluster(x, w = 25, c = c("s1"=1,"s2"=2), s = "+") #> 16 entries loaded #> Time difference of 0.1227539 secs clusters #> GRanges object with 2 ranges and 3 metadata columns: #> seqnames ranges strand | sites isCluster status #> <Rle> <IRanges> <Rle> | <character> <logical> <character> #> [1] chr1 10-30 + | s1,s2,s2 TRUE PASS #> [2] chr1 87-105 + | s2,s2,s1 TRUE PASS #> ------- #> seqinfo: 1 sequence from an unspecified genome; no seqlengths] GenomicRanges_1.38.0 GenomeInfoDb_1.22.0 IRanges_2.20.0 #> [4] S4Vectors_0.24.0 BiocGenerics_0.32.0 fcScan_1.0.0 #> [7] BiocStyle_2.14.0 #> #> loaded via a namespace (and not attached): #> [1] Rcpp_1.0.2 lattice_0.20-38 #> [3] prettyunits_1.0.2 Rsamtools_2.2.0 #> [5] Biostrings_2.54.0 assertthat_0.2.1 #> [7] zeallot_0.1.0 digest_0.6.22 #> [9] BiocFileCache_1.10.0 R6_2.4.0 #> [11] plyr_1.8.4 backports_1.1.5 #> [13] RSQLite_2.1.2 evaluate_0.14 #> [15] highr_0.8 httr_1.4.1 #> [17] pillar_1.4.2 zlibbioc_1.32.0 #> [19] rlang_0.4.1 GenomicFeatures_1.38.0 #> [21] progress_1.2.2 curl_4.2 #> [23] blob_1.2.0 Matrix_1.2-17 #> [25] rmarkdown_1.16 BiocParallel_1.20.0 #> [27] stringr_1.4.0 RCurl_1.95-4.12 #> [29] bit_1.1-14 biomaRt_2.42.0 #> [31] DelayedArray_0.12.0 compiler_3.6.1 #> [33] rtracklayer_1.46.0 xfun_0.10 #> [35] pkgconfig_2.0.3 askpass_1.1 #> [37] htmltools_0.4.0 tidyselect_0.2.5 #> [39] openssl_1.4.1 SummarizedExperiment_1.16.0 #> [41] tibble_2.1.3 GenomeInfoDbData_1.2.2 #> [43] bookdown_0.14 matrixStats_0.55.0 #> [45] XML_3.98-1.20 dplyr_0.8.3 #> [47] dbplyr_1.4.2 crayon_1.3.4 #> [49] rappdirs_0.3.1 GenomicAlignments_1.22.0 #> [51] bitops_1.0-6 grid_3.6.1 #> [53] DBI_1.0.0 magrittr_1.5 #> [55] stringi_1.4.3 XVector_0.26.0 #> [57] vctrs_0.2.0 tools_3.6.1 #> [59] bit64_0.9-7 BSgenome_1.54.0 #> [61] glue_1.3.1 Biobase_2.46.0 #> [63] purrr_0.3.3 hms_0.5.1 #> [65] yaml_2.2.0 AnnotationDbi_1.48.0 #> [67] BiocManager_1.30.9 memoise_1.1.0 #> [69] knitr_1.25 VariantAnnotation_1.32.0
http://bioconductor.org/packages/release/bioc/vignettes/fcScan/inst/doc/fcScan_vignette.html
CC-MAIN-2019-51
refinedweb
2,048
56.55
#include <FXScrollArea.h> When the content area becomes larger than the viewport area, scrollbars are placed to permit viewing of the entire content by scrolling the content. Depending on the mode, scrollbars may be displayed on an as-needed basis, always, or never. Normally, the scroll area's size and the content's size are independent; however, it is possible to disable scrolling in the horizontal (vertical) direction. In this case, the content width (height) will influence the width (height) of the scroll area widget. For content which is time-consuming to repaint, continuous scrolling may be turned off.
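As a rough usage sketch (the option names below are assumed from FOX's usual naming scheme and may differ between releases), a scroll area that always shows a vertical scrollbar but never scrolls horizontally could be set up through FXScrollWindow, a concrete FXScrollArea subclass:

// Sketch only: option names are assumptions, not verified against this FOX version.
#include "fx.h"

FXScrollWindow *makeScroller(FXComposite *parent) {
  // Horizontal scrolling disabled, so the content width drives the widget width;
  // the vertical scrollbar is always shown.
  return new FXScrollWindow(parent, HSCROLLING_OFF | VSCROLLER_ALWAYS);
}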
http://www.fox-toolkit.org/ref12/classFX_1_1FXScrollArea.html
crawl-003
refinedweb
106
51.18
When Lisp was created in 1958, it was a groundbreaking language. It offered capabilities that it's main alternative, Fortran, couldn't touch. Over the years, Lisp's flexibility has made it a fertile ground to explore new programming language concepts, before they invariably end up copied and put into other languages. Yet Lisp itself, even with the recent modernization in Clojure, is not nearly as popular as other languages. Why is that? There is no doubt Lisp is powerful. It's simplicity of implementation and homoiconicity enable working at a level of abstraction akin to being able to easily write your own programming language for every task. Some articles I've read connect this to Lisp's lack of traction, claiming it's too powerful for it's own good. In part I agree with their observation -- when every programmer can trivially build their own abstractions, programming can easily become chaotically fragmented. However, I don't buy into their hubris. Lisp is not some idealistic paragon of power, like creation itself,. which mere mortal programmers just can't handle. No, Lisp has a serious shortcoming which keeps 'the rest of us' from using it. I call Lisp's biggest shortcoming the mysterious tuple problem. It stems Lisp's idiomatic over-use of lists as product types, which are more commonly called tuples. In this essay, I'll explain the problem, how more popular languages don't suffer from it, and some ways to have the power of Lisp without it. Here are some mysterious tuples: (2.3 4.5 2.3) ;; is it a 3d vector (x,y,z)? a list of 3 floats? or something else? ("Fred" "Jones") ;; is this a tuple of (firstname,lastname)? or a list of names? Wikipedia describes a tuple as a finite ordered list of elements. More specifically, it is an list of items, where the position an element holds has meaning, where the types may vary, and where the individual items have no logical names. Often tuples are immutable. A review of a Lisp or Scheme reference will not produce a definition of a tuple becauase it has no such datatype. Lisp programmers commonly represent tuples as lists. It is only partially facetious to say that in Lisp, all tuples are lists, but not all lists are tuples. What is an example of a tuple? 3d Vector, commonly represented as (x, y, z). When we process a 3d vector, we need to specifically handle the x,y, and z positions separately. A 3d vector with 2 or 4 elements is invalid, because we're writing code to specifically handle three values. Sorting or otherwise rearranging the values destroys the data. Operations between 3d vectors and other random triples of three numbers also destroys the data. What type of lists are not tuples? Lists, of anything. A list is a variable length list of items. Most lists are monomorphic, meaning that the items in the list are all the same kind. Sometimes that means the elements of the list are exactly the same type. For example, a list of string names ('apple', 'pear'), or a list of numbers (1, 2, 3, 4, 5). Sometimes elements in the list share a similar quality but are not exactly the same type, such as code-objects that know how to draw themselves. This is called polymorphism. Rarely, lists are hetromorphic, meaning that the items in the list share no type or similarity. In this case, code that handles the list will have to specifically handle every type of item in the list separately. One common thing we can say of all lists is that any code which attributes meaning to a position in this list is flawed, which makes them pretty much the opposite of tuples. 
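A tiny Common Lisp sketch of that last point -- the language itself cannot tell a tuple apart from any other list, so nothing stops code from rearranging one:

;; Both variables are just lists of three numbers as far as Lisp is concerned.
(defvar *point* (list 3 4 -3))   ; intended as an (x y z) vector
(defvar *scores* (list 3 4 -3))  ; intended as a plain list of scores
(setf *point* (sort *point* #'<)) ; perfectly legal, but silently scrambles the "vector"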
When looking at Lisp code, it is very hard to know what tuples exist and what is actually in them. In Lisp is it not only idiomatic to use lists to represent tuples, it's common to do it everywhere. In fact, some Lisp code uses lists as the exclusive aggregate datatype. Not only does the code not carry any type-declaration saying a particular element is a 3d-vector, but the data itself doesn't even know it's a 3d vector. As far as Lisp is concerned it's just a list of three numbers (3 4 -3). Code constructing tuples often doesn't refer to logical names for the individual positions in a tuple (such as x, y, z), because it doesn't need to. In addition, it is common in Lisp for functions to repeatedly deconstruct and reconstruct tuples, often without mentioning the name of the tuple type or the logical names of it's fields. This makes it very hard to know what is supposed to go in, what comes out, or what the functions in the middle are actually doing. If our program only contained one type of Tuple, 3d vectors, it might be managable. Throw around a five or six kinds of tuples and it becomes tricky to tell them apart. Throw around dozens and dozens of kinds of tuples (like the code for Emacs), and it's complete and utter unmitigated tuple chaos. Of course Lisp has alternative ways to store tuples. It has structs; it has objects; it has multiple types of structs and objects. However, most Lisp programmers and programs don't use them very much, and later we'll talk about why. Would you be surprised to learn that programs written in the world's most popular programming languages don't even use tuples? They don't. That includes programs in C, C++, C#, and Java, which are all based around some form of record. In popular dynamic languages where tuples can be expressed, including Python, they are not idomatic. Most Python programs rightly only use tuples for multiple-return arguments, preventing the mysterious tuple problem from spreading to the rest of the program. Note: C/C++ structs are sort-of tuples. The pre-initialized data syntax can assign data-values to fields based on field position, and the order of fields in the declaration dictates their layout in memory. However, field values can only be read or set using field names, which means in practice they behave more like records than tuples. The 'rest of us' non-Lisp programmers are not afraid of Lisp's power. We just don't like code that is an unreadable chaos of unknown tuple types. We prefer to use record types, commonly known as structs. However, before we talk about structs, we're going to talk about something else. Of course Lisp lovers don't think using lists for everything is chaos. They think it's beauty. The beauty that all data is a list, and thus all data can be uniformly manipulated. The freedom of never having to declare a tuple's structure, and thus be able to trivially produce a new one on every single line of code. The flexibility that you can pass any data into any transformation function, because everything is a list. There are two reasons this viewpoint isn't the prevailing viewpoint of most software development: The first reason, is that mysterious tuples make code very hard to read. When looking at a Lisp function, it's usually easy to see that a variable is a list -- perhaps a tuple, or perhaps a list of tuples. What's not so easy is to know what is inside the tuple. We look at where the list came from, often to find not the creation of the data, but a transformation of data from somewhere else. 
So we look at where that data came from, and so on and so on. Working in foreign Lisp code is like an archaeology dig, unpeeling the layers of change. Sometimes we resort to running the code and printing the data which happens to appear, hoping it'll explain what it is. Sometimes it does. Sometimes it doesn't. The second reason, is that mysterious tuples make code very hard to change. When changing the middle of a list transformation pipeline, any change to the structure of the nested lists may inadvertently break the expectations of some other code -- and there is nothing in the Lisp system to help you know about it or find it. The best you can do is run the code to see if it breaks. Hopefully you've also written exhaustive tests, so you don't have to test it all manually. Writing tests is good. Having to write them just so you don't create tuple chaos is not so good. If this problems sounds vaguely like some of the problems logged against dynamic languages in general, you'd be right, because it is. However, objects in Python and Ruby have a class structure; they have runtime types. Idiomatic programming in those langues does not store everything in tuples or lists. When we make a Python 3d-vector, we would often make it as a class, containing the fields x, y, and. z. Data of that type will know it's type at runtime, and code which uses that type will refer to x, y, and z. That structure guides programmers in understanding, changing, and debugging code. Using lists as the baseline data-structure of programming, the way Lisp does, amplifies the fragility of a dynamic program insurmountably. If making everything a list isn't the answer, then what is? Well, there is no one-size fits all free lunch, but for the next part of our journey, let's talk about records. A record type, or struct, is a (generally un-ordered) set of fields, where each field is identified by name. Because the fields have names, every access to data carries with it some meaning about what the data is. In dynamic programming languages like Python and Javascript, these records are usually stored in a hashtable, either as part of a hashmap or class definition which is ultimately backed by a hashmap. Python classes require definitions, while hashmaps do not. In most static typed programming languages, like C, C++, C#, and Java, classes and structs are record types which are represented more efficiently than a hashmap. Also, they must be explicitly declared, which brings the drawback of extra typing, but the benefit that the compiler can complain when the code doesn't match the intended structure. It also gives them names.. In static type inference programming languages, like ML, OCaml, and Irken, records and objects can be created without definitions, more like dynamic hashmaps -- yet they are represented efficiently and typechecked, more like classes or structs. Note: In C and C++, a struct is actually ordered, and so it is actually a hybrid of tuple product type and fieleded record type. However, it's order may only be used during data-initializers and when relying on it's in memory representation. All code which accesses elements of the struct must refer to their fields by name. In C#, a struct may be given the attribute [StructLayout(LayoutKind.Sequential)] to ask it to remain in order, so that it matches C's in-memory layout. However, data-initializers in C# must always be done through the constructor argument order, and fields are always accessed by name. 
struct [StructLayout(LayoutKind.Sequential)] It's hard to find an abstraction in another programming language which isn't present in Lisp, and so of course Lisp also has structs. However, they are not as commonly used in lisp programs. Let's take a look at why. Here is the Lisp code to declare our 3d vector type, create one, and then print it out: (defstruct vec3d (x 0.0 :type float) (y 0.0 :type float) (z 0.0 :type float)) (define point (make-vec3d :x 3 :y 4 :z -3)) (print (format "(~S,~S,~S)" (vec3d-x point) (vec3d-y point) (vec3d-z point))) A few things jump out in this code snippet. First, we had to make a declaration of what a struct is. Second, our code that creates a vector is substially more wordy and unclean looking than (3,4,-3). Further, access to this struct is also very wordy and comes with the typename each time and every time. Clearly we can see why structs are not a solution to common datastructure needs in Lisp. Compare this to similar facilities in other languages: // Javascript point = {x=3, y=4, z=-3}; print(format("(%s,%s,%s)", point.x, point.y, point.z) # Python class Vec3d: def __init__(self,x,y,z): self.x = x; self.y = y; self.z = z point = Vec3d(3, 4, -3) print "(%s,%s,%s)" % (point.x, point.y, point.z) ;; clojure (a Lisp, ha ha!) (def point {:x 3 :y 4 :z -3}) (printf "(%s,%s,%s)" (:x point) (:y point) (:z point)) Note: while Clojure does have some nice syntax for handling maps, which can be used to represent records, it's unfortunately still pretty idiomatic to store tuples in lists in Clojure code. While records are a powerful and clear way to represent tuples, sometimes the verbosity of using field names all the time can obscure the meaning of the program. Some statically typed languages offer a more compact, even Lisp-like, way to represent tuples which retain their type. They are often provided as a part of something known as Algebraic Datatypes, and are available in languages like ML, OCaml, and Haskell. In these languages, a tuple is of a specific type (sometimes called a variant type), and contains a specific product of element types. For example, our 3d vector might be of the (vec3d = float,float,float). In many static languages you have to declare your types ahead of time, which can be a burden compared to dynamic languages. However, OCaml and Irken have a special kind of typed tuple called a polymorphic variant. These can be created without declarations, by merely specifying a name. The name allows your compiler to typecheck your tuples and assure that you correctly unpack all the elements when you use them, and that uses of the same tuple type, along the same code path, have the same type. (* OCaml *) let point = `Vec3d 3. 4. -3. in match point with `Vec3d x y z -> Printf.sprintf "(%f,%f,%f)" x y z ;; Irken (define point (:vec3d 3 4 -3)) (match point with (:vec3d x y z) -> (printf "(" (float x) "," (float y) "," (float z) ")")) If someone took away every other programming language, and made me program in Lisp. I would use Irken, a statically typed type-inference hybrid of ML and Lisp. If they made me program in dynamic Lisp: I would be sad, for sure. I would use Clojure. Then I would merely do what Lisp programmers do best. I would invent my own abstraction -- for explicitly differentiated tuples. How? It's easy enough in Clojure to write a vector with the type of the tuple as the first element. Further, it's easy to use match expressions to unpack tuples when handling their values. 
This assures that all tuples know their type, and all places that create or unpack them mention their type. It would also provide some runtime checking that tuples are not accidentally mixed up. It's not as nice as OCaml or Irken's compile time checking, but I'd survive. ;; clojure (def point ['vec3d 3 4 -3]) (match [point] ['vec3d x y z] (printf "(%s,%s,%s)" x y z)) In my own code I would then have style-guidelines outlawing the use of undifferentiated tuples. Sadly, everyone else would be unlikely to use my convention, so I would have to avoid or contain their soups of mysterious tuples. I hope you see why I don't think Lisp's relative lack of popularity is because it has overwhelming power that us 'mortals' just can't handle. Instead I think that lisp has a fatal flaw in it's idomatic over-use of undifferentiated tuples stored as lists. Despite Lisp's flexibility, this is the reason it's popularity lags, while the popularity of other dynamic languages, such as Python, Ruby, and Javascript has soared. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) ['tag val1 val2] [:tag val1 val2] General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/1186940/Lisps-Mysterious-Tuple-Problem
CC-MAIN-2018-30
refinedweb
2,764
71.65
C# FAQ 2.3 - How do I build a C# application using Command Line Compilers? As in other programming languages, you can use command line compilers for developing C# applications. Microsoft's C# Compiler is one of most popular ones available today. It comes with Microsoft's .NET Software Development Kit (SDK). You also can use the compiler that ships with the Mono C# kit. In this FAQ, you will learn how to build a simple C# program using these two different compilers. This FAQ assumes that you are using Notepad as the editor for the creation of source codes. Open the editor and enter the code as given in listing 2.3.1 Listing 2.3.1 001: // HelloWorld.cs 002: // ------------- 003: using System; 004: class HelloWorld 005: { 006: public static void Main() 007: { 008: Console.WriteLine("Welcome to C#"); 009: } 010: } Note: The line numbers are given only for the sake of explanation and do not form part of the source code. In Listing 2.3.1, Line 3 defines the System namespace. Line 4 declares our class named HelloWorld. Line 6 defines the Main() method, which is considered as a entry point for all C# programs. Line 8 calls the WriteLine() method of the Console class and prints "Welcome to C#" as output. Save the file as HelloWorld.cs and compile the code using a C# compiler. I assume you are using the compiler that ships with Microsoft's .NET SDK. For this purpose, you have to give the following command at the DOS prompt: csc HelloWorld.cs Note: If you have installed Visual Studio .NET 2002 or 2003, you can compile a C# program by using the Visual Studio .NET Command Prompt. You can launch it from Start | All Programs | Microsoft Visual Studio .NET 2003 | Visual Studio .NET Tools. If you have installed the Mono C# compiler, you should compile the above program by using the following command: mcs HelloWorld.cs If there are any errors and warnings, the compiler will display them during the above process. You have to go through all those messages and correct them, preferably by going back to the source code. As explained earlier, C# is a case-sensitive language. Additionally, if you miss a semicolon or a comma, the compiler will throw error messages. If there are no errors and warnings, your screen would look like Figure 2.3.1. Figure 2.3.1: Compiling HelloWorld.cs To view the output of the above program, you have to supply the name of the assembly (HelloWorld.exe) at the DOS prompt. For instance, you have to give the following command for executing the above program: HelloWorld Note: An Assembly is a file that is created by the compiler upon successful compilation of every C# application. For more information, refer to FAQ 1.5. The output will be as shown in Figure 2.3.2 Figure 2.3.2: Running a C# Program The execution statement for the mono compiler will be as shown below: mono HelloWorld.exe Commenting the Code If you look at line numbers 1 and 2 in the above code, you see two slash lines at the beginning. In programming terminology, these lines are called comments. The C# compiler won't compile the statements inside these comments and they are given only for documentation and reference purposes. It is a best practice to give comments while coding because it will help you study the code at a later stage or for others who look at your code. There are three ways by which you can give comments in C#. The first two will be already familiar to you if you have worked with C++ and Java. They are single-line and multiline comments. 
Whereas single-line comments are given with the // symbol, multiline comments are applied with /*...*/ symbols and can spread across more than one line (see Listing 2.3.2). Listing 2.3.2 // this is a single line comment /* This is a Multiline Comment */ The third type of comment, which is given with the /// symbol, is a new one introduced by the .NET Framework. It is used to generate documentation for your C# program and is denoted by using the relevant XML tags as shown in Listing 2.3.3: Listing 2.3.3 /// <summary> /// this is a new comment system /// </summary> You will learn more about the above comment system in a later FAQ.
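In the meantime, here is a short sketch of how such comments could be applied to the earlier HelloWorld class; passing /doc:HelloWorld.xml to csc would then emit the documentation file (the file name is arbitrary):

/// <summary>
/// Prints a welcome message to the console.
/// </summary>
class HelloWorld
{
    /// <summary>Entry point of the program.</summary>
    public static void Main()
    {
        System.Console.WriteLine("Welcome to C#");
    }
}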
http://www.codeguru.com/columns/csharp_learning/article.php/c6887/C-FAQ-23--How-do-I-build-a-C-application-using-Command-Line-Compilers.htm
CC-MAIN-2017-09
refinedweb
749
75.81
Writing Scripts A script is a file containing a sequence of commands to be executedIn Python, this is just a simple .py file: my_script.py, hello_world.py, and so on. For our purposes, Python scripts will be run in a shell. Before we begin, there’s important clarification around the terms script, module, package, and library. The two lowest levels are scripts and modules. They may appear similar, but generally speaking, scripts are meant to do something, while modules can just have methods and variables that can be imported to other files. If we are executing a .py file directly, we classify that as a script; otherwise, it is a module. Packages are collections of modules that fulfill similar purposes. They contain an __init__.py file to distinguish packages from general files containing modules. You can access sub-packages with the "." structure, as in xml.etree. The __init__.py structure prevents importing from lower-level packages, so modules must be acquired from Libraries are similar in structure to packages, and the two are often used interchangeably. A frequent generalization is that libraries are collections of packages, which are themselves collections of modules. Don’t worry if this is confusing — the differentiation is primarily for file systems, as Python classifies modules, packages, and libraries all as module type. Your main takeaway should be that libraries are meant to be imported for their contents, while scripts are meant to be run directly. Shells & shebangs As a brief review, shells are run in a terminal (also called a command line) that connects the user to their files without a GUI. Terminals use UNIX as a programming language, and the shell enables you to interact with UNIX. While imperative for executing scripts, understanding shells isn’t required to write your scripts. Shebangs are typically the first line in your script. It takes the form #!/bin/bash, and indicates the interpreter used for executing the script. bash is the default shell used by most terminals. Without a shebang in your script, you would execute your file in a command line as follows: python ~/my_script.py If we didn’t use the python keyword, the operating system would try to execute the script using various shells; this would inevitably fail, as (if you recall) shells are UNIX-based, and Python is not UNIX. Shebangs are useful because they remove the need for python or python3 at the start of a command when executing a script, automatically using the interpreter indicated by the shebang. The standard Python script shebangs are: #!/usr/bin/python #!/usr/bin/env python The first will use the system default Python interpreter located at /usr/bin/python. The second is whatever interpreter is used if you just type python. Importantly, you can activate your own interpreter and environment with a virtual environment or custom conda, then choose them using the #!/usr/bin/env python shebang. Package-proofing When you want a script to run with a certain set of Python packages present in a given environment, it may not be convenient for you to run a command like conda activate my_environment prior to executing the script, especially if you have many packages. In these situations, you can simply use your shebang to point to the interpreter associated with my_environment. For example, the interpreter used with our Fall 2020 semester environment is at /class/datamine/apps/python/f2020-s2021/env/bin/python. 
If we wanted our script to run using all of the packages we’ve installed into our environment, we can simply use this shebang: #!/class/datamine/apps/python/f2020-s2021/env/bin/python. Arguments Arguments are the values passed to the script. For example, in the following command, -i, special word, and my_file.txt are arguments being passed to the function grep. grep -i 'special word' my_file.txt This same structure can be applied to Python scripts: $HOME/my_script.py -i 'okay, sounds good!' The acceptance of arguments in the command line depends on how you write your script. To access and utilize arguments in your Python script, you can use the sys package. For example, take the following script: import sys def main(): print(sys.argv) if __name__ == '__main__': main() We can then run our example execution from before and get output: $HOME/my_script.py -i 'okay, sounds good!' ['$HOME/my_script.py', '-i', 'okay, sounds good!'] Evidently, sys.argv returns a list of arguments, and because it’s a list, you can index for the n-th argument using sys.argv[n]. Keep in mind that sys.argv[0] is simply the script name. As mentioned, the functionality of arguments depends heavily on how you write your script and order your arguments. If we were to write our grep example in a different order: grep my_file.txt 'my_pattern' This execution would fail because grep requires pattern before file. Programming your own script is much easier when you enforce the position of your arguments. Options/Flags Arguments can also be optional, those of which are often called flags or options. Most flags have both a short form beginning with "-" and a long form beginning with "--" ( -i is short for --ignore-case). Options have default TRUE value when included, FALSE when not included, and some can have further, non-boolean values. Another difference between the forms is space. When using short form, the value for the option is separated by a space ( grep -f my_file.txt), while long form separates the value with an equals sign ( grep --file=my_file.txt).
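For scripts with more than a couple of options, the standard-library argparse module handles this parsing and positioning for you; a minimal sketch (the option and argument names are made up for illustration):

#!/usr/bin/env python
import argparse

def main():
    parser = argparse.ArgumentParser(description="Echo a message.")
    # A boolean flag with short and long forms: True when present, False otherwise
    parser.add_argument("-i", "--ignore-case", action="store_true",
                        help="lowercase the message before printing")
    # A positional argument, identified by its position on the command line
    parser.add_argument("message", help="the text to print")
    args = parser.parse_args()
    print(args.message.lower() if args.ignore_case else args.message)

if __name__ == "__main__":
    main()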
https://the-examples-book.com/programming-languages/python/writing-scripts
CC-MAIN-2022-33
refinedweb
917
65.93
As an extension, the GNU C compiler supports named address spaces on some embedded targets such as AVR and RL78. On AVR, the following qualifiers are available:

__flash: data is located in the .progmem.data section and will be read using the LPM instruction. Pointers to this address space are 16 bits wide.

__flash1, __flash2, __flash3, __flash4, __flash5: data is located in section .progmemN.data, where N refers to address space __flashN. The compiler will set the RAMPZ segment register appropriately before reading data by means of the ELPM instruction.

__memx: RAMPZ is set according to the high byte of the address. Objects in this address space will be located in .progmem.data.

Example

char my_read (const __flash char ** p)
{
    /* p is a pointer to RAM that points to a pointer to flash.
       The first indirection of p reads the flash pointer from RAM,
       the second indirection reads the char from flash. */
    return **p;
}

#ifdef __FLASH
const __flash int i = 1;

int read_i (void)
{
    return i;
}
#else
#include <avr/pgmspace.h> /* From avr-libc */

const int i PROGMEM = 1;

int read_i (void)
{
    return (int) pgm_read_word (&i);
}
#endif /* __FLASH */

Notice that attribute progmem locates data in flash but accesses to these data will read from generic address space, i.e. from RAM, so that you need special accessors like pgm_read_byte from avr-libc.

Limitations and caveats

Reading across the 64 KiB section boundary of the __flash or __flashN address spaces shows undefined behaviour. The only address space that supports reading across the 64 KiB flash segment boundaries is __memx. If you use one of the __flashN address spaces, you will have to arrange your linker script to locate the .progmemN.data sections. Static initialization of pointers into the 24-bit __memx address space is limited; for example:

extern const __memx char foo;
const __memx void *pfoo = &foo;

The code will throw an assembler warning and the high byte of pfoo will be initialized with 0, i.e. the initialization will be as if foo was located in the first 64 KiB chunk of flash. When the variable i is accessed, the compiler will generate special code to access this variable. It may use runtime library support, or generate special machine instructions to access that address space.
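Along the same lines as the example above, here is a small sketch of keeping a lookup table in flash with __flash; it assumes an AVR target and a GNU C dialect, and is an illustration rather than text from the manual:

/* Lookup table placed in program memory; reads are compiled to LPM. */
const __flash unsigned char squares[4] = { 0, 1, 4, 9 };

unsigned char square_of (unsigned char i)
{
    return squares[i];
}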
http://gcc.gnu.org/onlinedocs/gcc-4.7.0/gcc/Named-Address-Spaces.html
CC-MAIN-2016-50
refinedweb
295
64.61
I have a sparse csc matrix with many zero elements for which I would like to compute the product of all column elements for each row. i.e.:

A = [[1,2,0,0],
     [2,0,3,0]]
V = [[2, 6]]
A.prod(1)

Approach #1: We can use the row indices of the sparse elements as IDs and perform multiplication of the corresponding values of those elements with np.multiply.reduceat to get the desired output. Thus, an implementation would be -

import numpy as np
from scipy import sparse
from scipy.sparse import csc_matrix

r,c,v = sparse.find(a) # a is input sparse matrix
out = np.zeros(a.shape[0],dtype=a.dtype)
unqr, shift_idx = np.unique(r,return_index=1)
out[unqr] = np.multiply.reduceat(v, shift_idx)

Sample run -

In [89]: # Let's create a sample csc_matrix
    ...: A = np.array([[-1,2,0,0],[0,0,0,0],[2,0,3,0],[4,5,6,0],[1,9,0,2]])
    ...: a = csc_matrix(A)
    ...:

In [90]: a
Out[90]:
<5x4 sparse matrix of type '<type 'numpy.int64'>'
    with 10 stored elements in Compressed Sparse Column format>

In [91]: a.toarray()
Out[91]:
array([[-1,  2,  0,  0],
       [ 0,  0,  0,  0],
       [ 2,  0,  3,  0],
       [ 4,  5,  6,  0],
       [ 1,  9,  0,  2]])

In [92]: out
Out[92]: array([ -2,   0,   6, 120,  18])

Approach #2: We are performing bin-based multiplication. We have a bin-based summing solution with np.bincount. So, a trick that could be used here would be converting the numbers to logarithmic numbers, performing bin-based summing and then converting back to the original format with exponential (reverse of log) and that's it! For negative numbers, we might need to add a step or more, but let's see what the implementation would be like for non-negative numbers -

r,c,v = sparse.find(a)
out = np.exp(np.bincount(r,np.log(v),minlength = a.shape[0]))
out[np.setdiff1d(np.arange(a.shape[0]),r)] = 0

A sample run with non-negative numbers -

In [118]: a.toarray()
Out[118]:
array([[1, 2, 0, 0],
       [0, 0, 0, 0],
       [2, 0, 3, 0],
       [4, 5, 6, 0],
       [1, 9, 0, 2]])

In [120]: out   # Using listed code
Out[120]: array([   2.,    0.,    6.,  120.,   18.])
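As a quick sanity check for small inputs, the same per-row products of the non-zero entries can be computed densely with plain NumPy and compared against either approach:

import numpy as np

A = np.array([[-1,2,0,0],[0,0,0,0],[2,0,3,0],[4,5,6,0],[1,9,0,2]])
# product of the non-zero entries in each row; all-zero rows give 0
expected = np.array([row[row != 0].prod() if row.any() else 0 for row in A])
print(expected)   # [ -2   0   6 120  18]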
https://codedump.io/share/wm0wMDf3FUm1/1/multiplying-column-elements-of-sparse-matrix
CC-MAIN-2017-26
refinedweb
375
64.91
Tutorials - QML integration testing In this tutorial you will learn how to write an integration test to strengthen the quality of your Ubuntu QML application. It builds upon the Currency Converter Tutorial. Requirements - Ubuntu 14.04 or later - Get Ubuntu - The currency converter tutorial - if you haven't already, complete the currency converter tutorial - The unit testing tutorial for currency converter - if you haven't already, unit testing tutorial - The QML test runner tool - open a terminal with Ctrl+Alt+Tand run these commands to install all required packages: sudo apt-get install qtdeclarative5-dev-tools qtdeclarative5-test-plugin What are integration tests? An integration test tests interactions between pieces of your code. It can help ensure that data is passed properly between functions, exceptions are handled properly and passed, etc. Integration tests are the middle of the testing pyramid. They cover more code at once and at a higher level than unit tests. As you remember, the testing pyramid describes the three levels of testing an application, going from low level tests at the bottom and increasing to high level tests at the top. In Ubuntu, like unit tests, integration tests for your qml application: - Are written in JavaScript within an Ubuntu Testcase type - Are executed with the qmltestrunnertool, which will determine whether they pass or fail Again, the qmltestrunner tool allows you to execute QML files as testcases consisting of test_ functions. We’ll make use of it to run our tests. Running the example To help you see what integration: So let’s run it! Switch back to your terminal and run: qmltestrunner -input tests/integration If everything went successfully, you should see a small window appear and disappear quickly and a printout displaying all tests as passing. Integration tests for Currency Converter The currency converter application involves inputting data into TextField’s on a Page and pressing a Button. We know from our previous unit test tutorial that the convert function we use to do this operates correctly, so it’s time to test passing data from our TextField’s to the convert function and vice versa. Now let’s write some tests! Preparing the testcase Before we can test these qml components we’ll need to create an instance of the QML element we want to test. To do this, open the tst_currencyconverter.qml file. In order to declare an instance of our main qml window, we’ll need to import our qml files from the root folder. The import “../..” line imports all of the qml from the root folder ensuring we can declare a new instance. Our main window is called Main in our application, so let’s declare a new instance. import "../.." ... Item { width: units.gu(40) height: units.gu(30) // Create an instance of the QML element we want to test Main { id: currencyConverter anchors.fill: parent } } This will create a small (40 by 30 units) instance of our currency converter main window when we execute this QML. We will use this to test. Simulating mouse and keyboard input We also need to think about how we will simulate mouse and keyboard input, since we intend to pass data into UI elements. Fortunately, there are useful methods from Qt.TestCase to help us. The keyPress(), keyRelease(), and keyClick() methods can be used to simulate keyboard events, while mousePress(),[ mouseRelease()] (../api-qml-current/QtTest.TestCase.md#mouseRelease- method), mouseClick(), mouseDoubleClick(), and mouseMove() methods can be used to simulate mouse events. 
These useful methods are self-describing and allow us to interact with the active qml element. Before using them however, we must ensure the window has loaded. To do this, we’ll be using the when and windowShown properties. when: windowShown Adding this simple check will ensure our test won’t begin until the window has loaded. Our first testcase With our test now all ready to launch and wait for our element to load, we can write our test for converting rates. Note again that we simulate the mouse and keyboard as inputs for our test. function test_convert(data) { var inputFrom = findChild(currencyConverter, "inputFrom") var inputTo = findChild(currencyConverter, "inputTo") // Click in the middle of the inputFrom TextField to focus it mouseClick(inputFrom, inputFrom.width / 2, inputFrom.height / 2) // Click at the right end of the inputFrom TextField to clear it mouseClick(inputFrom, inputFrom.width - units.gu(1), inputFrom.height / 2) // Press key from data set keyClick(data.inputKey) // Now the field should contain the value from the data set // compare() also checks the type. We need to convert text to int if the data set holds ints. compare(parseInt(inputFrom.text), data.value) // The output field should be 0 when the input is 0, otherwise it should be greater than 0 if (data.value == 0) { // Here we compare the text to the string "0" compare(inputTo.text, "0", "0 Euros is not 0 Dollars!?!?") } else { // With verify() automatic casting can happen. verify(inputTo.text > 0) } } This test case will clear the input text field and input values. We then assert to ensure that two things occur. The first is that the text field receives and properly reacts to our input. The second assertion checks if the conversion field is properly updated with a converted value. Going deeper Did you notice our test case also has an import of data? This lets us test a few different values to make sure we have all our edge cases covered. We can do this by defining _data functions. Examine the following function in the test case. function test_convert_data() { return [ { tag: "0", inputKey: Qt.Key_0, value: 0 }, { tag: "5", inputKey: Qt.Key_5, value: 5 } ] } This function is named the same as our test_convert function, with an additional string of _data appended to the end. This instructs qmltestrunner to run our test_convert function with the given inputs; 1 run for each set of values. Another test There’s an additional test we can code to ensure our input fields behave properly. The clear button is a part of the main window and the text fields should react when it is pressed. Let’s write a testcase to ensure this behaves as expected. function test_clearButton() { var inputFrom = findChild(currencyConverter, "inputFrom") var inputTo = findChild(currencyConverter, "inputTo") // Click in the middle of the inputFrom TextField to focus it mouseClick(inputFrom, inputFrom.width / 2, inputFrom.height / 2) // Press Key "5" keyClick(Qt.Key_5) // Now the field should contain the value 0.05 because 0.0 is already in there in the beginning tryCompare(inputFrom, "text", "0.05") var clearBtn = findChild(currencyConverter, "clearBtn") mouseClick(clearBtn, clearBtn.width / 2, clearBtn.height / 2) // Now the field should be set back to "0.0" tryCompare(inputFrom, "text", "0.0") } In this testcase we utilize the tryCompare function to issue asserts in reaction to our simulation of inputs. This allows for an asynchronous event to occur, as opposed to the compare function which we used above. 
In other words, our assertion won’t fail immediately, since the inputfield needs some small amount of time to react to the button state. Notice the multiple assertions as well. If we ever decide the clear button should perform additional functions, we can update this testcase. Conclusion You've just learned how to write integrations tests for a form-factor- independent Ubuntu application for the phone. But there is more information to be learned about how to write qml tests. Check out the links below for more documentation and help.
https://docs.ubuntu.com/phone/en/apps/qml/tutorials-qml-integration-testing.html
CC-MAIN-2018-47
refinedweb
1,240
56.76
I have recently blogged about an idea how JavaBeans™ could be extended to reduce the bloat created by this widely-accepted convention in the Java world. That article was reblogged on DZone and got quite controversial feedback here (like most ideas that try to get some fresh ideas into the Java world). I want to revisit one of the thoughts I had in that article, which was given a bit less attention, namely:. I’m specifically having trouble understanding why we should use getters all over the place. Getters / setters are a convention to provide abstraction over property access. I.e., you’d typically write silly things like this, all the time: public class MyBean { private int myProperty; public int getMyProperty() { return myProperty; } public void setMyProperty(int myProperty) { this.myProperty = myProperty; } } OK. Let’s accept that this appears to be our everyday life as a Java developer, writing all this bloat, instead of using standard keywords or annotations. I’m talking about standards, not proprietary things like Project Lombok. With facts of life accepted, let’s have a look at java.io.File for more details. To me, this is a good example where JavaBean-o-mania™ went quite wrong. Why? Check out this source code extract: public class File { // This is the only relevant internal property. It would be 'final' // if it wasn't set by serialisation magic in readObject() private String path; // Here are some arbitrary actions that you can perform on this file. // Usually, verbs are used as method names for actions. Good: public boolean delete(); public void deleteOnExit(); public boolean mkdir(); public boolean renameTo(File dest); // Now the fun starts! // Here is the obvious 'getter' as understood by JavaBeans™ public String getPath(); // Here are some additional 'getters' that perform some transformation // on the underlying property, before returning it public String getName(); public String getParent(); public File getParentFile(); public String getPath(); // But some of these 'transformation-getters' use 'to', rather than // 'get'. Why 'toPath()' but not 'toParentFile()'? How to distinguish // 'toPath()' and 'getPath()'? public Path toPath(); public URI toURI(); // Here are some 'getters' that aren't really getters, but retrieve // their information from the underlying file public long getFreeSpace(); public long getTotalSpace(); public long getUsableSpace(); // But some of the methods qualifying as 'not-really-getters' do not // feature the 'get' action keyword, duh... public long lastModified(); public long length(); // Now, here's something. 'Setters' that don't set properties, but // modify the underlying file. A.k.a. 'not-really-setters' public boolean setLastModified(long time); public boolean setReadable(boolean readable); public boolean setWritable(boolean writable); // Note, of course, that it gets more confusing when you look at what // seem to be the 'not-really-getters' for the above public long lastModified(); public boolean canRead(); public boolean canWrite(); } Confused? Yes. But we have all ended up doing things this way, one time or another. jOOQ is no different, although, future versions will fix this. How to improve things Not all libraries and APIs are flawed in this way. Java has come a long way and has been written by many people with different views on the subject. Besides, Java is extremely backwards-compatible, so I don’t think that the JDK, if written from scratch, would still suffer from “JavaBean-o-mania™” as much. 
So here are a couple of rules that could be followed in new APIs, to get things a bit cleaned up: - First off, decide whether your API will mainly be used in a Spring-heavy or JSP/JSF-heavy environment or any other environment that uses JavaBeans™-based expression languages, where you actually WANT to follow the standard convention. In that case, however, STRICTLY follow the convention and don’t name any information-retrieval method like this: “File.length()”. If you’re follwoing this paradigm, ALL of your methods should start with a verb, never with a noun / adjective - The above applies to few libraries, hence you should probably NEVER use “get” if you want to access an object which is not a property. Just use the property name (noun, adjective). This will look much leaner at the call-site, specifically if your library is used in languages like Scala. In that way, “File.length()” was a good choice, just as “Enum.values()”, rather than “File.getLength()” or “Enum.getValues()”. - You should probably ALSO NOT use “get” / “set” if you want to access properties. Java can easily separate the namespace for property / method names. Just use the property name itself in the getter / setter, like this: public class MyBean { private int myProperty; public int myProperty() { return myProperty; } public void myProperty(int myProperty) { this.myProperty = myProperty; } } Think about the first rule again, though. If you want to configure your bean with Spring, you may have no choice. But if you don’t need Spring, the above will have these advantages: - Your getters, setters, and properties have exactly the same name (and case of the initial letter). Text-searching across the codebase is much easier - The getter looks just like the property itself in languages like Scala, where these are equivalent expressions thanks to language syntax sugar: “myBean.myProperty()” and “myBean.myProperty” - Getter and setter are right next to each other in lexicographic ordering (e.g. in your IDE’s Outline view). This makes sense as the property itself is more interesting than the non-action of “getting” and “setting” - You never have to worry about whether to choose “get” or “is”. Besides, there are a couple of properties, where “get” / “is” are inappropriate anyway, e.g. whenever “has” is involved -> “getHasChildren()” or “isHasChildren()”? Meh, name it “hasChildren()” !! “setHasChildren(true)” ? No, “hasChildren(true)” !! - You can follow simple naming rules: Use verbs in imperative form to perform actions. Use nouns, adjectives or verbs in third person form to access objects / properties. This rule already proves that the standard convention is flawed. “get” is an imperative form, whereas “is” is a third person form. - Consider returning “this” in the setter. Some people just like method-chaining: public MyBean myProperty(int myProperty) { this.myProperty = myProperty; return this; } // The above allows for things like myBean.myProperty(1).myOtherProperty(2).andThen(3); Alternatively, return the previous value, e.g: public int myProperty(int myProperty) { try { return this.myProperty; } finally { this.myProperty = myProperty; } } Make up your mind and choose one of the above, keeping things consistent across your API. In most cases, method chaining is less useful than an actual result value. Anyway, having “void” as return type is a waste of API scope. 
Specifically, consider Java 8's lambda syntax for methods with / without return value (taken from Brian Goetz's State of the Lambda presentations):

// Aaaah, Callables without curly braces nor semi-colons
blocks.filter(b -> b.getColor() == BLUE);

// Yuck! Blocks with curly braces and an extra semi-colon!
blocks.forEach(b -> { b.setColor(RED); });

// In other words, following the above rules, you probably
// prefer to write:
blocks.filter(b -> b.color() == BLUE)
      .forEach(b -> b.color(RED));

Thinking about this now might be a decisive advantage of your API over your competition once Java 8 goes live (for those of us who maintain a public API).

- Finally, DO use "get" and "set" where you really want to emphasise the semantics of the ACTIONS called "getting" and "setting". This includes getting and setting objects on types like:
  - Lists
  - Maps
  - References
  - ThreadLocals
  - Futures
  - etc…

In all of these cases, "getting" and "setting" are actions, not property access. This is why you should use a verb like "get", "set", "put" and many others.

Summary

Be creative when designing an API. Don't strictly follow the boring rules imposed by JavaBeans™ and Spring upon an entire industry. More recent JDK APIs, as well as well-known APIs by Google / Apache, make little use of "get" and "set" when accessing objects / properties. Java is a static, type-safe language. Expression languages and configuration by injection are the exception in our everyday work. Hence we should optimise our APIs for the use cases we deal with the most. It would be better if Spring adapted its way of thinking towards nice, lean, beautiful and fun APIs rather than forcing the Java world to bloat their APIs with boring stuff like getters and setters!

Reference: Bloated JavaBeans – Don't Add Getters to Your API from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
http://www.javacodegeeks.com/2013/02/bloated-javabeans-dont-add-getters-to-your-api.html
CC-MAIN-2013-20
refinedweb
1,388
55.24
Voice Command Calculator in Python using speech recognition and PyAudio

Here we are going to build our own voice command calculator in Python. So what is a voice command calculator? The name itself answers the question: a calculator applies an operator to operands, but instead of taking the input from the keyboard, we will take the input from the user's voice.

For example, 9 + 8 = 17

We can make a calculator with a Python program easily by taking input from the user and printing the result. But here we need to work with speech recognition.

Python Voice Command Calculator

Our goal is like this: if a user says "nine plus eight" the output will be like this:
9 + 8
17

If a user says "nine divided three" the output will be:
9 divided 3
3.0

Again, if the user says "eight multiplied by seven" the output will be:
8 x 7
56

And so on.

Steps to follow to build a voice command calculator in Python:

Here is the logic:
- At first, we will set our microphone device.
- Accept voice from the user with the mic.
- Remove noise and distortion from the speech.
- Convert the speech or voice to text.
- Now store the text as a string in a variable.
- Print the string if you wish. (Not necessary, but it will help you determine whether the text is all right or not.)
- Split the string into three parts: the first operand, the operator and the second operand.
- Now convert the operands to integers.
- Finally, do the calculation in your program, as you have got all the things you need.

Let's implement it in Python:

Requirements to build the speech/voice calculator:

We need the following:
- SpeechRecognition
- PyAudio

Set up things to start our program

You can install those with pip:
pip install SpeechRecognition
pip install pyaudio

If you are using a Mac then you will need to install both portaudio and pyaudio:
brew install portaudio
pip install pyaudio

Linux users can simply install it using:
$ sudo apt-get install python-pyaudio python3-pyaudio

One more thing you must know:
- Your mic device index.

To learn how to find the mic device index follow: Find all the microphone names and device index in Python using PyAudio

Now you are ready to jump into the coding part.

To check that you are all set and your packages are installed successfully, just try the code below:

import speech_recognition as sr
print("Your speech_recognition version is: " + sr.__version__)

Output:
Your speech_recognition version is: 3.8.1

If this runs with no errors then go to the next part.

In my previous tutorial, I have explained Get voice input with microphone in Python using PyAudio and SpeechRecognition, so in this tutorial I will not explain those things again. I will only focus on our voice calculator. If you need the full explanation just follow my previous tutorial. Here I will provide the code.

Python code to get the voice command from the user:
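The snippet below is a minimal sketch of such a listener. The device index of 1 follows the note above; the variable names and the prompt text are assumptions, so adjust them for your machine:

import speech_recognition as s_r

r = s_r.Recognizer()
# Assumption: microphone device index 1; use s_r.Microphone.list_microphone_names()
# to find the right index for your machine.
my_mic = s_r.Microphone(device_index=1)

with my_mic as source:
    print("Say now!!!")
    r.adjust_for_ambient_noise(source)   # reduce background noise before listening
    audio = r.listen(source)             # capture the voice from the microphone

# Send the captured audio to Google's free speech recognition service
# and print the recognised text (requires an internet connection).
print(r.recognize_google(audio))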
Run the program and it will print whatever you say. The fun part is this: if you say "nine plus ten" it will return the string "9 + 10".

Note that:
r.adjust_for_ambient_noise(source)
The above line is used to reduce the noise.

r.recognize_google(audio) – This will return the converted text from voice as a string.

You will need an active internet connection to run this program. (I am using Google speech recognition, as right now it is free and we can send unlimited requests.) But if you are going to create a project or do something bigger with it, you should use Google Cloud Speech: Google speech recognition is running free of cost right now, but Google does not assure us that the service will never stop.

If everything is fine till now, you can go for the next step.

Split the string and make the operation:

Here we face the main difficulty. We got a string, for example "103 - 15". This is a string, so we can't simply do an operation on it. We need to split the string into three parts, and then we will get three separate strings: "103", "-", "15". We need to convert "103" and "15" to int. Those are our operands. And the "-" is our operator.

Use the operator module. This will make our task easy.

import operator

The signs we use in our program (+, -, x, divided, etc.) are operators. For each spoken operator we map a particular method (see the get_operator_fn function in the sketch below): for "divided" => operator.__truediv__, and for "Mod" or "mod" (during speech-to-text conversion the first character sometimes comes back capitalised) => operator.mod. You can set your own commands too if you wish.

return get_operator_fn(oper)(op1, op2)

This will calculate your result.

So here is the full code of this voice command calculator in Python:
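What follows is a sketch that puts the pieces described above together. The helper names get_operator_fn and eval_binary_expr, the operator mappings and the printed messages follow the text of this tutorial, but the exact listing is an approximation rather than a verbatim copy of the original code:

import operator
import speech_recognition as s_r

def get_operator_fn(op):
    # Map the token returned by Google's speech-to-text to an operator function.
    # "plus", "minus" and "multiplied by" usually come back as "+", "-" and "x".
    return {
        '+': operator.add,
        '-': operator.sub,
        'x': operator.mul,
        'divided': operator.__truediv__,
        'Mod': operator.mod,
        'mod': operator.mod,
    }[op]

def eval_binary_expr(op1, oper, op2):
    # Convert the two operands to integers, then apply the mapped operator.
    op1, op2 = int(op1), int(op2)
    return get_operator_fn(oper)(op1, op2)

print("Your speech_recognition version is: " + s_r.__version__)

r = s_r.Recognizer()
my_mic = s_r.Microphone(device_index=1)  # assumption: device index 1, as noted above

with my_mic as source:
    print("Say what you want to calculate, example: 3 plus 3")
    r.adjust_for_ambient_noise(source)   # reduce background noise
    audio = r.listen(source)

my_string = r.recognize_google(audio)    # e.g. "9 + 8"
print(my_string)
print(eval_binary_expr(*(my_string.split())))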
Output:
Your speech_recognition version is: 3.8.1
Say what you want to calculate, example: 3 plus 3
11 + 12
23

To make a multiplication simply say "number1 multiplied by number2". Here is a screenshot (voice command calculator in Python): for example, say "16 multiplied by 10". "Multiplied by" will be automatically converted to "x" by Google's speech recognition.

To get mod just say "17 mod 9" and it will give you the result.

For division just say "18 divided 7". Here you can see I have not used "divided by", because Google's speech recognition will not convert that to "/", and we are going to split our string into three parts. So if we say "number1 divided by number2" it can't be split into three parts: "number1", "divided", "by", "number2" gives four parts, and four parts will give us an error because the function can accept only three parameters:

def eval_binary_expr(op1, oper, op2):

If you get an error, check your converted string. I have used print(my_string) to check whether I got my desired string or not.

Please note that: my audio input (microphone) device index is 1. You have to put your device index in your program. To learn how to find the device index check this: Find all the microphone names and device index in Python using PyAudio

Great! Very cool! Thank you for posting
Good job guys
Does the same code work for Raspberry Pi as well??
Yeah, that will also work for Raspberry Pi. The only thing you need to do is set up your microphone. Open the terminal window and run lsusb. It will show you all the USB devices connected to your machine. That's it. You can also set the volume of your mic device high; in order to do this you may run alsamixer. Hope these tips will help you to perform this voice operation on your Raspberry Pi.
Thank u so much…
What if I want to create my calculator to evaluate an expression with more than two operands and operators… Eg 4*5+3
I was thinking the same as you before posting the content and I have got a good solution. You can take an optional parameter or argument. But the best way will be if you find a way to evaluate the string as a mathematical operation. And the easiest way to do this in Python is eval('4*5+3'). I hope it will help you out. If you have further queries please ask.
Bro I need GUI for this app.
Click on the contact button, type your requirements and send it. We will get back to you.
My voice calculator is not calculating any operation, can u just help me out!!
Yes sure. Check your email. Just send the screenshot of your problem in reply to that email.
The idea is very good….. Could you please share the demo…
What kind of demo do you want? A GUI or just a video demo of this voice command calculator?
Video demo for voice calculator…….
The code is provided with the full tutorial, thus you can test it on your machine. I hope it will help you out.
My voice calculator is not calculating any operation, plz help me
Are you getting any error? If you are not getting any error, that means the server is not responding.
Thanks, your tutorials are awesome. I am also looking into speech-to-text recognition, like how to set it all up in Python. I tried many links but they explained less than little. So could you please help with your own tutorial and also an email address, so I can send my work, i.e. what I have done till now.
Can I run it in PyCharm?
TypeError: eval_binary_expr() missing 1 required positional argument: 'op2' – this error occurs
Kindly check whether you have defined 'op2' properly or not. And yes, you can obviously run it in any IDE, so PyCharm is also fine for running this program.
Please explain the above code briefly. You have used two different user-defined functions; that part I'm not getting properly.
Thanks for the tutorial! My recent project is totally based on it
Can you please send the demo where the voice can be heard? Please do help me with it!
I think, in line no 6, you should write with s_r.Microphone() as source: instead of with my_mic_device as source: . With this, it is working fine on my system.
Ha-ha it works only if you say '5 + 5' or '5 x 5', but if you say '5 + 5 + 5' it prints an error – "TypeError: eval_binary_expr() takes 3 positional arguments but 5 were given" 🙂
Yes, it works for just 3 parameters
I even tried eval() but that didn't work
My code is throwing an error: "my_string not defined". Please help
Write this in your code: my_string = r.recognize_google() – or see the code above carefully
Can it be implemented on Raspberry Pi? Please reply
Yes it can… all you need to do is make sure you set your microphone input
What to do if I want to perform this all without an Internet connection?
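As a side note on the multi-operand question raised in the comments above: the author's eval('4*5+3') hint could be sketched roughly as follows. This is an illustration only and not part of the original tutorial; the word-to-symbol mapping and the character whitelist around eval() are assumptions:

# Sketch: evaluate a longer spoken expression such as "4 x 5 plus 3".
# Assumes the recognised text is already available (see the full sketch above).

def eval_spoken_expr(text):
    # Replace spoken operator words with Python symbols; Google usually
    # returns "+", "-" and "x" already, so only a few words need mapping.
    replacements = {'x': '*', 'X': '*', 'divided': '/',
                    'mod': '%', 'Mod': '%', 'plus': '+', 'minus': '-'}
    tokens = [replacements.get(tok, tok) for tok in text.split()]
    expr = ' '.join(tokens)
    # eval() runs arbitrary code, so only accept digits, operators and spaces.
    if not all(ch in '0123456789+-*/%. ' for ch in expr):
        raise ValueError("unexpected token in: " + expr)
    return eval(expr)

print(eval_spoken_expr("4 x 5 + 3"))   # prints 23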
https://www.codespeedy.com/voice-command-calculator-in-python/
CC-MAIN-2022-27
refinedweb
1,664
73.98