Dataset columns:
- text: string, lengths 454–608k
- url: string, lengths 17–896
- dump: string, 91 values
- source: string, 1 value
- word_count: int64, 101–114k
- flesch_reading_ease: float64, 50–104
Hi everyone, We’ve recently started the Early Access Program for CLion 1.1. It includes a lot of new goodies (like LLDB support on OS X, for example). However, we’d also like to roll out the CLion 1.0.5 bug-fix update today. First of all, this build addresses an unpleasant regression with member functions being reported as not implemented when a class is located in a namespace. Among other fixes, you can find a set of CMake-related improvements:
- Bundled CMake 3.2.3 on all platforms.
- Incorrect unescaping behavior in CMake when cmake_policy(SET CMP0005 OLD) is used.
- Problem when using the $(subs..) expression in compiler flags.
- Incorrectly split compiler command line.
The build also includes a fix for autocompletion when using the auto keyword (CPP-2998), adds an Unused Class inspection, and more. See the full list of changes. As usual, a patch update is available, or you can download and install CLion 1.0.5 directly from our website. Happy developing! The JetBrains CLion Team
https://blog.jetbrains.com/clion/2015/07/clion-1-0-5-bug-fix-update/
CC-MAIN-2020-10
refinedweb
166
61.73
A LineIntersector is an algorithm that can both test whether two line segments intersect and compute the intersection point if they do. #include <LineIntersector.h> Computes the "edge distance" of an intersection point p in an edge. The edge distance is a metric of the point along the edge. The metric used is a robust and easy-to-compute metric function. It is not equivalent to the usual Euclidean metric. It relies on the fact that either the x or the y ordinates of the points in the edge are unique, depending on whether the edge is longer in the horizontal or vertical direction. Compute the intersection of a point p and the line p1-p2. This function computes the boolean value of the hasIntersection test. The actual value of the intersection (if there is one) is equal to the value of p. Computes the "edge distance" of an intersection point along the specified input line segment. Computes the index of the intIndex'th intersection point in the direction of a specified input line segment. Returns the intIndex'th intersection point. Computes the intIndex'th intersection point in the direction of a specified input line segment. Returns the number of intersection points found. This will be either 0, 1 or 2. Tests whether the input geometries intersect. Tests whether either intersection point is an interior point of one of the input segments. Returns true if either intersection point is in the interior of one of the input segments. Tests whether either intersection point is an interior point of the specified input segment. Returns true if either intersection point is in the interior of the input segment. Tests whether a point is an intersection point of two line segments. Note that if the intersection is a line segment, this method only tests for equality with the endpoints of the intersection segment. It does not return true if the input point is internal to the intersection segment. Tests whether an intersection is proper. The intersection between two line segments is considered proper if they intersect in a single point in the interior of both segments (i.e. the intersection is a single point and is not equal to any of the endpoints). The intersection between a point and a line segment is considered proper if the point lies in the interior of the segment (i.e. it is not equal to either of the endpoints). Returns false if both numbers are zero. Forces the computed intersection to be rounded to a given precision model. No getter is provided, because the precision model is not required to be specified.
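To make the "edge distance" metric described above concrete, here is a minimal Python sketch modeled on that description; it is an editorial illustration rather than the actual GEOS C++ implementation, and the function name and tuple-based points are assumptions.

def edge_distance(p, p0, p1):
    # Non-Euclidean "edge distance" of intersection point p along segment p0-p1:
    # measure along the x ordinate if the edge is longer horizontally,
    # otherwise along the y ordinate, which keeps the metric robust.
    dx = abs(p1[0] - p0[0])
    dy = abs(p1[1] - p0[1])
    if p == p0:
        return 0.0
    if p == p1:
        return dx if dx > dy else dy
    return abs(p[0] - p0[0]) if dx > dy else abs(p[1] - p0[1])

# Example: a point halfway along a mostly horizontal edge
print(edge_distance((1.0, 0.1), (0.0, 0.0), (2.0, 0.2)))  # 1.0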
https://geos.osgeo.org/doxygen/classgeos_1_1algorithm_1_1LineIntersector.html
CC-MAIN-2019-09
refinedweb
431
56.66
A few points: First, I think you have misstated the cost of the products. You now have two NetWare 5.1 licenses, with a number of users each. Since you mentioned 100 users, using that as the target, you should use upgrade pricing, and based on your other comments, I would assume you should use academic pricing. You would not have to upgrade both sets of user licensing, just one, because the license model changes to per-user-object. You also do not have to buy another NetWare server license after you get the first one, because the license covers an unlimited number of servers. That difference in licensing between Windows servers/CALs and NetWare eDirectory users should be mentioned for fairness - if you want to add more servers, it won't cost anything more than the hardware, unlike Windows. Second, an ALA e-license at CDW (promo) for a 100-user upgrade is a whoppin' $635 US. A media set goes for $11.48. That's quite a bit less than the $4700 you quoted, which is pricing for charitable organizations. Since you quoted academic pricing for Windows, you should compare apples-to-apples, unless for some reason the definition differs between the two companies. Third, I wonder what type of Windows user CAL you quoted - per seat or per server? What is the cost of the server license, besides the user CALs - last I checked, Microsoft charges folx for each server they install... Fourth, you are now running a NetWare network, not a Windows network. Your workstations are Windows, but authenticate to NDS, and your production servers are not domain members, so you don't have a Windows network. Fifth, you have definitely skewed the report towards the preference of the other 2 support folx, and presumably, yourself. One thing that should be mentioned is that, although it does take more training to support more OSes, a mixed environment based on NetWare has a lower admin-to-user ratio than does a pure Windows environment, and once the NetWare back-end is set up, most of the time an admin spends is chasing down Windows problems. Sixth, the study comparing migrating Windows to Linux vs. upgrading Windows - how does that have any bearing on anything? You don't have to migrate squat to Linux. NetWare has always been and will continue to be the best at multiplatform support, including Windows, and that won't change with OES. If you WANT to migrate your desktops to Linux, it will be easier if your network remains a NetWare network. Seventh, as far as I know, the only radical departure from the past in OES, besides the name, is that it will be available on either the NetWare kernel or the Linux kernel. It will still use eDirectory, it will still have superior filesystem security, it will still be more compatible with any platform you want to use on the desktop, and the management tools will be consistent with what has been in use with NetWare 6.5. Eighth, NetWare training has always been available primarily through the Novell Training partners. That's not any different than before. What's new is that you can go to Microsoft training classes at a lot of public institutions. The number of books on a shelf at a particular bookstore doesn't seem to me to be that valid a measure for choosing continuity over change. Remember, one of the first Microsoft books to be popular was Windows for Dummies... I will post more if I have time. Back to work...
Technically, this was a "Novell Certified NetWare Engineer" "When comparing Novell and Microsoft please realize there are two aspects of each vendor’s product to be aware of: The server operating system (OS) and the directory service." I DISagree with that statement. With Novell's products, the NOS platform (NetWare) and the Directory Service (eDirectory) are not linked in the same way as M$ W2K and AD. AD is ONLY available on the W2K/W2K3 platform. eDirectory is available on a multiplicity of platforms, *including* NetWare, W2K/W2K3, Linux (2 flavors), Solaris, AIX, HP-UX, et al. This is, IMO, an important difference, in that using AD *locks* you into the M$ OS platform. Choosing eDirectory does not lock you into a platform; therefore, you retain flexibility to respond to situations and needs down the road, ones you don't know about and can't predict right now. "....Active Directory as its directory service." While it is true that AD is *marketed* as a Directory Service, objectively it's just the same old tired NT4 Domains. All they added was an extensible schema and transitive-trust relationships (but it's still NT4 trust relationships). It is a 2-D namespace, just a 3-D view (kinda like drawing interlocking squares on a piece of paper to simulate a cube). In contrast, eDirectory is an actual Directory Service, from the ground up, with an actual 3-D database and far more data integrity mechanisms than you find in AD ("tombstones" are just plain lame, I can't think of a better word to describe them). AD lacks partitioning, timesync, backlinks, and on-the-fly repair. "A large percentage of the [Novell] customer base has switched to Microsoft NT, Microsoft 2000 and most recently to Microsoft Windows 2003 server systems." And a large percentage of them have come to regret the decision, and even reverse it. R.W. Bennett in the UK. Central Michigan Hospitals in the US. Heritage Oaks Bank in California. And those are just three I can think of. Read Linda Musthaler's column expressing her reflection that it was a poor move to make (). Ask Anheuser-Busch if they'd do their migration over again. Check out Gartner Group's WestCorp Financial case study. If 53% of people jump off a bridge, does that make it a good idea? At one time, IIS hosted 50% of the 'Net's websites - it's less than 25% now. What does that tell you? "Many consulting companies have been hired to recommend corporate wide networking strategies and have selected Microsoft Windows...." Yeah, they did, because they knew they were practically GUARANTEED a steady stream of callbacks, with the attendant billable hours (and that's the name of the game in consulting: billable hours), to constantly fix, repair, re-install and troubleshoot the environment. You think they recommended that because it was best for the customer? Check out the NWFusion Forum following their moronic "King of the NOS Hill" article - you'll see people in the consulting field quietly admit that they recommended Redmond's dubious warez because they knew it would result in higher hardware sales and more billable hours (). You want to base your decision on THAT? "Why buy two operating systems?" Because a software monoculture is dangerous. Just ask all the companies that had their entire corporate network brought to its knees by a 16-year-old twerp in Germany. Slammer, anyone? Netsky? Phatbot? The litany goes on and on, and they all leveraged the porous, joking nature of Windoze "security" (an oxymoron, like "military intelligence").
And the assertion that "Windoze is most-hacked because it's most prevalent" is a fallacy. If that were the case, then the Apache webserver, which runs 2/3rds of the websites in the world (Source: Netcraft), would be the most-hacked webserver. But almost all the webserver hacks are on IIS. My Apache logs are littered with IIS hack attempts. "The [NetWare v5.1] servers do little else than provide authentication, file security and file storage. They do NOT host any applications at this time." Hardly an OS limitation. They could easily have hosted an E-Mail system (e.g. NetMail, GroupWise) and a webserver (Netscape Enterprise). There's NOTHING running on the Windoze servers that could not be running on the NetWare servers, in terms of services. "....the Dell servers were designed to operate only with Novell Netware 6.5 and later ...." Completely irrelevant to the fact that they could run v5.1 just fine. Unlike Windoze, NetWare is fairly indifferent to the hardware it runs on - it just runs. "....study by The Yankee Group shows the cost of migrating from Windows to Linux is three to four times as much as upgrading from one Windows version to another." 1) Find out who FUNDED that study. Dollars to doughnuts the M$ marketing folx had a hand. That has been the ONLY way they have gotten any significant favorable studies. 2) All you're looking at is INITIAL cost, not Total Cost of Ownership (TCO). And TCO study after TCO study (Gartner Group, Burton Group) - the actual independent ones, not funded by M$ (or Novell) - have consistently shown that Windoze is the highest-TCO environment. It consistently consumes more hardware (more capital outlay), has more downtime, and takes more effort to administer. If something costs $1,000 less to buy, but then costs you $5000 more to own, have you saved any money, or have you cost yourself $4000? Like any good crack dealer, Redmond makes their initial, up-front costs low. When they have you hooked, then you pay. "Yes we are using Netware for file storage. But that can be done just as easily and efficiently with Windows." Wanna bet? In Windoze, try hiding the existence of a sub-directory from a user who has any access to the parent directory. That is, if you have \\SERVER1\DATA\STUFF and everyone has, say, Read access in that directory, create \\SERVER1\DATA\STUFF\PRIVA "Both Novell and Microsoft directory services have redundant server capability in the operating system and directory services design." Yeah, right. Try accessing your Windoze user profile when the server it's stored on is down. Can't do it - Windoze stores its user profiles in files on a server. Using ZENworks, desktop profiles are stored in NDS, and are available as long as the Directory Service is available. "Security" You overlook the obvious scenario of a staffer bringing an infected computer inside your firewall and infecting your entire network with the latest Windoze virus that isn't stopped by your scanners yet; or Little Johnny, who has the time to keep up with all the latest Windoze hacks, rootkitting your Windoze servers from his iPaq. M$ has admitted that Windoze is not going to be secure before 2011. How many apps, apps you probably run today, require the user to be logged in as an "Administrator" equivalent to run? "User Login Interface" Your discussion completely ignores Native File Access Protocols, which allows the NetWare server to appear as a CIFS server. You also completely ignore the differences in functionality and manageability.
"The Novell Netware 6.5 or Novell OES uses, like Microsoft 2003, an X.500 LDAP based directory service." That's not true - in EITHER case. First, LDAP is a directory ACCESS protocol, a standardized way to get at information in the directory. It has nothing to do with the structure or implementation of the actual directory service. They are both proprietary databases - altho I continue to say that AD is a "Directory Service" in marketing only. It is true that both environments offer LDAP interfaces. Anyone who claims any technological superiourity of AD over eDirectory is either a paid M$ shill or has no understanding of the technical issues. AD is nothing but the same old NT4 Domains. "Scalability" You ignore the fact that Windoze, no matter the enviroment size, *consistently* takes 2x to 3x as much hardware/time/effort to accomplish the same tasks as an equivalent NetWare environment. Don't believe me? Look at the Gartner Group Westcorp Financial case study. Their Windoze servers cost an average of 2x what the NetWare ones cost, and the cost of managing, servicing and maintaining those Windoze servers averaged almost 3 times as much, annually. You think you'll magically avoid that reality? "Support and Training of Staff....Here Windows has a huge advantage." Yes, they do. They have a seemingly endless supply of suckers willing to shell out big bucks for an environment they tout as being so easy and cheap to manage. Well, if its so easy and cheap, why do you need 22 weeks of extensive W2K3 training? This is possibly the WORST reason to move to W2K3. "If we moved to a Windows Active Directory each staff work station would be reconfigured to use the Microsoft client instead of the Novell client." That can already happen, as I pointed out with NFAP. "Right now file security is not exactly where we would want it." And you're fooling yourself if you think it'll get better with a switch to W2K3. File security is a crude subset of what you have in NetWare. And you are reduced to Users and Groups as your only security principals - forget leveraging your Directory Service to make your security administration easier. "The cross over to Active Directory can be achieved without any significant library trauma." Right. Wait until you need to do your first directory repair, and you have to REBOOT the server into its special "directory repair" mode. In the NDS world, you can do it on the fly, with NO impact the logged-in users. "The upgrade to Netware OES might be more problematic in that we don’t quite know the steps we would have to take at this time." Utterly brilliant... "We don't know what we're talking about, but since we've already made up our minds we're not going to bother with research. Facts just get in the way, and are overrated." "Licensing Cost" You overlook the fact that you need a SERVER license for each W2K3 server, ON TOP OF each CAL. And then there are the ongoing licensing costs of M$ Licensing 6.0. You also seem to be looking at the wrong price sheet. Seems to me, if you are an "academic instritution" like you are claiming for the M$ licensing, you should be looking at - you can get NetWare/OES, GroupWise and ZENworks ALL for $2/user. "Staff Bias" That's an understatement. "To select Novell’s OES is not simply a server upgrade but a completely new direction to move in." WRONG. First, there's NetWare v6.5 out TODAY, and which would be a painless upgrade. 
Next, OES will offer you a CHOICE (something you're never going to get from Redmond) of using a Linux kernel (the "new direction") OR a NetWare kernel (the same basic technology you're already using). You are grossly inaccurate to cast that choice as a "completely new direction". If you stayed with the NetWare kernel, then all OES will do is change the product name and add the new features. "....open sourced technology innovation but at greater cost, effort, and peril to the library." Cripes, what FUD. The Windoze virus of the week is not a "peril"? The higher hardware purchase (capital outlay), maintenance (ongoing costs) and administration (staff time = money) costs of Windoze are not a "greater cost"? The "critical patch of the day" doesn't require a greater "effort"? Seems to me you're VERY selective about your concerns. Not possible. No DOS module of any sort runs on NetWare. Perhaps the DOS-based accounting modules (that presumably ran on a Microsoft-OS client PC) would only speak to a Pervasive database housed on a NetWare box using SPX calls. Therefore, this tidbit is also not germane to the decision to change your infrastructure from a NetWare/NDS network to a Windows network, and also seems biased because it implies that the NetWare platform in general is somehow outdated because the accounting software used DOS modules. Not true. NetWare 7 is a component of Open Enterprise Server. As Novell has repeated, NetWare is NOT dead. I heard that at BrainShare the DAY AFTER they announced OES. If you think Microsoft has a long life ahead of them, you should have heard what IBM told us at BrainShare 2004: Windows is dead as far as ANY IBM customer is concerned - they have already migrated 2 MILLION servers off Windows to SuSE Linux and are migrating another 6 million next year. In addition, ALL IBM customers are being migrated OFF Exchange and on to... wait for it... GroupWise. "Right. Wait until you need to do your first directory repair, and you have to REBOOT the server into its special "directory repair" mode. In the NDS world, you can do it on the fly, with NO impact on the logged-in users." Actually, with eDirectory 8.x and above, you can do directory repairs on a NetWare system without locking the database, thus not even affecting NON-logged-in users who are attempting to log into the network during the repair. "Paradigm included DOS accounting modules which would run only on Netware servers at the time." That's bull. I've got a DOS-based system as well (AMSI) that will work under NetWare 5.1 as well as 4.11 - it's because the Pervasive SQL engine can be made to be backwards compatible with the BTrieve database. The ONLY underlying factor is the transport protocol. I've never tried it on pure IP (something Windows 2003 STILL lacks due to the fact it encapsulates NetBIOS in IP) but I know that it works with IPX. I'll let you know later this year if I can get an old 1998 database written for BTrieve on a NetWare 4.11 server to work with NetWare 6.5 in pure IP. I'm betting I can do it. Other than that - I fully agree with PsiCop's assessments.
He especially hit the AD pretty good but missed a few points: Security Equivalences in AD: In eDirectory, you can use an OU for handling security (everyone in the .OU=Accounting can use the accounting printer). In AD, this is not possible, you have to rely on Groups still. Furthermore, what does this supervisor in this AD Domain have access to:
.O=ACME
 |
 +.OU=Accounting
 |
 +.OU=Marketing
   |
   +.CN=Supervisor
Does he have supervisor rights to just the Marketing OU? Nope - he's got supervisor rights in the ENTIRE Domain - from .O=ACME all the way down. So much for setting up local office "admins" that you don't want to have rights up the tree. Partitioning in AD: You can, in fact, partition AD, as long as you break up your forest into individual AD Domains. Meaning that you can't have one Domain and partition it from there based on the OU's within the Domain, you have to create separate Domains - then the old Trust Relationships nightmare is back to haunt you. Granted, AD creates automatic trusts between parent and child Domains but it does not create trusts between sibling Domains automatically. Static Inheritance in AD: Security is still token-based. Rights are statically inherited. Any change causes massive replication across the network. Change those rights and the user has to LOG OUT and then back in to see them. Not so with eDirectory. WAN Replication: Traffic in AD is about 10 times that of eDirectory. Gets nasty when you have hundreds or thousands of replications happening (such as students logging into the network at a school with multiple campuses across town). Database Size: AD database size is about 10 times that of eDirectory - for the SAME INFORMATION. Increase your hard drive space. Repair: a new version of AD is supposed to allow online repair without reboot. It's not in Win2K3 yet. Supposedly, however, this version of AD is not compatible with the current version of AD. This means upgrading AD across the board. Vulnerabilities: Firewalls notwithstanding, many hacks can be made against a Windows server via HTTP (port 80). That'll pretty much bypass the firewall. AD LDAP Responses: WHEN AD responds to LDAP queries (it's pretty damn slow compared to eDirectory) it's usually wrong about 70% of the time. AD Requirements: There are 5 MAJOR components to AD, services that MUST be available at all times in order for AD to function properly. Lose ONE of those services and, regardless of whatever other servers you have running, AD is no longer functional. AD Scalability: AD can scale to millions of objects. Big deal. eDirectory has scaled to over 1.6 BILLION objects in a single tree. This means that eDirectory is going to have the better performance even in smaller environments. eDirectory 8.8 will remove the 1.6 billion limitation. Either way, it's enough to put everyone in China in a single tree. But it's the TCO he should be paying attention to. eDirectory IS X.500 based, has been from day one (over 10 years ago), and IS compliant with LDAP v.3. Natively. Active Directory is NOT X.500 based. It is legacy Microsoft Domain based, with some conformance with the X.500 spec solely because of the hierarchy of DNS, which was kludged on top of the old domain model to make it seem X.500-based. It is not LDAP v.3 compliant, hence the wonderful track record of a 70% failure rate in LDAP lookups DSPoole mentioned. And I agree that because AD is essentially NT Domains, it's nowhere near X.500. I got a split. Thanks. You guys are great and I learned a lot about all of it!!!
Ultimately, you're the one who has to live with the decision. My goal here has been to make sure you had some facts and were not relying on the FUD spewed out by the M$ Marketing Machine. And they put out a lot of FUD. Personally, I would not be comfortable trusing my IT enterprise to a company like Bill's - they engage in predatory pricing, they don't hesitate to use their monopoly power to crush their competition (which ends up denying you alternatives to their products), and their licensing terms are rapacious and draconian. And I used to like and recommend their products - I wouldn't have used anyone else's C compiler, back when I was slinging code (and I cut my teeth on Borland's products). But the company I admired back then is not the company today - its changed, and not for the better. Everything would have been fine between me and Microsoft if it weren't for their FUD regarding NetWare and their "break the client" patches and fixes activity from the mid '90's to date. They could have stuck with being a good client OS provider, but no, they had to take the worst possible option - turn their decent client OS into a "server" OS, which is the exact opposite of what happened with *nix and what's happening with Linux (which is taking a good, solid server OS and turning it into a client OS.) I wasn't there for the rift with IBM over the direction of NT, but based on where IBM went with OS/2 I'd guess it had a lot to do with the built-in vulnerabilities because M$ wanted it to look and feel like Windows and be "easy to use" without regard for stability or security. Even though OS/2 failed, it was not because of inferior technology, but again from the M$ marketing strategy and their "break whatever isn't ours" activity. I, unlike those you were hearing from in the other TA, have continued to work with NetWare AND Windows, in a mixed environment. My experience has not been with the huge companies that PsiCop and DSPoole work for, though - mine has been at companies with in the neighborhood of 100 users - give or take 20 to 50. That's a bit closer to what you're talking about. Speaking from experience relevant to your size situation (I work for a manufacturing company) this size company is NOT too small for NetWare and eDirectory, by any means. We have, in addition to NetWare, NT, Win2K Server, Win2K3 Server, and RedHat Linux ES, with primarily Win2K Pro on the desktop. And I am the only Network Administrator. We have a Security guy that does the WAN and internet, and handles MS Exhange (he can have it!) but I do all the rest. So much for the simplification argument. I don't see how we'd be able to manage without adding to staff if we were to go all Microsoft all the time... Good luck to you, and whatever your decision, come visit us again at EE. We have 400+ users on the WAN, another 200 users off WAN but using networked services (and I'm about to ZENworks 6.5 the lot of them) and then over 750 additional accounts in eDirectory and GroupWise on top of the 650 corporate users. I just built another Windows 2000 server today (to replace a Windows NT box) and a NetWare 6.5 box to replace a 5.1 box. Scattered across the U.S. (I'm in San Diego today). Moving to an all IP environment. hey, I found VNC hosts for OS/2 Warp and NetWare, guess which remote control software I'm moving to? 
Client side: Mostly W2K with some NT v4
Population: About 1,400 total client workstations and about 1,200 users scattered across about 20 WAN-connected locations (not counting telecommuters and "interstate offices" in other states operating out of people's homes with laptops)
Network: Pure IP, IPX was eliminated some years ago; NDPS-based printing for several hundred printers; ZENworks v4 moving to v6.5; switched Ethernet locally, a mix of F-R and xDSL across the WAN
Our biggest push right now is exploring Linux as a platform to migrate to from WINDOZE
https://www.experts-exchange.com/questions/21133825/Does-this-documnet-have-merit-Netware-OES-vs-Windows-2003.html
CC-MAIN-2018-26
refinedweb
4,343
63.49
Terminate threads during application or not? Posted by jpluimers on 2019/10/02 I got an interesting question a while ago: should an application terminate (anonymous) threads or not?
- [WayBack] delphi – How do I reliably wait on a thread that has just been created? – Stack Overflow: you cannot, as Execute might not be called, and there is a race condition on Started.
- [WayBack] Debugging Multithreaded Applications with Delphi: WaitFor might not return (which you cannot get around because all overloads of CheckThreadError are non-virtual)
- [WayBack] delphi – terminate all the threads (TThread) on closing application – Stack Overflow
- [WayBack] multithreading – How to terminate anonymous threads in Delphi on application close? – Stack Overflow: you cannot, because referring to any of those threads is a race condition.
A problem is that a thread might not execute at all unless you call WaitFor before Terminate is called. The reason is that the internal function ThreadProc does not start Execute if the thread is already terminated. The ThreadProc in the System.Classes unit is an ideal place to set breakpoints in order to see which threads might start. Other useful places to set breakpoints:
- TAnonymousThread.Execute
- TExternalThread.Execute
Execute not being called by ThreadProc is a bug, but it is not documented because QC is gone (taking the entry below with it), it is not in QP, and the docwiki never got updated. Given QC has so much information, I am still baffled that Embarcadero took it down. Sergey Kasandrov (a.k.a. serg or sergworks) wrote in [WayBack] Sleep sort and TThread corner case | The Programming Works about this bug and refers to WayBack: QualityCentral 35451 – TThread implementation doesn’t guarantee that thread’s Execute method will be called at all. The really bad thing is the WayBack: QualityCentral Resolution Entries for Report #35451: the “As Designed” resolution implies the design itself is wrong. In his post, sergworks implemented sleep sort in Delphi (a minimal Python sketch of the same idea appears after the link lists below). Related:
- WayBack: Genius sorting algorithm: Sleep sort
- [WayBack] Reddit – 4chan: Sleep sort : programming
- [WayBack] 4chan: Sleep sort : programming
- [WayBack] Sorting algorithms/Sleep sort – Rosetta Code
- [WayBack] What is the time complexity of the sleep sort? – Stack Overflow
Note that application shutdown is a much-debated topic. Best is to do as little cleanup as possible: your process is going to terminate soon anyway. No need to close handles or free memory: Windows will do that for you anyway. See for instance:
- [WayBack] The old-fashioned theory on how processes exit – The Old New Thing
- [WayBack] Quick overview of how processes exit on Windows XP – The Old New Thing
- [WayBack] Changes to power management in Windows Vista – The Old New Thing
- [WayBack] Now that Windows makes it harder for your program to block shutdown, how do you block shutdown? – The Old New Thing
- [WayBack] A process shutdown puzzle – The Old New Thing
- [WayBack] A process shutdown puzzle: Answers – The Old New Thing
- [WayBack] A process shutdown puzzle, Episode 2 – The Old New Thing
- [WayBack] Why does Internet Explorer not call DLL_PROCESS_DETACH on my DLL when I call ExitProcess? – The Old New Thing
- [WayBack] When DLL_PROCESS_DETACH tells you that the process is exiting, your best bet is just to return without doing anything – The Old New Thing
- [WayBack] Why can’t I delete a file immediately after terminating the process that has the file open?
– The Old New Thing - [WayBack] During process termination, the gates are now electrified – The Old New Thing - [WayBack] How my lack of understanding of how processes exit on Windows XP forced a security patch to be recalled – The Old New Thing - [WayBack] The old-fashioned theory on how processes exit – The Old New Thing - [WayBack] Some reasons not to do anything scary in your DllMain – The Old New Thing Related to waiting: Related to executing: - [WayBack] TThread.Create Constructor - [WayBack] CheckThreadError Method - [WayBack] TThread.Destroy Destructor - [WayBack] TThread.Execute Method - [WayBack] TThread.OnTerminate Event - [WayBack] TThread.Resume Method - [WayBack] TThread.Suspend Method - [WayBack] TThread.Terminate Method - [WayBack] TThread.Terminated Property - [WayBack] TThread Members - [WayBack] System.Classes.TThread.Started – RAD Studio API Documentation –jeroen
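Since sleep sort is referenced above as the case that exposed this TThread corner case, here is a minimal Python sketch of the idea (an editorial illustration, not sergworks' Delphi code): each value gets its own thread, and the result is only complete because the main thread waits for every worker; a thread that is terminated, or never started, simply drops its value.

import threading
import time

def sleep_sort(values):
    # Each worker sleeps proportionally to its value, then records it.
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * 0.01)
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # the WaitFor analogue: without this, values may be missing
    return result

print(sleep_sort([3, 1, 2, 0, 5, 4]))  # [0, 1, 2, 3, 4, 5]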
https://wiert.me/2019/10/02/terminate-threads-during-application-or-not/
CC-MAIN-2021-04
refinedweb
670
52.19
Python’s built-in hex(integer) function takes one integer argument and returns a hexadecimal string with prefix "0x". If you call hex(x) on a non-integer x, it must define the __index__() method that returns an integer associated with x. Otherwise, it’ll throw a TypeError: object cannot be interpreted as an integer.
Input : hex(1)    Output : '0x1'
Input : hex(2)    Output : '0x2'
Input : hex(4)    Output : '0x4'
Input : hex(8)    Output : '0x8'
Input : hex(10)   Output : '0xa'
Input : hex(11)   Output : '0xb'
Input : hex(256)  Output : '0x100'
Python hex()
hex() for Custom Objects
If you call hex(x) on a non-integer or custom object x, it must define the __index__() method that returns an integer associated with x.
class Foo:
    def __index__(self):
        return 10

f1 = Foo()
print(hex(f1))  # '0xa'
How to Fix “TypeError: ‘float’ object cannot be interpreted as an integer”?
Python’s hex() function can only convert whole numbers from any numeral system (e.g., decimal, binary, octal) to the hexadecimal system. It cannot convert floats to hexadecimal numbers. So, if you pass a float into the hex() function, it’ll throw a TypeError: 'float' object cannot be interpreted as an integer.
>>> hex(11.14)
Traceback (most recent call last):
  File "<pyshell#20>", line 1, in <module>
    hex(11.14)
TypeError: 'float' object cannot be interpreted as an integer
To resolve this error, you can round the float to an integer using the built-in round() function, or you can write your own custom conversion function:
How to Convert a Float to a Hexadecimal Number in Python?
To convert a given float value to a hex value, use the float.hex() function that returns a representation of a floating-point number as a hexadecimal string including a leading 0x and a trailing p and the exponent. Note that the exponent is given as the power of 2 by which the (hexadecimal) mantissa is scaled: for example, 0x1.11p+3 is scaled as 0x1.11 * 2^3 using the exponent 3.
>>> 3.14.hex()
'0x1.91eb851eb851fp+1'
>>> 3.15.hex()
'0x1.9333333333333p+1'
Alternatively, if you need a non-floating-point hexadecimal representation similar to most online converters, use the command hex(struct.unpack('<I', struct.pack('<f', f))[0]).
import struct

def float_to_hex(f):
    return hex(struct.unpack('<I', struct.pack('<f', f))[0])

print(float_to_hex(3.14))
print(float_to_hex(88.88))
The output shows the hexadecimal representations of the float input values:
0x4048f5c3
0x42b1c28f
Hex Formatting Subproblems
Let’s consider some formatting variants of the hexadecimal conversion problem: converting a number into lowercase/uppercase and with/without prefix. We use the Format Specification Language. You can learn more on this topic in our detailed blog tutorial. We use three semantically identical variants for each conversion problem.
How to Convert a Number to a Lowercase Hexadecimal With Prefix
>>> '%#x' % 12
'0xc'
>>> f'{12:#x}'
'0xc'
>>> format(12, '#x')
'0xc'
How to Convert a Number to a Lowercase Hexadecimal Without Prefix
>>> '%x' % 12
'c'
>>> f'{12:x}'
'c'
>>> format(12, 'x')
'c'
How to Convert a Number to an Uppercase Hexadecimal With Prefix
>>> '%#X' % 12
'0XC'
>>> f'{12:#X}'
'0XC'
>>> format(12, '#X')
'0XC'
How to Convert a Number to an Uppercase Hexadecimal Without Prefix
>>> '%X' % 12
'C'
>>> f'{12:X}'
'C'
>>> format(12, 'X')
'C'
Summary
Python’s built-in hex(integer) function takes one integer argument and returns a hexadecimal string with prefix "0x".
>>> hex(1)
'0x1'
>>> hex(2)
'0x2'
>>> hex(4)
'0x4'
>>> hex(8)
'0x8'
>>> hex(10)
'0xa'
>>> hex(11)
'0xb'
>>> hex(256)
'0x100'
If you call hex(x) on a non-integer x, it must define the __index__() method that returns an integer associated with x.
class Foo:
    def __index__(self):
        return 10

f1 = Foo()
print(hex(f1))  # '0xa'
Otherwise, it’ll throw a TypeError: object cannot be interpreted as an integer.
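The round()-based fix mentioned above is not shown in the post itself; here is a quick sketch of it (with the assumption that losing the fractional part is acceptable):
>>> hex(round(11.14))   # round to the nearest integer first
'0xb'
>>> hex(int(11.14))     # or truncate toward zero instead
'0xb'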
https://blog.finxter.com/python-hex-function/
CC-MAIN-2021-43
refinedweb
645
50.26
Hi All, I am trying to store some values in an object property. But my code does not work. Why does my code fail?

import bpy

# Kind of out-of-place but required to be declared before used.
def returnIfObject(passedName=""):
    try:
        result = bpy.data.objects[passedName]
    except:
        result = None
    return result

myObject = returnIfObject("Cube")
if myObject != None:
    try:
        myProperty = myObject["myProperty"]
        result = True
    except:
        result = False
    if result == True:
        print("Fetched: " + str(myProperty) + ".")
    else:
        # Assign a property, none exists.
        try:
            myObject.properties["myProperty"] = "myValue"
            result = True
        except:
            result = False
        if result == True:
            print("myPropery is assigned!")
        else:
            print ("Failed to store myValue in myProperty.")
else:
    print ("Failed to fetch object [Cube].")

On the default scene, it prints “Failed to store myValue in myProperty”. Is my syntax wrong?
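For reference, a minimal sketch of how custom (ID) properties are typically read and written through the Blender 2.5 Python API; it assumes a default scene containing an object named "Cube", and "myProperty" is only an example name:

import bpy

obj = bpy.data.objects.get("Cube")
if obj is not None:
    obj["myProperty"] = "myValue"        # item-style assignment stores a custom (ID) property
    print("Stored:", obj["myProperty"])  # reads it back the same way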
https://blenderartists.org/t/2-5-object-properties/485529
CC-MAIN-2020-40
refinedweb
128
54.08
these approaches: - Java interfaces - JSR 223 – Scripting for the Java Platform - com.sun.tools.javafx.ui.FXBean - Using the Java classes created by the JavaFX compiler directly - (JavaFX Reflection) The last two approaches are mentioned just for completeness. It requires some detailed knowledge of how a JavaFX class is translated into Java code to use the classes directly and it is more cumbersome than the other approaches. For these reasons I’ll skip it in this article, but plan to dedicate an entire article on an overview of how JavaFX classes get translated to Java code. JavaFX Reflection is not implemented yet, but planned. I will add a description for this approach as soon as the details are decided. Code sample 1 shows a class, which needs to be passed to Java. It has a public function printProperty, which we want to call from Java code. import java.lang.System; public class MyJavaFXClass { public attribute property: String; public function printProperty() { System.out.println(property); } } Code Sample 1: Definition of MyJavaFXClass Java interfaces The first approach (and my favorite one) is to use Java interfaces for passing JavaFX classes to Java code. This approach requires to define a Java interface, let the JavaFX-class implement the interface and reference the interface in the Java code. For our little example, we define the interface Printable which is shown in Code Sample 2. public interface Printable { void printProperty(); } Code Sample 2: The interface Printable Now we can write a Java class, which takes an implementation of the interface Printable and calls its method printProperty (code sample 3). public class MyLibraryByInterface { public static void print (Printable p) { p.printProperty(); } } Code Sample 3: Using an interface to access JavaFX code The last step is to modify the JavaFX class from code sample 1, so it implements the interface Printable. Also I added some code to create an instance of this class and pass it to the Java library. import java.lang.System; public class MyJavaFXClass extends Printable { public attribute property: String; public function printProperty() { System.out.println(property); } } var o = MyJavaFXClass {property: "Hello JavaFX by Interface"}; MyLibraryByInterface.print(o); Code Sample 4: Implementing a Java interface with JavaFX Script This approach has some major advantages. As you can see, the Java code contains no JavaFX-specific code at all. The very same code could handle a Java-object which implements the interface Printable. As a side effect, you can easily write Java-code which can be used with JavaFX-code and Java-code at the same time. Or if you want to migrate parts of an existing Java-application, you can simply switch to use interfaces and then exchange class after class. (Well, it’s probably not going to be THAT simple, but you get the idea.) The drawback of this approach is, that one cannot instantiate objects directly as usual when using interfaces. You either need to pass the objects to the Java code or use factory-methods. But the very first factory still needs to be passed to the Java code – or created with the next approach. JSR 223 – Scripting for the Java Platform JSR 223 – Scripting for the Java Platform defines mechanisms to access scripting languages from Java code. An implementation for JavaFX Script is available and part of the JavaFX Runtime. With the provided scripting engine, you can read JavaFX-files, compile, and run them. 
And because the engine for JavaFX Script implements the interface Invocable, it is even possible to call members of JavaFX objects. In code sample 5, a class is defined, which takes an arbitrary object and tries to call its method printProperty with the help of the JavaFX Scripting Engine. The first two lines of the method get the ScriptEngineManager, which manages the available scripting engines. The engine for JavaFX Script is requested and cast to the interface Invocable. The line in the try-block invokes the function printProperty of the object that was given as a parameter. import javax.script.*; public class MyLibraryByScriptEngine { public static void print (Object o) { ScriptEngineManager manager = new ScriptEngineManager(); Invocable fxEngine = (Invocable) manager.getEngineByName("javafx"); try { fxEngine.invokeMethod(o, "printProperty"); } catch (Exception ex) { ex.printStackTrace(); } } } Code sample 5: Using the JavaFX Script engine For this approach we do not need to change the JavaFX class at all. The following two lines will create the object and pass it to the Java library. import MyJavaFXClass; var o = MyJavaFXClass {property: "Hello JavaFX by ScriptEngine"}; MyLibraryByScriptEngine.print(o); Code Sample 6: Calling MyLibraryByScriptEngine As you can see, the Java code to handle a JavaFX object is slightly more complicated than in the first approach. The major drawback is that the compiler is not able to check the code. Instead you will receive a runtime exception if, for example, you have a typo in the method name printProperty, like I did. But on the other hand this approach is the most powerful one of these three approaches, as you are able to access JavaFX-files directly. We have just scratched the possibilities here; if you want to read more, I recommend this general introduction or this article specific to JavaFX Script. com.sun.tools.javafx.ui.FXBean The last approach, to use the class FXBean within the JavaFX Runtime, differs from the other approaches, because it does not give you access to the functions of a JavaFX object but to its attributes. Code sample 7 receives an arbitrary object, reads its attribute property with the help of the class FXBean, and prints the attribute. First an FXBean is constructed using the class of the given object as a parameter. The next line requests an attribute with the name property and the type string. Actually the FXBean does not return a String immediately, but an ObjectVariable<String>. This is the internal representation of a String-attribute mapped to Java code. We get the value of the attribute by calling the method get, which is printed in the last line of the try-block. import com.sun.tools.javafx.ui.FXBean; import com.sun.javafx.runtime.location.ObjectVariable; public class MyLibraryByFXBean { public static void print (Object o) { try { FXBean bean = new FXBean(o.getClass()); ObjectVariable<String> attribute = (ObjectVariable<String>)bean.getObject(o, "property"); String property = attribute.get(); System.out.println(property); } catch (Exception ex) { ex.printStackTrace(); } } } Code Sample 7: Using the class FXBean The next two lines of JavaFX code create an object and pass it to the Java library. import article2.Listing1.MyJavaFXClass; var o = MyJavaFXClass {property: "Hello JavaFX by FXBean"}; MyLibraryByFXBean.print(o); Code Sample 8: Calling MyLibraryByFXBean As already mentioned above, this approach differs from the others and therefore cannot be compared directly.
Using the FXBean you are able to read and modify attributes directly. There are a number of methods to simplify acces to attributes of type Number, Integer, Boolean, and sequences. After considering the primitive datatypes and Java objects in the last article, we took a look at JavaFX objects in this article. The next article will focus on dealing with sequences in Java code. Sun debería poner más esfuerzo en hacer que los lenguajes fueran más compatibles leyendo java como si java pudiera leer estos lenguajes porque lo que ahora tenemos es muchos lenguajes compatibles con java pero java no es compatible con ellos lo que provoca que no se puedan utilizar construcciones hechas en estos lenguajes desde Java y se pierda el poder de estos lenguajes al no tener una buena integración con Java. Un lenguaje para todos y todos para uno. (Se necesita que se estandarice como son las construcciones (clousure, property, method) de los lenguajes a nivel de bytecode para poder hacer así una verdadera integración entre todos y que todos se puedan comunicar entre ellos sin necesidad de una interface que en fin es un esfuerzo más que no veo por qué haya que hacerlo). Posted by leonardoavs on March 29, 2008 at 05:56 PM CET # @leonardoavs: Your comment sounds interesting, but unfortunately my knowledge of foreign languages is very limited. :-( If you could please translate it to English (or German if you prefer), I'd be happy to post a reply. Posted by Michael Heinrichs on March 31, 2008 at 11:21 AM CEST # He does not want to write wrapper code. -- Google Translate -- "Sun should put more effort into making them more compatible languages as if reading java java could read these languages because what we now have is many languages compatible with Java, but Java is not compatible with them causing them to not be possible to use buildings made in these languages from Java to lose the power of these languages by not having a good integration with Java. A language for all and all for one. (Requires that standardize such as construction (clousure, property, method) of the language bytecode level to enable a real integration between all and that everyone can communicate between themselves without having an interface that is in order an effort that most do not see why that has to)." Posted by Jan Erik Paulsen on March 31, 2008 at 11:01 PM CEST # @leonardoavs: I have good news for you. JavaFX Script is compiled into regular Java bytecode and therefore it is possible to use it directly. But it requires some knowledge of how features like multiple inheritance, binding of attributes and functions etc. work and so far I do not see the real benefit compared to using interfaces. As said before, I will cover this approach in a later article, probably as soon as bound functions are fully implemented. So just be patient. ;-) Posted by Michael Heinrichs on April 01, 2008 at 12:26 PM CEST # Yeah JavaFX Scripts is compiled to ByteCode, but it is not a real integration because Java need to do one Interface to communicate to JavaFX. Apologies for the bad Ingles. Posted by Leonardoavs on April 11, 2008 at 08:14 AM CEST # very worst Posted by ragaven on September 09, 2008 at 12:18 PM CEST # Hello Mike, first of all thanks for your good work. I am new in Using JavaFX and I am trying to program you example. I am using the Eclipse plug-in for programming ( I think it should be not the Problem ). I stack at the point I want to compile the MyJavaFXClass ( which is a .fx class). The Eclipse compiler do not like that „ printProperty() { „ . 
In further examples it was used something like : „ function AnimationExample.composeNode() = Group { content: [Rect { … „ It would be really nice if you can give me a hind that I can continue with learning JavaFX Thanks allot Max P. Posted by Max P. on September 21, 2008 at 12:31 PM CEST # whre is com.sun.tools.javafx.ui.* package please send me this package on mail : tremuricic@yahoo.com. i did not find this package good day. Posted by cornel on March 21, 2009 at 11:23 AM CET # good day. i want to make a javafx browser usse FXBean. this browser is a javafx acpication configurable with xml file. Xml file is send by webservice. i want do develop a special xml file for fxbrowser. this aplication is very good ria aplication. The xml file configure javafxBrowser like Xml configure XULRunner gre. the browser will have code java and code java call JavaFX Runtime. i need mor information about java fx Runtime. my emal addres is tremuricic@yahoo.com please send me com.sun.tools.javafx.ui.* package good day. Posted by cornel on March 21, 2009 at 11:43 AM CET #
http://blogs.sun.com/michaelheinrichs/entry/using_javafx_objects_in_java
crawl-002
refinedweb
1,923
62.48
I just started C++ today and I'm using Dev-C++ as a compiler. I've tried many examples, including the ones in this site's tutorial, but when I compile and run them the programs do not show on the screen. I am using cin.get(); so I think the problem is that I need to use some function like "Show on Screen" ;) Well, here is the example I am using. Please change the code as needed to make the window show.
Code:
#include <iostream>
using namespace std;

int main()
{
    cout<<"HEY, you, I'm alive! Oh, and Hello World!\n";
    cin.get();
}
Thank you in advance :D
http://cboard.cprogramming.com/cplusplus-programming/73916-programs-not-showing-printable-thread.html
CC-MAIN-2015-35
refinedweb
116
74.73
Revision: 218 Author: mbaas Date: 2007-06-11 16:35:59 -0700 (Mon, 11 Jun 2007) Log Message: ----------- Updated the changelog Modified Paths: -------------- cgkit/trunk/changelog.txt Modified: cgkit/trunk/changelog.txt =================================================================== --- cgkit/trunk/changelog.txt 2007-06-11 23:06:11 UTC (rev 217) +++ cgkit/trunk/changelog.txt 2007-06-11 23:35:59 UTC (rev 218) @@ -5,8 +5,12 @@ New features: +- New module mayabinary to parse Maya binary scene files + Bug fixes/enhancements: +- Bugfix 1677388: The debug messages that print pointer now use the format + string %#lx which should also work on 64Bit systems (thanks to Paul Egan) - GLRenderInstance: Bugfix: Picking in vsplit stereo mode can now correctly be done on the left image. - mayaascii: When several MEL commands were in the same line it could happen @@ -47,10 +51,14 @@ - Bugfix: The render tool wasn't properly processing facevarying/facevertex variables on meshes (the extractUniform() function didn't take these storage classes into account). +- mayaascii: The options of the file command have been completed. +- mayaascii: The lockNode command is supported. +- mayaascii: Reading the file can be aborted. - mayaascii: The importer was broken because of the previous modification of the simplecpp module. This is fixed now. - render.py: The Globals section can now contain a parameter output_framebuffer that controls whether a framebuffer display is opened or not. +- simplecpp: Reading a file can be aborted now. - simplecpp: Don't use keyword arguments to pass the compare function to sort(). This wasn't supported before Python 2.4. - ribexport: All RMLightSource instances were using the same shader as the This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. Revision: 217 Author: mbaas Date: 2007-06-11 16:06:11 -0700 (Mon, 11 Jun 2007) Log Message: ----------- Added a parser for Maya binary files. Added Paths: ----------- cgkit/trunk/cgkit/mayabinary.py Added: cgkit/trunk/cgkit/mayabinary.py =================================================================== --- cgkit/trunk/cgkit/mayabinary.py (rev 0) +++ cgkit/trunk/cgkit/mayabinary.py 2007-06-11 23:06:11 UTC (rev 217) @@ -0,0 +1,348 @@ +# ***** Python Computer Graphics Kit. +# +# The Initial Developer of the Original Code is Matthias Baas. +# Portions created by the Initial Developer are Copyright (C) 2004 +# +# ***** +# $Id$ + +"""This module contains the MBReader base class to parse Maya binary files (~IFF files). +""" + +import struct, os.path + +class Chunk: + """Chunk class. + + This class stores information about a chunk. It is passed to the handler + methods which can also use this class to read the actual chunk data. + + The class has the following attributes: + + - tag: The chunk name (four characters) + - size: The size in bytes of the data part of the chunk + - pos: The absolute position of the data part within the input file + - parent: The GroupChunk object of the parent chunk + - depth: The depth of the node (i.e. how deep it is nested). The root + has a depth of 0. + + The binary chunk data can be read by using the read() method. You + can specify the number of bytes to read or read the entire data + at once. It is not possible to read data that lies outside this + chunk. + """ + + def __init__(self, file, parent, tag, size, pos, depth): + """Constructor. 
+ """ + # The file handle that currently points to the start of the chunk data + self.file = file + # The parent chunk object + self.parent = parent + # The chunk name (four characters) + self.tag = tag + # The chunk size in bytes (only the data part) + self.size = size + # The absolute position (of the data) within the input file + self.pos = pos + # The depth of the node (i.e. how deep it is nested) + self.depth = depth + + # The number of bytes read so far + self._bytesRead = 0 + + def __str__(self): + return "<Chunk %s at pos %d (%d bytes)>"%(self.tag, self.pos, self.size) + + def chunkPath(self): + """Return the full path to this chunk. + + The result is a concatenation of all chunk names that lead to this chunk. + """ + name = "%s"%(self.tag) + if self.parent is None: + return name + else: + return self.parent.chunkPath()+".%s"%name + + def read(self, bytes=-1): + """Read the specified number of bytes from the chunk. + + If bytes is -1 the entire chunk data is read. + """ + if self.file is None: + raise RuntimeError, "This chunk is not active anymore" + + maxbytes = self.size-self._bytesRead + if bytes<0: + bytes = maxbytes + else: + bytes = min(bytes, maxbytes) + self._bytesRead += bytes + return self.file.read(bytes) + +class GroupChunk(Chunk): + """Specialized group chunk class. + + In addition to the Chunk class this class has an attribute "type" + that contains the group type (the first four characters of the data part). + """ + + def __init__(self, file, parent, tag, size, pos, type, depth): + Chunk.__init__(self, file=file, parent=parent, tag=tag, size=size, pos=pos, depth=depth) + + # The group type + self.type = type + + # Start with 4 because the type was already read + self._bytesRead = 4 + + def __str__(self): + return "<GroupChunk %s (%s) at pos %d (%d bytes)>"%(self.tag, self.type, self.pos, self.size) + + def chunkPath(self): + """Return the full path to this chunk. + """ + name = "%s[%s]"%(self.tag, self.type) + if self.parent is None: + return name + else: + return self.parent.chunkPath()+".%s"%name + + +class MBReader: + """Read Maya IFF files and call an appropriate handler for each chunk. + + This is the base class for a .mb reader class. Derived classes can implement + the chunk handler methods onBeginGroup(), onEndGroup() and onDataChunk(). + These handlers receive a Chunk (or GroupChunk) object as input that + contains information about the current chunk and that can also be used + to read the actual chunk data. + """ + + def __init__(self): + pass + + def abort(self): + """Aborts reading the current file. + + This method can be called in handler functions to abort reading + the file. + """ + self._abortFlag = True + + def onBeginGroup(self, chunk): + """Callback that is called whenever a new group tag begins. + + chunk is a GroupChunk object containing information about the group chunk. + """ + pass +# print "BEGIN", chunk + + def onEndGroup(self, chunk): + """Callback that is called whenever a group goes out of scope. + + chunk is a GroupChunk object containing information about the group chunk + (it is the same instance that was passed to onBeginGroup()). + """ + pass +# print "END", chunk + + def onDataChunk(self, chunk): + """Callback that is called for each data chunk. + + chunk is a Chunk object that contains information about the chunk + and that can be used to read the actual chunk data. + """ + pass +# print " ", chunk + + def read(self, file): + """Read the binary file. 
+ + This method reads all chunks sequentially and invokes appropriate + callback methods. + """ + + self._fileName = getattr(file, "name", "?") + + # Check if this actually is a Maya file + # (and that it starts with a group tag) + header = file.read(12) + file.seek(0) + if len(header)!=12 or header[0:4]!="FOR4" or header[8:12]!="Maya": + raise ValueError, 'The file "%s" is not a Maya binary file.'%self._fileName + + self._file = file + self._abortFlag = False + + # The current byte position inside the file + pos = 0 + # A stack with alignment values. Each group tag pushes a new value + # which is popped when the group goes out of scope + alignments = [] + # A stack with the currently open group chunks. The items are + # 2-tuples (endpos, groupchunk). + pendingGroups = [] + # The current depth of the chunks + depth = 0 + while not self._abortFlag: + tag,size = self.readChunkHeader() + if tag==None: + break + pos += 8 + if self.isGroupChunk(tag): + # Group chunk... + type = file.read(4) + if len(pendingGroups)==0: + parent = None + else: + parent = pendingGroups[-1][1] + chunk = GroupChunk(file=file, parent=parent, tag=tag, size=size, pos=pos, type=type, depth=depth) + self.onBeginGroup(chunk) + av = self.alignmentValue(tag) + alignments.append(av) + end = pos+self.paddedSize(size, av) + if len(pendingGroups)>0 and end>pendingGroups[-1][0]: + raise ValueError, 'Chunk %s at position %s in file "%s" has an invalid size (%d) that goes beyond its contained group chunk.'%(tag,pos-8,os.path.basename(self._fileName),size) + pendingGroups.append((end, chunk)) + pos += 4 + depth += 1 + else: + # Data chunk... + chunk = Chunk(file=file, parent=pendingGroups[-1][1], tag=tag, size=size, pos=pos, depth=depth) + self.onDataChunk(chunk) + pos += self.paddedSize(size, alignments[-1]) + + # Check which groups are to be closed... + while len(pendingGroups)>0 and pos>=pendingGroups[-1][0]: + end,chunk = pendingGroups.pop() + self.onEndGroup(chunk) + depth -= 1 + + # Seek to the next chunk position. This is done here (even though it + # wouldn't be necessary in some cases) so that the callbacks have + # no chance to mess with the file handle and bring the reader + # out of sync. + file.seek(pos) + + def readChunkHeader(self): + """Read the tag and size of the next chunk. + + Returns a tuple (tag, size) where tag is the four character + chunk name and size is an integer containing the size of the + data part of the chunk. + Returns None,None if the end of the file has been reached. + Throws an exception when an incomplete tag/size was read. + """ + header = self._file.read(8) + if len(header)==0: + return None,None + if len(header)!=8: + raise ValueError, 'Premature end of file "%s" (chunk tag & size expected)'%os.path.basename(self._fileName) + return (header[:4], struct.unpack(">L", header[4:])[0]) + + def isGroupChunk(self, tag): + """Check if the given tag refers to a group chunk. + + tag is the chunk name. Returns True when tag is the name + of a group chunk. + """ + return tag in ["FORM", "CAT ", "LIST", "PROP", + "FOR4", "CAT4", "LIS4", "PRO4", + "FOR8", "CAT8", "LIS8", "PRO8"] + + def alignmentValue(self, tag): + """Return the alignment value for a group chunk. + + Returns 2, 4 or 8. + """ + if tag in ["FORM", "CAT ", "LIST", "PROP"]: + return 2 + elif tag in ["FOR4", "CAT4", "LIS4", "PRO4"]: + return 4 + elif tag in ["FOR8", "CAT8", "LIS8", "PRO8"]: + return 8 + else: + return 2 + + def paddedSize(self, size, alignment): + """Return the padded size that is aligned to the given value. 
+ + size is an arbitrary chunk size and alignment an integer + containing the alignment value (2,4,8). If size is already + aligned it is returned unchanged, otherwise an appropriate + number of padding bytes is added and the aligned size is + returned. + """ + # Padding required? + if size%alignment!=0: + padding = alignment-size%alignment + size += padding + return size + + def dump(self, buf): + """Helper method to do a hex dump of chunk data. + + buf is a string containing the data to dump. + """ + offset = 0 + while len(buf)>0: + data = buf[:16] + buf = buf[16:] + s = "%04x: "%offset + s += " ".join(map(lambda x: "%02x"%ord(x), data)) + s += (55-len(s))*' ' + for c in data: + if ord(c)<32: + c = '.' + s += c + print s + offset += 16 + + +if __name__=="__main__": + + import sys + + class MBDumper(MBReader): + def onBeginGroup(self, chunk): + print "BEGIN", chunk + + def onEndGroup(self, chunk): + print "END", chunk + + def onDataChunk(self, chunk): + print chunk + + rd = MBDumper() + rd.read(open(sys.argv[1], "rb")) + \ No newline at end of file This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
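A short usage sketch may help readers of this commit. The class and method names (MBReader, Chunk.size, chunkPath(), onDataChunk()) come from the module added above in r217; the import path cgkit.mayabinary and the scene file name are assumptions, and the snippet is untested:

# Untested sketch: list every data chunk in a Maya binary file using the
# MBReader/Chunk API defined in the new module.
from cgkit.mayabinary import MBReader

class ChunkLister(MBReader):
    def __init__(self):
        MBReader.__init__(self)
        self.entries = []                 # (chunk path, size in bytes)

    def onDataChunk(self, chunk):
        # chunkPath() walks up through the parent GroupChunk objects
        self.entries.append((chunk.chunkPath(), chunk.size))

lister = ChunkLister()
lister.read(open("scene.mb", "rb"))
for path, size in lister.entries:
    print path, size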
http://sourceforge.net/mailarchive/forum.php?forum_name=cgkit-commits&max_rows=25&style=nested&viewmonth=200706&viewday=11
CC-MAIN-2013-48
refinedweb
1,793
67.45
Good day! I am using the Arduino to output PWM to switch a MOSFET gate, to be used for a buck converter. I am also using the Arduino to monitor the voltage output from the buck converter, and I want to log that voltage in an SD card. However, it seems that PWM does not work when I write data to the SD card using the SD library. What happens is that when the output voltage data is written to the SD card, the PWM (from pin 11) turns off. The PWM signal will alternately appear and disappear when viewed in an oscilloscope. It seems that the Arduino is alternating between writing to the SD card and then back to the PWM output. Here is a test code: #include <SD.h> const int chipSelect = 4; File dataFile; void setup(){ Serial.begin(9600); pinMode(11,OUTPUT); pinMode(10,OUTPUT); pwmSetup(); dataFile = SD.open("datalog.txt", FILE_WRITE); } void loop(){ // delay(1000); analogWrite(11, 200); dataFile.println("Hello World"); // delay(2000); // analogWrite(11, 50); // delay(1000); // analogWrite(11, 200); // delay(2000); // analogWrite(11, 50); // delay(5000); } void pwmSetup(){ //set PWM freq to 31250 Hz TCCR2A = _BV(COM2A1) | _BV(COM2B1) | _BV(WGM20); TCCR2B = _BV(CS20); OCR2A = 0; } Can anyone point out what am I doing incorrectly? Thanks
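One likely explanation - assuming an Uno-class board - is that pin 11 is also the hardware SPI MOSI line, which the SD library drives during every card write, so whatever Timer2 puts on pin 11 is overridden while SPI is active. A possible workaround, sketched below and untested, is to keep the same Timer2 setup but take the PWM from pin 3 (Timer2's OC2B output), which SPI does not use:

#include <SD.h>

const int chipSelect = 4;
File dataFile;

void pwmSetup() {
  // Same ~31 kHz Timer2 configuration as above, but enable only OC2B (pin 3)
  // so pin 11 stays free for the SD library's MOSI signal.
  pinMode(3, OUTPUT);
  TCCR2A = _BV(COM2B1) | _BV(WGM20);
  TCCR2B = _BV(CS20);
  OCR2B = 0;
}

void setup() {
  pinMode(10, OUTPUT);          // keep SS as an output for the SD library
  pwmSetup();
  SD.begin(chipSelect);         // note: the original sketch never calls begin()
  dataFile = SD.open("datalog.txt", FILE_WRITE);
}

void loop() {
  OCR2B = 200;                  // duty cycle, like analogWrite(11, 200) before
  dataFile.println("Hello World");
  dataFile.flush();
}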
https://forum.arduino.cc/t/problem-with-sd-library-and-pwm-at-the-same-time/135996
CC-MAIN-2022-27
refinedweb
213
62.58
Ram wrote:
> > I've looked at the code. Look in fs/proc/base.c (Linux 2.6.10),
> > proc_root_link().
> >
> > I don't see anything there to prevent you from traversing to the
> > mounts in the other namespace.
> >
> > So why is it failing? Any idea?
>
> Since you are traversing a symlink, you will be traversing the symlink
> in the context of the traversing process's namespace.
>
> If process 'x' is traversing /proc/y/root, the lookup for the root
> dentry will happen in the context of process x's namespace, and not
> process y's namespace. Hence process 'x' won't really get into
> the namespace of process y.

Lookups don't happen in the context of a namespace.

They happen in the context of a vfsmnt. And the switch to a new
vfsmnt is done by matching against (dentry, parent-vfsmnt) pairs.

current->namespace is only checked for mount & unmount operations, not
for path lookups.

Which means proc_root_link, when it switches to the vfsmnt at the root
of the other process, should traverse into the tree of vfsmnts which
make up the other namespace.

-- Jamie
https://lkml.org/lkml/2005/4/28/232
CC-MAIN-2017-13
refinedweb
208
67.55
nl:: Weave:: Profiles:: Echo_Next:: WeaveEchoClient #include <src/lib/profiles/echo/Next/WeaveEchoClient.h> Provides the ability to send Weave EchoRequest messages to a peer node and receive the corresponding EchoResponse messages. Summary The WeaveEchoClient class implements the initiator side of the Weave Echo protocol. Similar to the ICMP ping protocol, the Weave Echo protocol can be used to test the liveness and reachability of a Weave node. Applications can use the WeaveEchoClient class to send one-off or repeating EchoRequest messages to a peer node identified by a Binding object. A corresponding class exists for responding to echo requests (see WeaveEchoServer). The WeaveEchoClient takes a Weave Binding object which is used to identify and establish communication with the intended recipient of the echo requests. The Binding can be configured and prepared by the application prior to the initialization of the WeaveEchoClient object, or it can be left unprepared, in which case the WeaveEchoClient will request on-demand preparation of the binding (see Binding::RequestPrepare() for details). On-demand preparation of the Binding will also be requested should it fail after having entered the Ready state. SendRepeating Mode The SendRepeating() method can be used to put the WeaveEchoClient into SendRepeating mode. In this mode the client object sends a repeating sequence of EchoRequest messages to the peer at a configured interval. SendRepeating mode can be canceled by calling the Stop() method. Multicast and Broadcast A WeaveEchoClient object can be used to send EchoRequests to multiple recipients simultaneously by configuring the Binding object with an appropriate IPv6 multicast address or IPv4 local network broadcast address (255.255.255.255). When the WeaveEchoClient object detects a multicast or broadcast peer address, it automatically enters MultiResponse mode upon send of the EchoRequest. In this mode, the object continues to listen for and deliver all incoming EchoResponse messages that arrive on the same exchange. The object remains in MultiResponse mode until: 1) the application calls Stop() or Send(), 2) in SendRepeating mode, the time comes to send another request, or 3) no response is received and the receive timeout expires. API Events During the course of its operation, the WeaveEchoClient object will call up to the application to request specific actions or deliver notifications of important events. These API event calls are made to the currently configured callback function on the client object. Except where noted, applications are free to alter the state of the client during an event callback. One overall exception is the object's Shutdown() method, which may never be called during a callback. The following API events are defined: PreparePayload The WeaveEchoClient is about to form an EchoRequest message and is requesting the application to supply a payload. If an application desires, it may return a new PacketBuffer containing the payload data. If the application does not handle this event, a EchoRequest with a zero-length payload will be sent automatically. The application MAY NOT alter the state of the WeaveEchoClient during this callback. RequestSent An EchoRequest message was sent to the peer. ResponseReceived An EchoResponse message was received from the peer. Arguments to the event contain the response payload and meta-information about the response message. CommunicationError An error occurred while forming or sending an EchoRequest, or while waiting for a response. 
Examples of errors that can occur while waiting for a response are key errors or unexpected close of a connection. Arguments to the event contain the error reason. ResponseTimeout An EchoResponse was not received in the alloted time. The response timeout is controlled by the DefaultResponseTimeout property on the Binding object. RequestAborted An in-progress Echo exchange was aborted because a request was made to send another EchoRequest before a response was received to the previous message. This can arise in SendRepeating mode when the time arrives to send the next EchoRequest. This can also happen if the application calls Send() after an EchoRequest has been sent but before any response is received. When the object is in MultiResponse mode, the event is suppressed if at least one EchoResponse message has been received. Public types EventCallback void(* EventCallback)(void *appState, EventType eventType, const InEventParam &inParam, OutEventParam &outParam) EventType EventType State State Public attributes AppState void * AppState A pointer to application-specific data. Public functions GetBinding Binding * GetBinding( void ) const Returns a pointer to the Binding object associated with the WeaveEchoClient. GetEventCallback EventCallback GetEventCallback( void ) const Returns a pointer to the API event callback function currently configured on the WeaveEchoClient object. GetState State GetState( void ) const Retrieve the current state of the WeaveEchoClient object. Init WEAVE_ERROR Init( Binding *binding, EventCallback eventCallback, void *appState ) Initialize a WeaveEchoClient object. Initialize a WeaveEchoClient object in preparation for sending echo messages to a peer. IsSendRrepeating bool IsSendRrepeating() const Returns true if the WeaveEchoClient object is currently in send-repeating mode. RequestInProgress bool RequestInProgress() const Returns true if an EchoRequest has been sent and the WeaveEchoClient object is awaiting a response. Send WEAVE_ERROR Send( void ) Send an EchoRequest message to the peer. This method initiates the process of sending an EchoRequest message to the peer node. If and when a corresponding EchoResponse message is received it will be delivered to the application via the ResponseReceived API event. When forming the EchoRequest message, the WeaveEchoClient makes a request to the application, via the PreparePayload API event, to prepare the payload of the message. If the Binding object is not in the Ready state when this method is called, a request will be made to Binding::RequestPrepare() method to initiate on-demand preparation. The send operation will then be queued until this process completes. WEAVE_ERROR Send( PacketBuffer *payloadBuf ) Send an EchoRequest message to the peer with a specific payload. This method initiates the process of sending an EchoRequest message to the peer node. The contents of the supplied payload buffer will be sent to the peer as the payload of the EchoRequest message. If and when a corresponding EchoResponse message is received it will be delivered to the application via the ResponseReceived API event. Upon calling this method, ownership of the supplied payload buffer passes to the WeaveEchoClient object, which has the responsibility to free it. This is true regardless of whether method completes successfully or with an error. If the Binding object is not in the Ready state when this method is called, a request will be made to Binding::RequestPrepare() method to initiate on-demand preparation. 
The send operation will then be queued until this process is complete. SendRepeating WEAVE_ERROR SendRepeating( uint32_t sendIntervalMS ) Initiate sending a repeating sequence of EchoRequest messages to the peer. This method initiates a repeating process of sending EchoRequest messages to the peer. As EchoResponse messages are received from the peer they are delivered to the application via the ResponseReceived API event. When SendRepeating() is called, the WeaveEchoClient enters send-repeating mode in which it stays until Stop() is called or a Binding error occurs. Calling SendRepeating() multiple times has the effect of resetting the send cycle and updating the interval. The initial send of a sequence occurs at the time SendRepeating() is called, or whenever the Binding becomes ready after SendRepeating() is called (see below). Subsequent sends occur thereafter at the specified interval. Each time a send occurs, the WeaveEchoClient makes a request to the application, via the PreparePayload API event, to prepare the payload of the message. If the Binding object is not in the Ready state when it comes time to send a message, a request will be made to the Binding::RequestPrepare() method to initiate on-demand preparation. Further repeated message sends will be paused until this process completes. A failure during on-demand Binding preparation will cause the WeaveEchoClient to leave send-repeating mode. SetEventCallback void SetEventCallback( EventCallback eventCallback ) Sets the API event callback function on the WeaveEchoClient object. Shutdown void Shutdown( void ) Shutdown a previously initialized WeaveEchoClient object. Note that this method can only be called if the Init() method has been called previously. Stop void Stop( void ) Stops any Echo exchange in progress and cancels send-repeating mode. WeaveEchoClient WeaveEchoClient( void ) Public static functions DefaultEventHandler void DefaultEventHandler( void *appState, EventType eventType, const InEventParam & inParam, OutEventParam & outParam ) Default handler for WeaveEchoClient API events. Applications are required to call this method for any API events that they don't recognize or handle. Supplied parameters must be the same as those passed by the client object to the application's event handler function.
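To make the event-driven flow above concrete, here is an illustrative sketch of a client that pings a peer every five seconds. Only class, method, and type names documented on this page are used; the namespace alias, the nested type spellings, the appState handling, and the way the Binding is obtained are assumptions - this is not code taken from OpenWeave itself.

using nl::Weave::Profiles::Echo_Next::WeaveEchoClient;

static WeaveEchoClient sEchoClient;

// Application event handler; unrecognized events must be forwarded to
// WeaveEchoClient::DefaultEventHandler() as the documentation requires.
static void EchoEventHandler(void *appState,
                             WeaveEchoClient::EventType eventType,
                             const WeaveEchoClient::InEventParam &inParam,
                             WeaveEchoClient::OutEventParam &outParam)
{
    // ... react to ResponseReceived / ResponseTimeout / CommunicationError here ...
    WeaveEchoClient::DefaultEventHandler(appState, eventType, inParam, outParam);
}

WEAVE_ERROR StartPinging(nl::Weave::Binding *binding)
{
    // The Binding may still be unprepared; the client will request
    // on-demand preparation via Binding::RequestPrepare() when needed.
    WEAVE_ERROR err = sEchoClient.Init(binding, EchoEventHandler, NULL /* appState */);
    if (err != WEAVE_NO_ERROR)
        return err;

    // Enter SendRepeating mode: one EchoRequest every 5000 ms until Stop().
    return sEchoClient.SendRepeating(5000);
}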
https://openweave.io/reference/class/nl/weave/profiles/echo-next/weave-echo-client
CC-MAIN-2021-21
refinedweb
1,397
53.31
It is used to get/set the exception mask. The exception mask is an internal value kept by all stream objects specifying for which state flags an exception of member type failure (or some derived type) is thrown when set. This mask is an object of member type iostate, which is a value formed by any combination of the following member constants: badbit, eofbit and failbit.

Following is the declaration for the ios::exceptions function.

get (1) iostate exceptions() const;
set (2) void exceptions (iostate except);

The first form (1) above returns the current exception mask for the stream. The second form (2) above sets a new exception mask for the stream and clears the stream's error state flags (as if member clear() was called).

except − A bitmask value of member type iostate formed by a combination of error state flag bits to be set (badbit, eofbit and/or failbit), or set to goodbit (or zero).

It returns a bitmask of member type iostate representing the existing exception mask before the call to this member function.

Basic guarantee − if an exception is thrown, the stream is in a valid state.

Accesses (1) or modifies (2) the stream object. Concurrent access to the same stream object may cause data races.

The example below demonstrates the ios::exceptions function.

#include <iostream>
#include <fstream>

int main () {
   std::ifstream file;
   file.exceptions ( std::ifstream::failbit | std::ifstream::badbit );
   try {
      file.open ("test.txt");
      while (!file.eof()) file.get();
      file.close();
   } catch (std::ifstream::failure e) {
      std::cerr << "Exception opening/reading/closing file\n";
   }
   return 0;
}
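The example above only exercises the setter form (2); a minimal sketch of the getter form (1), assuming file is the std::ifstream from that example, would be:

std::ios_base::iostate mask = file.exceptions();   // form (1): query the current mask
file.exceptions(mask | std::ifstream::eofbit);     // form (2): additionally throw on EOF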
https://www.tutorialspoint.com/cpp_standard_library/cpp_ios_exceptions.htm
CC-MAIN-2020-16
refinedweb
257
54.42
Time slice and Suspense API - What’s coming in React 17? - Pusher Blog React powers so many awesome web and mobile apps such as Whatsapp, Instagram, Dropbox and Twitter. Along the road, React had to make some tough changes, an example being the migration from the difficult BSD + Patents license to the very non-restrictive MIT license following the decision by the Apache Foundation to ban the use of React. The change proved to be a key decision as not only did it bring more developers to React, it led a number of key projects such as WordPress and Drupal to adopt React What’s new in React 17? The Fiber rewrite that subsequently led to the release of React 16.0 came with changes such as Error boundaries, improved server side rendering, fragments and portals just to mention a few (learn more). However React 17 comes with even more exciting features. At a JavaScript conference in Iceland, JSConf 2018, the creator of Redux and a core team member of React, Dan Abramov, demoed the new features that would be present in React 17. In React’s latest release, a few factors that were addressed include: - How network speed affects the loading state of your application and in the larger picture – user experience. - How the state of your application is managed on low end devices. Time slice Early on in the development process of React 16.0, asynchronous rendering was kept off due to potential backwards compatibility issues. Although it enables faster response in UI rendering, it also introduces challenges for keeping track of changes in the UI. That’s where time slicing comes in. In the words of Dan: Time slice was created to make asynchronous rendering easier for developers. Heavy React UIs on devices with “not so great” CPU power could make users experience a “slow” feel when navigating through the app. With time slicing, React can now split computations of updates on children components into chunks during idle callbacks and rendering work is spread out over multiple frames. This enhances UI responsiveness on slower devices. Time slice does a great job in handling all the difficult CPU scheduling tasks under the hood without developer considerations. Suspense Wouldn’t it be great if your app could pause any state update while loading asynchronous data? Well that’s one of the awesome features of suspense. “Suspense provides an all-encompassing medium for components to suspend rendering while they load asynchronous data.” Suspense takes asynchronous IO in React to a whole new level. With the suspense feature, ReactJS can temporarily suspend any state update until the data is ready while executing other important tasks. This feature makes working with asynchronous IO operators such as calling REST APIs like Falcor or GraphQL much more seamless and easier. Developers can now manage different states all at once while users still get to experience the app regardless of network speed – instead of displaying only loading spinners, certain parts of the app can be displayed while other parts load thus ensuring that the app stays accessible. In many ways, suspense makes Redux, the state management library appear even more defunct. Suspense lets you *delay* rendering the content for a few seconds until the whole tree is ready. It *doesn’t* destroy the previous view while this is happening. — Dan Abramov (@dan_abramov) March 4, 2018 While Dan was demonstrating how suspense works, he used an API called createFetcher. 
createFetcher can be described as a basic cache system that allows React to suspend the data fetching request from within the render method. A core member of the React team, Andrew Clark, made it clear what to expect from createFetcher: createFetcher from @dan_abramov's talk is this thing: We're calling it simple-cache-provider (for now). It's a basic cache that works for 80% of use cases, and (when it's done) will serve as a reference implementation for Apollo, Relay, etc. — Andrew Clark (@acdlite) March 1, 2018 Note: It must be noted that the createFetcherAPI is extremely unstable and may change at anytime. Refrain from using it in real applications. You can follow up on its development and progress on Github. To show you how suspense works, I’d like to adopt excerpts from Dan’s IO demo at JSConf 2018: import { createFetcher, Placeholder, Loading } from '../future'; In the image above, the createFetcher API imported from the future has a .read method and will serve as a cache. It is where we will pass in the function fetchMovieDetails which returns a promise. In MovieDetails, a value is read from the cache. If the value is already cached, the render continues like normal else the cache throws a promise. When the promise resolves, React continues from where it left off. The cache is then shared throughout the React tree using the new context API. Usually components get cached from context. This implies that while testing, it’s possible to use fake caches to mock out any network requests. createFetcher uses simple-cache-provider to suspend the request from within the render method thus enabling us begin rendering before all the data has returned. simple-cache-provider has a .preload method that we can use to initiate a request before React gets to the component that needs it. Let’s say in an ecommerce app you’re switching from ProductReview to Product, but only ProductInfo needs some data. React can still render Product while the data for ProductInfo is being fetched. Suspense enables the app to be fully interactive while fetching data. Fears of race conditions occuring on the app while a user clicks around and triggers different actions are totally quashed. In the picture above, using the Placeholder component, if the Movie review takes more than one second to load while waiting for async dependencies, React will show the spinner. We can pause any state update until the data is ready and then add async loading to any component deep in the tree. This is possible through the isLoading that lets us decide what to show. Note: simple-cache-provider. Chances are, just like every name or API that has been proposed, there might be breaking changes in future ReactJS releases. Ensure you refrain form using this in real applications. Summary With time slice we can handle all our arduous CPU scheduling tasks under the hood without any considerations. With suspense we solve all our async race conditions in one stroke. A brief recap of the points noted by Dan at JSConf 2018: While you’re at it you can also watch Dan’s full presentation at JSConf 2018 where he gave detailed reasons behind the new features in React 17, most notably CPU and I/O optimizations.
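Because createFetcher/simple-cache-provider were explicitly unstable at the time, the snippet below is only a schematic of the underlying mechanism the article describes - a cache whose read throws a pending promise so React can suspend - and not the real API; fetchMovieDetails is the same hypothetical fetch function used above.

// Schematic only, not the simple-cache-provider API.
const cache = new Map();

function read(key, fetchFn) {
  if (!cache.has(key)) {
    const entry = { status: 'pending', value: undefined };
    entry.promise = fetchFn(key).then(result => {
      entry.status = 'done';
      entry.value = result;
    });
    cache.set(key, entry);
  }
  const entry = cache.get(key);
  if (entry.status === 'pending') throw entry.promise; // React retries after it resolves
  return entry.value;
}

function MovieDetails({ id }) {
  const movie = read(id, fetchMovieDetails); // suspends the render until cached
  return <h1>{movie.title}</h1>;
}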
http://brianyang.com/time-slice-and-suspense-api-whats-coming-in-react-17-pusher-blog/
CC-MAIN-2018-51
refinedweb
1,123
61.77
For my spark trials, I have downloaded the NY taxi csv files and merged them into a single file, nytaxi.csv . I then saved this in hadoop fs. I am using spark on yarn with 7 nodemanagers. I am connecting to spark over Ipython notebook. Here is a sample python script for counting the number of lines in nytaxi.csv. nytaxi=sc.textFile("hdfs://bigdata6:8020/user/baris/nytaxi/nytaxi.csv") filtered=nytaxi.filter(lambda x:"distance" not in x) splits = filtered.map(lambda x: float(x.split(",")[9])) splits.cache() splits.count() This returns 73491693. However when I try to count lines by the following code, it returns a value around 803000. def plusOne (sum, v): #print sum, v return sum + 1; splits.reduce(plusOne) I wonder why the results vary. Thanks A sample line from csv: u'740BD5BE61840BE4FE3905CC3EBE3E7E,E48B185060FB0FF49BE6DA43E69E624B,CMT,1,N,2013-10-01 12:44:29,2013-10-01 12:53:26,1,536,1.20,-73.974319,40.741859,-73.99115,40.742424'
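For comparison, counting with an explicit zero value looks like this (same splits RDD as above). Note that reduce() folds the data elements themselves, so the first sum passed to plusOne is one of the float values rather than 0, which is why its result depends on the data instead of being a count.

# Both of these give the element count because the accumulation starts
# from an explicit zero instead of from a data element.
count_a = splits.map(lambda x: 1).sum()
count_b = splits.aggregate(0,
                           lambda acc, _: acc + 1,   # add 1 per element in a partition
                           lambda a, b: a + b)       # merge partition counts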
http://www.howtobuildsoftware.com/index.php/how-do/hM4/python-hadoop-apache-spark-results-of-reduce-and-count-differ-in-pyspark
CC-MAIN-2018-47
refinedweb
168
61.63
GUI WindowBuilder Tool Now on Eclipse.org PLUS, after almost five years, Matthias Wessendorf leaves Oracle. GUI Tool WindowBuilder – Now on Eclipse.org GUI developer tool WindowBuilder is now an official Eclipse project, after Google donated the former Instantiations Eclipse tools to the Eclipse Foundation. WindowBuilder is composed of SWT Designer and Swing Designer, aiding in the creation of Java GUI applications. The WindowBuilder team now plan to extend the project to other languages and UI frameworks, such as C++, JavaScript, and PHP. Currently, the project supports Java and XML based UI frameworks. WindowBuilder can now be downloaded for free from the Eclipse website. OSGi Apache Aries Project Releases 0.3 Version 0.3 of the Apache Aries OSGi project is now available. Apache Aries provides a set of pluggable Java components that enable an enterprise OSGi application programming model. New in this release, is a proxy component that allows users to generate proxyies based on both interfaces and classes, and an application component that provides a runtime in which OSGi apps are isolated from one another. Support for blueprint namespace has also been added. The Release Notes for 0.3 can be viewed now. Apache Forrest 0.9 Framework Released Version 0.9 of the Apache Forrest publishing framework has been released. Apache Forrest is a framework for transforming input from various sources into a unified presentation in one, or more, output formats. It can generate static documents, and can either be used as a dynamic server, or deployed by its automated facility. This version updates the Apache Coccon Spring-based framework to the latest version of their stable 2.1 branch, adds initial “Markdown” and “iCal” output plugins to Whiteboard, and updates to Apache Ant version 1.7.1. More information is available at the Changelog. Matthias Wessendorf Leaves Oracle Principal software engineer at Oracle, Matthias Wessendorf has announced that he will be leaving Oracle at the end of February, after almost five years at the company. He will also be stepping down from his position as Apache MyFaces PMC Chair and will cease actively participating in JSF-related projects. JIRA 4.2.4 Released with JIRA Importers Plugin The Atlassian JIRA team have released JIRA 4.2.4, which comes with version 1.7.1 of the JIRA Importers Plugin. The Importer plugin works with Bugzilla 2.20 to 3.6.3, Mantis from 1.1.8 to 1.2.4, FogBugz 7.3.6 to 8.2.27, and CSV files. JIRA 4.2.4 also fixes a bug in the ‘Filter Results’ gadget. First ATLP Release for Apache OODT The Apache OODT team have announced their first release as an Apache TLP. Apache OODT is mix-and-match component based software, which provides metadata for middleware, including data discovery and query optimisation, and software architecture. The 0.2 release can be downloaded now.
https://jaxenter.com/gui-windowbuilder-tool-now-on-eclipse-org-102896.html
CC-MAIN-2021-31
refinedweb
478
60.21
Hi Gustav,

The concurrency of MDB pools is limited by the number of available threads - as each running MDB necessarily uses a thread while it is in its onMessage(). So this means that your "sleep" below consumes a thread until it finishes, making the thread unavailable for other MDBs to do their work.

You can increase the number of available threads, and/or assign your MDBs to their own thread pools. Depending on your app, you might also be able to change the app to use separate MDB pools for long-running and short-running jobs.

For information, refer to the JMS Performance Guide white-paper, available here:

Tom

Gustav Lund wrote:
> Hi All,
>
> I'm using WLS 8.1 sp2 on W2K.
>
> I have some MDBeans that run for some time, and while doing so they block
> the execution of other MDBean requests.
>
> If I submit 9 JMS MDBean requests everything works fine, but when I have
> more than 9 JMS requests and some of my MDBeans consume time, they block
> the others.
>
> If I submit 50 JMS MDBean requests and the 4th MDBean instance hangs for
> 20 seconds, then the 4th MDBean instance will block the execution of 5
> MDBean requests.
>
> In my tests I have one MDBean that hangs, and it always blocks the
> execution of 1/10 of my requests: 100 requests -> 10 requests hang.
>
> Snapshot of my programs:
> public class RunMDBean {
>     static public void main(String args[]) {
>         System.out.println("Starting.");
>
>         try {
>             RunMDBean sender = new RunMDBean();
>             sender.init("JMSQueueFactory", "JMSQueue");
>             for (int i = 0; i < 50; i++) {
>                 sender.send("");
>             }
>             sender.close();
>         } catch (JMSException e) {
>             e.printStackTrace();
>         }
>
>         System.out.println("Done.");
>     }
>
> MDBean
>     public void onMessage(Message message) {
>         boolean hang = false;
>
>         synchronized (this) {
>             if (++runs == 4)
>                 hang = true;
>         }
>
>         System.out.println("Running:" + runs);
>
>         if (hang) {
>             System.out.println("Hangs");
>             try {
>                 Thread.sleep(20000);
>             } catch (InterruptedException e) {
>                 e.printStackTrace();
>             }
>             System.out.println("Hangs done");
>         }
>     }
>
> Why? Any ideas or comments on this are very welcome!
>
> Thanks!
>
> Regards,
> Gustav Lund
> (glu@tdc.dk)
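Tom's suggestion about separate pools translates into deployment-descriptor configuration. The fragment below is a rough, unverified sketch from memory of the WebLogic 8.1 weblogic-ejb-jar.xml elements (max-beans-in-free-pool, dispatch-policy); check the element names and ordering against the 8.1 DTD before using it.

<!-- Give the long-running MDB its own bounded pool and execute queue so it
     cannot starve the short-running MDBean requests. -->
<weblogic-enterprise-bean>
  <ejb-name>LongRunningMDB</ejb-name>
  <message-driven-descriptor>
    <pool>
      <max-beans-in-free-pool>4</max-beans-in-free-pool>
    </pool>
  </message-driven-descriptor>
  <dispatch-policy>LongRunningExecuteQueue</dispatch-policy>
</weblogic-enterprise-bean>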
http://fixunix.com/weblogic/224792-mdbeans-hangs-blocks-others.html
CC-MAIN-2015-35
refinedweb
334
67.96
This stable and current. Install To install the latest stable version: sudo easy_install django-sphinx To install the latest development version (updated quite often): svn checkout django-sphinx cd django-sphinx sudo python setup.py install Note: You will need to install the sphinxapi.py package into your Python Path or use one of the included versions. To use the included version, you must specify the following in your settings.py file: # Sphinx 0.9.9 SPHINX_API_VERSION = 0x116 # Sphinx 0.9.8 SPHINX_API_VERSION = 0x113 # Sphinx 0.9.7 SPHINX_API_VERSION = 0x107 Usage The following is some example usage: class MyModel(models.Model): search = SphinxSearch() # optional: defaults to db_table # If your index name does not match MyModel._meta.db_table # Note: You can only generate automatic configurations from the ./manage.py script # if your index name matches. search = SphinxSearch('index_name') # Or maybe we want to be more.. specific searchdelta = SphinxSearch( index='index_name delta_name', weights={ 'name': 100, 'description': 10, 'tags': 80, } ) queryset = MyModel.search.query('query') results1 = queryset.order_by('@weight', '@id', 'my_attribute') results2 = queryset.filter(my_attribute=5) results3 = queryset.filter(my_other_attribute=[5, 3,4]) results4 = queryset.exclude(my_attribute=5)[0:10] results5 = queryset.count() # as of 2.0 you can now access an attribute to get the weight and similar arguments for result in results1: print result, result._sphinx # you can also access a similar set of meta data on the queryset itself (once it's been sliced or executed in any way) print results1._sphinx Some additional methods: - count() - extra() (passed to the queryset) - all() (does nothing) - select_related() (passed to the queryset) - group_by(field, field, field) - set_options(index='', weights={}, weights=) The django-sphinx layer also supports some basic querying over multiple indexes. To use this you first need to understand the rules of a UNION. Your indexes must contain exactly the same fields. These fields must also include a content_type selection which should be the content_type id associated with that table (model). You can then do something like this: SphinxSearch('index1 index2 index3').query('hello') This will return a list of all matches, ordered by weight, from all indexes. This performs one SQL query per index with matches in it, as Django's ORM does not support SQL UNION. Config Generation django-sphinx now includes a tool to create sample configuration for your models. It will generate both a source, and index configuration for a model class. You will still need to manually tweak the output, and insert it into your configuration, but it should aid in initial setup. To use it: import djangosphinx from myproject.myapp.models import MyModel output = djangosphinx.generate_config_for_model(MyModel) print output If you have multiple models which you wish to use the UNION searching: model_classes = (ModelOne, ModelTwoWhichResemblesModelOne) output = djangosphinx.generate_config_for_models(model_classes) You can also now output configuration from the command line: ./manage.py generate_sphinx_config <appname> This will loop through all models in <appname>and attempt to find any with a SphinxSearch instance that is using the default index name (db_table).
http://code.google.com/p/django-sphinx/
crawl-002
refinedweb
492
51.04
#include <searchm.h> #include <searchm.h> True if unique ID is in database. T:Remote client sent cookie. F:Usertrack generated cookie. The command line parameters / information. NYI. 0 = use default The calculated number of results to display per page. 0=> all. searchm_display. The API begin phase file pointer. The API end phase file pointer. The API nohits phase file pointer. The API result phase file pointer. The results generated by the query. The searchm_hits/swishhits value. A convience variable. End result. The total number of complete pages (may be zero). The number of results on the last page. A convience variable. Start result. The total number of pages (may be zero). The metalist for this result. The create begin phase file name. The created end phase file name. The created nohits phase file name. The state management cookie, either generated by mod_usertrack, or received from the remote client. The created result phase file name. The state management cookie retrieved from the database. The requst information. The request server configuration. The request directory configuration. The calculated/ actual page number to display. searchm_page. The calculated next page. searchm_pagenext. The calculated previous page. searchm_pageprev. A temporary pool valid during file replacement/ results phase. The query string to process within searchm_parse_cmdline. The SwishRemovedWords result. The tags cache. The SwishParsedWords result. The Swish C API handle. The Swish C API result. The Swish C API results handle. The Swish C API search object. The created searchm_index string. "" (empty string) or NULL is an error. Passed directly to SwishInit. The created searchm_query string. "" (empty string) or NULL is an error. Passed directly to SwishExecute. The created searchm_sort string. If defined, passed directly to SwishSort. For non weighted tags, 1 if a valid value. Otherwise, the number of values in a list somewhere. The terms list for this result. For non chunked transfers, the total byte to send. The created searchm_within string.
http://searchm.sourceforge.net/html/structquery__info.html
CC-MAIN-2017-17
refinedweb
316
65.39
(1) Pimydoc(1) Pimydoc Pimydoc : [P]lease [i]nsert my doc[umentation] A Python3/GPLv3/OSX-Linux-Windows/CLI project, using no additional modules than the ones installed with Python3. Insert documentation in (text|code) files and update it. This project is written in Python but the files to be documented may be written in any language : Python, C++, Java... Example : -> inside "pimydoc", documentation source file. [pimydoc] STARTSYMB_IN_DOC :¤ [ressource::001] An interesting ressource. [ressource::002] Another interesting ressource. -> inside a (C++) source file : void foo(arg): /* ressource::001 */ // ressource::002 func("...") -> inside a (Python) source file : def foo(arg): """ ressource::001 """ # ressource::002 print("...") The source files become : void foo(arg): /* ressource::001 ¤ An interesting ressource. */ // ressource::002 // ¤ An interesting ressource. func("...") def foo(arg): """ ressource::001 ¤ An interesting ressource. """ # ressource::002 # ¤ another interesting ressource. print("...") (2) purpose and file format(2) purpose and file format Pimydoc inserts in source files the text paragraphs stored in "pimydoc", the documentation source file. Moreover, the script updates the text paragraphs already present in the source files if the documentation source file has changed. (2.1) documentation source file (docsrc) format(2.1) documentation source file (docsrc) format The file "pimydoc" is mandatory : it contains the documentation to be inserted in the source directory. After an optional header beginning with the string "[pimydoc]", the documentation itself is divided along doctitles written in brackets, like "[doctitle1]", followed by docline, a line followed by linefeed character(s). ### a simple example of a "documentation source file" : [pimydoc] REGEX_SOURCE_FILTER : .+py$ [doctitle1] This line will be added as the first line of the "doctitle1" This line will be added as the first line of the "doctitle1" [doctitle2] This line will be added as the first line of the "doctitle2" This example will find all Python files (see REGEX_SOURCE_FILTER) and add after each "doctitle1" mention the two lines given, and add after each "doctitle2" the line given. (2.1.1) a special case(2.1.1) a special case If a docline is not followed by some linefeed character(s) (e.g. if this line is the last of the file), the script will automatically add a linefeed. The added linefeed is either the last linefeed detected by reading the file, either "\r\n" for Windows systems either "\n" in the other cases. (2.1.2) comments(2.1.2) comments By default, comments are the lines beginning with the "###" string. Comments added at the end of a line like "some stuff ### mycomment" are not allowed. You can change the comment startstring : see the COMMENT_STARTSTRING in the documentation source file. (2.1.3) header(2.1.3) header The header is optional and always begin with the "[pimydoc]" string. The header's content is made of single lines following the "KEY:VALUE" format. The available keys are (the values are given as examples) : REGEX_SOURCE_FILTER : .+py$ REGEX_FIND_DOCTITLE : ^\[(?P<doctitle>.+)\]$ STARTSYMB_IN_DOC :| | PROFILE_PYTHON_SPACENBR_FOR_A_TAB : 4 REMOVE_FINAL_SPACES_IN_NEW_DOCLINES : True ! Beware, the syntax "KEY = VALUE" is not supported. • REGEX_SOURCE_FILTER : .+py$• REGEX_SOURCE_FILTER : .+py$ Python regex describing which file in the source directory have to be read and -if required- modified. Examples : REGEX_SOURCE_FILTER : .+py$ ... for Python files REGEX_SOURCE_FILTER : .+cpp$|.+h$ ... 
for C++ files Beware, the format string "*.py" is not supported. • REGEX_FIND_DOCTITLE : ^[(?P.+)]$• REGEX_FIND_DOCTITLE : ^[(?P.+)]$ Python regex describing in the documentation source file, NOT in the source files the the way the doctiles appear. The group name (?P) is mandatory. Examples : REGEX_FIND_DOCTITLE : ^[(?P.+)]$ ... if doctitles appear as "[mydoctitle]" in the documentation source file. REGEX_FIND_DOCTITLE : ^<(?P.+)>$ ... if doctitles appear as "" in the documentation source file. • STARTSYMB_IN_DOC :| |• STARTSYMB_IN_DOC :| | Characters appearing in source files juste before a doc line. The STARTSYMB_IN_DOC characters may be preceded by other characters, like spaces, "#", "//", and so on : """ | | doctitle """ # | | doctitle // | | doctitle /* | | doctitle */ You may want to add a space before and after STARTSYMB_IN_DOC; there's a difference between "STARTSYMB_IN_DOC :| |" and "STARTSYMB_IN_DOC :| | ". • PROFILE_PYTHON_SPACENBR_FOR_A_TAB : 4• PROFILE_PYTHON_SPACENBR_FOR_A_TAB : 4 For Python files : a tabulation in a docline may be replaced by spaces. The PROFILE_PYTHON_SPACENBR_FOR_A_TAB key sets the number of spaces replacing each tabulation. If you don't want any replacement, set this key to 0. Examples : PROFILE_PYTHON_SPACENBR_FOR_A_TAB : 4 (standard; see) PROFILE_PYTHON_SPACENBR_FOR_A_TAB : 0 (no replacement) • REMOVE_FINAL_SPACES_IN_NEW_DOCLINES : True• REMOVE_FINAL_SPACES_IN_NEW_DOCLINES : True If set to True, the leading spaces at the end of docline added by Pimydoc are removed. Examples : REMOVE_FINAL_SPACES_IN_NEW_DOCLINES : True REMOVE_FINAL_SPACES_IN_NEW_DOCLINES : False In the documentation source file, characters at the beginning of a line which is read as a comment and skipped. The default value is "###". (2.2) how to add doctitles in the files stored in the source directory(2.2) how to add doctitles in the files stored in the source directory You juste have to add on a line the name of a doctitle (do not follow the format described in REGEX_FIND_DOCTITLE, e.g. "[ressource::001]", this regex being used only in the documentation source file). The documentation (the doclines) is added by inserting the doclines after the line containing the doctitle. Pimydoc will add before each docline the same characters appearing before the doctitle. By example : -> inside "pimydoc", documentation source file. [ressource::001] An interesting ressource. -> inside a source file : def foo(arg): """ ressource::001 """ print("...") The source file becomes : def foo(arg): """ ressource::001 ¤ An interesting ressource. """ print("...") (2.3) profiles(2.3) profiles According to the extension of the files read in the source directories, Pimydoc slightly changes the way it adds and updates the documentation. The known profiles are : • "Python"• "Python" for files written in Python2 or Python3. • see PROFILE_PYTHON_SPACENBR_FOR_A_TAB . • "CPP" (i.e. C++)• "CPP" (i.e. C++) nothing is changed compared to the default profile. • "default"• "default" default profile (3) installation and tests(3) installation and tests Don't forget : Pimydoc is Python3 project, not a Python2 project ! $ pip3 install pimydoc or $ wget Since pimydoc.py is a stand-alone file, you may place this file in the target directory. How to run the tests : $ python3 -m unittest tests/tests.py or $ nosetests (4) workflow(4) workflow $ pimydoc -h ... displays all known parameters. $ pimydoc --version ... displays the version of the current script. basic optionsbasic options $ pimydoc ... 
searches a "pimydoc" file in the current directory, reads it and parses the current directory, searching files to be modified. $ pimydoc --downloadpimydoc ... downloads the default documentation source file (named 'pimydoc') and exits. $ pimydoc --sourcepath path/to/the/targetpath --docsrcfile name_of_the_docsrc_file ... gives to the script the name of the source path (=to be modified) and the name of the documentation source file (e.g. "pimydoc") trick : how to modify the start string in a source directory ?trick : how to modify the start string in a source directory ? three steps : $ pimydoc --remove (or -r) (modify the documentation source file : modify the STARTSYMB_IN_DOC key) $ pimydoc advanced optionsadvanced options removing the documentation added by Pimydocremoving the documentation added by Pimydoc $ pimydoc --remove $ pimydoc -r ... will remove the documentation added by Pimydoc. security modesecurity mode $ pimydoc --securitymode $ pimydoc -s ... will not delete the backup files created by Pimydoc before modifying the source files. verbosityverbosity $ pimydoc --verbose 0 ... displays only error messages $ pimydoc --verbose 1 $ pimydoc -vv ... both display only error messages and info messages $ pimydoc --verbose 2 $ pimydoc -vvv ... both display all messages, including debug messages (5) project's author and project's name(5) project's author and project's name Xavier Faure (suizokukan / 94.23.197.37) : suizokukan @T orange D@T fr Pimydoc : [P]lease [i]nsert my doc[umentation] (6) arguments(6) arguments usage: pimydoc.py [-h] [--sourcepath SOURCEPATH] [--docsrcfile DOCSRCFILE] [--verbose {0,1,2}] [-vv] [-vvv] [--version] [--remove] [--securitymode] [--downloadpimydoc] optional arguments: -h, --help show this help message and exit --sourcepath SOURCEPATH source path of the code (default: /home/suizokukan/projets/pimydoc/pimydoc) --docsrcfile DOCSRCFILE source documentation file name (default: /home/suizokukan/projets/pimydoc/pimydoc/pimydoc) --verbose {0,1,2} degree of verbosity of the output. 0 : only the error messages; 1 : all messages except debug messages; 2 : all messages, including debug messages; See also the -vv and -vvv options. (default: 1) -vv verbosity set to 1 (all messages except debug messages). (default: False) -vvv verbosity set to 2 (all messages, including debug messages). (default: False) --version, -v show the version and exit --remove, -r Remove every pimydoc line in all the files of the source directory (default: False) --securitymode, -s Security mode : backup files created by Pimdoc are not deleted (default: False) --downloadpimydoc download a default pimydoc file in the current directory and exit (default: False) (7) history / future versions(7) history / future versions v 0.2.7(beta) (2016_10_08) fixed a message in rewrite_new_targetfile()v 0.2.7(beta) (2016_10_08) fixed a message in rewrite_new_targetfile() • fixed a message in rewrite_new_targetfile() • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.2.6(beta) (2016_09_18) warning for doctitles defined but never readv 0.2.6(beta) (2016_09_18) warning for doctitles defined but never read • a warning appears if a doctitle is defined in the documentation source file but never appears in the source directory • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. 
• version packaged and sent to Pypi () v 0.2.5(beta) (2016_09_17) COMMENT_STARTSTRING in the documentation source filev 0.2.5(beta) (2016_09_17) COMMENT_STARTSTRING in the documentation source file • comment startstring can now be defined in the documentation source file (see COMMENT_STARTSTRING in the documentation). • added two debug messages in rewrite_new_targetfile(). • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.2.4(beta) (2016_09_08) : fixed a minor bug in DocumentationSource.init()v 0.2.4(beta) (2016_09_08) : fixed a minor bug in DocumentationSource.init() • DocumentationSource.__init__() : if the documentation source file is a directory, an error is raised to prevent the directory to be opened. • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.2.3(beta) (2016_09_08) : fixed the default value of REGEX_FIND_DOCTITLEv 0.2.3(beta) (2016_09_08) : fixed the default value of REGEX_FIND_DOCTITLE • Settings.__init__() : fixed the default value of REGEX_FIND_DOCTITLE to "^\\[(?P<doctitle>.+)\\]$" • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.2.2(beta) (2016_09_08) : fixed a bug in DocumentationSource.newline()v 0.2.2(beta) (2016_09_08) : fixed a bug in DocumentationSource.newline() • DocumentationSource.newline() handles correctly the case where a doctitle is ill-formed. • improved messages : the line number is now marked by the word "line". • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.2.1(beta) (2016_09_08) : fixed an error in pimydocv 0.2.1(beta) (2016_09_08) : fixed an error in pimydoc • added the accidently deleted "STARTSYMB_IN_DOC :| |" in pimydoc • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.2(beta) (2016_09_08) : messages display improvementv 0.2(beta) (2016_09_08) : messages display improvement • added an info message at the end of the download_default_pimydoc() function. • improved the final message of the pimydoc() function. • improved a message in the rewrite_new_targetfile() function. • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.1.9(beta) (2016_09_08) : fixed issue #002, --downloadpimydoc optionv 0.1.9(beta) (2016_09_08) : fixed issue #002, --downloadpimydoc option • rewrite_new_targetfile : a new exception is caught (UnicodeDecodeError) when reading a binary file • --downloadpimydoc option : the function download_default_pimydoc() downloads from the default "pimydoc" file. • updated the documentation • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.1.8(beta) (2016_09_07) : fixed issue #001v 0.1.8(beta) (2016_09_07) : fixed issue #001 • rewrite_new_targetfile() : fixed the way a linefeed is added to a docline without linefeed : either the last linefeed characters read in the file, either \r\n on Windows systems, either "\n". • rewrite_new_targetfile() opens the files in binary mode to avoid that Windows OS modified the linefeed characters to be added at the end of doclines. Without this modification, the tests can't pass on Windows systems since the test files use the \n linefeed, NOT the \r\n one. 
• open() uses the option "encoding='utf-8" so that the script doesn't use the cp1252 encoding on Windows systems. • (tests.py) the PATH_TO_CURRENT_TEST directory is removed at the beginning of each test function. • added a rewrite_new_targetfile__line() function to size down the rewrite_new_targetfile() function. • improved the documentation • unittests : 2 tests (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.1.7(beta) (2016_08_29) : documentation improved/project in beta stage.v 0.1.7(beta) (2016_08_29) : documentation improved/project in beta stage. • improved the documentation • fixed a bug in the tests : the filenames have to be sorted in order to hash a directory exactly the same way on different computers. • Fixed a bug in rewrite_new_targetfile() : lines containing STARTSYMB_IN_DOC or STARTSYMB_IN_DOC.rstrip() have to be discarded. Fixed the corresponding tests. • Fixed a bug in tests/tests.py : files whose name ends in "~" are discarded. • removed from the pimydoc() function the "just_remove_pimydoc_lines" argument, since the value can be deduced from args.remove • added a test : Tests.test_pimydoc_function__r() to test the --remove argument. • added the file : tests/__init__.py • removed the unused constant STARTSYMB_IN_DOC__ESCAPE • rewrite_new_targetfile() : added a debug message • added tests/test8/ which should have been added in 0.1.6 • improved error message in newline() • modified rewrite_new_targetfile() : if PROFILE_PYTHON_SPACENBR_FOR_A_TAB is set to 0, no tabulation is replaced. • DocumentationSource.newline() is now a method (not a subfunction of DocumentationSource.__init__() anymore) • unittests : 1 test (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.1.6(alpha) (2016_08_28) : a docline can be inside a commentary beginning with "#" or "//"v 0.1.6(alpha) (2016_08_28) : a docline can be inside a commentary beginning with "#" or "//" • modified rewrite_new_targetfile() to handle doclines to be added after some symbols like "#" (Python) or "//" (C/C++). Added some tests to test this feature. • simplified the documentation source file : no more "REGEX_FIND_PIMYDOC*" keys. • improved the documentation • unittests : 1 test (passed) • raw Pylint invocation : 10.0/10.0 for all scripts. • version packaged and sent to Pypi () v 0.1.5(alpha) (2016_08_27) first unittestsv 0.1.5(alpha) (2016_08_27) first unittests • add the "tests/" directory • fixed a bug in DocumentationSource.__init__() : .format() syntax has to be used anywhere except for logging messages. • fixed a bug in remove_and_return_linefeed() : I forgot to return an empty linefeed if the last characters are not "\n", "\r" or "\r\n" • fixed a bug in rewrite_new_targetfile() : if a docline doesn't end with a linefeed character, we have to add it manually in the target files. • fixed a bug in rewrite_new_targetfile() : the file opened is now explicitly closed. • added the README.rst file : the file is required by Pypi and should have been added since v 0.1.4. • documentation improved - unittests : 1 test (passed) - raw Pylint invocation : 10.0/10.0 for all scripts. - version packaged and sent to Pypi () v 0.1.4(alpha) (2016_08_26) first version available on Pypiv 0.1.4(alpha) (2016_08_26) first version available on Pypi • Pimydoc is now available on Pypi : • improved the documentation • raw Pylint invocation : 10.0/10.0 for all scripts. 
• version packaged and sent to Pypi () older versions :older versions : + 0.1.3(alpha) : • pimydoc is callable from a Python script. • in rewrite_new_targetfile(), FileNotFoundError is catched to prevent an error by reading some special files like temp files. • in pimydoc_a_file(), added a call to shutil.copystat() to correctly set the permissions of the new target file. • in DocumentationSource.__init__() added a test about the existence of the file to be read. • improved messages displayed by the script • raw Pylint invocation : 10.0/10.0 + 0.1.2(alpha) • improved documentation • added the LICENSE.txt file • raw Pylint invocation : 10.0/10.0 + 0.1.1(alpha) • added -vv, -vvv options • --verbosity > --verbose • raw Pylint invocation : 10.0/10.0 + 0.1(alpha) • added --securitymode/-s option • improved documentation • raw Pylint invocation : 10.0/10.0 + 0.0.9(alpha) : two profiles are available : "Python" and "C++"; removed two useless constants (SOURCEPATH, DOCSRCFILENAME); the exit values are {0, -1, -2}; try/except added around calls to re.compile() and around .group(); improved documentation; + 0.0.8(alpha) : added the --remove/-r option. + 0.0.7(alpha) : redefined --verbosity={0,1,2}, default=1; improved documentation + 0.0.6(alpha) : removed the useless constant VERBOSITY; improved the documentation; the regexes are now compiled + 0.0.5(alpha) : fixed a bug. If REMOVE_FINAL_SPACES_IN_NEW_DOCLINES is set to "True", some empty lines wanted by the definition in pimydoc may be cut. E.g. : ( space character being replaced by _ ) STARTSYMB_IN_DOC :|_|_ REMOVE_FINAL_SPACES_IN_NEW_DOCLINES : True [doc001] line1 <--- this an empty line ... the expected result is : But the rewrite_new_targetfile() function needs to identify the added lines : searching "|_|_" wouldn't be sufficient to find to second line, hence the need to search with REGEX_FIND_PIMYDOCLINE_IN_CODE2, i.e. REGEX_FIND_PIMYDOCLINE_IN_CODE without final spaces (\_ here, since the REGEX_* constants are re.escape'd). + 0.0.4(alpha) : added a function : remove_and_return_linefeed(); improved the way final spaces are handled at the end of the added lines; added a new setting : REMOVE_FINAL_SPACES_IN_NEW_DOCLINES + 0.0.3(alpha) : improved documentation; pylinted the code; REGEX_SOURCE_FILTER; project's name set to "pimydoc"; improved the way the regexes are read in the documentation source file. + 0.0.2(alpha) : --verbosity levels reduced to 3; improved documentation + 0.0.1(alpha) : Settings class, no more call to eval() + 0.0.0(alpha) : proof of concept (8) technical details(8) technical details (8.1) exit values :(8.1) exit values : See the file named "pimydoc", section "exit codes"
https://libraries.io/pypi/Pimydoc
CC-MAIN-2019-35
refinedweb
2,967
50.63
Hello. I am using the Intel C++ compiler with Visual Studio 2008. When I override the new/delete operators I get strange behaviour for a very simple program. I simply override the new/delete operators and then I create a string object. The problem is that when I use Intel C++ in Release mode, my operator new is not called BUT my operator delete is called. This means that the application crashes because I do a scalable_free but the system had done a normal malloc. This does not happen in Intel C++ Debug mode or in MS C++ (Debug or Release mode). Here is the test program:
#include "tbb/scalable_allocator.h"
#include <iostream>
#include <string>
using namespace std;

void* operator new (size_t size) throw (std::bad_alloc)
{
    cout << "Called my new" << endl;
    if (size == 0)
        size = 1;
    if (void* ptr = scalable_malloc(size))
        return ptr;
    throw std::bad_alloc();
}

void* operator new[] (size_t size) throw (std::bad_alloc)
{
    cout << "Called my new[]" << endl;
    return operator new (size);
}

void operator delete (void* ptr) throw ()
{
    cout << "Called my delete" << endl;
    if (ptr != 0)
        scalable_free(ptr);
}

void operator delete[] (void* ptr) throw()
{
    cout << "Called my delete[]" << endl;
    operator delete (ptr);
}

// I omit the non-throwing versions, but they exist

int main()
{
    string str("A test string. A test string.");
    return 0;
}
The output of the program with Intel C++ in Release mode is "Called my delete" (and a crash), while the output in Debug mode and in MS C++ is none (meaning that none of my operators is called). Finally, if I disable the optimisations (/Od), I don't get the error. Is this a bug in the Intel C++ compiler? I would appreciate your comments and any workaround you can think of. Thanks in advance Alfredo
https://software.intel.com/en-us/forums/intel-c-compiler/topic/294594
CC-MAIN-2018-09
refinedweb
285
61.46
Authentication is a very necessary feature for applications that store user data. It’s a process of verifying the identity of users, ensuring that unauthorized users cannot access private data — data belonging to other users. This leads to having restricted routes that can only be accessed by authenticated users. These authenticated users are verified by using their login details (i.e. username/email and password) and assigning them with a token to be used in order to access an application’s protected resources. In this article, you’re going to be learning about: Dependencies We will be working with the following dependencies that help in authentication: - Axios For sending and retrieving data from our API - Vuex For storing data gotten from our API - Vue-Router For navigation and protection of Routes We will be working with these tools and see how they can work together to provide robust authentication functionality for our app. The Backend API We will be building a simple blog site, which will make use of this API. You can check out the docs to see the endpoints and how requests should be sent. From the docs, you’ll notice few endpoints are attached with a lock. This is a way to show that only authorized users can send requests to those endpoints. The unrestricted endpoints are the /register and /login endpoints. An error with the status code 401 should be returned when an unauthenticated user attempts to access a restricted endpoint. After successfully logging in a user, the access token alongside some data will be received in the Vue app, which will be used in setting the cookie and attached in the request header to be used for future requests. The backend will check the request header each time a request is made to a restricted endpoint. Don’t be tempted to store the access token in the local storage. Scaffold Project Using the Vue CLI, run the command below to generate the application: vue create auth-project Navigate into your new folder: cd auth-project Add the vue-router and install more dependencies — vuex and axios: vue add router npm install vuex axios Now run your project and you should see what I have below on your browser: npm run serve 1. Vuex Configuration with Axios Axios is a JavaScript library that is used to send requests from the browser to APIs. According to the Vuex documentation; “Vuex is a state management pattern + library for Vue.js applications. It serves as a centralized store for all the components in an application, with rules ensuring that the state can only be mutated in a predictable fashion.” What does that mean? Vuex is a store used in a Vue application that allows us to save data that will be available to every component and provide ways to change such data. We’ll use Axios in Vuex to send our requests and make changes to our state (data). Axios will be used in Vuex actions to send GET and POST, response gotten will be used in sending information to the mutations and which updates our store data. To deal with Vuex resetting after refreshing we will be working with vuex-persistedstate, a library that saves our Vuex data between page reloads. npm install --save vuex-persistedstate Now let’s create a new folder store in src, for configuring the Vuex store. In the store folder, create a new folder; modules and a file index.js. It’s important to note that you only need to do this if the folder does not get created for you automatically. 
import Vuex from 'vuex'; import Vue from 'vue'; import createPersistedState from "vuex-persistedstate"; import auth from './modules/auth'; // Load Vuex Vue.use(Vuex); // Create store export default new Vuex.Store({ modules: { auth }, plugins: [createPersistedState()] }); Here we are making use of Vuex and importing an auth module from the modules folder into our store. Modules Modules are different segments of our store that handles similar tasks together, including: Before we proceed, let’s edit our main.js file. import Vue from 'vue' import App from './App.vue' import router from './router'; import store from './store'; import axios from 'axios'; axios.defaults.withCredentials = true axios.defaults.baseURL = ''; Vue.config.productionTip = false new Vue({ store, router, render: h => h(App) }).$mount('#app') We imported the store object from the ./store folder as well as the Axios package. As mentioned earlier, the access token cookie and other necessary data got from the API need to be set in the request headers for future requests. Since we’ll be making use of Axios when making requests we need to configure Axios to make use of this. In the snippet above, we do that using axios.defaults.withCredentials = true, this is needed because by default cookies are not passed by Axios. aaxios.defaults.withCredentials = true is an instruction to Axios to send all requests with credentials such as; authorization headers, TLS client certificates, or cookies (as in our case). We set our axios.defaults.baseURL for our Axios request to our API This way, whenever we’re sending via Axios, it makes use of this base URL. With that, we can add just our endpoints like /register and /login to our actions without stating the full URL each time. Now inside the modules folder in store create a file called auth.js //store/modules/auth.js import axios from 'axios'; const state = { }; const getters = { }; const actions = { }; const mutations = { }; export default { state, getters, actions, mutations }; state In our state dict, we are going to define our data, and their default values: const state = { user: null, posts: null, }; We are setting the default value of state, which is an object that contains user and posts with their initial values as null. Actions Actions are functions that are used to commit a mutation to change the state or can be used to dispatch i.e calls another action. It can be called in different components or views and then commits mutations of our state; Register Action Our Register action takes in form data, sends the data to our /register endpoint, and assigns the response to a variable response. Next, we will be dispatching our form username and password to our login action. This way, we log in the user after they sign up, so they are redirected to the /posts page. async Register({dispatch}, form) { await axios.post('register', form) let UserForm = new FormData() UserForm.append('username', form.username) UserForm.append('password', form.password) await dispatch('LogIn', UserForm) }, Login Action Here is where the main authentication happens. When a user fills in their username and password, it is passed to a User which is a FormData object, the LogIn function takes the User object and makes a POST request to the /login endpoint to log in the user. The Login function finally commits the username to the setUser mutation. 
async LogIn({commit}, User) { await axios.post('login', User) await commit('setUser', User.get('username')) }, Create Post Action Our CreatePost action is a function, that takes in the /post endpoint, and then dispatches the GetPosts action. This enables the user to see their posts after creation. async CreatePost({dispatch}, post) { await axios.post('post', post) await dispatch('GetPosts') }, Get Posts Action Our GetPosts action sends a GET request to our /posts endpoint to fetch the posts in our API and commits setPosts mutation. async GetPosts({ commit }){ let response = await axios.get('posts') commit('setPosts', response.data) }, Log Out Action async LogOut({commit}){ let user = null commit('logout', user) } Our LogOut action removes our user from the browser cache. It does this by committing a Mutations const mutations = { setUser(state, username){ state.user = username }, setPosts(state, posts){ state.posts = posts }, LogOut(state){ state.user = null state.posts = null }, }; Each mutation takes in the state and a value from the action committing it, aside Logout. The value gotten is used to change certain parts or all or like in LogOut set all variables back to null. Getters Getters are functionalities to get the state. It can be used in multiple components to get the current state. The isAuthenticatated function checks if the state.user is defined or null and returns true or false respectively. StatePosts and StateUser return state.posts and state.user respectively value. const getters = { isAuthenticated: state => !!state.user, StatePosts: state => state.posts, StateUser: state => state.user, }; Now your whole auth.js file should resemble my code on GitHub. Setting Up Components 1. NavBar.vue And App.vue Components In your src/components folder, delete the HelloWorld.vue and a new file called NavBar.vue. This is the component for our navigation bar, it links to different pages of our component been routed here. Each router link points to a route/page on our app. The v-if="isLoggedIn" is a condition to display the Logout link if a user is logged in and hide the Register and Login routes. We have a logout method which can only be accessible to signed-in users, this will get called when the Logout link is clicked. It will dispatch the LogOut action and then direct the user to the login page. <template> <div id="nav"> <router-linkHome</router-link> | <router-linkPosts</router-link> | <span v- <a @Logout</a> </span> <span v-else> <router-linkRegister</router-link> | <router-linkLogin</router-link> </span> </div> </template> <script> export default { name: 'NavBar', computed : { isLoggedIn : function(){ return this.$store.getters.isAuthenticated} }, methods: { async logout (){ await this.$store.dispatch('LogOut') this.$router.push('/login') } }, } </script> <style> #nav { padding: 30px; } #nav a { font-weight: bold; color: #2c3e50; } a:hover { cursor: pointer; } #nav a.router-link-exact-active { color: #42b983; } </style> Now edit your App.vue component to look like this: <template> <div id="app"> <NavBar /> <router-view/> </div> </template> <script> // @ is an alias to /src import NavBar from '@/components/NavBar.vue' export default { components: { NavBar } } </script> <style> #app { font-family: Avenir, Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; } </style> Here we imported the NavBar component which we created above and placed in the template section before the <router-view />. 2. 
Views Components Views components are different pages on the app that will be defined under a route and can be accessed from the navigation bar. To get started Go to the views folder, delete the About.vue component, and add the following components: Home.vue Rewrite the Home.vue to look like this: <template> <div class="home"> <p>Heyyyyyy welcome to our blog, check out our posts</p> </div> </template> <script> export default { name: 'Home', components: { } } </script> This will display a welcome text to the users when they visit the homepage. Register.vue This is the Page we want our users to be able to sign up on our application. When the users fill the form, their information is been sent to the API and added to the database then logged in. Looking at the API, the /register endpoint requires a username, full_name and password of our user. Now let’s create a page and a form to get those information: <template> <div class="register"> <div> <form @submit. <div> <label for="username">Username:</label> <input type="text" name="username" v- </div> <div> <label for="full_name">Full Name:</label> <input type="text" name="full_name" v- </div> <div> <label for="password">Password:</label> <input type="password" name="password" v- </div> <button type="submit"> Submit</button> </form> </div> <p v-Username already exists</p> </div> </template> In the Register component, we’ll need to call the Register action which will receive the form data. <script> import { mapActions } from "vuex"; export default { name: "Register", components: {}, data() { return { form: { username: "", full_name: "", password: "", }, showError: false }; }, methods: { ...mapActions(["Register"]), async submit() { try { await this.Register(this.form); this.$router.push("/posts"); this.showError = false } catch (error) { this.showError = true } }, }, }; </script> We start by importing mapActions from Vuex, what this does is to import actions from our store to the component. This allows us to call the action from the component. data() contains the local state value that will be used in this component, we have a form object that contains username, full_name and password, with their initial values set to an empty string. We also have showError which is a boolean, to be used to either show an error or not. In the methods we import the Register action using the Mapactions into the component, so the Register action can be called with this.Register . We have a submit method this calls the Register action which we have access to using this.Register, sending it this.form. If no error is encountered we make use of this.$router to send the user to the login page. Else we set showError to true. Having done that, we can include> Login.vue Our LogIn page is where registered users, will enter their username and password to get authenticated by the API and logged into our site. <template> <div class="login"> <div> <form @submit. 
<div> <label for="username">Username:</label> <input type="text" name="username" v- </div> <div> <label for="password">Password:</label> <input type="password" name="password" v- </div> <button type="submit">Submit</button> </form> <p v-Username or Password is incorrect</p> </div> </div> </template> Now we’ll have to pass our form data to the action that sends the request and then push them to the secure page <script> import { mapActions } from "vuex"; export default { name: "Login", components: {}, data() { return { form: { username: "", password: "", }, showError: false }; }, methods: { ...mapActions(["LogIn"]), async submit() { const User = new FormData(); User.append("username", this.form.username); User.append("password", this.form.password); try { await this.LogIn(User); this.$router.push("/posts"); this.showError = false } catch (error) { this.showError = true } }, }, }; </script> We import Mapactions and use it in importing the LogIn action into the component, which will be used in our submit function. After the Login action, the user is redirected to the /posts page. In case of an error, the error is caught and ShowError is set to true. Now,> Posts.vue Our Posts page is the secured page that is only available to authenticated users. On this page, they get access to posts in the API’s database. This allows the users to have access to posts and also enables them to create posts to the API. <template> <div class="posts"> <div v- <p>Hi {{User}}</p> </div> <div> <form @submit. <div> <label for="title">Title:</label> <input type="text" name="title" v- </div> <div> <textarea name="write_up" v-</textarea> </div> <button type="submit"> Submit</button> </form> </div> <div class="posts" v- <ul> <li v- <div id="post-div"> <p>{{post.title}}</p> <p>{{post.write_up}}</p> <p>Written By: {{post.author.username}}</p> </div> </li> </ul> </div> <div v-else> Oh no!!! We have no posts </div> </div> </template> In the above code, we have a form for the user to be able to create new posts. Submitting the form should cause the post to be sent to the API — we’ll add the method that does that shortly. We also have a section that displays posts obtained from the API (in case the user has any). If the user does not have any posts, we simply display a message that there are no posts. The StateUser and StatePosts getters are mapped i.e imported using mapGetters into Posts.vue and then they can be called in the template. <script> import { mapGetters, mapActions } from "vuex"; export default { name: 'Posts', components: { }, data() { return { form: { title: '', write_up: '', } }; }, created: function () { // a function to call getposts action this.GetPosts() }, computed: { ...mapGetters({Posts: "StatePosts", User: "StateUser"}), }, methods: { ...mapActions(["CreatePost", "GetPosts"]), async submit() { try { await this.CreatePost(this.form); } catch (error) { throw "Sorry you can't make a post now!" } }, } }; </script> We have an initial state for form, which is an object which has title and write_up as its keys and the values are set to an empty string. These values will change to whatever the user enters into the form in the template section of our component. When the user submits the post, we call the this.CreatePost which receives the form object. As you can see in the created lifecycle, we have this.GetPosts to fetch posts when the component is created. 
Some styling, <style scoped> * { box-sizing: border-box; } label { padding: 12px 12px 12px 0; display: inline-block; } button[type=submit] { background-color: #4CAF50; color: white; padding: 12px 20px; cursor: pointer; border-radius:30px; margin: 10px; } button[type=submit]:hover { background-color: #45a049; } input { width:60%; margin: 15px; border: 0; box-shadow:0 0 15px 4px rgba(0,0,0,0.06); padding:10px; border-radius:30px; } textarea { width:75%; resize: vertical; padding:15px; border-radius:15px; border:0; box-shadow:0 0 15px 4px rgba(0,0,0,0.06); height:150px; margin: 15px; } ul { list-style: none; } #post-div { border: 3px solid #000; width: 500px; margin: auto; margin-bottom: 5px;; } </style> 2. Defining Routes In our router/index.js file, import our views and define routes for each of them }) export default router 3. Handling Users - Unauthorized Users If you noticed in defining our posts routes we added a metakey to indicate that the user must be authenticated, now we need to have a router.BeforeEachnavigation guard that checks if a route has the meta: {requiresAuth: true}key. If a route has the metakey, it checks the store for a token; if present, it redirects them to the loginroute. const router = new VueRouter({ mode: 'history', base: process.env.BASE_URL, routes }) router.beforeEach((to, from, next) => { if(to.matched.some(record => record.meta.requiresAuth)) { if (store.getters.isAuthenticated) { next() return } next('/login') } else { next() } }) export default router - Authorized Users We also have a metaon the /registerand /loginroutes. The meta: {guest: true}stops users that are logged in from accessing the routes with the guestmeta. router.beforeEach((to, from, next) => { if (to.matched.some((record) => record.meta.guest)) { if (store.getters.isAuthenticated) { next("/posts"); return; } next(); } else { next(); } }); In the end, your file should be like this:, }); router.beforeEach((to, from, next) => { if (to.matched.some((record) => record.meta.requiresAuth)) { if (store.getters.isAuthenticated) { next(); return; } next("/login"); } else { next(); } }); router.beforeEach((to, from, next) => { if (to.matched.some((record) => record.meta.guest)) { if (store.getters.isAuthenticated) { next("/posts"); return; } next(); } else { next(); } }); export default router; 4.Handling Expired Token (Forbidden Requests). Add the snippet below after the Axios default URL declaration in the main.js file. axios.interceptors.response.use(undefined, function (error) { if (error) { const originalRequest = error.config; if (error.response.status === 401 && !originalRequest._retry) { originalRequest._retry = true; store.dispatch('LogOut') return router.push('/login') } } }) This should take your code to the same state as the example on GitHub. Conclusion If you’ve been able to follow along until the end, you should now be able to build a fully functional and secure front-end application. Now you’ve learned more about Vuex and how to integrate it with Axios, and also how to save its data after reloading. Resources (ks, ra, il)
https://www.fvwebsite.design/fraser-valley-website-design/authentication-in-vue-js-smashing-magazine/
CC-MAIN-2021-17
refinedweb
3,172
56.66
This may be a simple question, but I’ve spent a few minutes searching the forum without finding the answer. I may have searched for the wrong keywords, so I added multiple keywords to my topic to aid future visitors. I’m trying to make a little python library with a reusable function. So, I created /etc/openhab2/automation/lib/python/personal/mylib.py …and in my script file I reference it like so: from personal.mylib import myfunction So far so good. As soon as I call myfunction, mylib.py gets compiled to mylib$py.class in the same directory, and myfunction executes. Now, here’s the question. What should I do when I want to make changes to myfunction, just in case my code doesn’t work perfectly the first time? I know, it’s unlikely, but it could happen. No matter what I do, the library is never recompiled. In fact, even deleting mylib$py.class has no effect! The only way I was able to do it right now was to rename mylib.py to mylib2.py and then change the import reference, which is obviously as unmaintainable as restarting openHAB would be. What is the intended workflow here?
https://community.openhab.org/t/jsr223-jython-how-to-reload-refresh-recompile-a-personal-python-library-as-youre-developing-it/77439
CC-MAIN-2022-21
refinedweb
203
78.65
This Python program is an example of the divide-and-conquer programming approach: a binary search implemented in Python.
Binary Search implementation
In binary search we take a sorted list of elements and start looking for an element at the middle of the list. If the search value matches the middle value in the list, we complete the search. Otherwise we eliminate half of the list of elements by choosing whether to proceed with the right or left half of the list, depending on the value of the item searched. This is possible because the list is sorted, and it is much quicker than linear search. Here we divide the given list and conquer by choosing the proper half of the list. We repeat this approach until we find the element or conclude that it is absent from the list.
def bsearch(list, val):
    list_size = len(list) - 1
    idx0 = 0
    idxn = list_size
    # Find the middle most value
    while idx0 <= idxn:
        midval = (idx0 + idxn) // 2
        if list[midval] == val:
            return midval
        # Compare the value with the middle most value
        if val > list[midval]:
            idx0 = midval + 1
        else:
            idxn = midval - 1
    if idx0 > idxn:
        return None

# Initialize the sorted list
list = [2,7,19,34,53,72]

# Print the search result
print(bsearch(list,72))
print(bsearch(list,11))
When the above code is executed, it produces the following result:
5
None
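Since the program above is presented as an example of divide and conquer, here is an equivalent recursive sketch of the same search. This is an added illustration, not part of the original program:
def bsearch_recursive(sorted_list, val, lo=0, hi=None):
    # Recursive formulation of the same divide-and-conquer idea:
    # inspect the middle element, then recurse into one half only.
    if hi is None:
        hi = len(sorted_list) - 1
    if lo > hi:
        return None            # value is not present
    mid = (lo + hi) // 2
    if sorted_list[mid] == val:
        return mid
    if val > sorted_list[mid]:
        return bsearch_recursive(sorted_list, val, mid + 1, hi)
    return bsearch_recursive(sorted_list, val, lo, mid - 1)

print(bsearch_recursive([2, 7, 19, 34, 53, 72], 72))   # 5
print(bsearch_recursive([2, 7, 19, 34, 53, 72], 11))   # None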
https://scanftree.com/tutorial/python/python-data-structure/python-divide-conquer/
CC-MAIN-2022-40
refinedweb
231
52.94
Description
Class for a single node in the SPH cluster. Does not define mass, inertia and shape because those are shared among them.
#include <ChMatterSPH.h>
Member Function Documentation
• Increment the provided state of this object by the given state-delta increment. Compute: x_new = x + dw. Implements chrono::ChContactable.
• Apply the force, expressed in absolute reference, applied in pos, to the coordinates of the variables. Force for example could come from a penalty model. Implements chrono::ChContactable.
• Get the absolute speed of a local point attached to the contactable. The given point is assumed to be expressed in the local frame of this object. This function must use the provided states. Implements chrono::ChContactable.
• Get the absolute speed of point abs_point if attached to the surface. Easy in this case because there are no rotations. Implements chrono::ChContactable.
• Return the coordinate system for the associated collision model. ChCollisionModel might call this to get the position of the contact model (when rigid) and sync it. Implements chrono::ChContactable.
http://api.projectchrono.org/classchrono_1_1_ch_node_s_p_h.html
CC-MAIN-2019-30
refinedweb
167
58.99
Build Persisting Layer With ASP.NET Core And EF Core Using PostgreSQL And SQL Server 2016 Oct 14, 2016. In this article, you will learn how to build persisting layer with ASP.NET Core and EF Core, using PostgreSQL and SQL Server 2016. Business Entity And Data Access Layer In MVC Dec 04, 2015. This article explains how to use business entities layer and data access layer in ASP.NET MVC. Applied Secure Socket Layer in .NET: Part 2 Installation and Testing Oct 28, 2014. This article is resuming the voyage by covering the applied aspect of SSL on .NET website via IIS webserver along with the creation of digital certificates. Secure Socket Layer in .NET Oct 27, 2014. This article explains the Secure Sockets Layer (SSL) including the inherent issue of web server security and the process of SSL configuring and implementing in the form of digital certificates over an ASP.NET website. Core OS Layer in iPhone Mar 12, 2013. In this article I will explain iOS Core OS Layer and its framework in iPhone. Cocoa Touch Layer in iPhone Mar 08, 2013. In this article I will explain Cocoa Touch layer and its framework. Extend 3 Layer ASP.NET Application to 4 Layer to Achieve Higher Level Abstraction Aug 19, 2012. In this article we will understand the meaning of letter N in N-Layer applications and the concept of one of the OOPS pillar Abstraction. Generic Data Access Layer for WCF : Part 5 Sep 21, 2011. In this article, we will solve the runtime serialization issues we encountered in the previous article. Generic Data Access Layer for WCF : Part 4 Sep 20, 2011. In this article we will create a WCF Business Layer and its methods. Generic Data Access Layer using ADO.NET Entity Framework Sep 19, 2011. In this article, we will learn how to create a Generic data access layer with a WCF Layer. I use the Entity Framework to create a Data Model for our database. Generic Data Access Layer using ADO.NET Entity Framework : Part 2 Sep 19, 2011. In this article, we will take a deeper look at what the Entity Framework provides us and how we can modify it to achieve a Generic Data Access Layer. Generic Data Access Layer for WCF : Part 3 Sep 19, 2011.
In this article, we will see how we can modify what the ADO.NET Entity Framework has provided us to achieve a Generic Data Access Layer. LINQ With 3 Layer Architecture (Insert Data Into Database) Jun 22, 2011. Here you will see how to use LINQ with 3 Layer Architecture (Insert data into database). Using 3 Layer Architecture to Insert Data Into a Database Apr 13, 2011. How to use 3 Layer architecture to insert data into a database. How to Retrieve Images from Database (In Layer Architecture) Feb 27, 2011. Here you will learn how to Retrieve Images from a Database (In a Layer Architecture). Optimize your Data Layer for quicker Code Development Feb 19, 2010. In this article we will see how to optimize the data layer. F# Data Abstraction Layer For C# Apr 06, 2009. In this article I'll take a look at building a data abstraction layer in F# and consuming it with. Set up Secure Sockets Layer (SSL) using Digital Certificates Mar 06, 2007. This article explains how to secure an IIS Web application using SSL certificates.?. Data Access Layer based on dataSets Jul 01, 2003. This article aims to introduce the reader to several conceptual problems encountered in the development of a generic Data Access Layer (from now on referred to as DAL). Layers In Unity Jun 14, 2017. In this article, I am going to explain how to add layers in your unity project. Reusability Of The Code With Three Layers Architecture In ASP.NET Nov 26, 2015. In this article, you will learn about how to implement three layers architecture with ASP.NET application. Displaying Data in Gridview Using Three-Layer Architecture Aug 16, 2015. This article shows how to display data in a GridView using Three-Layer Architecture. How to Implement 3 Layered Architecture Concepts in ASP.Net Oct 11, 2014. In this article you will learn how to implement 3-Tier Architecture concepts in ASP.Net. How to Work With Layers in ASP.Net Project Nov 24, 2013. This article explains how to make and work with layers in ASP.NET projects. Pass Data in Layered Architecture: Part-2: Non Uniform Style Oct 07, 2013. This article explains how to transfer data in a non-uniform fashion. Pass Data in Layered Architecture: Part-1: Uniformly Using Entity Class Oct 04, 2013. This article explains how to pass data across layers in a uniform fashion using an entity class. Understanding Multilayered Architecture in .NET May 09, 2012. This article focuses on understanding a basic multilayered architecture in C#. Traffic Layers Bing Maps in AJAX May 01, 2012. In this article, we provide an example of Bing Maps In which we show the traffic in the map with the help of the Ajax Control 7.0 ISDK. Canvas Shape Layering Using HTML 5 Mar 06, 2012. In this article we are going to have a very interesting section about canvas shape layering using HTML 5. To layer shapes we can use one of the following layering methods like moveToTop(), moveToBottom(), moveUp() and moveDown(). NLayers Architecture Feb 24, 2012. NLayers tries to provide a Layering solution to a typical ASP.NET application. This article is a continuation of the previous one about NLayers Introduction and Installation. NLayers Introduction Feb 18, 2012. In this article I would like to introduce a layering framework named NLayers. This article is intended for experienced developers or architects with a good understanding of ASP.NET and the ADO.NET Entity Framework. Generic Data Access Layer: Part 1 Sep 08, 2011. 
In this article we wwill be discussing how we can create a Generic Data Acess Layer which we could use in developing our Business Applications. Scene Order (Layer Order) in XAML Silverlight Apr 11, 2011. In this article, you will learn how to order elements or scenes. Layer Model Of Development Mar 29, 2011. A layer is a reusable portion of code that performs a specific function. WCF Messaging Layer Feb 15, 2011. This article helps to explain and list some basics of WCF Messaging Layer. DALC4NET (An All in One .NET Data Access Layer) Feb 07, 2011. DALC4NET is an Open Source data access layer built for Microsoft .NET projects. This enables us to access data from SQL Server, Oracle, MySql, MS Access, MS Excel etc. data bases. Creating Extensible and Abstract Layer Feb 28, 2008. This article explains you about the abstraction and extensibility which is an important factor in modern day frameworks. Importance of Data Access Layer Jan 19, 2006. This article is written to see the importance of having a separate Data Access layer. Remote Data Access Layer Aug 24, 2004. The attached source project is a data access layer library and the main idea of developing such a DAL is to separate the database execution from the client/end user and maintain it on the server side, there by reducing the number of direct simultaneous connection to the SQL Server. Architecture Of .NET Framework Jul 06, 2017. In I will explain why we divide the .NET framework into layers.. ASP.NET MVC 5 With ReactJS Jun 24, 2017. React is a very popular front-end library developed by Facebook. It’s basically used for handling view layer for web and mobile apps. Cookie Manager Wrapper In ASP.NET Core May 03, 2017. In this article, you will learn how to work with cookies in an ASP.NET Core style (in the form of an interface) , abstraction layer on top of cookie object and how to secure cookie data. Creating Web API With Repository Pattern And Dependency Injection Nov 02, 2016. In this article, you will learn how to create Web API with a Layered Repository Pattern, using Dependency Injections. Enterprise Library Data Access Application Block In C# .NET Dec 12, 2015. In this article I will explain how easy it is to develop Data Access Layers with these libraries. Introduction To Data binding UWP Nov 30, 2015. In this article you will get an introduction to Data binding UWP. Data binding is the process that establishes a connection between the UI layer with our Data Layer Internet of Things (IoT) - Part 2 (Building Blocks & Architecture) Aug 21, 2015. This article explains the IoT services and the layers in the IoT. Design Patterns: Proxy Jul 09, 2015. The proxy design pattern is a layer that prevents you from instantiating heavy objects that will not be needed at a certain time. CRUD Application Using DLL and Stored Procedure May 29, 2015. This application shows how to do Create, Read, Update and Delete (CRUD) operations on a BOOKS table using a DLL and a Stored Procedure in SQL Server with ASP.Net. Overview of Windows Presentation Foundation (WPF) Architecture Mar 11, 2015. This article provides an overview of the Windows Presentation Foundation (WPF) architecture. ADO.NET Overview Dec 09, 2014. In this article we examine the connected layer and learn about the significant role of data providers that are essentially concrete implementations of several namespaces, interfaces and base classes. Cross-Site Scripting (XSS) Dec 01, 2014. 
This article describes a common application layer hacking technique, Cross-Site Scripting wherein attackers inject client-side scripts into web pages. Optimized Data Binding, Paging and Sorting in ListView Control Jul 28, 2014. In this article you will learn how to optimize Data Binding, Paging and Sorting in a ListView control. Generate Sequence Diagram in C# Feb 17, 2014. This article is about a feature provided by Microsoft Visual Studio to generate a sequence diagram from existing code. Getting Exact Location of Exception in C# Code Jan 28, 2014. This article is about determining the exact location of an exception in code. How ASP.Net Web API Works Jan 04, 2014. This article explains how the Web API works. Here you will see the application layer of the API , the MVC architecture and the Web API architecture. Create and Implement 3-Tier Architecture in ASP.Net Dec 29, 2013. This article explains how to create and implement a 3-tier architecture for our project in ASP.Net. Working With ASP.Net Web Forms in Visual Studio 2013 Dec 16, 2013. In this article we will create the Data Access Layer to the ASP.NET Web Forms Application in Visual Studio 2013. Understand WCF: Part 2: Understand RESTful Service Oct 13, 2013. This article will clarify a few more concepts related to services and protocols. Understand 3-Tier Architecture in C# Oct 04, 2013. In this article we will learn to implement 3- Tier architecture in C#.NET application. An Example of URL Rewriting With ASP.Net Aug 21, 2013. This article introduces URL rewriting using a data-driven application. This application creates various blogs and these blogs are accessed by the title of the blog. SSL in ASP.Net Web API Jun 27, 2013. In this article you will learn about the SSL (Secure Sockets Layer) in ASP.NET Web API. Prevent Dead Lock in Entity Framework May 02, 2013. Recently I am working on Entity Framework as data access layer based Project. We found many performance issues with this application. Learn OSI Network Model Apr 23, 2013. OSI model roughly translated as the reference model of open systems connectivity. Rain Effect in Photoshop Apr 10, 2013. In this article you will learn how to create a Rain Effect in Photoshop. Passport Size Image in Photoshop Apr 01, 2013. In this article you will learn how to make a passport-sized photo in Photoshop. iPhone Operating System Architecture Feb 25, 2013. In this article I will explain the iOS Architecture and it's layers. Data Source Controls Feb 02, 2013. In this article, we explore the role of Data Source Controls in building web applications. Create Custom Shapes As Needed in Photoshop Jan 29, 2013. In this article I am going to explain creating Custom Shapes according to your requirement in Photoshop. How to Use Smarty in PHP Jan 09, 2013. Smarty is a Template Engine which separates the Presentation Layer from the Business Layer and provides manageable wait exchange data between the two layers. jQuery UI Datepicker in MVC 4 Issue Oct 28, 2012. Today, I spent couple of hours in finding the best suited fix of this issue. Actually that was a very simple problem and we may fix it by adding another http request layer in the application but that was not a productive choice. Import Adobe Photoshop File in Expression Blend 4 Sep 12, 2012. Today we are going to see the use of Import Adobe Photoshop File Option of File Menu. CLR Function in SQL Server 2005 Aug 27, 2012. 
In SQL Server 2005 and later version of it, database objects like function, store Procedure, etc can be created which are already created in CLR. SharePoint 2010 - Create SSL Enabled Site Aug 16, 2012. As part of development activities we might need to create a SSL enabled site inside SharePoint 2010. These sites will be accessed using HTTPS instead of HTTP. Command Query Responsibility Separation (CQRS) Jul 26, 2012. CQSR is an architectural pattern which states that while designing the data access layer in a project, it can be split into two separate units of codebase. Build Your Deep Zoom Mosaic Jul 15, 2012. Here you will learn about Deep Zoom which is a layered format of images and is also known as SeaDragon which is now Microsoft Presentation. Data Binding and Data Templating in Windows Store App Feb 13, 2012. In this article we are going to explore building a Windows Store App using JavaScript and HTML5, which is how to bind your data model to the UI layer. How to Secure a Web Site Using SSL Jan 18, 2012. Secure communication is an integral part of securing your distributed application to protect sensitive data, including credentials, passed to and from your application, and between application tiers. ASP.NET MVC, WCF, ASP.NET Webforms, and JQuery Aug 10, 2011. The sole purpose of any API within the applications I build is to deal with business layer logic and the data related to it. Ideally speaking I would want my API to return structured data which is easily transformed into a format for the client application using the API (e.g. JSON). And all my UI for web applications must be handled with client-side scripts. Client side includes both static HTML, CSS and JavaScript, and in this instance I specifically refer to JavaScript and the handling of my API’s data using JSON, for which I use jQuery. ADO .NET Evolution: Part II: 2-Tier to 3-Tier Aug 09, 2011. Implementation and example of a 3-Tier Application. MEF With WCF - Start UP Jul 23, 2011. In this article I will be creating a Data Access Layer using WCF. I will be using MEF to export the data from the Data Access Layer class and then import it. Silverlight 4 LINQ to SQL Classes in VS 2010 Jul 14, 2011. LINQ to SQL is an ORM (stands for Object Relational Mapper/Mapping), which provides a data access layer for the application. SQL Azure Architecture May 17, 2011. SQL Azure, which resides in the Microsoft Data Center, provides relational databases to applications with four layers of abstraction. Simplest example of MVP design pattern of Asp.net Mar 10, 2011. So here we’ll discuss the MVP pattern. MVP is Model, View , Presenter. This pattern is how the interaction between these layers can be done.. Chapter 8: From 2008 to 2010: Business Logic and Data Jan 20, 2011.. Chapter 5: From 2005 to 2010: Business Logic and Data Nov 30,”) Chapter 1: From 2003 to 2010: Business Logic and Data Nov 22,”) 3 Tier Architecture Nov 02, 2010. 3-Tier architecture is a very well know buzz word in the world of software development whether it web based or desktop based. In this article I am going to show how to design a web application based on 3-tier architecture. How to use Tool Menu in Microsoft Expression Blend Sep 15, 2010. Tool Menu in Microsoft Expression Blend has the option for Creating Layer, Make Button, Make Control, Make Image 3D, Make Brush Resource, Edit Brush Resource, Font Manager, Options. N-Tire Web Application Sample May 26, 2010. In this simple article we will see a sample of n-tier web application.. 
How to Architect an Application Apr 28, 2010. In this article let’s get into the business of how to architect an application.
http://www.c-sharpcorner.com/tags/Layer
CC-MAIN-2017-34
refinedweb
3,078
58.28
The java.util.LinkedHashMap class is the Hash table and Linked list implementation of the Map interface, with predictable iteration order. Following are the important points about LinkedHashMap −
The class provides all of the optional Map operations, and permits null elements.
Iteration over a HashMap is likely to be more expensive.
Following is the declaration for the java.util.LinkedHashMap class −
public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V>
Following are the parameters for the java.util.LinkedHashMap class −
K − This is the type of keys maintained by this map.
V − This is the type of mapped values.
This class inherits methods from the following classes −
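To illustrate the predictable iteration order mentioned above, here is a small, self-contained usage sketch; the map contents are invented for the example:
import java.util.LinkedHashMap;
import java.util.HashMap;
import java.util.Map;

public class LinkedHashMapDemo {
    public static void main(String[] args) {
        // Insertion order is preserved by LinkedHashMap.
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("one", 1);
        linked.put("two", 2);
        linked.put("three", 3);

        // Prints the entries in the order they were inserted: one, two, three.
        for (Map.Entry<String, Integer> e : linked.entrySet()) {
            System.out.println(e.getKey() + " = " + e.getValue());
        }

        // A plain HashMap gives no such ordering guarantee.
        Map<String, Integer> hashed = new HashMap<>(linked);
        System.out.println(hashed.keySet()); // order is unspecified
    }
}
LinkedHashMap also offers a constructor with an accessOrder flag, which switches iteration to least-recently-accessed order and is a common building block for simple LRU caches.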
https://www.tutorialspoint.com/java/util/java_util_linkedhashmap.htm
CC-MAIN-2020-05
refinedweb
110
50.73
Building your own vanilla HTML5 game without dependencies might sound like a tough and odd job to do. In fact, it's a really fun experience and a huge opportunity to sharpen your JS and general programming skills. One awesome competition that empowers this movement is the JS13K competition. It's a 1 month competition where you build an HTML5 game with a 13kb file size limit. While this post is dedicated to this competition, the content applies to building small JS games in general. As a warm up for the competition, I've decided to prepare a project setup and gulp build, which can be found here with all the setup instructions. This post will be explaining the project setup, where each piece falls and describe the gulp build process, in the bootstrap project linked above. Assuming you already have some experience with HTML5 games and basic concepts like the game loop. Project structure . +-- README.md +-- _build ? +-- game.min..css ? +-- game.min.js ? \-- index.html +-- _dist ? \-- game.zip +-- gulpfile.js +-- package.json \-- src +-- css ? \-- style.css +-- images +-- index.html \-- js +-- draw.js +-- game.js +-- random_obj.js \-- util.js At the root Here, we have our gulp build file, our npm dependencies in package.json and two directories _build and _dist. The job of the gulp build process is to output the final concatenated, minified js, css and html code into _build, and then zip all of the contents and store them under the _dist directory. This process runs automatically every time the work is saved, which helps keep an eye on the final output especially on the size of the zip file that is being updated in the _dist directory, before finally submitting. The source code Everything related to the source of the project would go under src which would contain our css, images, and js code each in its respective directory. Entry point Also notice that index.html is at the root of the src folder, which is the entry point to our game, in it we'd initialise the canvas and include all dependencies. This would be the non-compiled version which would run on the local dev server and looks something like this: <!DOCTYPE HTML> <html lang="en"> <head> <title>JS13K Starter</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1, maximum-scale=1, user-scalable=0" /> <meta name="apple-mobile-web-app-capable" content="yes" /> <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" /> <!-- build:css --> <link rel="stylesheet" type="text/css" href="./css/style.css" /> <!-- endbuild --> </head> <body> <h1>JS13K Starter Project</h1> <h3>Click to randomize!</h3> <canvas></canvas> <!-- build:js --> <script src="./js/game.js"></script> <script src="./js/draw.js"></script> <script src="./js/util.js"></script> <script src="./js/random_obj.js"></script> <!-- endbuild --> </body> </html> JS Code As for the JS, one easy way to structure is it follow a couple of simple principles: Wrap your game code around a namespace Separate code into different files based on their functionality A simple way to namespace your game is to have a global object for the game, I named it $ in this case as its short and jQuery is not being used in the project. Under that name space I'd attach different objects and functions, used across different files. 
For example, we can have reusable functions around drawing on canvas available under the $.Draw module inside the draw.js file which contains stuff like $.Draw.rect, $.Draw.circle or anything related to drawing on the screen. As for the logic of different characters or entities related to the game such as a meteor, hero, or a villain, those would have their own files and manage their own logic relative to the game and its loop. Those modules would have a class like structure with a constructor. That's because we want to be creating several instances of our game objects into the game such as several meteors, one ups, etc... The code for such a module would look something like this: "use strict"; $.RandomObj = function () { this.x = $.util.randomInRange(0, $.width); this.y = $.util.randomInRange(0, $.height); this.dimension = 0; this.targetDimension = $.util.randomInRange(50, 70); this.growthSpeed = $.util.randomInRange(0.5, 2); this.color = $.util.pickRandomFromObject($.colors); }; $.RandomObj.prototype.render = function () { $.Draw.rect(this.x, this.y, this.dimension, this.dimension, this.color); }; $.RandomObj.prototype.update = function () { if (this.dimension < this.targetDimension) { this.dimension += this.growthSpeed; } }; Creating an instance of our RandomObj would be something like var randomObjectInstance = new $.RandomObj(); which ideally would be instantiated from within our game loop. Each game object class would implement two essential functions, render and update. Each of those functions would be called automatically by our game loop on each iteration. Calling randomObjectInstance.render() would trigger the painting code for that specific instance of that specific game object. Calling randomObjectInstance.update() would trigger the change to be done on that instance's state such as growing in size + 1 when that game loop tick takes place. The game loop itself would be as simple as an infinite loop that triggers a render and update calls on each iteration on all the game objects available in the world. Example: $.loop = function () { $.render(); $.update(); window.requestAnimFrame($.loop); }; $.update = function () { for (var i = 0; i < $.entities.length; i++) { $.entities[i].update(); } }; $.render = function () { $.Draw.clear(); for (var i = 0; i < $.entities.length; i++) { $.entities[i].render(); } }; The game loop is available in the game.js file in the project. Gulp build The build script is pretty straightforward, as mentioned earlier, its all about concatenating, minifying the source then outputting a final zip file. They're all divided to separate tasks serve, buildCSS, buildJS, buildIndex, and zipBuild. And finally, they're all grouped into one main build command which runs them in order. This runs every time changes are saved on the project thanks to the watch command. In addition, the game's entry point index.html via a serve command. To get everything up and running in one shot, just run gulp this will run an initial build, serve the app on the localhost and will run the watch command which will check for any changes and trigger a new build and zip. After each save, the gulp build will output the current size of the zipped project in order to stay alert on the size of the project. Something like [09:32:55] Size of game.zip: 2.75 KB You can find the starter project example on this link. I hope you find this JS13K starter useful and I'm looking forward to participating and seeing all the awesome submissions this year. 
Would love to hear your feedback and suggestions on this post.
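As a supplement to the build description above: the post does not list the gulpfile itself, so here is a rough sketch of what such a build file could look like. The task names follow those mentioned in the post where possible, but the gulp 3-style task syntax and the plugin choices (gulp-concat, gulp-uglify, gulp-zip) are assumptions for illustration, not the starter project's actual code:
// gulpfile.js - illustrative sketch only, not the starter project's real build file
var gulp = require('gulp');
var concat = require('gulp-concat');   // assumed plugin choice
var uglify = require('gulp-uglify');   // assumed plugin choice
var zip = require('gulp-zip');         // assumed plugin choice

// Concatenate and minify all source JS into _build/game.min.js
gulp.task('buildJS', function () {
  return gulp.src('src/js/*.js')
    .pipe(concat('game.min.js'))
    .pipe(uglify())
    .pipe(gulp.dest('_build'));
});

// Zip everything in _build into _dist/game.zip (runs after buildJS)
gulp.task('zipBuild', ['buildJS'], function () {
  return gulp.src('_build/**/*')
    .pipe(zip('game.zip'))
    .pipe(gulp.dest('_dist'));
});

// Rebuild and re-zip whenever the sources change
gulp.task('watch', function () {
  gulp.watch('src/**/*', ['zipBuild']);
});

gulp.task('default', ['zipBuild', 'watch']);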
https://www.thecodeship.com/web-development/bootstrap-vanilla-js-game-gulp-build-project-setup/
CC-MAIN-2018-51
refinedweb
1,131
55.74
This article demonstrates using C# and MapPoint 2009 to look up a street address from a latitude/longitude. MapPoint does not use a Web Service. All data is stored locally, on the user's machine. From my research, MapPoint provides the most economical solution to perform a reverse geocode locally. Libraries from vendors such as ThinkGeo or GeoFrameworks cost upwards of $3000 while MapPoint costs $300. You can download the MapPoint North America 2009 trial from. The download is a whopping 1.2GB, because it contains all of the geographic information for North America. MapPoint does not include .NET assemblies, and can only be interfaced through COM. If you are my age (26), you may not have much experience using COM. The Windows SDK includes a tool called tlbimp.exe to generate a .NET assembly from a COM Type Library. The assembly, Interop.MapPoint.dll, is included in the code download, and you can generate it yourself with the command:
"C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\tlbimp.exe" "C:\Program Files\Microsoft MapPoint 2009\MPNA83.tlb" /out:c:\Interop.MapPoint.dll /namespace:Interop.MapPoint
For more information on COM, I recommend. The method Map.ObjectsFromPoint(int x, int y) queries MapPoint for objects at a given latitude/longitude. This method returns street addresses, countries, restaurants, and anything else in the MapPoint database. We implement the method LookupStreetAddress to cull the results for a street address:
private StreetAddress LookupStreetAddress(Location loc)
{
    FindResults arr = _map.ObjectsFromPoint(_map.LocationToX(loc), _map.LocationToY(loc));
    return arr.OfType<Location>().Where(o => o.StreetAddress != null).Select(o => o.StreetAddress).FirstOrDefault();
}
If the location does not touch a street, it is unlikely that a street address will be returned. MapPoint does not include a method to find the nearest street address. Streets seem to be around .0001 degrees in width, so our algorithm uses a grid of points .0001 degrees apart. We iterate through the points, from closest to farthest from the starting location, calling LookupStreetAddress on each point until a match is found.
public StreetAddress GetNearestAddress(double lat, double lon)
{
    if (double.IsNaN(lat) || double.IsNaN(lon))
    {
        return null;
    }
    // MapPoint needs to be centered
    _map.GoToLatLong(lat, lon, 1);
    // Zoom level seems to affect what is returned...
    // haven't figured out the pattern here
    for (int i = 0; i < 10; i++)
    {
        _map.ZoomIn();
    }
    // make 10 squares around, each .0001 degrees apart
    StreetAddress addr;
    for (int i = 0; i < 10; i++)
    {
        foreach (Location loc in GetPointsAround(lat, lon, i, .0001))
        {
            if ((addr = LookupStreetAddress(loc)) != null)
            {
                return addr;
            }
        }
    }
    return null;
}
And there you have it. I have used this with a GPS device and was able to accurately see my current address. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
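The GetPointsAround helper called in GetNearestAddress is not shown in the article. Below is a rough sketch of how such a helper might be written. It is my own illustration, not the article's implementation: the ring-of-grid-points logic and the use of Map.GetLocation to build Location objects are assumptions.
// Illustrative sketch only - the article does not show GetPointsAround.
// Assumes the same class context as the article's methods (where _map is defined)
// and a using directive for System.Collections.Generic.
// Yields the Locations on the square "ring" that is `ring` grid steps away
// from (lat, lon), with grid cells `step` degrees apart.
private IEnumerable<Location> GetPointsAround(double lat, double lon, int ring, double step)
{
    if (ring == 0)
    {
        yield return _map.GetLocation(lat, lon); // assumed MapPoint API usage
        yield break;
    }
    for (int dx = -ring; dx <= ring; dx++)
    {
        for (int dy = -ring; dy <= ring; dy++)
        {
            // Keep only the outer ring so inner points are not checked twice.
            if (Math.Abs(dx) != ring && Math.Abs(dy) != ring)
            {
                continue;
            }
            yield return _map.GetLocation(lat + dy * step, lon + dx * step);
        }
    }
}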
http://www.codeproject.com/Articles/34507/Reverse-Geocoding-with-C-and-MapPoint-2009
CC-MAIN-2013-20
refinedweb
484
61.12
Introducing: Restful Objects Restful Objects is a public specification of a hypermedia API for domain object models. Version 1.0.0 of the specification has just been released and may be downloaded from here, and there are already two open source frameworks that implement the specification - one for the Java platform and one for .NET. How does Restful Objects differ from other RESTful standards or frameworks - such as the JAX-RS (JSR311) for the Java platform, or Microsoft’s Web API for .NET? While such frameworks typically help in abstracting away pure network issues, they don’t ensure any uniformity in the structure of the resources defined to interact with different types of domain object. And they provide little or no support for what is arguably the most important principle of RESTful systems: "hypertext as the engine of application state", or HATEOAS. In plain language this means that it should be possible to access the entire functionality of the system just by following hypermedia controls (links) from a home resource. Perhaps the closest match to what Restful Objects is attempting is OData. However, the big difference is that while Odata is really targeted to CRUD (Create, Read, Update, Delete) functionality, Restful Objects provides access to the full behaviour (the methods) of the objects. The new frameworks that implement the Restful Objects spec don’t just make the task of writing a HATEOAS-conformant API to a domain object model easier - they actually eliminate the work altogether! You can literally take a domain object model - written as plain-old Java objects or plain-old C# objects respectively - and create the complete RESTful API to them in minutes. Moreover because the resources and representations provided by these frameworks conform to the public specification, it means that a client written to work with a server running one framework will work with the other also. Our hope is that over the next few months we will see more server-side implementations of the Restful Objects specification (we know of at least one other party planning to do so). We're also starting to see the emergence of standard client-side frameworks to consume a Restful Objects API; more on that later. Applicability and benefits The Restful Objects specification is most likely to appeal to developers who use domain-driven development (DDD) and who therefore already have or are developing a domain object model, and who also want to provide some sort of RESTful API to this model. Our own experience of DDD involves very large scale domain object modeling [1, 4, 5], with the majority of the behaviour encapsulated as methods on the domain entities. We are looking forward to using Restful Objects to further leverage such domain model assets. Apart from the enormous savings in effort, there are a couple of other benefits to be gained from using a framework that implements the Restful Objects spec: - Testing. Since all business logic is expressed only in domain classes, such business logic may be tested using fast and cheap in-memory unit tests. In contrast, if a RESTful API is custom-written and tuned to specific use cases, then it can only be effectively tested through (slower) integration tests running across a webserver. - Documentation. Because Restful Objects is driven from a domain object model, it can be documented using established techniques (eg UML, or simply Javadoc/Sandcastle). This is far preferable to inventing some new notation to explicitly document a hand-crafted RESTful API. 
Before we go on, we should also say that Restful Objects is agnostic with respect to the semantics of the domain object model. So, whether you believe that domain objects should represent the entities of the business, or whether you believe that there should be a separate ‘resource model’ representing instances of use-cases, say - the Restful Objects specification is equally applicable. For simplicity, in this article we'll predominantly use examples that assume the former (domain objects as entities), but we'll revisit this topic later on in the article.

Resources

In the Restful Objects specification each domain object is a resource; for example the following URI addresses a customer with the Id of 31:

~/objects/customers/31

where ~ is the server address, customers defines the type of object and 31 is the ‘instance identifier’ of that type. The spec doesn't define the format of this identifier; any unique string will do. Why the need for /objects? This is to distinguish the resource from other top-level resources, including services (singleton objects such as repositories and factories that have methods but no properties) and domain-types (type metadata); each of these top-level resources has its own sub-resource structure. Each object resource will have a number of sub-resources representing members of that object. So:

~/objects/customers/31/properties/dateOfBirth

addresses a particular Customer's date of birth, while:

~/objects/customers/31/collections/orders

addresses the collection of Orders associated with the Customer. The table below shows the object, property, and collection resources mapped against the HTTP methods used to access them. Where no useful meaning could be assigned to a particular HTTP method, a 405 error is specified. The above analysis is hardly innovative - many bespoke RESTful APIs feature something similar. The Restful Objects specification breaks new ground, though, in applying the same approach to object ‘actions’ - meaning those of an object’s methods that are to be accessible through the RESTful interface. To the extent that this is tackled at all in other designs (for example in [2]), it is usually to map an action's invocation to HTTP POST. Our objectives led us to distinguish between providing information about an object action - such as the set of parameters required - and actually invoking the action. Thus, the URL:

~/objects/customers/31/actions/lastOrder

describes an action that will retrieve the last order for customer 31, while:

~/objects/customers/31/actions/lastOrder/invoke

is the resource to invoke this action. We also realised that requiring all actions to be invoked with a POST was not making correct use of HTTP: what about idempotent actions, or query actions that just retrieve other objects (sometimes described as ‘side-effect free’)? The table below shows how we resolved this:

In some domain models, most of the behaviour (business logic) is implemented on services, with domain entities acting as little more than data structures passed to and from the service actions. The Restful Objects specification copes well with this pattern; the only difference is that the URL would identify a service instance rather than a domain object instance. However, the spec copes equally well with a pattern where most of the behaviour is implemented as methods on ‘behaviourally-rich’ domain objects; services then play the much more minor role of providing access to objects (finding existing objects, or creating new ones).
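To make the URL scheme above concrete, the short sketch below walks the same resources with plain HTTP calls, using Python's requests library. The server address is a placeholder, and the choice of GET for the query-only lastOrder action (versus POST for a modifying action) simply follows the scheme just described; the exact JSON returned is covered in the next section and is not shown here.

# Illustrative sketch of the URL scheme described above, using Python's
# "requests" library. The server address is a placeholder; the representations
# that come back are discussed in the next section.
import requests

BASE = "http://server.example.com"  # the "~" in the URLs above (hypothetical)

# A domain object resource
customer = requests.get(f"{BASE}/objects/customers/31").json()

# A property and a collection of that object
date_of_birth = requests.get(f"{BASE}/objects/customers/31/properties/dateOfBirth").json()
orders = requests.get(f"{BASE}/objects/customers/31/collections/orders").json()

# Describing an action (its parameters, etc.) ...
last_order_prompt = requests.get(f"{BASE}/objects/customers/31/actions/lastOrder").json()

# ... versus actually invoking it. lastOrder is a query-only ("side-effect free")
# action, so it can be invoked with a GET on the /invoke resource.
last_order = requests.get(f"{BASE}/objects/customers/31/actions/lastOrder/invoke").json()

# A non-idempotent action (placeOrder, say) would instead be invoked with a POST
# to its /invoke resource, passing the arguments in the request body.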
Representations

Each resource returns a representation; the Restful Objects spec defines that these representations are in JSON (JavaScript Object Notation). For resources called with a PUT or POST method, a representation must also be provided as a body to the request, describing how the resource is to be modified. The specification defines a few primary representations:

- object (which represents any domain object or service)
- list (of links to other objects)
- property
- collection
- action
- action result (typically containing either an object or a list, or just feedback messages)

and it also describes a small number of secondary representations such as home and user. Each of these representations has a formal media type, exposed in the HTTP Content-Type header. For domain objects, this takes the form:

application/json;profile="urn:org.restfulobjects:repr-types/object"; x-ro-domain-type=""

Formally, this is of type application/json, with two optional parameters: profile, and x-ro-domain-type. These two parameters provide additional semantics to the consuming client, and can be thought of as layered one on top of another. At the lowest level, application/json tells the client merely that the payload is JSON. For some clients, such as developer plug-ins to browsers, this is all the information that is required. On top of this, the profile parameter indicates that the representation is of a domain object. Finally, the x-ro-domain-type parameter indicates the type of the object: in this case a Customer. A client might use this to customize the user interface, or just to verify that the representation returned is the one expected. Incidentally, the specification uses the x-ro- prefix to minimize namespace conflicts whenever there is no existing or draft standard that can be leveraged. This necessarily gives rise to a number of custom parameters and query strings; Restful Objects defines no custom HTTP headers, however.

Links

In line with the HATEOAS principle, each representation contains links to other resources, and each of these links has a rel parameter defining the nature of the relationship. The specification re-uses several IANA-defined rel values (for example, self, up, describedby), plus some custom ones. For example:

urn:org.restfulobjects:rels/details;action="placeOrder"

defines a link from an object representation to a resource for an action on that object. The prefix:

urn:org.restfulobjects:rels/details

is sufficient for a generic client, while the additional parameter action - indicating in this case that this link is for the placeOrder action - may be of use to a bespoke client. Putting all the above together, the table below shows a typical flow through a set of resources to accomplish some user goal.

Server implementations

Earlier we mentioned two separate open source frameworks that implement the Restful Objects specification. Restful Objects for .NET is a complete implementation of the specification, in beta only because it makes use of the Microsoft Web API framework (part of ASP.NET MVC4 - in ‘RC’ stage at the time of writing). The second framework, Restful Objects for Isis, runs on the Java platform; this framework is working, but currently implements an earlier draft of the specification and needs further development and testing before release. We are actively involved in the development of these frameworks.
With each of these frameworks, it is possible to take a domain object model, written as POCOs and POJOs respectively, and create a complete RESTful API conforming to the Restful Objects specification, without writing any further code. An online video recording (using the .NET framework) shows this being done in just a few minutes. This is possible because both of the frameworks build upon existing frameworks that implement the naked objects pattern - whereby an object-oriented user interface is created automatically from a domain object model using reflection, with public methods made available (by default) as user actions. The new Restful Objects frameworks reflect over a domain object model in a similar fashion, but render the objects’ capabilities as a RESTful API rather than a user interface. In both cases, they delegate to the existing frameworks (Naked Objects for .NET, and Apache Isis respectively) the responsibility for reflection, for object persistence, and other cross-cutting concerns. The new frameworks both recognise some simple domain object coding conventions, and annotations (‘attributes’ in .NET). Any public method on an object, for example, will by default be made available as an action in the Restful Objects API, but this may be overridden by annotating the method as Hidden. If an object defines a public method foo([params]) and also a public method validateFoo([params]), the latter will be recognised as providing logic for validating parameters supplied to the former before it is executed. Both frameworks also provide fine-grained authorization based on the user’s identity and/or roles. If, for a given domain type, the user is not authorized to view a given property, collection or action, then in the corresponding representations the links to that object member will never be rendered for that user. Were the user to attempt to gain access by constructing the URL for the resource directly, they would be returned a 404 error; if the user has permissions to view the object member, but not to modify it, then attempting the latter would return a 403. The source code for Restful Objects for .NET can be downloaded from Codeplex or the framework may be installed as a NuGet package; Restful Objects for Isis may be downloaded as source code or installed through a Maven archetype. For any developer already using Naked Objects for .NET or Apache Isis, the new frameworks mean that they can create a RESTful API to their existing domain models with literally minutes of effort. But our intent was always to appeal to developers with no knowledge of, nor interest in, the naked objects pattern, and we are indeed seeing interest from such developers. Clients Most of our development effort to date has been focused on the server side - the code that generates a Restful Objects API from a domain object model. We have also done some limited work on client applications that consume such APIs. In one example, the client is a conventional web application (written using ASP.NET MVC), where the controller methods make calls to a Restful Objects API implemented on another server. We hope in future to develop a small client-side application library for making the calls to the Restful Objects server and for transforming the returned JSON representations into objects that can be manipulated by C# or Java, say. 
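To give a feel for what such a client-side helper might look like, here is a rough sketch in Python (rather than C# or Java, purely for brevity). It only needs to know the home resource and the rel values it is interested in; everything else is discovered by following links, with the Content-Type's profile parameter telling the client what kind of representation came back. The class name, the home URL and the JSON member names used for links ("links", "rel", "href", "method") are assumptions made for this sketch and should be checked against the specification.

# Rough sketch of a client-side helper for a Restful Objects API. Written in
# Python purely for brevity; the JSON member names ("links", "rel", "href",
# "method") and the home URL are assumptions for this example.
import requests


class Representation:
    """Wraps one JSON representation and follows its links by rel value."""

    def __init__(self, http, body):
        self.http = http
        self.body = body
        self.media_type = ""

    def link(self, rel_fragment):
        # rel values may carry extra parameters (e.g. ;action="placeOrder"),
        # so match on a fragment of the rel rather than on the full string.
        for link in self.body.get("links", []):
            if rel_fragment in link.get("rel", ""):
                return link
        raise KeyError(f"no link with rel containing {rel_fragment!r}")

    def follow(self, rel_fragment, **arguments):
        link = self.link(rel_fragment)
        method = link.get("method", "GET")
        response = self.http.request(method, link["href"], params=arguments or None)
        response.raise_for_status()
        # The Content-Type's profile parameter tells a generic client which kind
        # of representation came back (object, list, action-result, ...).
        rep = Representation(self.http, response.json())
        rep.media_type = response.headers.get("Content-Type", "")
        return rep


# Usage: start at the home resource and navigate purely by rel values.
http = requests.Session()
home = Representation(http, http.get("http://server.example.com/").json())
services = home.follow("rels/services")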
Another example client we have written is a ‘single-page application’, consisting of a single web page, containing just a few lines of static HTML, and supported by a rich set of JavaScript functions using jQuery. The JavaScript makes the calls to the Restful Objects API on a server and renders the results as HTML in the browser. Again, we hope in future to be able to write a small JavaScript application library, specific to consuming a Restful Objects API, but drawing heavily upon jQuery - or perhaps someone else will pick up that baton. The generic nature of the Restful Objects specification suggests another possibility: generic clients that can work, unmodified, with any domain model via the RESTful API. We are aware of three different open source generic clients already under development, and these are listed on the Restful Objects website. All of them are single-page apps, written in JavaScript and using existing libraries. The JavaScript code within these generic clients could also be used as the basis for creating one or more custom clients. The three generic clients have very different styles of user interaction. A screenshot from one of them - A Restful Objects Workspace (AROW), being developed by Adam Howard - is shown below.

Counter arguments

The idea of exposing domain entities via a Restful API is controversial. Some commentators have suggested that it is a bad idea; or that it creates security holes; others have even suggested that it is not possible to create a true Restful API that way. Let’s look at those arguments in detail. Rickard Öberg has asserted that exposing domain entities as resources "cannot be HATEOAS because there is no sensible way to create links between resources that expose the application state" [3]. But this is plainly not true: Restful Objects exposes entities as resources; is fully HATEOAS; and has very clear rationales for the links. Jim Webber has asserted that even if it is possible, it is a bad idea because this results in a tight coupling of client and server, whereas in a Restful system client and server should be able to evolve independently [6]. He and others argue that you should expose only ViewModels and/or objects representing use-cases, both of which should be versioned, and which effectively insulate the client from underlying changes in the domain model. We suggest that this argument, while not totally wrong, is being applied indiscriminately. The truth, we believe, is much more nuanced: there are situations where this line of argument is valid, but many where it is not. Two factors need to be considered. The first is whether the client and server are both under the control of the same party. For a publicly-accessible Restful API - such as one operated by Twitter or Amazon - the server and clients are owned by (many) separate parties, and here, we agree, exposing domain entities is not a good approach. Restful Objects, however, can work perfectly well with view models and/or use-case objects in this case, and these patterns are discussed in more detail in the specification. However, there are a great many potential uses of Restful APIs where client and server evolution are controlled by the same party - in the form of enterprise applications for primarily internal use. Here, exposing domain entities is not only safe, it is actually a good idea.
Such applications, which fall into the category of ‘sovereign’ systems (as defined by Alan Cooper), typically need to give the user access to a much broader range of data and functionality than in a public-facing (or ‘transient’) application. The second factor is whether the client is ‘bespoke’ (written specifically to work with a particular domain model) or ‘generic’ (capable of working, unmodified, with any domain model). To date, almost all clients consuming Restful APIs, whether for intranet or public internet, are bespoke - so there is a need to insulate them from changes to the domain model. But that is not true of a generic client, which will respond automatically to changes in the domain model. To date, few people have considered the possibility of generic clients, though, because there was no comprehensive standard against which they could be written. We suggest that the concept of a generic client is actually more in-line with the spirit of REST (and the letter of it too) than a bespoke client. After all, at one level, a web browser is just a generic client to a restful interface, operating at a document level. Restful Objects enables the idea of a generic client operating at a higher level of abstraction: that of a domain object model. Portraying these two factors as a 2x2, as shown below, we can see that if you are dealing with an intranet application and/or a generic client, then exposing domain entities through a RESTful API is both safe and effective. Only where you are working on a public internet application and with bespoke clients is it necessary to heed the advice that domain entities should be entirely masked by ViewModels and/or use-case objects, to insulate clients from server changes. To reiterate: Restful Objects is equally applicable to all positions on this grid. The fact that the bottom-right corner has received most attention to date is, we suggest, more a reflection of the difficulties of building bespoke Restful APIs than an inherent limitation. Another common nostrum - that Restful APIs can only be built for applications that have a small, tightly-defined, state-transition diagram - is further evidence of people thinking only in that box. Öberg has further suggested that exposing the domain entities as a RESTful API cannot work for a public application because of authorization issues, citing the example where access to a ResetPassword action may be regulated by role, but access to a ChangePassword action must be restricted to a specific user. This particular example is actually trivial to code in both of the Restful Objects framework implementations described above. We suggest that a better example for the point that he is trying to make would be access to a specific object instance, rather than a method. This is a common need in public-facing systems: for example where a user must be able to access their own ShoppingBasket, but not be able to access anyone else’s ShoppingBasket just by guessing at the URL. While neither of the two Restful Objects implementations currently support instance-based authorization per se, there is another perfectly viable solution to this problem. As we mentioned earlier, the form of instance identifiers in URLs is not prescribed in the Restful Objects spec. It follows that this part of the URL may be encrypted by the server, for example using a session-generated private key. 
Invoking a service action resource to return MyShoppingBasket might then return an object representation whose "self" link was:

~/objects/baskets/xJDgTljGjyAAOmvBzIci9g

The URL for the Total property on that shopping basket would therefore be:

~/objects/baskets/xJDgTljGjyAAOmvBzIci9g/properties/Total

So parsing of resource identifiers remains straightforward, but it would be effectively impossible to retrieve another Basket directly. Certainly this is less pretty, but it is worth pointing out that REST is not about creating ‘pretty’ URLs. As Tim Berners-Lee has stressed, apart from the server address, URLs should be treated as opaque; it's the "rel" value that matters. That’s part of the point of HATEOAS. In both of the two Restful Objects frameworks described in this paper we plan to make session-key encrypted instance identifiers a configurable option.

Conclusion

Creating a RESTful API to a large enterprise system has, to date, been a very expensive exercise in development terms - requiring huge resources in design, development, testing and documentation. Using a framework that implements the Restful Objects specification means that the whole exercise becomes almost trivial - leaving the developers to concentrate their efforts where they are most needed: on the domain object model. Moreover, the standardization of resources and representations encouraged by the Restful Objects spec opens up a huge opportunity for client-side libraries that will work with any Restful Objects server, and even for completely generic clients that will work for any application coded to the standard. We hope you'll explore the Restful Objects specification and the framework implementations, and we'd be delighted to hear from anyone interested in writing a new open source framework or library designed to conform to the Restful Objects spec.

References

[1] Haywood, D., "Domain-Driven Design using Naked Objects", 2009, Pragmatic Bookshelf
[2] Masse, M., "REST API Design Rulebook", 2011, O’Reilly
[3] Öberg, R., "The Domain Model as REST Anti-pattern"
[4] Pawson, R., "Case Study: Large-scale pure OO at the Irish Government", QCon London 2011 presentation
[5] Pawson, R., "Naked Objects", 2004, PhD thesis, Trinity College, Dublin
[6] Webber, J., "REST and DDD"

My humble opinion on the topic. by Alexandre Roba
Re: My humble opinion on the topic. by Richard Pawson
Very Cool by Adam Bell
Handling concurrency issues by Dan Haywood
RPC? by Mark Davidson
No, RO is not an RPC system by Dan Haywood
Moreover, the way that you get to that URL is by following links, with the appropriate "rel" attribute (2.7 of the spec). In this way the spec defines a fully HATEOAS API.
Re: No, RO is not an RPC system by Mark Davidson
Re: No, RO is not an RPC system by Ryan Riley
RO only exposes the public properties of an object by Richard Pawson
Spiro - Single Page Interface for Restful Objects by Stefano Cascarini
Spiro will function as a generic client for any domain object model with a Restful Objects 1.0 API. The models (spiro-models.js) can also be used as the basis for building a completely custom UI - using standard backbone.js patterns. This is a work in progress - it is far from complete - but the parts that are implemented work quite well. You can find the code and more details here: restfulobjects.codeplex.com/wikipage?title=Spir...
(If you are on the .NET platform, you can also install it as a ‘pre-release’ NuGet package: nuget.org/packages/RestfulObjects.Spiro/0.2.0-beta)
Any information that can be named can be a resource by Dan Haywood
By that definition there's nothing wrong with the spec defining "the first name of a customer" as a resource, nor "the contents of a basket", nor, indeed, "add to basket".
Re: RO only exposes the public properties of an object by Ryan Riley
Re: Any information that can be named can be a resource by Ryan Riley
The spec doesn't mandate that you expose a fine-grained domain model by Dan Haywood
As we discuss briefly in the article and the spec discusses in more depth (sections 29, 33.3, 33.4, 33.5), there will be cases (internet, bespoke clients) when what should be exposed is a resource model that mediates to but does not expose an underlying domain object model. This is much closer to the style advocated by Oberg and Webber. Even here, though, RO offers benefit: you can develop your resource models as pojos/pocos, which mediate to/from the underlying domain model. This resource model can be tested outside of a big web stack, and you save yourself having to ever go down to the lower level of abstraction (of JSON, HTTP headers etc). And, the REST API you get is going to be much more uniform than if you coded it up yourself.
Very Cool indeed by Jeroen van der Wal
From a RestfulObjects consumer by Adam Howard
The javascript frontend I'm working on needs to know one uri: the homepage resource. From there I can operate solely on rels and the defined media types[1]. So to take the table under the Links section (read .../services as urn:org.restfulobjects:rels/services):

var ro = new RO.Client("/");
var services = ro.findLinkByRel(".../services").follow(); // follow() figures out the correct HTTP method
displayAsMenu(services);

displayAsMenu will loop over each of the services and, for each service, its member actions. Each action becomes a link in the menu that binds onclick to follow the .../invoke rel. The user may decide to click on the "Find by Name" link under the Products menu. The representation of that action tells me I need to ask the user for a name. After requesting the .../invoke rel with the name parameter (the invoke link has told me I can do a GET, meaning that the operation is safe) it returns a representation of product 8071 in this case.

var result = anAction.findLinkByRel(".../invoke").follow(args);

The Content-Type of the response is application/json;profile="urn:org.restfulobjects:repr-types/object" so I know I should display the result as a domain object.

displayAsDomainObject(result);

I'm not going to go through the whole table but you should get the idea. While the URI hierarchy may be important to providers of a RestfulObjects API, as a client developer I can happily live in HATEOAS land.
[1] roy.gbiv.com/untangled/2008/rest-apis-must-be-h...
Either domain objects or services can expose arbitrary business logic through their actions, and these actions can have arbitrary pre- and post-conditions. Re: From a RestfulObjects consumer by Dan Haywood Re: My humble opinion on the topic. by Alexandre Roba Valuable approach to exposing Domain Objects by Pedro J. Molina The capability of consume data and use a generic client to render the UI is also a great feature, useful for some kinds of applications. I have been working several years with object models and code generation to create server components and client ones. I have been exploring Web Services, REST, OData and different as ways of communicating service consumer with service producers. See InfoQ Multichannel UserInterfaces. So, definitely, I am going to explore the applicability of Restful Objects to our domain. I see a great potential in Restful Objects for simplifying integration between systems, in the way it standardizes many aspects. I have some questions about using RO in production systems: - 1. How authentication is handled in/(hooked with) RO? Standard HTTP/S authentication? - 2. When retrieving an object, all properties are returned always? Is there any mechanism available to limit or customize the amount of data (properties to be retrieved)? Different cases of use will have different information needs (aka DTO) - 3. Is there a way to retrieve a direct object subgraph (sample: Invoice, Customer, Invoice Lines and related Products) (in order to avoid 1+N calls over the wire)? - 4. When retrieving let’s say “Country.Customers” is there any protection like a paging scheme to protect us from directly getting 1 million of objects and therefore degrade the performance of the system? - 5. There are plans for advanced/dynamic queries over collection of objects? E.g: return the customers in Spain that have bought a given product X in the last 3 years. I think that addressing these concerns enables RO to be used in production systems. Concurrency by Stefano Cascarini Re: Valuable approach to exposing Domain Objects by Richard Pawson "1. How authentication is handled in/(hooked with) RO? Standard HTTP/S authentication?" The RO spec does not mandate/assume any particular flavour of authentication - you can use whatever you would for any RESTful API. "2. When retrieving an object, all properties are returned always? Is there any mechanism available to limit or customize the amount of data (properties to be retrieved)? Different cases of use will have different information needs (aka DTO)" The domain object representation returns all properties that the given user is authorized to see. If you want to limit what is actually returned then you can simply return a 'view model', specific to that context. There is quite a bit on this within the spec. That said, though View Models can certainly be useful, the need for them is commonly overstated. Provided you can be sure that the representation only returns data that the user is authorized to see (which is the case) then in many cases, it may be much simpler just to customize which properties are actually displayed in the client. "3. Is there a way to retrieve a direct object subgraph (sample: Invoice, Customer, Invoice Lines and related Products) (in order to avoid 1+N calls over the wire)?" The Restful Objects 1.0 spec does not explicitly provide for eager loading, but there is a substantial discussion of the recognised need for this at the back of the spec, and it will almost certainly be added in v1.1. "4. 
When retrieving let’s say “Country.Customers” is there any protection like a paging scheme to protect us from directly getting 1 million of objects and therefore degrade the performance of the system?" For the Restful Objects for .NET implementation, there is now such a guard. The default is 20, but you can override this globally. Right now if you want paging, you need to write actions that support it. But, again, it is likely that the next version of the RO spec will support paging explicitly - there is again a sketch of this in the last section of the spec. "5. There are plans for advanced/dynamic queries over collection of objects? E.g: return the customers in Spain that have bought a given product X in the last 3 years." Again, a likely future extension to the spec. You can certainly do such queries in the RO.NET implementation (using dynamic LINQ) - but it's not yet defined at spec level. Re: Concurrency by Alexandre Roba Earlier discussions illustrate the 2x2 in the article by Richard Pawson Ryan makes the point that for the domain models he designs would not be suitable to be exposed as a Restful API - he wants a custom-defined Restful API - and so the value of Restful Objects is much lower. I think it would be fair to say that Ryan is describing is in the bottom-right hand corner of the 2x2 in the article - a potentially public-facing API where you don't have control of the client. Dan makes the point that Restful Objects still works for this, and still adds value, even though that value might is not as great as in the other three. We know from personal experience that even with a 'dream team' of REST-heads and the best current tooling - writing custom RESTful APIs is extraordinarily labour intensive. But what does concern me is that so many RESTful practitioners seem unable to imagine the other three spaces on that 2x2. There is huge potential out there for REST - much more than the narrow concept of REST that most practitioners advocate right now. But the reality is that it would be almost impossible to work in those spaces without something broadly like the Restful Objects spec and frameworks that implement it automatically. Pedro (in another thread) has picked up on this. Organisations that do have good domain models (and, admitedly, that's not all that common) now have the potential to make those models accessible *within the intranet* via a RESTful API - both to facilitate integration between disparate systems, and to facilitate development of new clients. We know large organisations interested in RO on both counts. The point is that they could never consider doing this using a custom RESTful API - there would simply be too much effort involved in defining it, implementing it, and testing it. Great stuff - possible application scenario? by Enrique Albert Congratulations on the new API, it is great news to see the NO continuously adapting new features. Can you indicate scenarios where you believe the Restful Objects API would be useful? If I have to develop an intranet solution, I can currently use the NO MVC, I can see the benefit of using the Restful Objects, for example, in system integration cases or if I had to develop a desktop client for example; that is, solutions that we normally use WCF services and DTOs; but, are there any other obvious scenarios besides those ones? 
Other application scenarios by Dan Haywood You mentioned system integration from a desktop client, which I'd see that more as two parts of the same app (with the client app being either bespoke or generic as fits). In domain-driven design (DDD) terms, that's one bounded context. But there's also system integration across bounded contexts. For example, one government department could invoke an RO API exposed by another government department. This would be a case where accessing the API through a resource model (bottom left of the 2x2) would be advisable. Another obvious case is using the RO API to support multiple client apps, eg a desktop app, a mobile app, etc. Here we are back into one bounded context (both the clients and server are owned by one team), and so could expose the underlying domain model directly. A slightly less obvious case is in migrating data; as Jeroen (above) mentioned, we're taking advantage of the RO API to start trial migrations of data from an existing system into a new Isis-based system. The RO API itself is throw-away, but the point is that it delegates to the main domain logic that we're building out. There's a bunch of benefits: the users get to validate the new system using data that is familiar to them, we get to validate our business logic, and we also get some stress testing for free. One final use case is in supporting phased migration from an existing system to a new system, which might take a number of years. We can wrap the existing system in an RO API, and put an RO API around the new system also. Clients written to the RO API can then access either the old system or new; 301 redirects can be used to hide the fact as to where the business logic invoked is being accessed from: existing system or new. Here's a few application scenarios by Richard Pawson Here's a couple of application scenarios that I'm personally interested in - but I'm hoping others might contribute more: 1. As a lingua-franca for integration between multiple systems within an intranet. Great thing is these can be on different technology platforms and with different concepts of a domain model: one might be strongly object-oriented, one might be strongly service-oriented, and one might be little more than a wrapper onto a database. 2. As a way to facilitate the development of new single-page apps for a single server. I'm astonished at how easy this is now proving to be once you have a comprehensive Restful API that delivers a consistent (small) set of JSON representations. Stef's work on the Spiro viewer (see earlier posting) shows this: he has a small set of backbone.js model objects that correspond to the RO representations, and that can form the basis of either a generic client or a bespoke client. 3. Combination of 1&2. Dan proposed a very interesting pattern for this, described in the spec, and we have spiked it. 4. Your own point about building desktop clients. I know of one organisation looking to use RO as the basis for communications between its WPF rich client and the server. Let's here some other ideas ... Re: Here's a few application scenarios by Jacques Bosch >4. Your own point about building desktop clients. I know of one organisation looking to use RO as the basis for communications between its WPF rich client and the server. Perhaps there is another also, but I think we are the client that Richard is referring to. :) We are in the early phases of building a complex enterprise system that will consist of many different projects. 
The core of this system is built on Naked Objects MVC. We have a WPF desktop application that forms part of the same domain. We are going to do a spike to let this application work with the same core domain exposed via RO and auto generated C# proxies, although the WPF UI will be bespoke, not generic. I'll report back when we have actually tried this. We will also use RO to expose an API for our business partners to integrate with us. So far we have found NO MVC extremely beneficial and I think the extra power and flexibility that RO for .Net will add to the toolbox is very exciting. Thanks for the pioneering work! Re: Any information that can be named can be a resource by Adam Howard So, .../customers/31 is a thing, a .../repr-types/object. And .../customers/31/actions/lastOrder is a thing, a .../repr-types/action. An action like lastOrder, or addToBasket, has a representation with information like it's returntype and required parameters. It can also tell us where to go when we want to know the result of invoking the action, which will be a noun, as well as which HTTP method is appropriate. In that case we follow the .../rels/invoke link. I wonder if changing the name of the rel to something more noun-like, .../rels/invocation or .../rels/action-result, would make it clearer of the difference between action metadata (.../repr-types/action) and the value of the invoked action (.../repr-types/action-result). I wasn't involved in the development of the spec but came to it recently and the community has been very receptive to input about changes and future needs to support client side applications. So maybe this is something that can be discussed more. In the case of the other member types an object has: value properties, like a customer's name, reference properties, like a customer's billing address, or collections, like a customer's payment methods, they are all public getter methods on the domain object and getters share the semantics of being side-effect free and taking zero arguments. This means we know we can access their return value through a GET. It also means that in some cases we can even safely inline the value into the domain object representation. The RestfulObjects spec currently only defines this inlining for the simple value properties, but has a proposal (section 34.4) for extending the spec to allow inlining of references and collections, on demand. The reality is that all of this data is the result of invoking methods. The customer representation having a "name" property means there was a public getName method and the value is the result of invoking that method. The extra step that RestfulObjects took was to figure out how we can reason about accessing the result of any public method on an object, not just those with getter semantics. If other APIs do handle this they almost always hard-code the HTTP method to POST, which violates [1]. [1] Re: Earlier discussions illustrate the 2x2 in the article by Adam Howard Suppose you walk into your favorite coffee shop and the line is long. You pull out your smartphone and go to the coffee shop's app, you tap the order button. You're then asked to choose what drink you want from a menu, then what extras you want in it, what size. You confirm your order, your credit card is charged, and in a few minutes your name is called out and you pick up your drink. The process you went through to order was very scripted: step 1, 2, 3, confirm. Now suppose you're not sure what you want to drink, so you wait in line. 
When you get to the register you ask: what has more calories, a shot of caramel or a shot of chocolate? The employee doesn't know but they can look it up on their computer. Caramel has 250, chocolate has 200. Yikes. Now you ask, what extras do you have that are under 100 calories? Back to the computer. Cinammon and fat-free milk. Ok, I'll have a fat-free latte with extra cinammon. I have $1.27 on this gift card. Put the rest on my credit card. The first scenario is a highly-scripted interaction that wants to guide the customer into placing their order as quickly as possible. This is a place where exposing a separate resource model is essential. The client may not be under your control and they don't fully understand your domain. POST .../MobileOrder { "DrinkName":"Mocha Frappacino", "ExtraName":"Caramel", "SizeName":"Venti", "CustomerName":"Bill" } The second scenario, however, requires the cashier to navigate around the domain of coffee ordering to answer the customer's questions and charge their credit card correctly. Having access to the full glory of the domain model allows the cashier, who understands the domain, to navigate directly to the answer. The coffee shop will probably build both the client and server and they will evolve together. GET .../extras/FindByName/invoke?Name=Caramel { "Name":"Caramel", "Calories":250 } GET .../extras/FindByName/invoke?Name=Chocolate { "Name":"Chocolate", "Calories":200 } GET .../extras/FindByMaxCalories/invoke?Calories=100 { "elements": [ { "Name":"Fat-free milk", "Calories":25 }, { "Name":"Cinammon", "Calories":10 } ] } I left out all of the hypermedia to make the above easier to read. Naming by Richard Pawson Re: No, RO is not an RPC system by Roberto Calero ~/objects/customers/31/actions/lastOrder/invoke => MVC'ish Before arguing anything else... what's the 1st class citizen object for the above? Customer or Order? Assuming it's Order then the url should be ~/objects/customers/31/lastOrder which effectively identifies the last order for customer = 31. The resource as such is uniquely identified. You can apply a POST, PUT, GET etc to the above and it'll be the actual operation (not the url) the one providing the context. ~/x/y/action/sub-action/etc are MVC standards in the other hand used to identify which controller/action/etc is running a given piece of logic. MVC is not REST of course. REST is nothing to do with pretty URLs by Dan Haywood But if you did want a URL to be surfaced that looked similar to ~/objects/customers/31/lastOrder, you could model this responsibility (of a Customer knowing how to obtain its last Order) as a property rather than an action. In this case its URL would be ~/objects/customers/31/properties/lastOrder rather than ~/objects/customers/31/actions/lastOrder. The consequence of modelling things this way is that the property would be eagerly evaluated (when returning the representation of the customer) rather than lazily evaluated (when invoking the action). The Restful Objects spec is not based on MVC by Richard Pawson ~/objects/customers/31/actions/lastOrder/invoke the 'actions/lastOrder' is not specifying a controller action, it is specifying an action (i.e. a method) on a domain object (customers/31). 
As it happens, the 'Restful Objects for .NET' implementation of the standard, as described in the article, does make use of ASP.NET MVC (in order to access the Web API framework), but it has just a single controller with a handful of generic methods - not a method for each of the actions that the domain model provides as you are inferring. The Restful Objects for Isis implementation is not, I think based on an MVC framework (though I'm not familiar with its internals and Dan might correct me on that). Re: The Restful Objects spec is not based on MVC by Dan Haywood The Restful Objects for Isis implementation is not, I think based on an MVC framework (though I'm not familiar with its internals and Dan might correct me on that). You are correct: RO on Isis uses a JSR-311 framework (JBoss RestEasy). Re: The Restful Objects spec is not based on MVC by Roberto Calero Just to add to Dan's comments, Robert, the Restful Objects spec is not based on the MVC pattern - it makes no assumptions about the architecture of the server that implements the API. Nobody has argued otherwise. In the URL: ~/objects/customers/31/actions/lastOrder/invoke the 'actions/lastOrder' is not specifying a controller action, it is specifying an action (i.e. a method) on a domain object (customers/31). There is not such a thing (in REST) like an action on a domain object. Furthermore, REST does not make any reference to "domain objects" but "resources" and the URL (actually URI) is the identifier for such resource. A REST operation (i.e. GET, POST, OPTIONS, etc) is performed on an URI and it's the operation the one providing context to it and not the URI. In your example, ~/objects/customers/31/actions/lastOrder/invoke is MVC-ish (at best) since it makes a reference to a "hard-action" in the URI to provide context. Moreover, it IS NOT the identifier for the resource you're working on (LastOrder? customer?) since what you're identifying is the action (but again) not the resource. Identifying actions/operations is not REST. It's not a discussion of pretty urls but just abiding to the concept of resource which is pivotal for REST. Restful Objects spec defines resources for domain objects and their members by Dan Haywood Per 5.2.1.1 of Fielding's dissertation, "any information that can be named can be a resource [such as] a temporal service (e.g. "today's weather in Los Angeles"). Our action resources are exactly the same thing. Adam Howard made the observation that we ought to have used "/invocation" in the URL rather than "/invoke". We agree with that; it might have prevented some confusion.. Re: Restful Objects spec defines resources for domain objects and their mem by Roberto Calero Trust us, we *do* understand that REST relates to resources and representations. I trust specs :-) RFC's are very trust-able. Humans sometimes aren't. Restful Objects provides an automatic binding of these to an underlying domain model. When Richard says ".../actions/lastOrder/invoke" specifies an action, it's just shorthand for saying that it is a resource which addresses the result of invoking the lastOrder action on the underlying domain object. That's cool for your framework and the compromises you've made for it. My discussion is not about your framework in specific but REST as purely described in the spec. Per 5.2.1.1 of Fielding's dissertation, "any information that can be named can be a resource [such as] a temporal service (e.g. "today's weather in Los Angeles"). Our action resources are exactly the same thing. Some agree, some do not. 
Adam Howard made the observation that we ought to have used "/invocation" in the URL rather than "/invoke". We agree with that; it might have prevented some confusion. Semantics really.. IF the context comes from the representation or the "rel" of the link then you have a situation where your above uri just becomes inconsistent and confusing. .../invoke with OPTIONS? That'd be plain confusing. Using templated urls is just an implementation issue and URL's indeed should be opaque. In the above url, what's .../invoke doing? updating the last order? retrieving it? deleting it? Even if you change "invoke" by "invocation" the situation does not get any clearer and that's just a result of trying to make the url to identify the operation rather than the resource. Re: Restful Objects spec defines resources for domain objects and their mem by Dan Haywood you have a situation where your uri ["~/objects/customers/31/actions/lastOrder/invoke"] just becomes inconsistent and confusing. .../invoke with OPTIONS? That'd be plain confusing I did consider using OPTIONS on "~/objects/customers/31/actions/lastOrder" as a way of obtaining the action "prompt" representation, such that GET/PUT/POST could be used against that same URI in order to invoke the action... Even if you change "invoke" by "invocation" the situation does not get any clearer and that's just a result of trying to make the url to identify the operation rather than the resource. ... but it would seem that your point isn't specifically to do with how the RO spec defines its bindings from the domain model to RESTful resources, but rather the more fundamental question that as to whether an operation/action can be made available as a resource. On that, I don't think we're ever likely to agree. As is clear from the article, RO came out of the naked objects movement, a principle being its support for behaviourally-complete objects, rather than simple CRUD systems. The key to that is exposing domain object behaviour through actions. But I do thank you for your comments here. I'll also look into OPTIONS again. The RO spec is v1.0, but we will continue to evolve it based on feedback from implementors, users and from commentators such as yourself.
http://www.infoq.com/articles/Intro_Restful_Objects/
CC-MAIN-2014-42
refinedweb
8,528
51.18
I used to use vi. Well, some things do change. I have long since moved over to using EMACS, which is about as much change as this software engineer can take. Sure, I've heard of that cool development tool called Xcode, but hey, when we were young we didn't have Xcode, we had raw makefiles. They were interesting then, and I think you'll find that they still have legs. I recently decided to start dabbling again in C code and expand my horizons in OpenGL. My system environment is primarily UNIX. In one day, I hop back and forth quite a bit between the Solaris OS and Mac OS X, using X Windows. Looking around for example code to reverse engineer and analyze for both platforms is next to impossible. For example, the self-supported OpenGL tutorial site, NeonHelium, contains examples for multiple platforms and operating systems. The coding examples for the Macintosh are primarily for Cocoa or the old Mac OS. The majority of the other source code is for UNIX and Microsoft Windows. If you're a Cocoa programmer, then the source you'll find on the internet will suit you. However, if you just want to use the platform-independent source code that runs on a multitude of UNIX platforms, you'll have to do it old school--with makefiles. Ironically, those Mac OS X examples that go out of their way to demonstrate Cocoa compile and run just as easily in their UNIX state of code. An open systems development approach eliminates the Cocoa application dependencies from the fray.

The source code I accumulated was primarily from two books: OpenGL Programming Guide (known as "the red book" in OpenGL programming circles, and available in PDF format) and Edward Angel's excellent textbook using OpenGL examples, Interactive Computer Graphics. I downloaded and built the example binaries on my Sun Ultra 10. Once my elation was over after watching the first couple of demos run on my Ultra 10, I decided to see if I could get these same examples to run on my PowerBook. After all, Mac OS X has Darwin, a *NIX kernel, right?

The UNIX make utility was the key to my example code build success. When the make utility is executed, the program looks for a file named makefile or Makefile, by default. The contents of the makefile contain information called dependencies and targets. Pretty much any code compiled uses a makefile. You will need Apple's Xcode package installed to play with these examples. For posterity or edification, here is my makefile to build a simple OpenGL application.

OBJECTS = GlutExample.o
CC = g++
COMPILERFLAGS = -Wall
CFLAGS = $(COMPILERFLAGS)
FRAMEWORK = -framework GLUT -framework OpenGL
LIBDIR = -L"/Library/Frameworks/GLUT.framework" -L"/System/Library/Frameworks/OpenGL.framework/Libraries"
LIBRARIES = -lGL -lGLU -lm -lobjc -lstdc++

GlutExample: GlutExample.o $(OBJECTS)
	$(CC) $(FRAMEWORK) $(CFLAGS) -o $@ $(LIBDIR) \
	$(OBJECTS) $(LIBRARIES)
	./GlutExample

This is an example of a basic makefile. A makefile consists of an assignment of variable definitions and dependencies. For example:

COMPILERFLAGS = -Wall
CC = g++
CFLAGS = $(COMPILERFLAGS)
LIBRARIES = -lGL -lGLU -lm -lobjc -lstdc++

The compiler flags variable, COMPILERFLAGS, is assigned the value -Wall. This tells the compiler to give all warnings during the compile process. COMPILERFLAGS is passed to the variable CFLAGS in the following manner:

CFLAGS = $(COMPILERFLAGS)

Notice how the COMPILERFLAGS value is accessed when it is assigned to the CFLAGS variable. The referencing of variables uses the $(var) syntax, as is used with the CFLAGS assignment of $(COMPILERFLAGS). The true power of the makefile is in its handling of dependency and build rules. Here is how I am going to build the source code example GlutExample.c.
GlutExample: GlutExample.o $(OBJECTS)
<tab>$(CC) $(FRAMEWORK) $(CFLAGS) -o $@ $(LIBDIR) \
$(OBJECTS) $(LIBRARIES)

The first line says the target, the binary we are building, GlutExample, depends on the object file GlutExample.o. The second line has the commands we'll need to build the target program, GlutExample. The compiler invokes the commands which we set, such as the CFLAGS variable option. The make utility is very picky about spaces and tabs. In the above example, replace the <tab> label with a real tab in your editor.

The OpenGL example is based around a C function, tetrahedron, found in Edward Angel's book Interactive Computer Graphics. I filled in the rest of the code to demonstrate how to call it from OpenGL. This is our working example that demonstrates how to build OpenGL code in C using makefiles.

#include <stdlib.h>
#include <GLUT/glut.h>    /* Header File For The GLut Library*/

#define kWindowWidth 640
#define kWindowHeight 480

typedef GLfloat point[3];

point v[4] = {{0.0, 0.0, 1.0},
              {0.0, 0.942809, -0.333333},
              {-0.816497, -0.471405, -0.333333},
              {0.816497, -0.471405, -0.333333}};

void triangle(point a, point b, point c)
{
    glBegin(GL_LINE_LOOP);
    glVertex3fv(a);
    glVertex3fv(b);
    glVertex3fv(c);
    glEnd();
}

void tetrahedron(void)
{
    triangle(v[0], v[1], v[2]);
    triangle(v[3], v[2], v[1]);
    triangle(v[0], v[2], v[3]);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glLoadIdentity();
    tetrahedron();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(kWindowWidth, kWindowHeight);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("simple opengl example");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

In most source files you will download from the internet, you'll see the following headers in the source code.

/* Header File For The OpenGL32 Library */
#include <OpenGL/gl.h>
/* Header File For The GLu32 Library */
#include <OpenGL/glu.h>

For Mac OS X builds, replace those header files with the following:

#include <GLUT/glut.h>    /* Header File For The GLut Library*/

Using Apple's X11, in the Applications/Utilities folder, open an xterm window (you can use the Terminal app if you prefer), and copy the example code and makefile to the same destination directory. To use these examples, you will need the Xcode tools installed, even though we are using makefiles. The Xcode install provides the frameworks and libraries required to build the code.

When the source code and makefile are copied into GlutExample.c and makefile, respectively, you can invoke the make utility. The example below shows the source directory before the make invocation.

~/Game_Dev/Glut_Test mnorton$ ls
GlutExample.c    makefile

Invoke the make utility. The make utility looks in the local directory for the makefile. If the build is successful, the OpenGL application will launch automatically when the build completes.

~/Game_Dev/Glut_Test mnorton$ make
g++ -Wall -c -o GlutExample.o GlutExample.c
g++ -framework GLUT -framework OpenGL -Wall -o GlutExample -L"/Library/Frameworks/GLUT.framework" -L"/System/Library/Frameworks/OpenGL.framework/Libraries" GlutExample.o -lGL -lGLU -lm -lobjc -lstdc++
./GlutExample

Here are the files the make utility created: the object file, GlutExample.o, and the executable binary file, GlutExample.
~/Game_Dev/Glut_Test mnorton$ ls
GlutExample      GlutExample.c    GlutExample.o    makefile
~/Game_Dev/Glut_Test mnorton$

If your make executed successfully, the last line of the makefile, ./GlutExample, launches the binary. You should see a wireframe tetrahedron in an OpenGL window. Be sure the makefile portion is working properly before venturing on into the EMACS territory of this demonstration.

For source code editing, I use EMACS almost exclusively. In *NIX circles, religious debates always surface over which old school editor is better, vi or EMACS. I find text entry to be easier using EMACS. It behaves like a normal text editor. EMACS provides more bells and whistles than any one developer could possibly master in a lifetime. But one cool feature you can master at this moment is how to use EMACS to edit source code and invoke the make utility from inside of EMACS. The EMACS environment is included as a part of the basic BSD install when your computer arrived at your doorstep. An Aqua flavor of EMACS exists, but I recommend the xemacs utility available from the fink package installer. For this demonstration, I will use the basic EMACS that shipped with your system.

Let's load the OpenGL example code into our EMACS editing environment. From a UNIX prompt, this should be in the directory to which you copied the GlutExample.c file and the makefile. Simply type:

emacs GlutExample.c

Entering EMACS is the easy part. Using EMACS requires knowledge of a finger twister: two basic key sequences, Ctrl-X key sequences and meta (Esc) key sequences. To save your source file, you need to type the key sequence Ctrl-X Ctrl-S. To quit the EMACS utility, you use the key sequence Ctrl-X Ctrl-C. Note that the Ctrl key is held down during the stroke of the X key and the C or S key. For a meta key, the Esc key is pressed and then released before the next keystroke. A meta-key command in EMACS docs is denoted as "M-keystroke." For the meta command M-x, press Esc, release it, and then press the X key. This is an M-x command keystroke. Whenever you see the meta key in EMACS, just think Esc key.

That's the basic tour. You can now open source code in EMACS and quit EMACS. EMACS can be used as a development environment for many scripting and computer languages. EMACS even knows how to compile and run makefiles for C code. To compile C code, use the meta command M-x compile (Esc x). You will be prompted for a compile command, make -k. Accept the command-line option and hit the Enter key. No, there is nothing wrong with your eyesight. You read that correctly--EMACS will load your makefile and build the application. EMACS will display two buffers. The top is your source code; the bottom buffer is your makefile build status and displays the compile errors. Using the EMACS environment, you can put your cursor over the error in the bottom buffer, type Ctrl-C Ctrl-C, and the top buffer will display the line of code where the compile error occurred. Pretty slick for old school, huh?

You've covered a lot of ground here. For starters, you know how to compile platform-independent OpenGL code on the Macintosh. You've created a simple makefile to use for building the OpenGL code. And you know how to build applications in EMACS. Putting it all together, you have a great starting point if you want to learn OpenGL application development on the Macintosh. Mastering the makefile will help you load the platform-independent examples you find on the internet.
The UNIX flavor of OpenGL examples do run with little or no modification. You may want to visit nehe.gamedev.net and opengl.org for some great tutorials and examples.
Michael J. Norton is a software engineer at Cisco.
http://www.macdevcenter.com/pub/a/mac/2005/04/01/opengl.html
CC-MAIN-2015-32
refinedweb
1,784
65.62
Create Cisco ACI network with Python (ACI toolkit) Introduction In this post, we have seen how to create some ACI objects using Python and Jinja2. While this was relatively straightforward, it’s not always as straigthforward as it seems. Things can become quickly more complex. Examples include: - assign a BD to a VRF, the BD and VRF each part of the same tenant - creating some application profiles with X EPGs and create contracts between them ). The painpoint of the approach we have been using so far is to construct the proper payload templates. These are not always one on one available in the documentation and it requires you to be familiar with the ACI object model. You can read everything about the ACI policy model here. You’ll soon notice that many of the ACI objects are interlinked with one another. As an example, a Bridge Domain links to a Tenant, but also to a VRF (often called Context as well). The fact that objects can be interlinked to one another makes it sometimes pretty hard to create the proper Jinja2 template. It’s not impossible, as ACI helps you quite a bit with providing an ‘API Inspector’. This tool can be found by logging into the APIC and go to ‘Help and Tools’ (upper right corner) and then select ‘Show API inspector’. The tool essentially logs all REST calls that are made through the UI. Hence, in order to find a particular REST API call, execute the function via the user interface. As an example: create a tenant using the user interface and look at the API inspector for an HTTP Post call. That will contain also the JSON body that was used. You can now use this REST call to use in your Python script. As mentioned before, while this works, it certainly is not the easiest way. Luckily, Cisco is also providing a Python toolkit to make it easier to interact with Cisco ACI. The toolkit can be found on Github). Use a virtual environment I would suggest you to work with a Python virtual environment. If you don’t know how to do that, please refer to this post, where some sections explain how to achieve this. Once inside the virtual environment, continue with the rest of this tutorial. Installing the ACI toolkit (venv) WAUTERW-M-65P7:ACI_Python_ACITookit wauterw$ git clone Cloning into 'acitoolkit'... remote: Enumerating objects: 276, done. ***Truncated*** Next step is to install the toolkit: (venv) WAUTERW-M-65P7:ACI_Python_ACITookit wauterw$ cd acitoolkit/ (venv) WAUTERW-M-65P7:acitoolkit wauterw$ python3 setup.py install /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py:274: UserWarning: Unknown distri bution option: 'tests_requires' warnings.warn(msg) running install running bdist_egg ***Truncated*** Finished processing dependencies for acitoolkit==0.4 Don’t forget to also install the pyyaml and pprint packages in your virtual environment. The installation of both these packages in covered in that same blog post. Parsing YAML files We will use a YAML file to pass the variables to our script. We have discussed that in this blog post. A big part of our code will be to deal with YAML files, so make sure you understand that blog post before you continue. Python script Read variables file Refer to the first line in below script. The part after Read YAML configuration is explained quite well in the blogpost we referred to already (this one). Nothing different here. Making use of Session object Note that to work with the acitoolkit we need to use a Session. 
The Session object is described here but a Session object is essentially under the hood dealing with the ACI login process (and token). Read ACI objects In below snippet, we will document how we can use acitoolkit to read all the configured tenants from our APIC. from acitoolkit.acitoolkit import * url = "" user = "admin" pwd = "---" session = Session(url, user, pwd) session.login() tenants = Tenant.get(session) for tenant in tenants: print(tenant.name) When we execute this script, you’ll see we indeed get back a list of all configured tenants. wauterw@WAUTERW-M-65P7 aci_toolkit % python3 get_tenants_toolkit.py tn-bjorn dvs-demo-dynamic mgmt common infra tn-qinq tn-automation Tenant_Wim Create ACI objects In below snippet there is a section behind the comment # Create ACI objects. This part is effectively creating the ACI objects. We are first creating the tenant via the Tenant object. This object is describeded here. As you can see, we need to pass a String variable, which in our case will be the tenant name we retrieved from parsing the YAML variables file. Exactly the same process for the other objects, VRF and BD. As a last step, we will call the push_to_apic function to effectively push the objects to the APIC. Note, that we simply use the entire Tenant tree. As an exercise to the reader, it’s helpfull to print the content of the two variables tenant.get_url() and tenant.get_json(). It will help you to understand how this push_to_apic function works. from acitoolkit.acitoolkit import * import yaml from pprint import pprint url = "" user = "admin" pwd = "---" session = Session(url, user, pwd) session.login() # Read YAML configuration yml_file = open("variables.yml").read() yml_dict = yaml.load(yml_file, yaml.SafeLoader) tenant_name = yml_dict['tenant'] vrf_name = yml_dict['vrf'] bd_name = yml_dict['bridge_domains'][0]['bd'] # Create ACI objects tenant = Tenant(tenant_name) vrf = Context(vrf_name, tenant) bd = BridgeDomain(bd_name, tenant) response = session.push_to_apic(tenant.get_url(), data=tenant.get_json()) print(response) Note: the source code can be found here. Login to your APIC and verify if the tenant, VRF and BD were created. Of course they were..haha. See below. Pay particul attention to the number 1 in the VRF and BD column. Python script: a bit more complex examples In the above snippet, we kept things easy. We just created a Tenant, a VRF and a BD. In what follows, we’ll take it a bit further and assign a subnet to the bridge domain and link the bridge domain to the VRF. Note that this is the example (first bullet) I referred to above when we talked about the fact that objects can be interlinked to one another. For the stuff that follows, refer to the ACI object model or simply refer to the below picture. Looking at the ACI object model above, you’ll notice that BD and VRF are all relative to the Tenant object. You will also see that a BD can have one or more subnets. Hence, in below snippet, we create a Subnet object and assign it to the BD object we created earlier. Next, again as per the ACI object model, the BD object is linked to a VRF object. Hence, we are calling the add_context method. This will assign the BD to the VRD object. Lastly, in this example, we will also assign an L3Out to the BD. The snippet below only shows the relevant difference to the previous script we already explored. 
# Create ACI objects tenant = Tenant(tenant_name) vrf = Context(vrf_name, tenant) bd = BridgeDomain(bd_name, tenant) subnet = Subnet('', bd) subnet.addr = bd_subnet bd.add_context(vrf) l3out = OutsideL3(bd_l3out, tenant) bd.add_l3out(l3out) bd.add_subnet(subnet) The rest of the script remains similar to what we saw earlier. The full source code can be found on here. To check if everything worked out, verify on the APIC. In below screenshot, you will see that Tenant_Blog has been created with a BD called BD_Blog and under that BD, you’ll see a subnet 10.16.100.1/24. The entire repo for this small project can be found at Github. A more complete example can be found here.
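The post never shows the variables.yml file itself. A file consistent with the keys the scripts read (tenant, vrf, and a bridge_domains list whose first entry carries the bd name) could look like the sketch below. The vrf and l3out values and the exact key names used for the subnet and L3Out in the extended example are guesses; the tenant, BD and subnet values echo the ones visible in the screenshots described above (Tenant_Blog, BD_Blog, 10.16.100.1/24).

tenant: Tenant_Blog
vrf: VRF_Blog
bridge_domains:
  - bd: BD_Blog
    subnet: 10.16.100.1/24
    l3out: L3Out_Blog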
https://blog.wimwauters.com/networkprogrammability/2020-03-21-aci_python_acitookit/
CC-MAIN-2020-29
refinedweb
1,277
65.93
The implementation of the Naive Bayes classifier used in the book is the one provided in the NTLK library. Here we will see how to use use the Support Vector Machine (SVM) classifier implemented in Scikit-Learn without touching the features representation of the original example. Here is the snippet to extract the features (equivalent to the one in the book): import nltk def dialogue_act_features(sentence): """ Extracts a set of features from a message. """ features = {} tokens = nltk.word_tokenize(sentence) for t in tokens: features['contains(%s)' % t.lower()] = True return features # data structure representing the XML annotation for each post posts = nltk.corpus.nps_chat.xml_posts() # label set cls_set = ['Emotion', 'ynQuestion', 'yAnswer', 'Continuer', 'whQuestion', 'System', 'Accept', 'Clarify', 'Emphasis', 'nAnswer', 'Greet', 'Statement', 'Reject', 'Bye', 'Other'] featuresets = [] # list of tuples of the form (post, features) for post in posts: # applying the feature extractor to each post # post.get('class') is the label of the current post featuresets.append((dialogue_act_features(post.text),cls_set.index(post.get('class'))))After the feature extraction we can split the data we obtained in training and testing set: from random import shuffle shuffle(featuresets) size = int(len(featuresets) * .1) # 10% is used for the test set train = featuresets[size:] test = featuresets[:size]Now we can instantiate the model that implements classifier using the scikitlearn interface provided by NLTK and train it: from sklearn.svm import LinearSVC from nltk.classify.scikitlearn import SklearnClassifier # SVM with a Linear Kernel and default parameters classif = SklearnClassifier(LinearSVC()) classif.train(train)In order to use the batch_classify method provided by scikitlearn we have to organize the test set in two lists, the first one with the train data and the second one with the target labels: test_skl = [] t_test_skl = [] for d in test: test_skl.append(d[0]) t_test_skl.append(d[1])Then we can run the classifier on the test set and print a full report of its performances: # run the classifier on the train test p = classif.batch_classify(test_skl) from sklearn.metrics import classification_report # getting a full report print classification_report(t_test_skl, p, labels=list(set(t_test_skl)),target_names=cls_set)The report will look like this: precision recall f1-score support Emotion 0.83 0.85 0.84 101 ynQuestion 0.78 0.78 0.78 58 yAnswer 0.40 0.40 0.40 5 Continuer 0.33 0.15 0.21 13 whQuestion 0.78 0.72 0.75 50 System 0.99 0.98 0.98 259 Accept 0.80 0.59 0.68 27 Clarify 0.00 0.00 0.00 6 Emphasis 0.59 0.59 0.59 17 nAnswer 0.73 0.80 0.76 10 Greet 0.94 0.91 0.93 160 Statement 0.76 0.86 0.81 311 Reject 0.57 0.31 0.40 13 Bye 0.94 0.68 0.79 25 Other 0.00 0.00 0.00 1 avg / total 0.84 0.85 0.84 1056 The link to the NLTK book is broken. Also, you can use train_test_split function to do the random splitting into train/test data in one line. scikit-learn thanks rolisz! Thank you very much for you example. It was very helpful for getting me started with my experiments. You left a minor error, however: you should witch the order of 'p' and 't_test_skl' when asking for the classification report. The API lists the true labels first and then the predicted labels second: Oh dear, I have been typing for too long today... Should have been: "for *your example" "you should *switch" I hope I caught all of my errors.. Thank you Ruben, I fixed the code and the report. 
How would you do this with a Random Forest classifier?

Initializing the classifier this way should work:

classif = SklearnClassifier(RandomForestClassifier())

This comment has been removed by a blog administrator.

How do I output the probability of the prediction instead of the classes?

Hi Hock, you can't get the probability with LinearSVC. But there are other classifiers, the ones in sklearn.naive_bayes or sklearn.svm.SVC for example, that expose the method predict_proba that gives you what you need.
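Following up on that exchange: once trained, the wrapper can label new sentences directly, and if you wrap an estimator that implements predict_proba you can also ask for per-class probabilities. A sketch that reuses the dialogue_act_features helper and the cls_set list defined in the post (Python 2 syntax, to match the rest of the code):

# classify a single new sentence with the trained wrapper
feats = dialogue_act_features("what time is it?")
print cls_set[classif.classify(feats)]

# for probabilities, wrap an estimator that exposes predict_proba
from sklearn.naive_bayes import MultinomialNB
classif_nb = SklearnClassifier(MultinomialNB())
classif_nb.train(train)
dist = classif_nb.prob_classify(feats)
for label in dist.samples():
    print cls_set[label], dist.prob(label)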
http://glowingpython.blogspot.it/2013/07/combining-scikit-learn-and-ntlk.html
CC-MAIN-2017-30
refinedweb
681
59.09
Every control on a Web Forms page must be uniquely identifiable. Generally, you assign a value to a control's ID property to uniquely identify it. This value becomes the instance name of the control, that is, the name by which you can refer to the control in code. For example, if you set the ID property of a TextBox control to "Text1," then you can reference the control in code as Text1.

A number of data-bound controls, including the DataList, Repeater, and DataGrid controls, act as containers for other (child) controls. When these controls run, they generate multiple instances of the child control. For example, if you create a DataList template with a Label control in it, when the page runs, there are as many instances of that Label control in the page as there are records in the DataList control's data source.

Note: Controls that use templates, such as the DataList and Repeater controls, host template objects. For example, when the DataList control runs, it creates multiple instances of the DataListItem class. These template objects in turn contain individual controls such as labels, text boxes, buttons, and so on.

Container controls of this kind act as naming containers: each one establishes a unique namespace for its child controls, so child IDs only need to be unique within their container. (Controls that act as naming containers implement the INamingContainer interface.) When child controls are created at run time, the naming container is combined with the child control's ID property to create the value of the UniqueID property of each child control. The UniqueID property therefore becomes a fully-qualified identifier for a control, referencing its naming container as well as the control's individual ID value. In the example from above, the multiple instances of the Label control are created within the naming container (the namespace) of the parent DataList control. The UniqueID property of each Label control will reflect this namespace, which will have a format something like DataList1:_ctl:MyLabel, DataList1:_ct2:MyLabel, and so on.

Note: As a rule, you should not write code that references controls using the value of the generated UniqueID property. You can treat the UniqueID property as a handle (for example, by passing it to a process), but you should not rely on it having a specific structure.

In addition to each container control providing a naming container for its child controls, the page itself also provides a naming container for all of its child controls. This creates a unique namespace within the application for all the controls on that page.

Child controls can reference their naming container via the NamingContainer property. This property returns an object of type Control that you can cast to the appropriate DataList control, DataListItem object, and so on. Referencing the naming container is useful when you need access from a child control to a property of the container control. For example, in a handler for a child control's DataBinding event, you can access the DataItem object by getting it from the naming container.

If your page contains controls that are generated at run time, such as those in a template for the DataList, Repeater, or DataGrid controls, you cannot directly reference them by their ID, because the ID is not unique. However, there are various ways to find individual controls in the page. For details, see Referencing Controls in Web Forms Pages.
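To make the NamingContainer and DataItem discussion concrete, a handler along the following lines shows the pattern (C#; the control and field names are illustrative, and the DataRowView cast assumes the DataList is bound to a DataTable):

protected void MyLabel_DataBinding(object sender, EventArgs e)
{
    Label lbl = (Label)sender;

    // The naming container of a control inside a DataList template is a DataListItem.
    DataListItem container = (DataListItem)lbl.NamingContainer;

    // The container exposes the current record through its DataItem property.
    DataRowView row = (DataRowView)container.DataItem;
    lbl.Text = row["ProductName"].ToString();
}

protected void Page_PreRender(object sender, EventArgs e)
{
    // From outside the template you cannot use the ID directly; after data binding
    // has run, search within the naming container instead.
    Label firstLabel = (Label)DataList1.Items[0].FindControl("MyLabel");
}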
http://msdn.microsoft.com/en-us/library/1d04y8ss(VS.71).aspx
crawl-002
refinedweb
542
50.87
This time I want to use English to make this article useful for all others in the world :) As you know, Android MonkeyRunner is a good testing tool, but we could only develop monkeyrunner scripts in a text editor like Vim, Emacs, etc. Diego Torres Milano wrote a blog post about making MonkeyRunner run in Eclipse, and he did that on Linux. But his article misleads readers into thinking that his solution also works on Windows. After several tries by myself, and judging from others' comments, it does not work on Windows at all. Here I found a solution which also works on Windows (I have tried it myself). I believe it also works on Linux as well, since the solution does not rely on any platform-dependent mechanism.

Here we go:

- Install/update the latest PyDev (2.2.4 at present; I don't think the exact PyDev version matters) via the Eclipse Marketplace or the direct install link; see the guide here.
- Extract the Lib folder inside ANDROID_SDK\tools\lib\jython.jar using 7-Zip/WinRAR into the ANDROID_SDK\tools\lib folder, so that it ends up as ANDROID_SDK\tools\lib\Lib.
- Add a Jython interpreter under Window > Preferences > PyDev > Interpreter - Jython, using the jython.jar from the Android SDK\tools\lib folder.
- Make sure to add Android SDK\tools\lib and monkeyrunner.jar under Libraries. See the snapshot below.
- Click "Apply", wait for it to finish, and press "OK".
- Now you can use this new MonkeyRunner interpreter to set up a PyDev project. Please make sure to choose "Jython" and Grammar version "2.5", as Jython itself has not caught up with Python. The latest Jython is 2.5.2, but the Android SDK ships 2.5.0. It's OK to replace the old one with the latest one, but that's another story.

Snapshot of project setup:

Now you can write MonkeyRunner scripts with convenient features like auto-completion, grammar error notices, etc. Have fun :)

Notice: you cannot click the Run button to execute a monkeyrunner script; Eclipse will not use monkeyrunner.bat to execute it.

15 thoughts on "Using Android monkeyrunner from Eclipse, both in Windows and Linux!"

May I ask which version of Eclipse you installed? I repeated these steps with Indigo and keep getting errors.

Indigo is fine, that's what I tested with. I also suggest upgrading PyDev to the latest version. Are you sure you extracted the Lib folder out of jython.jar?

Hi Sean, I used this approach and it seems to work so far. The problem is that when I try to run a script clicking the 'run' button in Eclipse I get NullPointerExceptions. In your post you mention we cannot use the 'run' in Eclipse. But I'm using Windows and I dont know other way to run this script. What do you suggest? Regards.

Run it as the normal way to use Monkeyrunner. Plz refer to official guide of Monkeyrunner. for example:

Thanks for the guide. I am also using Windows and was not able to get this running using DTM's blog. I followed your steps but I get this error when I run the file: ERROR: ANDROID_VIEW_CLIENT_HOME not set in environment I'm new to Eclipse and Monkeyrunner and would appreciate any help with resolving this error.

What file? I think it's the problem in DTM's example file? You should ask in his post.

Dear Sir, I ran MonkeyRunner on Win7 OS as your post, but always met the problem: 'adb rejected adb port forwarding command cannot bind socket'. I just installed Android SDK, python2.7, Eclipse, PyDev. Then use the sample code in Android developer web site. Could you show me how to fix the problem? Many thanks.

Try command 'adb kill-server' and 'adb start-server' to restart the adb. It could be found under your android sdk directory, subfolder platform-tools.

Hi, Seganw I tried the command, but it cannot work too. I do not know if any unknown setting in Win7 or AD problem.
Still trying… Hi Sean. I don’t know you are still watching this, but it’s worth a shot. I did all your steps, the only difference is that the Interpreter option under Grammar version does not show MR, only default and the path to the jython.jar I’m getting the following error: java.lang.NoClassDefFoundError: Could not initialize class com.android.monkeyrunner.MonkeyDevice Here is my code: from com.android.monkeyrunner import MonkeyDevice, MonkeyRunner, MonkeyImage, MonkeyManager device= MonkeyDevice for i in range(5): device = MonkeyRunner.waitForConnection(8) if device != None: print “Device found…” break; device.press(“KEYCODE_NOTIFICATION”, “DOWN_AND_UP”) time.sleep(1) device.press(“KEYCODE_BACK”, “DOWN_AND_UP”) do you have any thoughts? appreciate it As previous comment said, these steps just make enable “auto-completion, grammmer error notice etc”. You still have to start script from monkeyrunner. I misunderstand it once. I met the same issue You can configure Eclipse to run your code by configuring monkeyrunner.bat as Run/External Tools. Location: […]\skd\tools\monkeyrunner.bat; Worlking Dir.: is your directory of your apk and other resources. Arguments: e.g. c:\workspace.py\MyDonkeyrunner\src\testpack\example.py Neither Chinese nor English can understand your article now. 成天看后宫动漫和网络YY小说,对你理解文章确实有难度……
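For reference, a minimal monkeyrunner script of the kind discussed in the post and comments looks like the sketch below; the key name and file names are placeholders. It is started with the monkeyrunner launcher (from a terminal, or via the External Tools setup described in the comments) rather than with the PyDev Run button.

# simple_test.py -- run with: monkeyrunner simple_test.py  (monkeyrunner.bat on Windows)
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# Wait for a device or emulator to be attached
device = MonkeyRunner.waitForConnection()

# Press a key, wait a second, then take a screenshot
device.press('KEYCODE_MENU', MonkeyDevice.DOWN_AND_UP)
MonkeyRunner.sleep(1)
snap = device.takeSnapshot()
snap.writeToFile('screen.png', 'png')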
https://fclef.wordpress.com/2011/11/24/using-android-monkeyrunner-from-eclipse-both-in-windows-and-linux/
CC-MAIN-2016-30
refinedweb
821
60.11
Hottest Forum Q&A on CodeGuru - January 19th Introduction: Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include: - Why do get I compiler error C2065 when compiling my VC++ 6.0 project in VC7? - Should I call destroywindow before deleting an object? - Where is the error log? - How do I search for keys using wildcards in a map? - How can I use a *void -> *short in printf? tiff, a junior member, converted his project from VC++ 6.0 to VC7 and got compiler error 2065. He accepted all default conversions from VC7, but it still does not work. What can be done here?. Because VC++6.0 and VC++.NET put this function in different locations, I did change the included path, and made it use Dynamic ATL because in VC++.NET this function is in atlmfc/include. But nothing works. So, guys, do you know why this does not work? This question is again for you. Can you answer it? If yes, e-mail me and I will publish the answer in next week's column. brraj is curious whether he should call DestroyWindow() prior deleting the object. Do you know whether this is correct or not? I have a CEdit *m_pEdit pointer; it will be dynamically created by using the new and create functions and deleted by using delete. My question is before deleting should we call the DestroyWindow function? For example: m_pEdit->DestroyWindow(); delete m_pEdit; m_pEdit = 0; Is the destroywindow necessary? My senior says that window will not be destroyed if I don't call DestroyWindow. Well, your senior is correct. You will need to call DestroyWindow() before calling delete. Any class derived from CWnd will call DestroyWindow from its destructor. But remember, if you derive your own class from a CWnd-derived class, and you override the function, you will need to call DestroyWindow explicitly in your own code. paraglidersd is working on a project that is nearly finished. But, he still gets an error in which he needs the error log on the system. Unfortunately, he is not able to find the log. Do you know where such a log file is located? I am running an Visual C++ application on a laptop that does not have Visual Studio installed. I (obviously) created the application on a different machine. This application is a simple Win32 application with no interactive windows. It merely runs (after double-clicking couldn't find anything on MSDN. Help? I am on a deadline and am pulling my hair out. The error log could be the Dr. Watson Log. Enter DRWTSN32 in START/RUN. Also, check that you have the correct version of the DLLs on your laptop. These are the DLLs required on the laptop: - MSVCP60.DLL - PSAPI.DLL An another option is to debug your application remotely. Aks82 is working with maps where he needs to search for wildcards. Is it possible? I am planning to create a map for my given data set. I was wondering whether there is a way to search for keys by using regular expressions. Basically, if the user forgets the key, can s/he search for the key using wildcards? Fr example, if HostID is one such key, is there a way I can search for all keys that begin with 'Host' as 'Host*' or something? I know that the find() method allows one to search for the key as an exact expression. But my tool needs to be able to search for the 'keys' using regular expressions. Unfortunately, there isn't any function to search for wildcards. But, to get the desired result, you can use the std::lower_bound() function. 
Here is how it might look: #include <iostream> #include <map> #include <string> using namespace std; typedef std::pair<string,int> PAIR; int main() { map<string,int> my_map; my_map.insert( PAIR("Server1",1) ); my_map.insert( PAIR("Host3",3) ); my_map.insert( PAIR("Server2",2) ); my_map.insert( PAIR("Host1",1) ); my_map.insert( PAIR("Something Else",6) ); my_map.insert( PAIR("Host2",2) ); my_map.insert( PAIR("guest",44) ); my_map.insert( PAIR("HostID",32) ); // find all entries that start with "Host" string str_to_find = "Host"; int nLength = str_to_find.length(); map<string,int>::iterator it = my_map.lower_bound("Host"); while (it != my_map.end()) { if (it->first.substr(0,nLength) != str_to_find) break; cout << it->first << " " << ;it->second << endl; ++it; } return 0; } yiannakop asked a very interesting question. Hi everyone. Suppose I have the following code: void *var; int c; // c given by user... ... ... switch (c) { case 1: var = (short*)malloc(sizeof(short)); scanf("%d",(short*)var); printf("content of var: %d\n",*(short*)var); break; case 2: // same for float ... ... ... } The above program works fine with all types (int, float, double), but not for short. I suppose %d for shorts is not right under Solaris 5.7? The following code should work: scanf("%hd",(short*)var); Microsoft says that %h is a MS-specific extension. Orginally quote from MSDN: "The optional prefixes to type, h, l, and L, specify the .size. of argument (long or." But, the C99 standard also contains this extension. Take a look at the whole thread to learn more about this topic.
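To round off that last question, here is a compact, self-contained version of the pattern under discussion, reading a short through a void* with the h length modifier (a sketch; error handling is kept minimal):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *var = malloc(sizeof(short));
    if (var == NULL)
        return 1;

    /* %hd tells scanf/printf that the target is a short, not an int */
    if (scanf("%hd", (short*)var) == 1)
        printf("content of var: %hd\n", *(short*)var);

    free(var);
    return 0;
}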
http://www.developer.com/tech/article.php/3304221/Hottest-Forum-QA-on-CodeGuru---January-19th.htm
CC-MAIN-2017-04
refinedweb
881
69.18
Merge branch 'release/3.1.1' into. There are certainly faster JSON libraries out there. However, if your goal is to speed up your development by adding JSON support with a single header, then this library is the way to go. If you know how to use a std::vector or std::map, you are already set. See the contribution guidelines for more information. The single required source, file json.hpp is in the single_include/nlohmann directory or released here. All you need to do is add #include <nlohmann/json.hpp> // for convenience using json = nlohmann::json; to the files you want to use JSON objects. That's it. Do not forget: repository/json@v3.1.0. Also, the multiple header version can be installed by adding the -DJSON_MultipleHeaders=ON flag (i.e., cget install nlohmann/json -DJSON_MultipleHeaders=ON). Beside the examples below, you may want to check the documentation where each function contains a separate code example (e.g., check out emplace()). All example files can be compiled and executed on their own (e.g., file emplace.cpp). // }." // } // }‘s: ns, where personis defined). get<your_type>(), your_typeMUST be DefaultConstructible. (There is a way to bypass this requirement described later.) from_json, use function at()to access the object values rather than operator[]. In case a key does not exist, atthrows an exception that you can handle, whereas operator[]exhibits undefined behavior. operator=definitions, code like your_variable = your_json;may not compile. You need to write your_variable = your_json.get<decltype your_variable>();instead. std::vector: the library already implements these. from_json/ to_jsonfunctions: If a type Bhas a member of type A, you MUST define to_json(A)before to_json(B). Look at issue 561 for more details.‘s; } }; } Yes. You might want to take a look at unit-udt.cpp in the test suite, to see a few examples. If you write your own serializer, you'll need to do a few things: basic_jsonalias than nlohmann::json(the last template parameter of basic_jsonis the JSONSerializer) basic_jsonalias (or a template parameter) in all your to_json/ from_jsonmethods! } }; Though JSON is a ubiquitous data format, it is not a very compact format suitable for data exchange, for instance over a network. Hence, the library supports CBOR (Concise Binary Object Representation), MessagePack, and UBJSON (Universal Binary JSON Specification)); //); Though it's 2018 class contains the UTF-8 Decoder from Bjoern Hoehrmann which is licensed under the MIT License (see above). Copyright © 2008-2009 Björn Hoehrmann bjoern@hoehrmann.de. I deeply appreciate the help of the following people. parse()to accept an rvalue reference. get_ref()function to get a reference to stored values. has_mapped_typefunction. int64_tand uint64_t. std::multiset. dump()function. std::locale::classic()to avoid too much locale joggling, found some nice performance improvements in the parser, improved the benchmarking code, and realized locale-independent number parsing and printing. nullptrs. -Weffc++warnings. std::min. <iostream>with <iosfwd>. find()and count(). .natvisfor the MSVC debug view. Thanks a lot for helping out! Please let me know if I forgot someone. The library itself contains of a single header file licensed under the MIT license. However, it is built, tested, documented, and whatnot using a lot of third-party tools and services. Thanks a lot! The library is currently used in Apple macOS Sierra and iOS 10. I am not sure what they are using the library for, but I am happy that it runs on so many devices.. 
\uDEAD) will yield parse errors. std::string), note that its length/size functions return the number of stored bytes rather than the number of characters or glyphs. -fno-rtticompiler flag. -fno-exceptionsor by defining the symbol JSON_NOEXCEPTION. In this case, exceptions are replaced by an abort()call. tsl::ordered_map(integration) or nlohmann::fifo_map(integration). To compile and run the tests, you need to execute $ mkdir build $ cd build $ cmake .. $ cmake --build . $ ctest --output-on-failure For more information, have a look at the file .travis.yml.
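Since the serialization discussion above lost parts of its code samples, here is the general shape of the to_json/from_json pattern for a user-defined type as the library expects it (a sketch; the person struct and its field names are illustrative):

#include <string>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

namespace ns {
    struct person {
        std::string name;
        std::string address;
        int age;
    };

    // Defined in the same namespace as person so the library can find them
    void to_json(json& j, const person& p) {
        j = json{{"name", p.name}, {"address", p.address}, {"age", p.age}};
    }

    void from_json(const json& j, person& p) {
        p.name = j.at("name").get<std::string>();
        p.address = j.at("address").get<std::string>();
        p.age = j.at("age").get<int>();
    }
}

// usage:
//   json j = ns::person{"Ned Flanders", "744 Evergreen Terrace", 60};
//   auto p = j.get<ns::person>();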
https://fuchsia.googlesource.com/third_party/json/+/refs/tags/v3.1.1
CC-MAIN-2020-50
refinedweb
653
51.14
An Intensive Exploration Of jQuery With Ben Nadel (Video Presentation) The following video and slide show is my presentation: An Intensive Exploration Of jQuery With Ben Nadel. The presentation covers most aspects of the very robust feature-set provided by the jQuery library. While I cannot go too in-depth on any one particular topic, I do try to cover and thoroughly demonstrate what I feel are the most important parts of the jquery library. I hope that in this presentation I can at truly get across to you how essential the jQuery library is to optimal web development; I tell you this in all seriousness - I will never start another web development project without jQuery. The following is a table of contents for the video, An Intensive Exploration Of jQuery With Ben Nadel: - Reader Comments Very well done. I've been in love with jQuery since I started looking into it. I would have to say so far my favorite feature would be the ability to do this: function show(i){ } $('ul.li').eq(i).show().siblings().hide(); after I learned how to use it, the ".siblings()" function really rocked my world. Great job Ben! Was hoping you'd post this. You have a great teaching style. Hope you do more of these in the future. :) Yay! wOOt! </cliff>ools, etc.) so they can see which one "feels" best for them (and sometimes see what plugins are available). They're fine choices (at least as far as the main ones go), it's usually down to preference. @Thomas, During the live presentation, someone asked, "Why do you think jQuery has become so popular with ColdFusion developers." I basically said that, well one, its becoming popular everywhere, so we're just part of that revolution. But two, when you try jQuery, there's something that just feels "right" about it. Even if you've tried other Javascript libraries, jQuery feels more natural for some reason. I can't even put my finger on it, it just does. much so stayed with Prototype. I later tried Ext and liked it, so now that's what I use. Some of my friends also didn't really like jQuery for whatever reason. That's why I say try them all out. As far as the basic features, it's pretty quick to give them a spin and see what feels right/better. Having said that, I don't want anybody to take this as a knock on jQuery. It's an excellent framework. @Thomas, At the end of the day, you gotta go with what feels best for you, cause at the end of the day, you need to get your work done as efficiently as possible. Just as a point .. you were referring to 1000 seconds with the delay/speed for the effects.. while it says 1000 it's milliseconds, thus 1000 = 1 second.. but i'm assuming you already know this :) @BinaryKitten, Ahh, yes, thanks. That's the nerves talking :) None of your scripts work in my Netscape 4! What are you trying to pull? What software did you use to make the presentation? Did you post it somewhere else? Maybe UGTV? @Mike Leung re: Netscape 4.... urmm ... they stopped support and production of the netscape browser completely in march 2008. @Mike, If it makes you feel any better, it doesn't work in my Hyper Text Player either :) @Henry, I used Camtasia. Thanks Ben for posting this online. Great presentation Great presentation, Ben! It seems like so long ago that we first discovered jQuery. And the engineering under the hood has only improved since then! If you haven't already, you might want to take a look at Slideshare ( I see it used for slides in presentations by front-end developers at Yahoo, Google, and Mozilla - including jQuery's John Resig. 
@David, Thanks for the kind words. I love jQuery and I don't mind trying to get others to love it to :) of people irk me to no end. Man, if I could get my hands on THAT guy, I would teach him a thing or two. Give him whatfore! By the way, the next thing you don't like yet, but you will is ExtJS. It's for business applications. @Ben I agree that you have to use what you are comfortable with, but at the same time you do need to at least try to get comfortable with what a majority of the community is using. The only reason that I say that is because there are times when you will leave a project and others will have to come in behind you. If you are using some obscure library (not to say that jQuery, prototype or mootools are) you have created a bigger issue than there needed to be. @Glen, When I talked about my initial response, I said it was a "fear response," not a legitimate one :) jQuery is awesome, and yes, your advice is always much appreciated. @Brandon, That is a good point. If you are jumping in and out of projects, having the same library understanding creates a common language and methodology. Awesome! The best jQuery Video Tutorial of the Web! @Fernando, Rock on! Thanks a lot. code was doing. Your presentation help bring some light where there was only darkness. Like most folks I don't get to start at the edge of the pool, but get thrown right into the middle of the deep end. You've given me a life preserver so at least I'm keeping my head above the murky water. Thanks again, Jim Holy s-word Ben. That was maybe the most effective, comprehensive, well-architected presentation I've ever even seen. You literally took people foreign to jQuery from beginner to intermediate knowledge in under 2 hours, with a primer on plug ins and a pretty good overview of the voodoo of closures! So impressive. Thanks a million, on behalf of the world. Also, it wouldn't hurt to post a link where folks can donate to "The Ben Nadel Educational Fund" to cover the cost of your Camtasia license and all of the extra bandwidth you're going to need when you start doing video blogs 24/7 instead of textual, hint. Also, since Camtasia lets you do record a sub-selection of your screen it doesn't hurt to position your IDE/slideshow/browser chromelessly around a 640x480 portion of the screen to cut down the filesize dramatically. That resolution is usually enough for good code visibility and audio isn't affected obv. I noticed your blow was down for a while last night, I'm assuming because of 8 million people watching that video at the same time. Blog. That was the most ridiculous typo probably in history (judged by the distance between the intended letter (g) and the typed letter (w). That'll teach me to try to author comments while conversing on a telephone. @Jim, @David, Thank you guys so much for the kind words. That really means a whole lot to me! Really, you have no idea. @David, I will play around with the 800x600 next time. I thought about going that way, but it just seemed so small. The size I did was actually a sub-section of my screen (I have a huge monitor). But, the live presentation was done on an 800x600 and it went find. Good feedback. Thanks for the great video! Cheers, Amazing tutorial, I love jQuery, but there is alot of boring stuff about it out there. This tutorial really makes me realize how strong it actually is. Thanks so much! WOW! Great presentation! As a professional educator, may I say you have a rare gift. 
There are a lot of people in the world who understand a subject well, but have little talent when it comes to communication that knowledge. You obviously have both. Even though I had only a modest knowledge of programing in general and none of jQuery, I found your presentation both understandable and engaging. The only thing I would change (if indeed your authorware has this option) would be to include a pause button so viewers could stop and digest the information every so often (your explanation of closures was great, but afterward I needed a break, so some kind of mental-Tums!). Thank you. provide JQuery some weight as far standards go. Hi Ben - Brilliant presentation from what I have seen so far - any chance of making it available as a download? The length creates problems for me watching on line in one go. Cheers, Bruce @Fred, Thank you so much for the kind words. Coming from an actual teacher, what you say has serious weight and is very meaningful to me. @Bruce, I can look into figuring that out. Ben - I'll second the request for an 'offline' copy of this. All the best </cliff> Superb video Ben, your presentation style is very clear and easy to follow. Seems you're a natural? I've already got an off-line version by the way - looked in my browser cache (I use Opera) found a huge file (134Mb), changed the extension to .flv - plays back in any media player that includes FLV playback! Hope you don't mind? <goes scurrying to look in his cache... B->> @RPR, No problem my man. At the end of the day, I'm just trying to add value to your work life :) OK - so I've been trying to capture a copy using both Firefox and Opera on a work PC running Vista which I'm not familiar with. Can anyone tell me where the cache for Opera or Firefox could be on this machine? cheers Bruce offline version pls Fanstastic presentation sir... you should have put your work-out credentials on the screen also :-). Any luck making an offline version available? This is by far the best video presentation on JQuery I have seen on the internet; and I have searched far and wide, high and low. @LBJ, Thank you very much! That is a great compliment. Super Good Lessons. Thanks for your time and effort! @Geoffrey, Thank man, glad to know its appreciated. Hi Ben me again. Just watched this for the third time (I have a copy on local). There's a lot to take in (nearly 2 hours) but each viewing makes things clearer. Was wondering if your presentation material (html/cfm) is available anywhere? The code examples (demos) you show would be good to study in more detail? It looks as though you've done these materials on some fancy server-side thing, so maybe it's not straight-forward for you to just zip-up and offer as a download? Sorry Ben, ignore my last (stupid) comment. Yes, there's a link at the top!! Very impressive, thanks a lot for this fantastic work. @Axel, Glad you liked it! Amazing. Really opened my eyes to the power of jQuery, and demonstrated the possibilities excellently with good clear easy to understand English. Thank you Ben. @Trevor, Glad you liked it. jQuery is really awesome. I seriously never start a web application project without it! It's as essential as the keyboard in front of me. skills you have). @Marko, Nothing wrong with some plain old Javascript :) Of course, when you include jQuery in every site, the need for it becomes less. Although, even in the callback methods, I still find myself using the THIS object rather than wrapping it in $() every time... depends on what I need to do. Great video. 
Added this to Thanks Ben for doing and sharing this. I am in love with jQuery, it's bad my boyfriend is a jealous type :) @Anna, No worries - it just forces him to up the effort to compete ;) awesome!! Thanks, nice stuff. Need I say WOW! It looks really amazing so far.... I'm trying it out now as we speak... (Fingers Crossed) Also how do you get Adobe ColdFusion Certified? Is it worth it? How much does it cost? @Jody, I think the test was like $150 or something. You just have to sign up to take it at a testing center. I can't remember where I found the list of testing locations (they are not Adobe specific - just general testing centers that host it). I think it was worth it because it really got me to look into things I had not previously explored. Amazing work Ben. Best tutorial i've every watched, took a while to get through but it covered so much. Thanks for sharing your knowledge. Praise be to jQuery!! Thank you for this! You definitely helped build my knowledge of jQuery (I've only been using it for about a week now); especially to do with the closure stuff you mentioned. Awesome presentation!!! I've used jQuery before and I loved it right away. But I never got the chance to learn all the possibilities of it. Thanks for the Video and the inspiration. greez, Timo PS: I blogged an article about your presentation on my website. @Timo, Really glad you liked it! nice presentation! Is there a way to download this video? ability to make working in JS faster and easier, just as it's said to be, but your video has really opened my eyes. Thank you! Not only that, but I plan to spend a lot more time learning new tricks as I continue to scan thru your blog. Thanks Ben, your obvious enthusiasm and excitement for your field not only shows, but is contagious as well. Kudos! Cheers, Lelando @Lelando, Wow, that's awesome! It's great to hear the perspective from a designer and know that it's making sense. Trust me, the more you play with jQuery, the more you're going to love it. Excellent presentation. Very laconic and informative. Thanks Hey I guess the video is down. Is there anyway you can upload to youtube or vimeo or some other service? Greatly appreciated. @Billy, The site may have been down for some reason - it seems to work now. It's hosting up on S3 so it should be pretty stable (assuming the video HTML page serves up). It's not the best streaming - you might need to give it a minute to start depending on your connection. Ben it is a very nice video. Can you not make it download able to make it maximum good to use. Ben, I must say that you have done incredible job with this video. I'm new to jQuery and started reading about it today only. Somehow I reached to your blog but believe me I never felt so easy with any new technology as I'm feeling about jQuery now. I'm not able to locate the download link of your demo application/s which you shown in the video. Can you share that with me? I would probably do some hands-on that. Thanks, Rohit I really appreciate your hard work.And this is an amazing tutorial.Thanks a lot. @Haansi, @Rohit, Thanks guys - I'm really glad you liked the video. I don't have a download link directly (I'm relatively new to creating such large videos). If you look at the source of the video page, you can probably just grab the video SRC value from the HTML markup. @LV, Thanks! In slide 6 when you refer to anonymous methods, I think the term is "late binding". This is a fantastic introduction to jQuery though. Thanks for you hard work and expertise, Ben! 
@Elliott, "Anonymous method" was the right term for what that slide was trying to show. Methods that don't have a name are anonymous. The example shown was an onload type of function (specifically, "document ready"). But it could just as easily have been done with a named function: function myNamedFunc () {alert("I have a name!");} $(myNamedFunction); That too would have been late binding (execution time binding of an event handler), but it wouldn't have been an anonymous method. Ben was trying to show the "(function(){something})" syntax. That's what jQuery makes heavy use of, to avoid cluttering up the namespace with function names that would are going to be used only once anyway. If you really like late binding, even later than Windows DLLs and Unix SOs, check out Objective C. You don't so much call a method as pass a message to it, like Smalltalk. So you can add and remove objects at execution time, as if it were an interpreted language, such as JavaScript. A compiled language with the freedom of an interpreted language! Awesome! Love this video. I'm doing alot of ajaxy event driven stuff at the moment with jquery, this video is really informative and acts as a really good refresher on the jQuery api as well. This is as useful (if not more) than the paid for peepcode series on the subject. Thanks a lot Ben @Elliott, @Steve, Oh man, I tried Objective-C for about a week (when curious about iPhone development). All that allocation and deallocation of memory is enough to drive a man insane :) Thank goodness for garbage collecting on the server! @Steve, Glad you like it my man. If you have any suggestions on other topics, feel free to drop me a line. @Ben, I honestly don't think you need any more praise with regards to your great work with making this video - but i thought i would give you some anyway :) I am a very recent convert to jQuery, and like you i think I cant go back. The Ajaxy stuff seems way to simple. Thanks alot dude. ps. I wonder what jQuery would be like if implemented for Java itself... Ben, Thanks a million for posting this. I finally had time to circle back to combine jquery with ExtJS and your presentation was a huge spring board. I especially appreciate the code demos at: @Richard, Ha ha, thanks :) I agree, the AJAX stuff in jQuery is just really easy to use. And, what's great is that you can always fall back on the $.ajax() method if you need more control. Glad that you're enjoying the language; I started using it a while back and am still very much in love with it. @Steven, Very cool my man. The ExtJS stuff you've demoed in the past looks really awesome. It's great to see that jQuery and ExtJS can be mutually beneficial. Nice. Is there a downloadable version of the video ? It would make a great reference material. Awesome presentation, very clear and understandable a great kick start to anyone learning JQuery. Although I have used it for some time I have only touched on the basics, thanks and hope to see other such presentations in the future. Maybe OO Development using JQuery? best practices? P.S.wish I lived in the US, so little going around on JavaScript and JQuery in the UK in the way of conferences and talks. in the section 17. jQuery closures, the code <CODE> var jThis = $(this) </CODE> seems superfluous. because .each() passes objLink as the second argument to the function. Other than that, a nice overview. @Quinton, Glad you liked it my man. jQuery has come a long way since this - but the basics still hold pretty constant. 
@Terrence, The object passed to the each() callback is the same as "this". However, neither of these is the same as $(this), which is a jQuery collection containing the current DOM element context. In other words, assuming I'm remembering correctly, "this" and "objLink" are both DOM node references, not jQuery references (which would be $(this) or $(objLink)). ... it's been a while, so I might be remembering incorrectly. Fantastic intro to jQuery! I'm curious if there's a place to get one's hands on the page you used to demonstrate selectors. I think that alone could be a great demo for evangelizing for jQuery in the workplace. Wow. I couldn't leave without saying that was freaking awesome man. Mad props for you and your time. I'm much more confident to use jQuery now. Cheers from NZ! @Matthew, Glad you liked it. This was one of my first presentations... ever. I guess I never got around to making a zip file of the slides. I can do that. @Mido, Thanks my man! This is a bit old, but the basics are still the same. Glad this was able to help out. Sad its not downloadable :( Fantastic exploration! This is the most fantastic jQuery video tutorial. This is the video tutorial that made me fell in love with jQuery and JavaScript in general. This is now in top 10 search results on Google! Great Job! Do you have a plan to do an updated jQuery video. That would be greeeat! please is there any place i can download the whole folder with one link,like torrent.thanks I know this is a bit dated, but as I am just able to start using jQuery in my client's CF site recently, I've found this to be helpful. Good job. Thanks Ben! Thank you for your generosity in sharing this little gem. The time and effort you've put into this are much appreciated. Thanks guys! Sorry the video is so dated at this point. Maybe one of these days I'll try to create a more up-to-date version!! Excellent presentation ! I have just started to learn jQuery and done a few video course, yours is by far the best...very easy to follow/understand. You mentioned you may create an updated version soon...please do !
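A small example of the distinction drawn in that last exchange: inside an each() callback, this (and the second callback argument) is the raw DOM element, while $(this) wraps the same node in a jQuery collection.

$( "a" ).each(
    function( intIndex, objLink ){
        // Here, (this === objLink); both are plain DOM nodes.
        var jThis = $( this );      // jQuery wrapper around the same node

        // DOM API on the raw node...
        var rawHref = objLink.href;

        // ...jQuery API on the wrapped node.
        var attrHref = jThis.attr( "href" );
    }
);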
https://www.bennadel.com/blog/1492-an-intensive-exploration-of-jquery-with-ben-nadel-video-presentation.htm
CC-MAIN-2022-21
refinedweb
3,629
83.36
This post is part three of a series of posts introducing the Redis-ML module. The first article in the series can be found here. The sample code for this post requires several Python libraries and a Redis instance running Redis-ML. Detailed setup instructions to run the code can be found in either part one or part two of the series.

Logistic Regression

Logistic regression is another linear model for building predictive models from observed data. Unlike linear regression, which is used to predict a value, logistic regression is used to predict binary values (pass/fail, win/lose, healthy/sick). This makes logistic regression a form of classification. The basic logistic regression can be augmented to solve multiclass classification problems.

A classic example, taken from the Wikipedia article on logistic regression, plots the probability of passing an exam against the hours spent studying. Logistic regression is a good technique for this problem because we are attempting to determine pass/fail, a binary outcome. If we wanted to determine a grade or percentage on the test, simple regression would be a better technique.

To demonstrate logistic regression and how it can be used in conjunction with Redis, we will explore another classic data set, the Fisher Iris Plant Data Set.

Data Set

The Fisher Iris database consists of 150 data points labeled with one of 3 different species of Iris: Iris setosa, Iris versicolor, and Iris virginica. Each data point consists of four attributes (features) of the plant. Using logistic regression, we can use the attributes to classify an Iris into one of the three species.

The Fisher Iris database is one of the data sets included in the Python scikit-learn package. To load the data set, use the following code:

from sklearn.datasets import load_iris
iris = load_iris()

We can print out the data in a table and see that our data consists of sepal length, sepal width, petal length and petal width, all in centimeters.

   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2
5                5.4               3.9                1.7               0.4

Our target classification is encoded as integer values 0, 1, and 2. A 0 corresponds to Iris setosa, a 1 corresponds to Iris versicolor and a 2 corresponds to Iris virginica. Plotting the features against one another shows a few outliers that get misclassified, but most of our Iris types cluster together in distinct groups.

Performing a Logistic Regression

The code to perform a logistic regression in scikit is similar to the code we used previously to perform a linear regression. We first need to create our training and test sets, then we fit a logistic regression. To split the training and test sets, we use the following code:

x_train = [ x for (i, x) in enumerate(iris.data) if i%10 != 0 ]
x_test = [ x for (i, x) in enumerate(iris.data) if i%10 == 0 ]
y_train = [ y for (i, y) in enumerate(iris.target) if i%10 != 0 ]
y_test = [ y for (i, y) in enumerate(iris.target) if i%10 == 0 ]

For this example, we split our data into blocks of 10 elements, put the first element of each block into the test set and put the remaining 9 elements into the training set. To ensure our data contains selections from all three classes, we need to use a slightly more involved process in this example than in previous ones.
Once we construct our training and test sets, fitting the logistic regression requires two lines of code:

logr = LogisticRegression()
logr.fit(x_train, y_train)
y_pred = logr.predict(x_test)

The final line of code uses our trained logistic regression to predict the Iris types of our test set.

Redis Predictor

As with our linear regression example, we can build a logistic regression predictor using Redis. The Redis-ML module provides the ML.LOGREG.SET and ML.LOGREG.PREDICT commands to create and evaluate logistic regression keys.

To add a logistic regression model to Redis, you need to use the ML.LOGREG.SET command to add the key to the database. The ML.LOGREG.SET command has the following form:

ML.LOGREG.SET key intercept coef [...]

and the ML.LOGREG.PREDICT command is used to evaluate the logistic regression from the feature values and has the form:

ML.LOGREG.PREDICT key feature [...]

The order of the feature values in the PREDICT command must correspond to the order of the coefficients. The result of the PREDICT command is the probability that an observation belongs to a particular class.

To use Redis to construct a multiclass classifier, we have to emulate the One vs. Rest procedure used for multiclass classification. In the One vs. Rest procedure, multiple classifiers are created, each used to determine the probability of an observation being in a particular class. The observation is then labeled with the class it is most likely to be a member of. For our three-class Iris problem, we will need to create three separate classifiers, each determining the probability of a data point being in that particular class. The scikit LogisticRegression object defaults to One vs. Rest (ovr in the scikit API) and fits the coefficients for three separate classifiers.

To emulate this procedure in Redis, we first create three logistic regression keys corresponding to the coefficients fit by scikit:

import redis
import numpy as np

r = redis.StrictRedis('localhost', 6379)
for i in range(3):
    r.execute_command("ML.LOGREG.SET", "iris-predictor:{}".format(i), logr.intercept_[i], *logr.coef_[i])

We emulate the One vs. Rest prediction procedure that takes place in the LogisticRegression.predict function by iterating over our three keys and then taking the class with the highest probability. The following code executes the One vs. Rest procedure over our test data and stores the resulting labels in a vector:

# Run predictions in Redis
r_pred = np.full(len(x_test), -1, dtype=int)
for i, obs in enumerate(x_test):
    probs = np.zeros(3)
    for j in range(3):
        probs[j] = float(r.execute_command("ML.LOGREG.PREDICT", "iris-predictor:{}".format(j), *obs))
    r_pred[i] = probs.argmax()

We compare the final classifications by printing out the three result vectors:

# Compare results as numerical vector
print("y_test = {}".format(np.array(y_test)))
print("y_pred = {}".format(y_pred))
print("r_pred = {}".format(r_pred))

The output vectors show the actual Iris species (y_test) and the predictions made by scikit (y_pred) and Redis (r_pred). Each vector stores the output as an ordered sequence of labels, encoded as integers.

y_test = [0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
y_pred = [0 0 0 0 0 1 1 2 1 1 2 2 2 2 2]
r_pred = [0 0 0 0 0 1 1 2 1 1 2 2 2 2 2]

Redis and scikit made identical predictions, including the mislabeling of one Versicolor as a Virginica. You may not have a need for a highly-available, real-time Iris classifier, but by leveraging this classic data set you've learned how to use the Redis-ML module to implement a highly available, real-time classifier for your own data.
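To make the command shapes concrete, this is roughly what the same interaction looks like from redis-cli. The intercept and coefficient values here are placeholders rather than the ones fitted above, and the exact formatting of the replies may differ by version; the feature values are the first row of the Iris table.

127.0.0.1:6379> ML.LOGREG.SET iris-predictor:0 0.26 0.41 1.46 -2.26 -1.03
OK
127.0.0.1:6379> ML.LOGREG.PREDICT iris-predictor:0 5.1 3.5 1.4 0.2
"0.95"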
In the next post, we'll continue our examination of the features of the Redis-ML module by looking at the matrix operations supported by Redis-ML and how they can be used to solve ML problems. Until then, if you have any questions regarding these posts, connect with the author on twitter (@tague).
https://redis.com/blog/introduction-redis-ml-part-three/
CC-MAIN-2022-21
refinedweb
1,309
56.55
Mastering data structures in Ruby — Binary Trees A tree is a data structure that allows us to represent different forms of hierarchical data. The DOM in HTML pages, files, and folders in our disc, and the internal representation of Ruby programs are all different forms of data that can be represented using trees. On this post, we are going to focus on binary trees. The way we usually classify trees is by their branching factor, a number that describes how nodes multiply as we add more levels to the tree. Unsurprisingly, the branching factor of binary trees is two. This is how a binary tree looks like: 13 / \ / \ / \ 8 17 / \ / \ 1 11 15 25 /\ /\ /\ /\ x x x x x x x x Notice that we have one node at the root level, then two nodes one level below, then four nodes two levels below, and so on and so forth… Also, the tree representation in the previous section is a balanced binary tree. When we get to binary search, we will see what it means for a tree to be balanced, but for now, let’s say that a tree is balanced when its nodes are arranged in such a way that tree is as short as it possibly could be. Since a lot of search algorithms depend on the height of the trees, keep them short is crucial for performance reasons. On trees, elements are represented by nodes. In our case those nodes will have four attributes: Every node has a parent and up to two children. The only exception to this rule is the first node (called root) which has no parent. As it happens with other hierarchical data structures, there are different ways to traverse trees. Depth-first and breadth-first are the most common ones: Binary Trees Interface The interface of a binary tree is tightly coupled with the way we are going to use it. For instance, an AVL Tree (binary search) will have methods to rotate and balance whereas a regular one will not. In this post, we are going to build a general purpose binary tree that we could use to store in-memory representations of binary expressions like 1 + 2 * 3 and a=b. The same expressions in its tree form: + = / \ / \ 1 * a b /\ 2 3 On future posts, we may extend what we built here to implement a bit more “complex” stuff like binary search, data compression or priority queues. But for now, let’s stick to a modest interface. Let’s start by reviewing methods and attributes. Implementation Details Initialize This is the tree’s constructor it sets the root node of the tree an initializes its size. def initialize root self.root = root self.size = 1 end Insert Left This method inserts a new node rooted at the left child of the specified node. The complexity of this method is O(1). You will notice the code on this method is a bit cluttered because we add checks to prevent unwanted mutations. While it’s application-specific, in general when you mutated a node you have to make some arrangements on the structure of the tree, this is something that’s out of the scope of this post but will be addressed in the future. I’m not a big fan of adding guards on an educational code, but I think that in this case the cost does not overweight the benefits. def insert_left(node, data) return unless node raise "Can't override nodes :(" if node.left self.size += 1 node.left = Node.new node, data end Insert Right This method works like insert_left but uses the right child instead. The complexity of this method is O(1). 
def insert_right(node, data)
  return unless node
  raise "Can't override nodes :(" if node.right
  self.size += 1
  node.right = Node.new node, data
end

Remove Left
This method removes the subtree that's rooted at the left child of the given node. Notice that, as opposed to what happens in languages like C, in Ruby the only thing we need to do to remove the subtree is set it to nil. From there, it's up to the garbage collector to release the allocated nodes. The complexity of this method is O(n), where n is the number of nodes in the subtree.

def remove_left(node)
  if node&.left
    remove_left(node.left)
    remove_right(node.left)
    node.left = nil
    self.size -= 1
  end
end

Remove Right
This method removes the subtree that's rooted at the right child of the given node. As with remove_left, the complexity of this method is O(n).

def remove_right(node)
  if node&.right
    remove_left node.right
    remove_right node.right
    node.right = nil
    self.size -= 1
  end
end

Merge
This is a class method that creates a new binary tree by merging two existing ones. Optionally, this method allows you to specify a value for the root node of the merged tree. These are the steps to merge the two trees:
- Create a root node for the merged tree.
- Create an empty tree to hold the merge.
- Take the merged tree and point the left child of its root node to the root node of the left tree.
- Take the merged tree and point the right child of its root node to the root node of the right tree.
- Adjust the size of the merged tree.

def self.merge left_tree, right_tree, data = nil
  raise "Can't merge nil trees." unless left_tree && right_tree
  root = Node.new(nil, data)
  res = BTree.new root
  res.root.left  = left_tree.root
  res.root.right = right_tree.root
  res.size = left_tree.size + right_tree.size
  res
end

Print
This method prints the contents of the current tree recursively, applying a "pretty" format (actually, it's just indentation, but still…).

def print
  print_rec self.root
end

private

def print_rec node, indent = 0
  print_node node, indent
  print_rec node.left,  indent + 1 if node&.left
  print_rec node.right, indent + 1 if node&.right
end

def print_node node, indent
  data = node&.data&.to_s || "nil"
  puts data.rjust indent * 4, " "
end

When to use trees
Trees are one of the most used data structures in CS. In the wild you can find trees in the source code of database systems, user interfaces, data compression algorithms, parsers, and schedulers, just to name a few. So, they are pretty much everywhere.

So, that's it for general purpose binary trees. I hope you enjoyed it! You can get the source code from here. Next time, Binary Search Trees. Thanks for reading! Also, don't forget to clap if you like this post :)

PS: This post is part of a series on mastering data structures. As we move forward, subjects that were thoroughly discussed in previous posts are sometimes glossed over, if mentioned at all. If you want to get the most out of this series, I recommend you start from the very first post on singly linked lists.

PS2: If you are a member of the Kindle Unlimited program, you can read a bit more polished version of this series for free on your Kindle device.
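The traversal orders mentioned near the top of the post (depth-first and breadth-first) are easy to build on top of this interface. Below is a minimal, illustrative sketch of a breadth-first (level-order) walk; it only assumes the Node attributes (left, right, data) and the BTree root defined in this article, and it is not part of the original series' code.

# Breadth-first (level-order) traversal: visit nodes one level at a time,
# using a queue to remember the children of nodes we have already visited.
def each_level_order(tree)
  queue = [tree.root]
  until queue.empty?
    node = queue.shift          # dequeue the oldest pending node
    next unless node
    yield node.data
    queue.push node.left        # enqueue children for later visits
    queue.push node.right
  end
end

# Usage sketch: for the balanced example tree shown earlier this yields
# 13, 8, 17, 1, 11, 15, 25 in that order.
# each_level_order(tree) { |value| puts value }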
https://medium.com/amiralles/mastering-data-structures-in-ruby-binary-trees-e7c001050a52?source=collection_home---5------8-----------------------
CC-MAIN-2019-30
refinedweb
1,186
72.97
Step 5: Create and Deploy the EDI Receive Pipeline Updated: November 27, 2015 In this topic, you configure the EDI Receive bridge that receives an X12 850 PO message from an FTP server, processes it, transforms it to an ORDERS05 IDOC, and then routes it to the XML One-Way Bridge that you deployed in the previous step. To configure an EDI Receive pipeline Log into the BizTalk Services Portal. You can get the URL for the BizTalk Services Portal from your BizTalk Services subscription. For more information about logging into the portal, see. Create a partner for Fabrikam and Contoso. On the left pane, select Partners, and then from the Partners page, select Add Partner. Create an agreement between the two partners. On the Agreements page, select the EDI tab if you are not already on that tab. Then click Add. Set the following values for the General Settings tab. TABLE REMOVED Select Continue. Selecting Continue adds two new tabs: one for receive settings and the other for send settings. Each tab is for a one-way agreement between the two partners, one for receiving messages and the other for sending messages. The properties in the Receive Settings tab define how the EDI receive bridge is configured. This bridge receives incoming EDI messages that are sent to Fabrikam. Similarly, the properties in the Send Settings tab define how the EDI send bridge is configured. This bridge sends EDI messages from Fabrikam to its trading partners like Contoso. To specify the receive settings From the agreements page, select the Receive Settings tab. Enter the following values for the Transport section: For Transport Type, select FTP. In the scenario used in this tutorial, Contoso sends the X12 850 message using an FTP location. Provide the name of the FTP Server from where the messages are picked. Enter the username and password to connect to the FTP Server. Enter the relative path on the server from where to pick the X12 850 message. Enter the following values for the Protocol section: Enter whether you want to receive technical (TA1) and functional acknowledgements (997). Under Schemas, select the plus sign and specify the following values: In the Transform section, select the plus sign to add a transform to the agreement. From the drop-down list, select the X12_00401_850.xsd schema and the transform you created earlier (AzureTransformations.trfm). The schema as well as the transform is deployed to the BizTalk Services subscription when you deployed the BizTalk Service project in the previous step: On the Route page, under Route Settings, select Add to add a route destination. Set the Rule Name to SendToBridge. Under Route action, select the plus sign to add a new row and set the following values: Set Target Type to Http Header Set Header Name to Content-Type Set Value Type to Constant Set Constant Value to application/xml Under Route destination, set Transport type to Azure BizTalk Bridge and in the text box enter the entity name of the bridge on the message flow surface. For this tutorial, you specified the bridge name as B2BConnector. Using this name the bridge deployment endpoint is built, which is http://<mybiztalkservicename>.biztalk.windows.net/default/B2BConnector. With this configuration, all the messages processed by the agreement are routed to the XML One-Way Bridge bridge you deployed earlier. Select Save. On the Route page, under Message Suspension Settings, enter the Transport Type as Azure Service Bus, and then enter the following values: Set the route destination type to BasicHttpRelay. 
Enter the Service Bus namespace, issuer name, and issuer key. Enter the endpoint URL where a relay receiver service is already running. For this tutorial, specify this as Suspend. So, the complete URL where a failed message is sent is http://<servicebus_namespace>.servicebus.windows.net/Suspend. To specify the send settings From the agreements page, select the Send Settings tab. Retain the default values Inbound URL, Transform, and Batching tabs. In the Protocol tab, under Schemas, enter the following values: In the Transport section, under Transport Settings, enter the following values: Set the Transport Type to FTP/S. Enter the required values for the FTP transport. In the Transport section, under Message Suspension Settings, enter the following values: Set the Transport Type to Azure Service Bus. Set the route destination type to BasicHttpRelay. Specify the Service Bus namespace, issuer name, and issuer key. Specify the endpoint URL where a relay receiver service is already running. For this tutorial, specify this as Suspend. So, the complete URL where a failed message is sent is http://<servicebus_namespace>.servicebus.windows.net/Send_Failure. Select Deploy Agreement to deploy the agreement. Once the agreement is deployed, to test the solution, you can go ahead and drop a test PO 850 message in the folder on the FTP server that you specified as part of the agreement. More details on how to test the solution are provided in the next topic, Step 6: Test the Solution.
https://msdn.microsoft.com/nb-no/library/hh859734
CC-MAIN-2018-22
refinedweb
828
64
User Details - User Since - Nov 5 2020, 2:06 AM (10 w, 1 d) Today Eugene.Zelenko, thanks for the review! fixed Eugene.Zelenko, thanks for the review! fixed Yesterday - use back-ticks - fix the first document line - add default values - fix the first document line - add default values njames93, the purpose is to flag it indeed. This approach was found valueable in Yandex.Taxi, as it is very easy to forget that you're in a coroutine and may not use blocking API. The bug does affect performance (e.g. all coroutine threads wait for fs), it cannot be found by functional tests (as it is not a functional invariant violation) and may be rather tricky to debug (as the performance harm depends on many things like IO limits, page cache size, current load, etc.). It can be caught during code review, but it suffers from human errors. Rather than playing catch-me-if-you-can games, the check can be automated. As C/C++ standard libraries contain quite many blocking functions and C++20 gains official coroutine support, I find it valuable for the C++ community to have an already compiled list of such blocking functions and the check that uses it. Wed, Jan 13 fix Mon, Jan 11 friendly ping, any comments on the patch? Tue, Jan 5 - review fixes - drop of atomic::is_always_lock_free check Wed, Dec 30 review fixes fix the mess up Nov 27 2020 - git diff -U9999 - git dif -U9999 Nov 26 2020 - revert decls marking - move static vectors out of namespace - mark mt-unsafe decls and check for marks in exprs - fix sort order lebedev.ri, any comments? Nov 23 2020 - sort order - do not link with FrontendOpenMP Any comments on the patch? Nov 18 2020 - remove garbage files - concurrent -> concurrency - concurrent -> concurrency - split the patch apart Nov 17 2020 - move plugin to concurrent group - naming fixes - use anyOf - Libc -> FunctionSet Nov 16 2020 - minor changes to docs Nov 10 2020 - add Libc option - improve docs Nov 6 2020 - use static instead of namespace {} - don't use SourceRange() - revert unrelated changes to .rst don't include any system headers in tests
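The exchange above argues that the real value of the check is catching blocking calls that sneak into coroutine code. For illustration only, here is a small C++20 sketch of that bug pattern; the my_task coroutine type is a placeholder, and the comment about what a checker would flag is an assumption about the intent described in this review, not a statement of its exact diagnostics.

#include <coroutine>
#include <unistd.h>

// Placeholder coroutine return type with the minimal promise_type boilerplate.
struct my_task {
    struct promise_type {
        my_task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

my_task handle_request() {
    // A synchronous, blocking call inside a coroutine stalls the whole
    // worker thread, not just this coroutine -- the performance bug
    // described above that functional tests will not catch.
    sleep(1);
    co_return;
}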
http://reviews.llvm.org/p/segoon/
CC-MAIN-2021-04
refinedweb
358
68.1
Well I'm starting to study a bit on Swing and the JFrame, so please, if this looks ugly please cut me a bit of slack >.< This is what I've got:

package Core.Engine;

import javax.swing.*;
import java.awt.*;

@SuppressWarnings("serial")
public class Engine extends JFrame {

    static int i;
    static JPanel PRIME;

    public Engine(){
        JButton[] BUTTON = new JButton[9];
        PRIME = new JPanel(new GridLayout(5,6));
        String b[] = {"1", "2","4", "3", "5", "6", "7", "8", "9", "0"};
        for(i = 0; i < BUTTON.length; i++){
            BUTTON[i] = new JButton(b[i]);
            BUTTON[i].setSize(30,30);
        };
        PRIME.add(BUTTON[i]);
    }

    public static void main(String args[]){
        Engine frame = new Engine();
        frame.add(PRIME);
        frame.pack();
        frame.setVisible(true);
    }
}

I get an ArrayIndexOutOfBoundsException on line 27 and 30 which would be in the Main argument... Here's the exact error:

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 9
    at Core.Engine.Engine.<init>(Engine.java:27)
    at Core.Engine.Engine.main(Engine.java:30)

Could anyone tell me where I went wrong? The array has 9 elements. I set 9 chars... How did it go out of bounds?

Foot Note: If you have any ideas on how I could improve this code please don't be afraid to mention it!
http://www.javaprogrammingforums.com/whats-wrong-my-code/14689-what-did-i-do-wrong-here.html
CC-MAIN-2014-10
refinedweb
211
68.67
I try to use pythran on a function that needs an int array and, for the second arg, a dict with tuples of int as keys and an int as value:

myarray = np.array([[0, 0], [0, 1], [1, 1], [1, 2], [2, 2], [1, 3]])
dict_with_tuples_key = {(0, 1): 1, (3, 7): 1}

#pythran export update_dict((int, int):int dict, int[][])
def update_dict(dict_with_tuples_key, myarray):
    # do something with dict_with_tuples_key and myarray
    # return an updated dict_with_tuples_key
    return dict_with_tuples_key

Compiling it fails with:

File "/usr/lib/python2.7/inspect.py", line 526, in findsource
    file = getfile(object)
File "/usr/lib/python2.7/inspect.py", line 403, in getfile
    raise TypeError('{!r} is a built-in module'.format(object))
TypeError: <module 'sys' (built-in)> is a built-in module

From your backtrace, it seems you're importing sys. In that kind of situation, pythran tries to get the source of the imported module to compile it. As sys is a built-in module, it fails.
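A minimal sketch of the fix that follows from that answer: keep the pythran-exported module free of imports whose source pythran cannot retrieve (such as the built-in sys module), and do any sys-dependent work in the plain Python script that calls it. The file name and the function body below are illustrative assumptions, not the asker's actual code.

# update_dict_module.py -- compile with: pythran update_dict_module.py
# Note: no 'import sys' here; only modules pythran can compile (e.g. numpy).
import numpy as np

#pythran export update_dict((int, int):int dict, int[][])
def update_dict(dict_with_tuples_key, myarray):
    # Count how often each (row[0], row[1]) pair occurs in the array.
    for row in myarray:
        key = (row[0], row[1])
        if key in dict_with_tuples_key:
            dict_with_tuples_key[key] += 1
        else:
            dict_with_tuples_key[key] = 1
    return dict_with_tuples_key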
https://codedump.io/share/NFBJBjF46RCk/1/pythran-export-dict-with-tuples-as-key
CC-MAIN-2018-17
refinedweb
158
62.88
Is Your Leadership Style Killing Your Employees? By User12601034-Oracle on Apr 27, 2011 I attended a webinar yesterday by Kevin Kruse, author of We: How to Increase Performance and Profits through Full Engagement. I typically try to find at least one or two nuggets of information in the webinars that I attend, and when I registered, I figured that any webinar on employee engagement would give me at least one good nugget. Boy was I wrong – the whole webinar was engaging! Let’s start with some statistics (I’ll get to the killing part later). According to polls done by BlessingWhite, Conference Board and Gallup, fewer than 1 in 3 employees are engaged in their work; only 45% are “satisfied” with their work; and employee disengagement costs companies around $300 billion annually. Additionally, higher performing organizations tend to have more engaged employees (56%) than low performing organizations (27%); and companies ranked high in employee engagement had better shareholder return (17.9%) than companies ranked low in employee engagement (-4.9%). Now, about that killing part. The engagement surveys also showed that dissatisfied employees weighed about 5 pounds more than their colleagues and were more susceptible to cardiovascular events. Even more surprising, a person’s job satisfaction has a direct correlation on their marital happiness and on the likeliness their kids will misbehave in school (this is called the “spillover effect”). And much of one’s engagement and job satisfaction comes from a person’s interaction with his or her boss. In fact, five questions can determine the quality of boss/employee interaction: -. If you’d like, you can even take a short quiz to see if you are suffering from boss-related health issues. In the webinar, Kevin provided a “GReaT” model of leadership. The capitalization is a reminder for Growth, Recognition and Trust, which are three high-impact drivers of engagement. Leaders can take certain step to drive GReaT leadership, including: As a leader, you may want to think about the five questions and how you drive GReaT leadership. The culture that you create with your team members has an impact beyond just your team – it impacts every team member and every person in his or her family. Simply changing how you interact with your employees could lead to higher levels of engagement, fewer health related issues, and better family relationships for your employees…and wouldn’t that be a great impact to have on your universe? Sandy, just wanted to thank you the blog posts amplifying some of the themes from the webinar. Leaders are SO critical to the engagement process we really need to get people's attention about the impact on business and personal outcomes alike. Thanks! Kevin Posted by Kevin Kruse on July 13, 2011 at 07:57 AM MDT # Absolutely, Kevin. Thanks for a great webinar! Posted by Sandy E on August 12, 2011 at 04:41 AM MDT #
https://blogs.oracle.com/sandye/entry/is_your_leadership_style_killing
CC-MAIN-2016-07
refinedweb
486
51.68
Estimating the Hurdle Rate

Hurdle Rate
The investment decision rule says a firm should invest in assets only if it expects them to earn more than the hurdle rate. What should this hurdle rate be? Suppose you borrow all the funds required to fund a project, paying 12% p.a. interest on the borrowed funds; the project should earn at least 12% to be profitable. The cost of capital represents the minimum return a firm needs to earn on its projects: it is the compensation for time and risk in the use of capital by a project. How is it estimated? Since a firm raises funds from different sources, the weighted average of the individual costs is the cost of capital (the hurdle rate).

Cost of Debt
The discount rate that equates the present value of interest payments and principal repayments with the net proceeds of the debt issue or its current market price:
P_0 = \sum_{t=1}^{n} C_t/(1+k_d)^t + F_n/(1+k_d)^n
where P_0 = net amount realised on the debt issue (or current market price), C_t = periodic interest on the debentures, F_n = face value / redemption price, n = maturity period, k_d = cost of debt.
Illustration: F_n = 100, C = 14, n = 5 years, P_0 = 97, so
97 = \sum_{t=1}^{5} 14/(1+k_d)^t + 100/(1+k_d)^5
An approximation: k_d ≈ [C + (F_n − P_0)/n] / [(P_0 + F_n)/2]

Cost of Preference Capital
P_0 = \sum_{t=1}^{n} PD_t/(1+k_p)^t + F_n/(1+k_p)^n
where P_0 = net amount realised on the issue of preference shares (or CMP), PD_t = dividend on preference shares, F_n = redemption value, n = maturity period, k_p = cost of preference capital.
An approximation: k_p ≈ [PD + (F_n − P_0)/n] / [(P_0 + F_n)/2]
For irredeemable (perpetual) preference shares: P_0 = \sum_{t=1}^{\infty} PD_t/(1+k_p)^t, so k_p = PD/P_0.
Illustration: ABC Ltd. issues Rs. 1,000 face-value preference shares carrying a 12% dividend, redeemable at par after 3 years; the net amount realised today is Rs. 960 (tax rate 40%). With F_n = 1000, PD_t = 120, n = 3 years, P_0 = 960:
approximation: k_p = [120 + (1000 − 960)/3] / [(1000 + 960)/2] = 133.333/980 = 13.6054%
by trial and error on 960 = \sum_{t=1}^{3} 120/(1+k_p)^t + 1000/(1+k_p)^3: k_p = 13.7147%

Cost of Equity
There is no legal obligation to pay dividends to shareholders, and the quantum of dividends is not fixed, so the cost of equity is an implicit (not explicit) cost. It is an opportunity cost: the return forgone on the next best investment opportunity of comparable risk. Methods of computing it: the Dividend Discount Model (DDM) and the Capital Asset Pricing Model (CAPM).

Dividend Discount Model (DDM)
The discount rate that equates the present value of the stream of expected future dividends with the current market price / issue price:
P_0 = \sum_{t=1}^{\infty} D_t/(1+k_e)^t, where D_t = expected dividend per share at time t and k_e = cost of equity.

DDM with constant growth. If dividends are expected to grow at a constant rate g, and k_e > g, then
P_0 = D_1/(k_e − g), or k_e = D_1/P_0 + g
Assumptions: D_1 > 0; dividends grow at a constant rate g = ROE × b; the dividend payout ratio (1 − b) is constant.
Illustration: TrueValue Ltd intends to pay a dividend of Rs. 5 next year and expects dividends to grow at 6% each year till perpetuity; the company's market price is currently Rs. 50. With D_1 = 5, g = 6% forever, P_0 = 50: k_e = 5/50 + 6% = 10% + 6% = 16%. The constant-growth model is mostly used for companies in the mature stage of their life cycle.

How to estimate the growth rate
1. Historical growth rates: if earnings and dividend growth have been relatively stable in the past, and investors expect these trends to continue, past realised growth rates may be used to estimate expected future growth.
2. Retention growth model: firms pay out part of their net income and retain the balance; growth depends on the income retained and the rate earned on retentions, so g = ROE × retention ratio, where retention ratio = 1 − payout ratio.
3. Analysts' forecasts: security analysts provide forecasts on a regular basis, using non-constant growth.

DDM with multiple growth rates
P_0 = \sum_{t=1}^{n} D_{t-1}(1+g_t)/(1+k_e)^t + P_n/(1+k_e)^n, where P_n = D_{n+1}/(k_e − g_n)
Illustration data: P_0 = 125; D_0 = 3.50; g = 15% in years 1-3, 12% in years 4-6 and 8% from year 7 onwards.

Cost of Equity Capital: Bond-Yield plus Risk Premium
k_e = yield on the company's long-term debt + a bond risk premium (typically 3-5%). If bonds of NCE Ltd. have a yield of 11% and the bond risk premium is estimated at 3.8%, the estimated cost of equity is 14.8% (11% + 3.8%).

Cost of Equity Capital: Capital Asset Pricing Model (CAPM)
The expected rate of return on any security i is R_i = R_f + equity risk premium, that is
R_i = R_f + β_i (R_m − R_f)
where R_i = rate of return on security i, R_f = risk-free rate, R_m = rate of return on the market portfolio, β_i = beta of security i, and (R_m − R_f) is the market risk premium.
Illustration: R_f = 10%, R_m = 15%, β_A = 0.5, β_B = 1.0, β_C = 1.5:
R_A = 10% + 0.5(15% − 10%) = 12.5%; R_B = 10% + 1.0(15% − 10%) = 15.0%; R_C = 10% + 1.5(15% − 10%) = 17.5%

External and Internal Equity
Both external equity (new issues) and internal equity (retained earnings) are shareholders' funds, so shareholders expect the same return on both. Internal equity is nevertheless cheaper than external equity because new equity is issued at less than the current market price and the issue involves floatation costs. Since external equity is more expensive, an adjustment has to be made:
Internal equity: k_re = D_1/P_0 + g. External equity: k_e = D_1/I_0 + g (I_0 = net issue price).
Illustration: CMP = Rs. 100, I_0 = Rs. 95, D_1 = Rs. 5, g = 6%: k_re = 5/100 + 6% = 11%; k_e = 5/95 + 6% = 5.26% + 6% = 11.26%.

Weighted Average Cost of Capital
After calculating the cost of the individual components of the capital structure, we need the weighted average cost of capital (WACC). Weights may be either (a) market values of the various forms of financing the company intends to employ, which is consistent with the objective of maximising shareholders' wealth, or (b) book values of the capital structure. WACC should be estimated on a post-tax basis:
WACC: k_0 = k_e · E/(D+E) + k_d(1−t) · D/(D+E)

Marginal Cost of Capital
WACC calculated on the sources of capital already employed gives only a historical perspective and can at best be compared with some predetermined cost of capital. Because the most important use of the cost of capital is to evaluate investment decisions, the more relevant cost is that of raising new funds to finance new projects, not the historical cost. The weighted average cost of capital should therefore be calculated for the incremental or marginal capital: the Weighted Marginal Cost of Capital (WMCC).

Risk & Return

Stock Returns
Stock returns are considered instead of stock prices. Stock return = (P_t − P_{t-1})/P_{t-1}; log return = ln(P_t/P_{t-1}); total stock return = (P_t − P_{t-1} + D_t)/P_{t-1}.
Illustration: the price of a Reliance share is Rs. 845 today and was Rs. 625 one year ago; it paid a dividend of Rs. 25 per share. Stock return = (845 − 625)/625 = 35.20%; total stock return = (845 − 625 + 25)/625 = 39.20%.

[Chart: year-by-year Indian stock returns, 1991-92 to 2011-12, ranging roughly between -40% and +80%.] Stock returns are high but are volatile also.
Average rates of return, 1978-2011 (% per year; risk premium measured over the 1-year GOI rate):
1-year G-Sec yield: nominal 9.29, real 1.71, risk premium 0.00
Corporate bonds (AAA rated): nominal 13.50, real 6.35, risk premium 4.21
Equity shares: nominal 22.89, real 15.98, risk premium 13.61

Expected stock return: E(R_i) = \sum p_i R_i.
Illustration: Boom (probability 0.30, return 16%), Normal (0.50, 11%), Recession (0.20, 6%):
E(R) = 0.30×16% + 0.50×11% + 0.20×6% = 11.50%

Portfolio Returns
Portfolio return is the weighted average of the individual stock returns: R_p = x_1 r_1 + x_2 r_2, where x_1, x_2 are the percentages of funds invested in stocks 1 and 2 and r_1, r_2 their returns.
Illustration: expected returns of 16% on security A and 14% on security B, equal weights: E(Portfolio AB) = 0.5×16% + 0.5×14% = 15%.

Risk
Risk is the possibility of an adverse outcome. In finance, risk is defined as the likelihood of the outcome being different from the expected outcome; the actual outcome may be better or worse than expected.
1. Company-specific risks are unique to a company (law suits, strikes, successful or unsuccessful projects). Their impact can be minimised, hence they are called diversifiable risks.
2. Market risks are caused by factors that systematically affect all or most firms (war, inflation, changes in government policy, interest rates). Such risks cannot be eliminated, hence they are called non-diversifiable risk; since they originate from the system, they are also called systematic risk.
The statistical measure of risk is the standard deviation (or variance).

Portfolio Risk
Returns of two stocks and of an equally weighted portfolio AB (50% : 50%):
2008: Stock A 40%, Stock B -10%, Portfolio AB 15%
2009: -10%, 40%, 15%
2010: 35%, -5%, 15%
2011: -5%, 35%, 15%
2012: 15%, 15%, 15%
Average: 15%, 15%, 15%; standard deviation: 22.64%, 22.64%, 0.00%.
[Charts: yearly returns of Stock A, Stock B and Portfolio AB; when the stocks move opposite to each other the portfolio return is steady.]
For a two-stock portfolio:
Portfolio variance = x_1^2 σ_1^2 + x_2^2 σ_2^2 + 2 x_1 x_2 ρ_12 σ_1 σ_2
where x_1, x_2 are the percentages of funds invested in stocks 1 and 2, σ_1, σ_2 their standard deviations of return, and ρ_12 the coefficient of correlation between the two stocks.

Risk (standard deviation) of selected stocks:
Hindustan Unilever 28.60%; Hero Honda 28.80%; Infosys 29.20%; NTPC 30.00%; Bharti Airtel 33.50%; ONGC 37.30%; L&T 46.90%; Tata Motors 54.80%; Sterlite Industries 59.60%; Tata Steel 62.30%.

Portfolio risks and returns (Hero Honda and Tata Steel, weights 50% : 50%):
Stock returns 16% and 24%; stock standard deviations 29% and 62%. Portfolio return = 20.00%. Portfolio standard deviation: 45.50% if ρ = +1; 34.22% if ρ = 0; 16.50% if ρ = −1.

Reducing Risk
[Chart: portfolio standard deviation versus number of securities, falling from about 30% towards the market-risk floor as securities are added.] By combining stocks, the company-specific risks get eliminated (hence diversifiable risks), while the market risk still remains (hence non-diversifiable risk).

Risk (standard deviation and beta) of selected stocks (standard deviation / beta):
Hindustan Unilever 28.60% / 0.41; Hero Honda 28.80% / 0.57; Infosys 29.20% / 0.55; NTPC 30.00% / 0.72; Bharti Airtel 33.50% / 0.76; ONGC 37.30% / 0.95; L&T 46.90% / 1.38; Tata Motors 54.80% / 1.48; Sterlite Industries 59.60% / 1.66; Tata Steel 62.30% / 1.77.
Stocks with a high standard deviation also tend to have a high beta.

[Chart: risk-return relationship of various securities along the security market line - risk-free securities, government bonds, corporate bonds, preference shares, equity shares.]

Beta and its Estimation

Estimating Beta
The beta of a security measures its market (non-diversifiable, systematic) risk. Methods of estimating beta: the regression method, bottom-up beta, and accounting beta.

#1 Regression method
Fifteen years of stock returns (R_a) and market returns (R_m):
(12,10), (14,12), (16,13), (12,8), (-5,3), (21,14), (20,14), (16,8), (9,4), (-2,-1), (13,10), (15,16), (19,12), (15,10), (20,17)
[Scatter plot of stock return against market return.]
Regression output: intercept -0.075 (standard error 2.317, t = -0.032, p = 0.975, 95% interval -5.080 to 4.930); slope on R_m 1.307 (standard error 0.209, t = 6.264, p = 0.000, 95% interval 0.857 to 1.758); R² = 0.751, adjusted R² = 0.732, 15 observations.
R_a = -0.075 + 1.307 R_m
Regression statistics: the coefficient of determination (R²) indicates the goodness of fit, i.e. the percentage of risk attributed to market risk. R² = 0.751 means about 75% of the change in stock returns is due to changes in market returns; the remaining (1 − R²), or 25%, is caused by firm-specific reasons. The standard error indicates the estimation error between the estimated and true beta: with s.e. = 0.209, the true beta lies within 1.307 ± 0.209 with about 67% confidence.

Direct method: beta = Cov(R_a, R_m)/Var(R_m).
Covariance = \sum (R_a − avg R_a)(R_m − avg R_m)/(n−1) = 455/14 = 32.50
Variance = \sum (R_m − avg R_m)²/(n−1) = 348/14 = 24.86
β = 32.50/24.86 = 1.3075

Comparing returns
Regressing stock returns on market returns gives R_j = α + β_j R_m, while the CAPM gives R_j = R_f + β_j(R_m − R_f).
In terms of excess returns (regress the security's excess return on the market's excess return): R_j − R_f = α + β_j(R_m − R_f).
In terms of raw returns (regress raw security returns on raw market returns): R_j = R_f(1 − β_j) + β_j R_m.
In both regressions, the slope is the beta of the security. The intercept is a measure of the stock's performance relative to the market. In the excess-returns regression: if the intercept is 0, the stock performed as per the market; if it is positive (negative), the stock performed better (worse) than the market. In the raw-returns regression, the intercept α has to be compared with the predicted intercept R_f(1 − β): if α > R_f(1 − β) the stock performed better than expected, and if α < R_f(1 − β) it performed worse. This measure of stock performance - α in the excess-returns regression, or α − R_f(1 − β) in the raw-returns regression - is called Jensen's alpha.

Issues in estimating beta
Length of estimation period: a longer period provides more data points, but the firm's risk profile might change over a long period; most beta estimates use 5 years of data. Return interval: daily or intraday intervals increase the number of data points but increase the non-trading bias; monthly or weekly intervals reduce the non-trading bias. Choice of market index: most estimates use a stock market index, which should be broad based.

Risk-free rate of return (R_f)
The return on a riskless asset is the risk-free rate. A riskless asset should be a default-free security (issued by the government) with no reinvestment uncertainty, i.e. no cash flows prior to the end of the horizon period (zero-coupon securities). The tenure of the government securities depends on the cash flows being analysed: if the cash flows are 5-year cash flows, we need a 5-year rate. The risk-free rate is the rate of return on zero-coupon government securities that matches the time horizon of the cash flows being analysed.

Factors affecting beta
1. Type of business: the more sensitive the business is to market conditions, the higher the beta of the security. Cyclical firms have higher betas; food processing and tobacco firms are less sensitive to market conditions; if the product purchase is discretionary, the firm will have a higher beta (e.g. P&G/HUL versus designer wear).
2. Degree of operating leverage: DOL = (% change in operating profit)/(% change in sales) = (ΔEBIT/EBIT)/(ΔSales/Sales). Firms with high DOL have high variability in operating profits, hence higher betas.
3. Degree of financial leverage: DFL = (% change in EPS)/(% change in operating profit) = (ΔEPS/EPS)/(ΔEBIT/EBIT). As financial leverage increases, beta also increases.

Levered and unlevered beta
The assets of a levered firm are financed by debt and equity, so the asset beta is the weighted average of the equity beta and the debt beta:
β_asset (unlevered) = β_equity · E/(D+E) + β_debt · D/(D+E)
For an all-equity financed (unlevered) firm, the asset beta and the equity beta are the same. For a levered firm, if we assume the beta of debt is zero, then β_asset = β_equity · E/(D+E), i.e. β_equity = β_asset(1 + D/E); as leverage increases, the equity beta increases. Considering the tax deductibility of debt (Hamada, 1972), the equity beta of a levered firm is
β_equity = β_unlevered [1 + (1 − t) D/E]

#2 Bottom-up beta
What if historical stock prices are not available? Steps:
1. Identify the businesses in which the firm operates (pure-play firms).
2. Estimate the unlevered betas of pure-play publicly traded firms in each business.
3. Take the weighted average of the unlevered betas, using the proportion of firm value in each business as the weights.
4. Use the current values of the firm's debt and equity to compute its levered beta.
Illustration: find the beta of Co. X with equity of Rs. 517 million and debt of Rs. 113 million (tax rate 35%), given comparable firms:
A: beta 0.60, equity 250, debt 150
B: beta 1.25, equity 16, debt 26
C: beta 0.95, equity 612, debt 152
D: beta 0.75, equity 187, debt 158
E: beta 0.55, equity 32, debt 9
Average levered beta = 0.82; total equity 1,097 and total debt 495, so average D/E = 495/1097 = 0.4512.
Unlevered beta = 0.82 / [1 + (1 − 0.35)(0.4512)] = 0.63405
Levered beta of Co. X = 0.63405 [1 + (1 − 0.35)(113/517)] = 0.72413
Illustration: Company Y has an equity beta of 0.80 at a D/E of 0.2:1 (t = 30%) and intends to increase the D/E to 1:1 (assume debt has a tax advantage).
Unlevered beta = 0.80 / [1 + (1 − 0.30)(0.2)] = 0.7018
Equity (levered) beta at a D/E of 1:1 = 0.7018 [1 + (1 − 0.30)(1)] = 1.1931

#3 Accounting beta
Instead of regressing stock returns on index returns, accounting earnings can be regressed on market earnings, giving an accounting beta. Pitfalls: accounting earnings are relatively smoothed (betas are likely to be closer to one) and are likely to be influenced by accounting policies such as depreciation methods and the allocation of expenses.

Beta of select companies (beta / R²):
ABB 0.89 / 0.53; ACC 0.70 / 0.38; BHEL 1.02 / 0.59; ITC 0.53 / 0.36; Reliance Infrastructure 1.79 / 0.74; SBI 1.10 / 0.64; Suzlon 1.55 / 0.49; Unitech 1.68 / 0.42; Tata Steel 1.44 / 0.62.

Portfolio Beta
The beta of a portfolio of securities is the weighted average of the betas of the individual securities: β_p = β_1 w_1 + β_2 w_2 + ... + β_n w_n.
Illustration: ABB (beta 0.89, weight 0.30, contribution 0.267), ACC (0.70, 0.15, 0.105), BHEL (1.02, 0.45, 0.459), ITC (0.53, 0.10, 0.053); portfolio beta = 0.884.
CC-MAIN-2019-43
refinedweb
3,719
55.64
Introduction The AllegroGraph Exporter (agtool export) is a command-line utility for exporting data from a triple-store. It can use multiple CPU cores to export in parallel. The agtool program is the general program for AllegroGraph command-line operations. (In earlier releases, there was a separate agexport program. A change from earlier releases is the --username option is now the --user option.) Usage agtool export [OPTIONS] DBNAME FILE where DBNAME is an AllegroGraph triple store name, and FILE is a file name. For example, this command exports the triples from the lesmis triple-store into a file named lesmis.rdf using the RDF/XML format: agtool export --port 10035 --output rdfxml lesmis lesmis.rdf the FILE argument Note that if you use a dash ( -) for the FILE argument, then agtool export will send the data to standard output. Parallel export is not possible in this case. If exporting in parallel, then the FILE argument is used as a template for the output file names. For example, if exporting with 5 workers to /data/output/lubm.nt, then agtool export will send data to: - /data/output/lubm-0.nt - /data/output/lubm-1.nt - /data/output/lubm-2.nt - /data/output/lubm-3.nt - /data/output/lubm-4.nt Options The following options may be used with agtool export: Triple-store options - -c CATALOG, --catalog CATALOG - Specify the catalogname of the triple-store; If the store is in the root catalog, then either omit this option or use the empty string ("") 1 . The default is to use the root catalog. - --server SERVER - Specify the name of the server where the triple-store resides. - -p PORT, --port PORT - Set this to the front-end port of the server where the triple-store resides. agtool exportcan run either on the server in which the triple-store resides or remotely. If run remotely, then you must also specify a usernameand password. The default value for the port is 10035. - -u USERNAME, --user USERNAME - Specify a username for the triple-store when accessing it remotely; use with --password. (The long form of this option was --usernamein earlier releases.) - --password PASSWORD - Specify the password for the triple-store when accessing it remotely; use with --user. Main options - --save-metadata FILENAME - Save attribute definitions and the static filter to FILENAME. See Triple Attributes for information on attributes. - --blank-node-handling STYLE - Determine how blank nodes are treated when exporting in parallel. This can be together or distribute. The first places all triples with blank nodes into the same export file whereas the second allows blank nodes to be distributed to multiple files. Note that if blank nodes are distributed, then the import process must be told to treat them as if they all come from the same context (cf. agtool load's job based bulk node strategy). The default is together. - -i IF-EXISTS, --if-exists IF-EXISTS - Controls how agtool export behaves when output files already exist. append If an export file exists, then append the new data to it. overwrite If an export file exists, then delete it and write the new data. fail If an export file exists, then do not export any data. The default is to fail if any export files exist. Note that when exporting in parallel all of the output files are checked and the if-existsbehavior applies to them as a group. I.e., if if-existsis fail then the export will fail if any of the output files already exists. - -o FORMAT, --output FORMAT - Set the output format. 
This can be one of: Other options: - --compress - If specified, then the output file or files will be compressed using gzip. The default is to have no compression. - -n, --namespaces - Use namespace abbreviations when exporting (for Turtle and RDF/XML). The default is to not use namespaces. - --parallel - Use multiple output files and export workers (see --workersfor greater control). The default is to export to a single file. - --workers COUNT - Specify the number of workers to use when exporting in parallel. The default value depends on the number of CPU cores in the machine doing the export. Notes and examples: Export the lubm-50 triple-store in turtle format in parallel with 15 workers. Any triples with a blank node will be written to the same file: agtool export --port 9002 --output turtle --workers 15 lubm-50 /disk1/gwking/DATA/l50.nt Export the lubm-50 triple-store on in parallel with blank nodes distributed across multiple files. Because the number of workers is not specified, agtool export will make its own determination based on the number of CPU cores. Any existing output files will be overwritten. Output data will be compressed. agtool export --if-exists overwrite --server \ --output rdfxml \ --parallel \ --blank-node-handling distribute \ --compress \ lubm-50 /disk1/gwking/DATA/l50.nt.gz
https://franz.com/agraph/support/documentation/current/agexport.html
CC-MAIN-2018-43
refinedweb
805
57.57
When you have a file-like object opened in binary mode in Python, .read() will always return bytes. You can use either of the following solutions to get a str from the binary file.

Option 1: Decode the bytes

You can call .decode() on the bytes. Ensure you use the right encoding; utf-8 is often correct if you don't know what the encoding is, but it is not guaranteed.

binary = myfile.read()  # type: bytes
text = binary.decode("utf-8")

# Short version
text = myfile.read().decode("utf-8")

Option 2: Wrap the file so it appears like a file in text mode

Use io.TextIOWrapper like this:

import io

text_file = io.TextIOWrapper(myfile, encoding="utf-8")
text = text_file.read()
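For completeness, a short end-to-end sketch combining the two options with an actual binary file handle (the file name data.bin is a placeholder):

import io

# Option 1: read the raw bytes, then decode them
with open("data.bin", "rb") as myfile:
    text = myfile.read().decode("utf-8")

# Option 2: wrap the binary handle so that .read() returns str directly
with open("data.bin", "rb") as myfile:
    text_file = io.TextIOWrapper(myfile, encoding="utf-8")
    text = text_file.read()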
https://techoverflow.net/2019/11/09/how-to-read-str-from-binary-file-in-python/
CC-MAIN-2020-45
refinedweb
112
63.46
RL-ARM User's Guide

#include <stdio.h>
int ungetc (
    int iChar,      /* character to store */
    FILE* stream);  /* file stream */

The function ungetc stores a character back into the data stream. The parameter iChar defines the character to store. The parameter stream is a file pointer defining the data stream to write to. The function is included in the library RL-FlashFS. The prototype is defined in the file stdio.h.

The function can be invoked only once between function calls that read from the data stream. Subsequent calls to ungetc fail with an EOF return value.

fgetc, fputc

#include <rtl.h>
#include <stdio.h>

void main (void) {
  int c;
  FILE *fin;

  fin = fopen ("Test.txt","r");
  if (fin != NULL) {
    while ((c = fgetc (fin)) == ' ');   // Skip leading spaces
    ungetc (c, fin);                    // Unget the first non-space
    ungetc (c, fin);                    // This call fails with EOF. The file cursor
                                        // is now positioned to a non-space character
    fclose (fin);
  }
}
http://www.keil.com/support/man/docs/rlarm/rlarm_ungetc.htm
crawl-003
refinedweb
160
77.64
Distributing Narayana JTS via DockerGytis Trikleris Feb 9, 2015 7:12 AM Hello, I'm posting here in order to get some feedback regarding the way to distribute Narayana JTS via Docker. I have a dockerfile which starts transaction service and recovery manager and then registers them with JacORB name server running in the same container. Currently dockerfile [1] and quickstart [2] are available on my Github repos. I would move them to Narayana account once I'm sure that everybody is happy with the implementation. I've decided to use Supervisor [3] to manage transaction service, recovery manager, and same server processes which are named respectively: TM, RM, and NS. Transaction manager and name server are accessible via ports 9998 and 9999. Additionally, ports 4711 and 4712 are exposed, which are default Narayana recovery ports. More information about how to configure and run container is available in readme files. Thanks in advance, Gytis [1] [2] [3] 1. Re: Distributing Narayana JTS via DockerTom Jenkinson Feb 9, 2015 12:24 PM (in response to Gytis Trikleris) I had chance to kick the tyres of this now and think it looks to be heading in the right direction for me. I had to make a tweak to the quickstart to add the jboss.org repositories: I wonder if goldmann might have some opinion on whether it looks like the right kind of thing to get hosted at? For example the use of supervisor to launch multiple processes in one docker. 2. Re: Distributing Narayana JTS via DockerMarek Goldmann Feb 18, 2015 3:11 AM (in response to Tom Jenkinson) I had a (really) quick look at the Dockerfiles, but I have one comment. Please bear in mind that I do not know the Narayana project. So, supervisord is not the best idea to use in a container. This is not necessarily how the Docker image should look. We should really avoid it and instead create multiple images that could be linked to each other (if required). If the three services do not talk to each other and just show 3 types of services - these should probably be 3 different images. Probably another, 4th should be introduced to give a base for all 3 images or just use CMD in a smart way to make it overrideable at boot and stick with one. Of course documentation is the key here. Otherwise - I like the work you've done and I would be happy to create a repository to add narayana to jboss/* namespace on Docker HUB if this is what we want. Tom? 3. Re: Distributing Narayana JTS via DockerTom Jenkinson Feb 18, 2015 6:53 AM (in response to Marek Goldmann) Thanks for the feedback Marek, we certainly want to get something on Docker HUB but we would like to try to ensure we have the right approach and philosophy here before pushing it further. Gytis is reviewing your comments and he will respond to you shortly. I know that part of the issue is that when running the transaction manager in a different machine to the name service it is best if we can be sure the name service is running first. Maybe we could look at creating a really lightweight feature pack of wfly-core to spin up the CORBA name service and JTS in one vm. 4. Re: Distributing Narayana JTS via DockerMarek Goldmann Feb 18, 2015 6:58 AM (in response to Tom Jenkinson) Cool. You can ensure that naming service is running first by properly linking the containers: 5. Re: Distributing Narayana JTS via DockerGytis Trikleris Feb 18, 2015 8:23 AM (in response to Marek Goldmann) Hello Marek and thanks for your feedback. The reason we have three services is that all of them interact with each other. 
Name server process is used to expose transaction manager process for clients. And recovery manager process is used by transaction manager. The reason I chose to use supervisor was firstly this article on Docker web page:. I assumed it is an acceptable technique to use. Additionally, my worry was that linking multiple containers would increase already complicated networking setup. Do you reckon that our image could be rejected from Docker hub because of multiple process? Or would it be still possible to use supervisor? (I'm mainly asking to understand our options, I'm not trying to push the idea of supervisor). 6. Re: Distributing Narayana JTS via DockerMarek Goldmann Feb 18, 2015 8:31 AM (in response to Gytis Trikleris) Hiya Gytis! Nah, nobody said that because of supervisord someone will reject anything Using supervisord is an option of course. I just try to not involve it as much a possible since it's a bit of anti-pattern. The roots of Docker are: start a process in a container and do it right. Supervisord fights a bit with this introducing a init-like system. Sometimes it's good to use it, but in most cases it's not, since delegation of responsibility is a bit messed when you use supervisord. There is no network set up overhead involved when using links. The only thing you need to do is to use the prepared environment variables to point to the other service. 7. Re: Distributing Narayana JTS via DockerGytis Trikleris Feb 18, 2015 8:50 AM (in response to Marek Goldmann) Thanks, Marek. I will try splitting this into separate images an will report back on this thread. 8. Re: Distributing Narayana JTS via DockerGytis Trikleris Mar 2, 2015 6:11 AM (in response to Marek Goldmann) Here are two new docker files: narayana-docker/jts-split at master · Gytis/narayana-docker · GitHub. I've split the original docker file into two: name-server and transaction-service and linked them together. Additionally, I have modified transaction service's main method to start recovery manager in separate thread instead of using separate process. For now that change is in my repo and Narayana distribution is stored in my Dropbox. Same test as before can be used for these images. Looking forward to hear some feedback from you guys. 9. Re: Distributing Narayana JTS via DockerTom Jenkinson Mar 2, 2015 8:44 AM (in response to Gytis Trikleris) Hi Gytis, Great work so far thanks. The overall look is close to what we need in my opinion. I guess its in the process of development but just pointing out that I think the goal should be to remove narayana-docker/jts at master · Gytis/narayana-docker · GitHub and to move folder "jts-split" to just "jts" in order to make it clear of our expectations for the default setup. The first thing I think we should tweak is that I think rather than mutating a jbossts-properties.xml on disk we should use system properties. So what I mean is remove: narayana-docker/docker-entrypoint.sh at master · Gytis/narayana-docker · GitHub Then change: To include -D<OBJECT_STORE_LOC> -D<NAME_SERVICE_SETTING> The other thing I think we should tweak is, if possible, consider using port 4711 (which appears to be the default JacORB port) and therefore not require the following change. I think you would just need to change the readme port opening stuff and this setting: narayana-docker/Dockerfile at master · Gytis/narayana-docker · GitHub Those changes are all about minimizing the delta from the defaults. Do you agree or do you feel strongly on the port/config file editing? Thanks, Tom 10. 
Re: Distributing Narayana JTS via DockerGytis Trikleris Mar 2, 2015 9:29 AM (in response to Tom Jenkinson) Tom, you're right "jts-split" and "jts" are both there only temporally until we decide on the final design. I agree with Narayana changes, they could be replaced with properties. However, we will still need a way to tell jacorb to use a specific port (either by modifying config file or via system property), because by default it uses a random port. Also, 4711 will work for name server, but not for transaction service, since 4711 is taken by the recovery manager. For name server I picked 3528 since it is the same port used by WildFly. Gytis. 11. Re: Distributing Narayana JTS via DockerTom Jenkinson Mar 2, 2015 10:22 AM (in response to Gytis Trikleris) Thanks for the response, Gytis. How about changing: To be 3528 as well then? I am trying to suggest that the ORB port on the name server and the JTS might be best to match if that is possible? 12. Re: Distributing Narayana JTS via DockerGytis Trikleris Mar 2, 2015 10:27 AM (in response to Tom Jenkinson) Ah fair enough, that sounds reasonable. Which port do you prefer? 3528 or 4710? As I mentioned before, 3528 is IIOP port used in WildFly. And I picked 4710 for transaction service, so that it would be next to default recovery manager ports (4711 and 4712) 13. Re: Distributing Narayana JTS via DockerTom Jenkinson Mar 2, 2015 11:19 AM (in response to Gytis Trikleris) Lets go with 3528 - it seems sensible to align with container ports already established by WFLY for the ORB. Its a mechanic of the ORB rather than the TM really which is just leveraging it as the transport. 14. Re: Distributing Narayana JTS via DockerMark Little Mar 3, 2015 7:45 AM (in response to Tom Jenkinson) What is the solution to having a remote object store (not within the docker image)? Will there be a quickstart/tutorial around this ?
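(For illustration, a minimal sketch of the container-linking approach discussed in this thread. The image and container names below are assumptions, not the actual names from the narayana-docker repository.)

docker run -d --name name-server narayana/name-server
docker run -d --name transaction-service --link name-server:name-server narayana/transaction-service

With the link in place, Docker injects environment variables and a hosts entry for the name-server alias into the transaction-service container, which is the mechanism Marek refers to for pointing one service at the other.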
https://developer.jboss.org/thread/252190
CC-MAIN-2017-51
refinedweb
1,584
69.31
Introduction, which you saw in Part 1 (see Resources), and XMLReader, which is easier and faster than SAX, offer additional parsing approaches. All the XML extensions are now based on the libxml2 library by the GNOME project. This unified library allows for interoperability between the different extensions. This article will cover PHP5's XML parsing techniques, focusing on parsing large or complex XML documents. It will offer some background about parsing techniques, what method is best suited to what types of XML documents, and, if you have a choice, what your criteria for choosing should be. SimpleXML Part 1 provided essential information on XML and focused on quick-start Application Programming Interfaces (APIs). It demonstrated how SimpleXML, combined with the Document Object Model (DOM) as necessary, is the ideal choice if you work with straightforward, predictable, and relatively basic XML documents. XML and PHP5 Extensible Markup Language (XML) is described as both a markup language and a text-based data storage format; it offers a text-based means to apply and describe a tree-based structure to information. In PHP5, there are totally new and rewritten extensions for parsing XML. Those that load the entire XML document into memory include SimpleXML, the DOM, and the XSLT processor. Those parsers that provide you with one piece of the XML document at a time include the Simple API for XML (SAX) and XMLReader. SAX functions the same way it did in PHP4, but it's not based on the expat library anymore, but on the libxml2 library. If you are familiar with the DOM from other languages, you will have an easier time coding with the DOM in PHP5 than previous versions. XML parsing fundamentals The two basic means to parse XML are: tree and stream. Tree style parsing involves loading the entire XML document into memory. The tree file structure allows for random access to the document's elements and for editing of the XML. Examples of tree-type parsing include the DOM and SimpleXML. These share the tree-like structure in different but interoperable formats in memory. Unlike tree style parsing, stream parsing does not load the entire XML document into memory. The use of the term stream in this context corresponds closely to the term stream in streaming audio. What it is doing and why it is doing it is exactly the same, namely delivering a small amount of data at a time to preserve both bandwidth and memory. In stream parsing, only the node currently being parsed is accessible, and editing the XML, as a document, is not possible. Examples of stream parsers include XMLReader and SAX. Tree-based parsers Tree-based parsers are so named because they load the entire XML document into memory with the document root being the trunk, and all children, grandchildren, subsequent generations, and attributes being the branches. The most familiar tree-based parser is the DOM. The easiest tree-based parser to code is SimpleXML. You will look at both. Parsing with the DOM The DOM standard, according the W3C, is "... a platform and language neutral interface that will allow programs and scripts to dynamically access and update the content, structure and style of documents." The libxml2 library by the GNOME project implements the DOM, along with all its methods, in C. Since all of the PHP5 XML extensions are based on libxml2, there is complete interoperability between the extensions. This interoperability greatly enhances their functionality. 
You can, for instance, use XMLReader, a stream parser, to get an element, import it into the DOM and extract data using XPath. That is a lot of flexibility. You'll see this in Listing 5. The DOM is a tree-based parser. The DOM is easy to understand and utilize since its structure in memory resembles the original XML document. DOM passes on information to the application by creating a tree of objects that duplicates exactly the tree of elements from the XML file, with every XML element being a node in the tree. DOM is a W3C standard, which gives the DOM a lot of authority with developers due to its consistency with other programming languages. Because the DOM builds a tree of the entire document, it uses a lot of memory and processor time. The DOM in action If you're forced by your design or any other constraint to be a one trick pony in the area of parsers, this is where you want to be due to flexibility alone. With the DOM, you can build, modify, query, validate and transform XML Documents. You can use all DOM methods and properties. Most DOM level 2 methods are implemented with properties properly supported. Documents parsed with the DOM can be as complex as they come because of its tremendous flexibility. Remember, however, that flexibility comes at a price if you load a large XML document into memory all at once. The example in Listing 1 uses the DOM to parse the document and retrieves an element with getElementById. It's necessary to validate the document by setting validateOnParse=true before referring to the ID. According to the DOM standard, this requires a DTD which defines the attribute ID to be of type ID. Listing 1. Using the DOM with a basic document <?php $doc = new DomDocument; // We must validate the document before referring to the id $doc->validateOnParse = true; $doc->Load('basic.xml'); echo "The element whose id is myelement is: " . $doc->getElementById('myelement')->tagName . "\n"; ?> The getElementsByTagName() function returns a new instance of class DOMNodeList containing the elements with a given tag name. The list, of course, has to be walked through. Altering the document structure while iterating the NodeList returned by getElementsByTagName() affects the NodeList you are iterating (see Listing 2). There is no validation requirement. Listing 2. DOM getElementsByTagName method DOMDocument { DOMNodeList getElementsByTagName(string name); } The example in Listing 3 uses the DOM with XPath. Listing 3. Using the DOM and parsing with XPath <?php $doc = new DOMDocument; // We don't want to bother with white spaces $doc->preserveWhiteSpace = false; $doc->Load('book.xml'); $xpath = new DOMXPath($doc); // We start from the root element $query = '//book/chapter/para/informaltable/tgroup/tbody/row/entry[. = "en"]'; $entries = $xpath->query($query); foreach ($entries as $entry) { echo "Found {$entry->previousSibling->previousSibling->nodeValue}," . " by {$entry->previousSibling->nodeValue}\n"; } ?> Having said all of those nice things about the DOM, I'm going to wind up with an example of what not to do with the DOM just to make the point as strongly as possible, and, then, in the next example, how to save yourself. Listing 4 illustrates loading a large file into the DOM simply to extract the data from a single attribute with DomXpath. Listing 4. 
Using the DOM with XPath the wrong way, on a large XML document <?php // Parsing a Large Document with DOM and DomXpath // First create a new DOM document to parse $dom = new DomDocument(); // This document is huge and we don't really need anything from the tree // This huge document uses a huge amount of memory $dom->load("tooBig.xml"); $xp = new DomXPath($dom); $result = $xp->query("/blog/entries/entry[@ID = 5225]/title") ; print $result->item(0)->nodeValue ."\n"; ?> This final, follow-up example in Listing 5 uses the DOM with XPath in the same way, except the data is passed one element at a time by XMLReader using expand(). With this method, you can convert a node passed by XMLReader to a DOMElement. Listing 5. Using the DOM with XPath the right way, on a large XML document <?php // Parsing a large document with XMLReader with Expand - DOM/DOMXpath ); $xp = new DomXpath($dom); $res = $xp->query("/entry/title"); echo $res->item(0)->nodeValue; } } } } ?> Parsing with SimpleXML The SimpleXML extension is another choice for parsing an XML document. The SimpleXML extension requires PHP5 and includes built-in XPath support. SimpleXML works best with uncomplicated, basic XML data. Provided that the XML document isn't too complicated, too deep, and lacks mixed content, SimpleXML is simpler to use than the DOM, as its name implies. It is more intuitive if you are working with a known document structure. SimpleXML in action SimpleXML shares many of the advantages of the DOM and is more easily coded. It allows easy access to an XML tree, has built-in validation and XPath support, and is interoperable with the DOM, giving it read and write support for XML documents. You can code documents parsed with SimpleXML simply and quickly. Remember however, that, like the DOM, SimpleXML comes with a price for its ease and flexibility if you load a large XML document into memory. The following code in Listing 6 extracts <plot> from the example XML. Listing 6. Extracting the plot text <?php $xmlstr = <<<XML <?xml version='1.0' standalone='yes'?> <books> <book> <title>Great American Novel</title> <plot> Cliff meets Lovely Woman. Loyal Dog sleeps, but wakes up to bark at mailman. </plot> <success type="bestseller">4</success> <success type="bookclubs">9</success> </book> </books> XML; ?> <?php $xml = new SimpleXMLElement($xmlstr); echo $xml->book[0]->plot; // "Cliff meets Lovely Woman. ..." ?> On the other hand, you might want to extract a multi-line address. When multiple instances of an element exist as children of a single parent element, normal iteration techniques apply. The following code in Listing 7 demonstrates this functionality. Listing 7. Extracting multiple instances as $book) { echo $book->plot, '<br />'; } ? In addition to reading element names and their values, SimpleXML can also access element attributes. In the code shown in Listing 8, you access attributes of an element just as you would elements of an array. Listing 8. Demonstrating SimpleXML accessing the attributes[0]->success as $success) { switch((string) $success['type']) { case 'bestseller': echo $success, ' months on bestseller list<br />'; break; case 'bookclubs': echo $success, ' bookclub listings<br />'; break; } } ?> This final example (see Listing 9) uses SimpleXML and the DOM with XMLReader. With XMLReader, the data is passed one element at a time using expand(). With this method, you can convert a node passed by XMLReader to a DOMElement, and then to SimpleXML. Listing 9. 
Using SimpleXML with the DOM and XMLReader to parse a large XML document <?php // Parsing a large document with Expand and SimpleXML ); $sxe = simplexml_import_dom($n); echo $sxe->title; } } } } ?> Stream-based parsers Stream-based parsers are so named because they parse the XML in a stream with much the same rationale as streaming audio, working with a particular node, and, when they are finished with that node, entirely forgetting its existence. XMLReader is a pull parser and you code for it in much the same way as for a database query result table in a cursor. This makes it easier to work with unfamiliar or unpredictable XML files. Parsing with XMLReader The XMLReader extension is a stream-based parser of the type often referred to as a cursor type or pull parser. XMLReader pulls information from the XML document on request. It is based on the API derived from C# XmlTextReader. It is included and enabled in PHP 5.1 by default and is based on libxml2. Before PHP 5.1, the XMLReader extension was not enabled by default but was available at PECL (see Resources for a link). XMLReader supports namespaces and validation, including DTD and Relaxed NG. XMLReader in action XMLReader, as a stream parser, is well-suited to parsing large XML documents; it is a lot easier to code than SAX and usually faster. This is your stream parser of choice. This example in Listing 10 parses a large XML document with XMLReader. Listing 10. XMLReader with a large XML file <?php $reader = new XMLReader(); $reader->open("tooBig.xml"); while ($reader->read()) { switch ($reader->nodeType) { case (XMLREADER::ELEMENT): if ($reader->localName == "entry") { if ($reader->getAttribute("ID") == 5225) { while ($reader->read()) { if ($reader->nodeType == XMLREADER::ELEMENT) { if ($reader->localName == "title") { $reader->read(); echo $reader->value; break; } if ($reader->localName == "entry") { break; } } } } } } } ?> Parsing with SAX The Simple API for XML (SAX) is a stream parser. Events are associated with the XML document being read, so SAX is coded in callbacks. There are events for element opening and closing tags, for the content of elements, for entities, and for parsing errors. The primary reason to use the SAX parser rather than the XMLReader is that the SAX parser is sometimes more efficient and usually more familiar. A major disadvantage is that SAX parser code is complex and more difficult to write than XMLReader code. SAX in action SAX is likely familiar to those who worked with XML in PHP4, and the SAX extension in PHP5 is compatible with the version they're used to. Since it's a stream parser, it's a good choice for large files, but not as good a choice as XMLReader. This example in Listing 11 parses a large XML document with SAX. Listing 11. Using SAX to parse a large XML file <?php //This class contains all the callback methods that will actually //handle the XML data. 
class SaxClass { private $hit = false; private $titleHit = false; //callback for the start of each element function startElement($parser_object, $elementname, $attribute) { if ($elementname == "entry") { if ( $attribute['ID'] == 5225) { $this->hit = true; } else { $this->hit = false; } } if ($this->hit && $elementname == "title") { $this->titleHit = true; } else { $this->titleHit =false; } } //callback for the end of each element function endElement($parser_object, $elementname) { } //callback for the content within an element function contentHandler($parser_object,$data) { if ($this->titleHit) { echo trim($data)."<br />"; } } } //Function to start the parsing once all values are set and //the file has been opened function doParse($parser_object) { if (!($fp = fopen("tooBig.xml", "r"))); //loop through data while ($data = fread($fp, 4096)) { //parse the fragment xml_parse($parser_object, $data, feof($fp)); } } $SaxObject = new SaxClass(); $parser_object = xml_parser_create(); xml_set_object ($parser_object, $SaxObject); //Don't alter the case of the data xml_parser_set_option($parser_object, XML_OPTION_CASE_FOLDING, false); xml_set_element_handler($parser_object,"startElement","endElement"); xml_set_character_data_handler($parser_object, "contentHandler"); doParse($parser_object); ?> Summary PHP5 offers an improved variety of parsing techniques. Parsing with the DOM, now fully compliant with the W3C standard, is a familiar option, and is your choice for complex but relatively small documents. SimpleXML is the way to go for basic and not-too-large XML documents, and XMLReader, easier and faster than SAX, is the stream parser of choice for large documents. 3: Advanced techniques to read, manipulate, and write XML (Cliff Morgan, developerWorks, March 2007): Learn more techniques to read, manipulate, and write XML in PHP5 in this final article of a three-part series on XML for PHP developers. - SAX, the power API (Benoît Marchal, developerWorks, August 2001): Read this introduction to SAX, compare DOM and SAX, and then put SAX to work. - Reading and writing the XML DOM in PHP (Jack Herrington, developerWorks, December 2005): Explore three methods to reading XML: the DOM library, the SAX parser, and regular expressions. Also, look at how to write XML using DOM and PHP text templating. -): In this tip, explore APIs for XML pipelines and find why the familiar XMLReader interface is appropriate for many XML components. -'ve read this article, post your comments and thoughts in this forum moderated by the XML zone editors. - XML zone discussion forums: Participate in any of several XML-centered forums. - developerWorks blogs:.
http://www.ibm.com/developerworks/library/x-xmlphp2/index.html
CC-MAIN-2014-10
refinedweb
2,532
54.02
On Wed, May 16, 2001 at 02:36:44PM -0700, H. Peter Anvin wrote:
> > But.
> >
> > It's still completely braindamaged: (a) these interfaces aren't
> disjoint. They refer to the same device, and will interfere with each
> other; (b) it is highly undesirable to tie the naming to the interfaces
> in this way. It further restricts the namespaces you can export, for one
> thing.

We do this already with ide-scsi. A device is visible as /dev/hda and /dev/sda at the same time. Or think IDE-CDRW: /dev/hda, /dev/sr0 and /dev/sg0. All at the same time.

It is perfectly normal to export different interfaces for the same device. This is basically what subfunctions on PCI do: same device with different interfaces. Just that we do it through a driver with ide and through the hardware with a multi function PCI card.

Applications don't care about devices. They care about entities that have capabilities and programming interfaces. What they _really_ are and if this is only emulated is not important.

Sorry, I don't see your point here :-
https://lkml.org/lkml/2001/5/16/123
CC-MAIN-2022-21
refinedweb
177
61.43
Homeland Security Uncovers Critical Flaw in X11 517 Amy's Robot writes ." While serious, the flaw has already been corrected. OpenBSD fixed on Jan. 21, 2000 (Score:4, Informative) Re:OpenBSD fixed on Jan. 21, 2000 (Score:2) Re:OpenBSD fixed on Jan. 21, 2000 (Score:5, Informative) FYI, they do often send the cleaned version back to the codes maintainers, but they can't force them to use the re-arranged code, or port it to other systems. Sorry. Re:OpenBSD fixed on Jan. 21, 2000 (Score:5, Funny) That is one brilliant policy! Kudos to whomever implemented that! It reminds of an incedent about 12 years ago. A bunch of us entry level programmers were sitting around and this one guy pipes up and says "Look! I wrote an entire function (it was C) in one line!" He did, too. It was one of those 'for' loops with a 'while' and a bunch of things in one line. It was impossible to read. I just shook my head and said, "If there's a bug in that code, and I get assigned to it, I'm coming for you!" Re:OpenBSD fixed on Jan. 21, 2000 (Score:5, Interesting) That reminds me of the Kernighan quote, which I heartily agree with: Re:OpenBSD fixed on Jan. 21, 2000 (Score:5, Insightful) Funny, and almost right. Put all your brains, but half of your cleverness into coding. IOW, use all your intellect to simplify the code, and only be "clever" (that's a mild pejorative if you haven't figured it out by now) when there's no other way to accomplish the task. I have to admit, though, that I was young once, and foolish, and thought it was the height of brilliance to write code (especially C, but even Pascal) in as few lines as possible. Re:OpenBSD fixed on Jan. 21, 2000 (Score:3, Insightful) And the collorary to that: If you are (trying to be) clever, leave comments about what you're doing. Whoever might have to review/fix your code will greatly appriciate it. Remember, that person might be YOU. While I still try to be clever a little too often, it makes it incredibly much easier to fix. Re:OpenBSD fixed on Jan. 21, 2000 (Score:3, Informative) HTH. Cheers. Re:OpenBSD fixed on Jan. 21, 2000 (Score:5, Funny) More specifically, March 10th of 2006. Seven weeks ago. Best part was the CVS log: Re:OpenBSD fixed on Jan. 21, 2000 (Score:3, Insightful) Related news (Score:5, Funny) Government officials were unwilling to cite their sources for this information instead choosing to simply say "we are watching you". Re:Related news (Score:5, Funny) Re:Related news (Score:5, Interesting) It all depends... (Score:3, Funny) Have you paid your Moses Fee? (let my packets go....) [as sung to 'let my people go'] Re:Related news (Score:3, Funny) (that's the job of Congress and industry trade groups) Re:Related news (Score:3, Insightful) Re:Related news (Score:3, Funny) $#$#%... [signal lost] Re:Related news (Score:4, Insightful) Re:Related news (Score:5, Funny) Re:Related news (Score:5, Informative) You're misinterpreting what the problem was. It was a change from this: if (getuid() == 0 || geteuid != 0) to this: if (getuid() == 0 || geteuid() != 0) Re:Related news (Score:5, Insightful) You're misinterpreting what the problem was. It was a change from this: if (getuid() == 0 || geteuid != 0) to this: if (getuid() == 0 || geteuid() != 0) This is why stuff should compile *without warnings*. It drives me nuts to compile something and see hundreds of warnings spit out. (And yes, gcc will throw a warning if you compare a function pointer with 0 instead of NULL) Re:Related news (Score:5, Funny) It drives me nuts too. 
That's why i use the -fsyntax-only option whenever I compile anything. It gets rid of the warnings so you know your code is safe! Re:Related news (Score:3, Informative) I don't know about ANSI, but ISO/IEC 9899:1999(E) (a.k.a. "C99"), under section 7.17 "Common definitions <stddef.h>" states: Under section 6.3.2.3 "Pointers", the "null po Re:Caution: Sometimes 0 != NULL (Score:3, Informative) You need a better compiler. Re:Related news (Score:3, Insightful) Re:Related news (Score:3, Insightful) I think you owe the GP an apology. UIDs (Score:5, Informative) The effective UID is normally associated with permission to access files. Well, Linux actually uses the filesystem UID (fsuid or fuid) for that, but that one nearly always tracks the effective UID for compatibility. There is also a saved UID (suid or svuid) that is helpful for apps that need to swap UIDs back and forth. It's not used for anything else. Re:Related news (Score:3, Funny) I just saw a story.. (Score:3, Funny) Re:Related news (Score:4, Funny) No, no, that's a flaw in X10, not X11. That missing remote behaviour is an undocumented feature. Only one? (Score:3, Interesting) Re:Only one? (Score:4, Funny) Only one that they are telling us about... Way to go, boys! (Score:5, Funny) Re:Way to go, boys! (Score:2) Another score for open source! (Score:3, Insightful) (And yes, I know that some gov't agencies have a deal to view the Windows source code, but there are WAAAY fewer eyeballs looking at it, and from what I've heard the code is a big badly documented mess.) Re:Another score for open source! (Score:3, Funny) Excluding Outlook Express I guess. Re:Another score for open source! (Score:3, Funny) Perhaps it's part of their market effort to get people to uprade to Outlook. Any word on the fix? (Score:5, Funny) A missing parentheses in a bit of code is to blame...the flaw has already been corrected. Any word on exactly what the fix was? Re:Any word on the fix? (Score:2) Re:Any word on the fix? (Score:5, Funny) Re:Any word on the fix? (Score:2) How do you know they didn't just remove the match-less parenthesis instead? Re:Any word on the fix? (Score:4, Funny) Re:Any word on the fix? (Score:5, Informative) Success (Score:3, Funny) I wonder (Score:3, Funny) Re:I wonder (Score:4, Funny) > by reading the binary or by utilizing a machine-coded matrix? I don't know, but I bet Chloe O'Brian is lurking nearby. And she's probably scowling. watch out for their patches, though (Score:5, Funny) OS X? (Score:4, Interesting) Re:OS X? (Score:5, Informative) Easy (Score:3, Funny) Advisory (Score:2, Insightful) Crap. (Score:2) Re:Crap. (Score:2) Not Quite (Score:5, Funny) Actually, it was not a missing parenthesis, but a missing parenthetical.double r; r = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) ); if ( r < 0.5 ) gotroot(true); And the patched code:double r; r = ( (double)rand() / ((double)(RAND_MAX)+(double)(1)) ); if ( r < 0.5 ) gotroot(true); (just kidding!) Missing *pair* of parentheses (Score:5, Informative) This results in making use of the function address rather than the return value of the function, which could cause difficulties. Re:Missing *pair* of parentheses (Score:3, Informative) gcc 3.4.3 says all is fine. 
You can make it complain if you change geteuid != 0 to !geteuid - then it points out "warning: the address of `geteuid', will always evaluate as `true'" This is not a remote root vunerability (Score:5, Insightful) Re:This is not a remote root vunerability (Score:3, Insightful) AFAIK this exploit can be used over the net, but only if you've enabled remote logins in your Xconf. I'm not aware of any distro that does that by default, and the Xconf "sample" that comes with XFree86 or Xorg both have remote logins disabled. I realize that it's too much too assume that anyone geek enough to enable remote X sessions is also geek enough to protect his system adequately, but most of the time that will be the case. Re:This is not a remote root vunerability (Score:4, Informative) Missing the point..... (Score:5, Interesting) I wonder how many potential security holes Coverity's uncovered by scanning Windows source....oh wait....they can't. Well I'm sure if they signed an NDA they could tell M$ and get it fixed in a....um...err...sorry, you'll have to wait for the next patch cycle. Re:Missing the point..... (Score:5, Interesting) While I hate to sound like all the other OSS apologists that have posted so far ("yeah there's an exploit, but think of how many we could find if we could run it on the Windows source!" and other such tripe that ignores the fact that a serious bug was found in OSS software), your argument is a bunch of crap. You're basically saying that exploits in closed-source software are unknown and unpublicized, which is ridiculous. As for your Apache example, it would be just as simple to see what version of IIS a machine is running and look through MS KB to find the known exploits against it. Or look at bugtraq. Or anywhere else on the Internet. Just because the source is a secret doesn't mean the details of the available exploits are too. Oh and knowing the line of source code on which that the error exists is entirely irrelevant to the discussion -- having that knowledge doesn't make using an exploit any easier or more difficult. It may assist in developing new exploits, but when attempting to use one that has been found, that knowledge is superfluous. Critique... (Score:5, Interesting) That last one makes things tough. How can you have security when everything is known? Well, in practice that is the only context security is even possible. "Security through obscurity" really means "we don't know what our opponents know and we're not even sure what we know". If, however, you assume that your opponents know everything then you don't take shortcuts. You plan for contingencies, you have fallback positions, you have not just a plan but a roadmap of possibilities and how to deal with them. (At least, for any scenario too complex to actually have a complete solution for. For simpler problems, such as a chess puzzle or - for the past decade - the entire game of draughts, it is possible to map a complete, guaranteed winning strategy that will work no matter what the opponent does. Such a solution exists for the complete game of Chess and indeed for the complete game of Go, but has not yet been found. For any given computer system, such a solution must also exist for the operator/admin, but the chief problem has always been to get them to bother even putting the bits of solution that are known in place.) Wow. Homeland Security.... (Score:5, Funny) Jack: I'm running out of time. I need that salelite image. Chloe: I opened a socket into a NASA server and retasking the satelite. 
Jack: Great, download the image to my PDA. Chloe: I need your IP address. Jack: 1.2.123.129 Chloe: I'm having some trouble. I'm hacking into a secure server at CTU, and sending the image to your PDA. Jack: I've got it. Thanks Chloe. Chloe: Whatever... Where was the warning? (Score:3, Interesting) the usual confusion (Score:5, Insightful) It's pretty sad that Windows and Macintosh have conditioned people to think that every window system is just a piece of code; the notion that a window system could be an API standard with multiple implementations doesn't seem to occur tothem. seriously? (Score:3, Insightful) And even those window servers are compiled from sources derived from the reference sources, with patches. Do you actually know of any implementations of X other than the two you Re:the usual confusion (Score:3, Insightful) The name 'X11' effectively refers to a code base because the 'sample implementation', which was extended for specific hardware by XFree86 and X.org, is the basis of almost all X Servers in existance. For example, Sun and HP both ship their own X Servers, but the base upon which they implemented their device-depen Mac OS X Tiger (Score:3, Interesting) Re:Mac OS X Tiger (Score:3, Interesting) [freedesktop.org] Difference (Score:3, Interesting) Critical vulnerability in X11, missing parens are to blame, report: "missing parens in code leaves X11 vulnerable, the problem is fixed." --vs-- Critical vulnerability in Windows, missing parens are to blame (but that's under NDA), report: "the incompetent programmers of the Redmont monopolist did it again, your Windows is totally open to hackers due to a bad, bad vulnerability. While we're on this, let's discuss also how OSX and Linux are infinitely cooler than Windows will ever be, and how Windows users are clueless idiots." Re:Here is the actual flaw: (Score:2, Funny) (X11 sucks monkey cock Re:Already Corrected? (Score:5, Insightful) Re:Already Corrected? (Score:5, Funny) Re:Already Corrected? (Score:3, Insightful) Re:Already Corrected? (Score:2) Your servers are running X? What for? Re:Already Corrected? (Score:2) Re:Already Corrected? (Score:4, Insightful) Re:Already Corrected? (Score:2) Re:Already Corrected? (Score:2) Re:Already Corrected? (Score:2) Oracle installation runs as an X11 client, and requires that only the client libraries be installed. The X11 server runs on the administrator's desktop. Of course, TFA doesn't bother to explain if the hole is in the server or the client libraries. I'm assuming they mean the server, but who the hell knows? Agree with the sentiment, but.... (Score:3, Insightful) Why? (Score:3, Informative) Re:Already Corrected? (Score:2) *triggered*! Doing: ssh foo;su -;apt-get update;apt-get dist-upgrade; ssh bar... What auto-update services were you talking about, again? As restarting most daemons is likely to cause disruption, you can't do this without thinking; thus, fully automatic updates are a bad idea unless the users are mindless. As servers are not operated by monkeys but by people who are *supposed* to have a clue, notification is a must, but actually applying the update shouldn't be done as a cronjob. Only 6.9 and 7.0 (Score:2) As this only affects 6.9 and 7.0 (RTFM), you'd need some form of auto-update to actually be exposed. Most distroes are still at 6.8. M. Re:Only 6.9 and 7.0 (Score:2) ... or Article, whatever suits you... I sure whish I could edit my own posts sometimes. M. Re:Already Corrected? (Score:2) Its Linux we're talking about. 
It might upgrade X11 though - but thats a good thing. Re:Already Corrected? (Score:4, Funny) Yes. Re:Sometimes gentoo is a pain. (Score:5, Insightful) Not reading the article doesn't seem to be much of a problem. It's really not very clear. For example, is this a problem with X.org X11 specifically? Is Apple's X11.app affected? The article just says the problem is with "The X Window System", without mentioning any particular implementations. It took some digging to find the actual advisory: Re:Sometimes gentoo is a pain. (Score:3, Informative) If you're running ~x86, then you've got the vulnerable version. It's a local exploit, one that is trivially simple for an experienced programmer to use. Re:Sometimes gentoo is a pain. (Score:3, Informative) Re:So does this mean? (Score:2) Little known fact... (Score:5, Funny) The compiler just does what you ask. (Score:5, Informative) Re:So does this mean? (Score:5, Insightful) I had a quick look on Coverity's website and this appears to be the relevant line of code: - if (getuid() == 0 || geteuid != 0) + if (getuid() == 0 || geteuid() != 0) In the case of the first line, "geteuid != 0" is valid C code but checks whether or not the address of the geteuid function is 0. The second line is what the programmer intended to write, which calls the geteuid function and checks the value returned by that function. The problem (if there is one) lies with the language, not the compiler, since both of the above lines are legal C code. Solutions to this kind of problem probably involve both a movement towards higher level languages (which are typically more verbose and don't allow low-level memory manipulation), and more extensive static code analysis. In the case of Xorg and the kernel, moving to a higher level language isn't really an option (not yet, at least). Re:So does this mean? (Score:3, Interesting) Solutions to this kind of problem probably involve both a movement towards higher level languages (which are typically more verbose and don't allow low-level memory manipulation) I think we can both agree Python is a higher level language. And guess what: import os if os.getuid() != 0 or os.geteuid = 0: is completely valid. It's not high level vs low level languages here that's at issue. It' Re:So does this mean? (Score:3, Funny) So no, it is indeed just a closing paranthesis that is missing. Why exactly that bloke considered this 'seemingly harmless', I don't know though... that is rather like saying "The car crash was caused by something as seemingly harmless as a severed brakeline." Re:How did it get through? (Score:2) as in example if(somefunc(foo > 0)) {bar} it compiles alright and even works, but it really isnt somefunc(foo) > 0 that is getting tested. the mistake is an easy one to make, and most modern languages consider it valid (even java if the func accepts a boolean argument). i never really understood WHY is the X run as root, write a god damn device wrapper that keeps the device handlers separately in root permissions and keep the X it Re:How did it get through? (Score:2) if (((people((wouldstop() == TRUE)(((&& (using_shitty_shortcuts() == FALSE)))))))) { } It's possible that something like this may be easier to spot. And while we're at it, start using your curly braces correctly as well. Re:Should have written it in Lisp! (Score:2) That is used as test each semester for MIT students. So, if it were available on the web, then it would remove an afternoons work. Re:I don't understand the intention of the fixed c (Score:4, Insightful)
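For readers who want to see the bug outside of X11, here is a small self-contained C sketch (not the actual X.Org source) of the pattern discussed above. Without the parentheses, geteuid is the address of the function, which can never be null, so the right-hand side of the || is always true and the guarded branch always runs; recent gcc with -Wall flags it (the -Waddress warning), which is roughly how this class of mistake gets caught.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Buggy form: compares the ADDRESS of the geteuid function with 0.
       A function's address is never null, so this branch is taken for
       every user, not just the ones the check was meant to allow. */
    if (getuid() == 0 || geteuid != 0)
        printf("unpatched test: always true\n");

    /* Fixed form: actually calls geteuid() and tests the returned UID. */
    if (getuid() == 0 || geteuid() != 0)
        printf("patched test: evaluated normally\n");

    return 0;
}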
https://slashdot.org/story/06/05/02/2216235/homeland-security-uncovers-critical-flaw-in-x11
CC-MAIN-2018-09
refinedweb
3,107
66.74
Access our original blog post announcing ACS retirement.

Classic Azure Portal retired April 2018
As of April 2nd 2018, the classic Azure Portal located at will be completely retired, and all requests will be redirected to the new Azure Portal at. ACS namespaces will not be listed in the new Azure Portal whatsoever. If you need to create, delete, enable, or disable an ACS namespace going forward, please contact Azure support. Starting from May 1 you will not be able to create new ACS namespaces. You can still manage existing namespace configurations by visiting the ACS management portal directly, located at https://{your-namespace}.accesscontrol.windows.net. This portal allows you to manage service identities, relying parties, identity providers, claims rules, and more. It will be available until November 7, 2018.

Who is affected by this change?
This announcement affects any customer who has created one or more ACS namespaces in their Azure subscriptions. If your apps and services do not use Access Control Service, then you have no action to take.

What action is required?
If you use ACS in some capacity, you should immediately begin your migration strategy. In the.
https://azure.microsoft.com/en-gb/blog/7-month-retirement-notice-access-control-service/
CC-MAIN-2018-39
refinedweb
191
56.96
Mean Value Coordinates for Closed Triangular Meshes
Tao Ju, Scott Schaefer, Joe Warren
Rice University

Figure 1: Original horse model with enclosing triangle control mesh shown in black (a). Several deformations generated using our 3D mean value coordinates applied to a modified control mesh (b,c,d).

Abstract
Constructing a function that interpolates a set of values defined at vertices of a mesh is a fundamental operation in computer graphics. Such an interpolant has many uses in applications such as shading, parameterization and deformation. For closed polygons, mean value coordinates have been proven to be an excellent method for constructing such an interpolant. In this paper, we generalize mean value coordinates from closed 2D polygons to closed triangular meshes. Given such a mesh P, we show that these coordinates are continuous everywhere and smooth on the interior of P. The coordinates are linear on the triangles of P and can reproduce linear functions on the interior of P. To illustrate their usefulness, we conclude by considering several interesting applications including constructing volumetric textures and surface deformation.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Boundary representations; Curve, surface, solid, and object representations; Geometric algorithms, languages, and systems

Keywords: barycentric coordinates, mean value coordinates, volumetric textures, surface deformation

1 Introduction
Given a closed mesh, a common problem in computer graphics is to extend a function defined at the vertices of the mesh to its interior. For example, Gouraud shading computes intensities at the vertices of a triangle and extends these intensities to the interior using linear interpolation. Given a triangle with vertices {p_1, p_2, p_3} and associated intensities {f_1, f_2, f_3}, the intensity at point v on the interior of the triangle can be expressed in the form

ˆf[v] = Σ_j (w_j / Σ_i w_i) f_j    (1)

where w_j is the area of the triangle {v, p_{j+1}, p_{j+2}}. In this formula, note that each weight w_j is normalized by the sum of the weights, Σ_i w_i, to form an associated coordinate. The interpolant ˆf[v] is then simply the sum of the f_j times their corresponding coordinate. Mesh parameterization methods [Hormann and Greiner 2000; Desbrun et al. 2002; Khodakovsky et al. 2003; Schreiner et al. 2004; Floater and Hormann 2005] and freeform deformation methods [Sederberg and Parry 1986; Coquillart 1990; MacCracken and Joy 1996; Kobayashi and Ootsubo 2003] also make heavy use of interpolants of this type. Both applications require that a point v be represented as an affine combination of the vertices on an enclosing shape. To generate this combination, we simply set the data values f_j to be their associated vertex positions p_j. If the interpolant reproduces linear functions, i.e.;

v = Σ_j (w_j / Σ_i w_i) p_j,

the coordinate functions are the desired affine combination.

For convex polygons in 2D, a sequence of papers, [Wachspress 1975], [Loop and DeRose 1989] and [Meyer et al. 2002], have proposed and refined an interpolant that is linear on its boundaries and only involves convex combinations of data values at the vertices of the polygons. This interpolant has a simple, local definition as a rational function and reproduces linear functions. [Warren 1996; Warren et al. 2004] also generalized this interpolant to convex shapes in higher dimensions. Unfortunately, Wachspress's interpolant does not generalize to non-convex polygons. Applying
Appled to closed polygons, our constructon reproduces D mean value coordnates. We then apply our method to closed trangular meshes and construct 3D mean value coordnates. (In ndependent contemporaneous work, [Floater et al. 005] have proposed an extenson of mean value coordnates from D polygons to 3D trangular meshes dentcal to secton 3..) Next, we derve an effcent, stable method for evaluatng the resultng mean value nterpolant n terms of the postons and assocated values of vertces of the mesh. Fnally, we consder several practcal applcatons of such coordnates ncludng a smple method for generatng classes of deformatons useful n character anmaton. Mean value nterpolaton (c) Fgure : Interpolatng hue values at polygon vertces usng Wachspress coordnates (a, b) versus mean value coordnates (c, d) on a convex and a concave polygon. the constructon to such a polygon yelds an nterpolant that has poles (dvsons by zero) on the nteror of the polygon. The top porton of Fgure shows Wachspress s nterpolant appled to two closed polygons. Note the poles on the outsde of the convex polygon on the left as well as along the extensons of the two top edges of the non-convex polygon on the rght. More recently, several papers, [Floater 997; Floater 998; Floater 003], [Malsch and Dasgupta 003] and [Hormann 004], have focused on buldng nterpolants for non-convex D polygons. In partcular, Floater proposed a new type of nterpolant based on the mean value theorem [Floater 003] that generates smooth coordnates for star-shaped polygons. Gven a polygon wth vertces p j and assocated values f j, Floater s nterpolant defnes a set of weght functons w j of the form tan w j = [ α j ] + tan p j v [ ] α j (d). () where α j s the angle formed by the vector p j v and p j+ v. Normalzng each weght functon w j by the sum of all weght functons yelds the mean value coordnates of v wth respect to p j. In hs orgnal paper, Floater prmarly ntended ths nterpolant to be used for mesh parameterzaton and only explored the behavor of the nterpolant on ponts n the kernel of a star-shaped polygon. In ths regon, mean value coordnates are always non-negatve and reproduce lnear functons. Subsequently, Hormann [Hormann 004] showed that, for any smple polygon (or nested set of smple polygons), the nterpolant ˆf[v] generated by mean value coordnates s well-defned everywhere n the plane. By mantanng a consstent orentaton for the polygon and treatng the α j as sgned angles, Hormann also shows that mean value coordnates reproduce lnear functons everywhere. The bottom porton of Fgure shows mean value coordnates appled to two closed polygons. Note that the nterpolant generated by these coordnates possesses no poles anywhere even on non-convex polygons. Contrbutons Horman s observaton suggests that Floater s mean value constructon could be used to generate a smlar nterpolant for a wder class of shapes. In ths paper, we provde Gven a closed surface P n R 3, let p[x] be a parameterzaton of P. (Here, the parameter x s two-dmensonal.) Gven an auxlary functon f[x] defned over P, our problem s to construct a functon ˆf[v] where v R 3 that nterpolates f[x] on P,.e.; ˆf[p[x]] = f[x] for all x. Our basc constructon extends an dea of Floater developed durng the constructon of D mean value coordnates. To construct ˆf[v], we project a pont p[x] of P onto the unt sphere S v centered at v. Next, we weght the pont s assocated value f[x] by p[x] v and ntegrate ths weghted functon over S v. 
To ensure affne nvarance of the resultng nterpolant, we dvde the result by the ntegral of the weght functon p[x] v taken over S v. Puttng the peces together, the mean value nterpolant has the form x ˆf[v] = w[x,v] f[x]ds v x w[x,v]ds (3) v where the weght functon w[x, v] s exactly p[x] v. Observe that ths formula s essentally an ntegral verson of the dscrete formula of Equaton. Lkewse, the contnuous weght functon w[x, v] and the dscrete weghts w j of Equaton dffer only n ther numerators. As we shall see, the tan [ ] α terms n the numerators of the w j are the result of takng the ntegrals n Equaton 3 wth respect to ds v. The resultng mean value nterpolant satsfes three mportant propertes. Interpolaton: As v converges to the pont p[x] on P, ˆf[v] converges to f[x]. Smoothness: The functon ˆf[v] s well-defned and smooth for all v not on P. Lnear precson: If f[x] = p[x] for all x, the nterpolant ˆf[v] s dentcally v for all v. Interpolaton follows from the fact that the weght functon w[x, v] approaches nfnty as p[x] v. Smoothness follows because the projecton of f[x] onto S v s contnuous n the poston of v and takng the ntegral of ths contnuous process yelds a smooth functon. The proof of lnear precson reles on the fact that the ntegral of the unt normal over a sphere s exactly zero (due to symmetry). Specfcally, p[x] v x p[x] v ds v = 0 snce p[x] v p[x] v s the unt normal to S v at parameter value x. Rewrtng ths equaton yelds the theorem. v = x p[x] / p[x] v ds v x p[x] v ds v 3 Notce that f the projecton of P onto S v s one-to-one (.e.; v s n the kernel of P), then the orentaton of ds v s non-negatve, whch guarantees that the resultng coordnate functons are postve. Therefore, f P s a convex shape, then the coordnate functons are postve for all v nsde P. However, f v s not n the kernel of P, then the orentaton of ds v s negatve and the coordnates functons may be negatve as well. 3 Coordnates for pecewse lnear shapes In practce, the ntegral form of Equaton 3 can be complcated to evaluate symbolcally. However, n ths secton, we derve a smple, closed form soluton for pecewse lnear shapes n terms of the vertex postons and ther assocated functon values. As a smple example to llustrate our approach, we frst re-derve mean value coordnates for closed polygons va mean value nterpolaton. Next, we apply the same dervaton to construct mean value coordnates for closed trangular meshes. 3. Mean value coordnates for closed polygons Consder an edge E of a closed polygon P wth vertces {p, p } and assocated values { f, f }. Our frst task s to convert ths dscrete data nto a contnuous form sutable for use n Equaton 3. We can lnearly parameterze the edge E va p[x] = φ [x]p where φ [x] = ( x) and φ [x] = x. We then use ths same parameterzaton to extend the data values f and f lnearly along E. Specfcally, we let f[x] have the form f[x] = φ [x] f. Now, our task s to evaluate the ntegrals n Equaton 3 for 0 x. Let E be the crcular arc formed by projectng the edge E onto the unt crcle S v, we can rewrte the ntegrals of Equaton 3 restrcted to E as xw[x,v] f[x]de x w[x,v]de = w f (4) w where weghts w = φ [x] x p[x] v de. Our next goal s to compute the correspondng weghts w for edge E n Equaton 4 wthout resortng to symbolc ntegraton (snce ths wll be dffcult to generalze to 3D). Observe that the followng dentty relates w to a vector, w (p v) = m. (5) where m = p[x] v x de s smply the ntegral of the outward unt p[x] v normal over the crcular arc E. 
We call m the mean vector of E, as scalng m by the length of the arc yelds the centrod of the crcular arc E. Based on D trgonometry, m has a smple expresson n terms of p and p. Specfcally, To evaluate the ntegral of Equaton 3, we can relate the dfferental ds v to dx va ds v = p [x].(p[x] v) p[x] v dx where p [x] s the cross product of the n tangent vectors p[x] to P at x p[x]. Note that the sgn of ths expresson correctly captures whether P has folded back durng ts projecton onto S v. m = tan[α/]( (p v) p v + (p v) p v ) where α denotes the angle between p v and p v. Hence we obtan w = tan[α/]/ p v whch agrees wth the Floater s weghtng functon defned n Equaton for D mean value coordnates when restrcted to a sngle edge of a polygon. Equaton 4 allows us to formulate a closed form expresson for the nterpolant ˆf[v] n Equaton 3 by summng the ntegrals for all edges E k n P (note that we add the ndex k for enumeraton of edges): ˆf[v] = k w k f k k w k (6) where w k and f k are weghts and values assocated wth edge E k. 3. Mean value coordnates for closed meshes We now consder our prmary applcaton of mean value nterpolaton for ths paper; the dervaton of mean value coordnates for trangular meshes. These coordnates are the natural generalzaton of D mean value coordnates. Gven trangle T wth vertces {p, p, p 3 } and assocated values { f, f, f 3 }, our frst task s to defne the functons p[x] and f[x] used n Equaton 3 over T. To ths end, we smply use the lnear nterpolaton formula of Equaton. The resultng functon f[x] s a lnear combnaton of the values f tmes bass functons φ [x]. As n D, the ntegral of Equaton 3 reduces to the sum n Equaton 6. In ths case, the weghts w have the form φ [x] w = x p[x] v dt where T s the projecton of trangle T onto S v. To avod computng ths ntegral drectly, we nstead relate the weghts w to the mean vector m for the sphercal trangle T by nvertng Equaton 5. In matrx form, {w,w,w 3 } = m {p v, p v, p 3 v} (7) All that remans s to derve an explct expresson for the mean vector m for a sphercal trangle T. The followng theorem solves ths problem. Theorem 3. Gven a sphercal trangle T, let θ be the length of ts th edge (a crcular arc) and n be the nward unt normal to ts th edge (see Fgure 3 (b)). Then, m = θ n (8) where m, the mean vector, s the ntegral of the outward unt normals over T. Proof: Consder the sold trangular wedge of the unt sphere wth cap T. The ntegral of outward unt normals over a closed surface s always exactly zero [Flemng 977, p.34]. Thus, we can partton the ntegral nto three trangular faces whose outward normals are n wth assocated areas θ. The theorem follows snce m θ n s then zero. Note that a smlar result holds n D, where the mean vector m defned by Equaton 3. for a crcular arc E on the unt crcle can be nterpreted as the sum of the two nward unt normals of the vectors p v (see Fgure 3 (a)). In 3D, the lengths θ of the edges of the sphercal trangle T are the angles between the vectors p v and p + v whle the unt normals n are formed by takng the cross 4 m E (a) -n -n v Fgure 3: Mean vector m on a crcular arc E wth edge normals n (a) and on a sphercal trangle T wth arc lengths θ and face normals n. product of p v and p + v. Gven the mean vector m, we now compute the weghts w usng Equaton 7 (but wthout dong the matrx nverson) va w = ψ 3 θ θ m n m n (p v) At ths pont, we should note that projectng a trangle T onto S v may reverse ts orentaton. To guarantee lnear precson, these folded-back trangles should produce negatve weghts w. 
If we mantan a postve orentaton for the vertces of every trangle T, the mean vector computed usng Equaton 8 ponts towards the projected sphercal trangle T when T has a postve orentaton and away from T when T has a negatve orentaton. Thus, the resultng weghts have the approprate sgn. 3.3 Robust mean value nterpolaton The dscusson n the prevous secton yelds a smple evaluaton method for mean value nterpolaton on trangular meshes. Gven pont v and a closed mesh, for each trangle T n the mesh wth vertces {p, p, p 3 } and assocated values { f, f, f 3 },. Compute the mean vector m va Equaton 8. Compute the weghts w usng Equaton 9 3. Update the denomnator and numerator of ˆf[v] defned n Equaton 6 respectvely by addng w and w f To correctly compute ˆf[v] usng the above procedure, however, we must overcome two obstacles. Frst, the weghts w computed by Equaton 9 may have a zero denomnator when the pont v les on plane contanng the face T. Our method must handle ths degenerate case gracefully. Second, we must be careful to avod numercal nstablty when computng w for trangle T wth a small projected area. Such trangles are the domnant type when evaluatng mean value coordnates on meshes wth large number of trangles. Next we dscuss our solutons to these two problems and present the complete evaluaton algorthm as pseudo-code n Fgure 4. Stablty: When the trangle T has small projected area on the unt sphere centered at v, computng weghts usng Equaton 8 and 9 becomes numercally unstable due to cancellng of unt normals n that are almost co-planar. To ths end, we next derve a stable formula for computng weghts w. Frst, we substtute Equaton 8 nto Equaton 9, usng trgonometry we obtan T ψ ψ θ 3 (b) -n -n -n 3 v (9) w = θ cos[ψ + ]θ cos[ψ ]θ + sn[ψ + ]sn[θ ] p k v, (0) // Robust evaluaton on a trangular mesh for each vertex p j wth values f j d j p j x f d j < ε return f j u j (p j x)/d j totalf 0 totalw 0 for each trangle wth vertces p, p, p 3 and values f, f, f 3 l u + u // for =,,3 θ arcsn[l /] h ( θ )/ f π h < ε // x les on t, use D barycentrc coordnates w sn[θ ]d d + return ( w f )/( w ) c (sn[h]sn[h θ ])/(sn[θ + ]sn[θ ]) s sgn[det[u,u,u 3 ]] c f, s ε // x les outsde t on the same plane, gnore t contnue w (θ c + θ c θ + )/(d sn[θ + ]s ) totalf+ = w f totalw+ = w f x totalf/totalw Fgure 4: Mean value coordnates on a trangular mesh where ψ ( =,,3) denotes the angles n the sphercal trangle T. Note that the ψ are the dhedral angles between the faces wth normals n and n +. We llustrate the angles ψ and θ n Fgure 3 (b). To calculate the cos of the ψ wthout computng unt normals, we apply the half-angle formula for sphercal trangles [Beyer 987], cos[ψ ] = sn[h]sn[h θ ], () sn[θ + ]sn[θ ] where h = (θ +θ +θ 3 )/. Substtutng Equaton nto 0, we obtan a formula for computng w that only nvolves lengths p v and angles θ. In the pseudo-code from Fgure 4, angles θ are computed usng arcsn, whch s stable for small angles. Co-planar cases: Observe that Equaton 9 nvolves dvson by n (p v), whch becomes zero when the pont v les on plane contanng the face T. Here we need to consder two dfferent cases. If v les on the plane nsde T, the contnuty of mean value nterpolaton mples that ˆf[v] converges to the value f[x] defned by lnear nterpolaton of the f on T. On the other hand, f v les on the plane outsde T, the weghts w become zero as ther ntegral defnton φ [x] p[x] v dt becomes zero. We can easly test for the frst case because the sum Σ θ = π for ponts nsde of T. 
To test for the second case, we use Equaton to generate a stable computaton for sn[ψ ]. Usng ths defnton, v les on the plane outsde T f any of the dhedral angles ψ (or sn[ψ ]) are zero. 4 Applcatons and results Whle mean value coordnates fnd ther man use n boundary value nterpolaton, these coordnates can be appled to a varety of applcatons. In ths secton, we brefly dscuss several of these applcatons ncludng constructng volumetrc textures and surface deformaton. We conclude wth a secton on our mplementaton of these coordnates and provde evaluaton tmes for varous shapes. 5 Fgure 5: Orgnal model of a cow (top-left) wth hue values specfed at the vertces. The planar cuts llustrate the nteror of the functon generated by 3D mean value coordnates. 4. Boundary value nterpolaton As mentoned n Secton, these coordnate functons may be used to perform boundary value nterpolaton for trangular meshes. In ths case, functon values are assocated wth the vertces of the mesh. The functon constructed by our method s smooth, nterpolates those vertex values and s a lnear functon on the faces of the trangles. Fgure 5 shows an example of nterpolatng hue specfed on the surface of a cow. In the top-left s the orgnal model that serves as nput nto our algorthm. The rest of the fgure shows several slces of the cow model, whch reveal the volumetrc functon produced by our coordnates. Notce that the functon s smooth on the nteror and nterpolates the colors on the surface of the cow. 4. Volumetrc textures These coordnate functons also have applcatons to volumetrc texturng as well. Fgure 6 (top-left) llustrates a model of a bunny wth a D texture appled to the surface. Usng the texture coordnates (u,v ) as the f for each vertex, we apply our coordnates and buld a functon that nterpolates the texture coordnates specfed at the vertces and along the polygons of the mesh. Our functon extrapolates these surface values to the nteror of the shape to construct a volumetrc texture. Fgure 6 shows several slces revealng the volumetrc texture wthn. 4.3 Surface Deformaton Surface deformaton s one applcaton of mean value coordnates that depends on the lnear precson property outlned n Secton. In ths applcaton, we are gven two shapes: a model and a control mesh. For each vertex v n the model, we frst compute ts mean value weght functons w j wth respect to each vertex p j n the undeformed control mesh. To perform the deformaton, we move the vertces of the control mesh to nduce the deformaton on the orgnal surface. Let ˆp j be the postons of the vertces from the deformed control mesh, then the new vertex poston ˆv n the deformed model s computed as ˆv = ˆp j. Notce that, due to lnear precson, f ˆp j = p j, then ˆv = v. Fgures and 7 show several examples of deformatons generated wth ths Fgure 6: Textured bunny (top-left). Cuts of the bunny to expose the volumetrc texture constructed from the surface texture. process. Fgure (a) depcts a horse before deformaton and the surroundng control mesh shown n black. Movng the vertces of the control mesh generates the smooth deformatons of the horse shown n (b,c,d). Prevous deformaton technques such as freeform deformatons [Sederberg and Parry 986; MacCracken and Joy 996] requre volumetrc cells to be specfed on the nteror of the control mesh. The deformatons produced by these methods are dependent on how the control mesh s decomposed nto volumetrc cells. Furthermore, many of these technques restrct the user to creatng control meshes wth quadrlateral faces. 
In contrast, our deformaton technque allows the artst to specfy an arbtrary closed trangular surface as the control mesh and does not requre volumetrc cells to span the nteror. Our technque also generates smooth, realstc lookng deformatons even wth a small number of control ponts and s qute fast. Generatng the mean value coordnates for fgure took 3.3s and.9s for fgure 7. However, each of the deformatons only took 0.09s and 0.03s respectvely, whch s fast enough to apply these deformatons n real-tme. 4.4 Implementaton Our mplementaton follows the pseudo-code from Fgure 4 very closely. However, to speed up computatons, t s helpful to precompute as much nformaton as possble. Fgure 8 contans the number of evaluatons per second for varous models sampled on a 3GHz Intel Pentum 4 computer. Prevously, practcal applcatons nvolvng barycentrc coordnates have been restrcted to D polygons contanng a very small number of lne segments. In ths paper, for the frst tme, barycentrc coordnates have been appled to truly large shapes (on the order of 00, 000 polygons). The coordnate computaton s a global computaton and all vertces of the surface must be used to evaluate the functon at a sngle pont. However, much of the tme spent s determnng whether or not a pont les on the plane of one of the trangles n the mesh and, f so, whether or not that pont s nsde that trangle. Though we have not done so, usng varous spatal parttonng data structures to reduce the number of trangles that 6 mportant generalzaton would be to derve mean value coordnates for pecewse lnear mesh wth arbtrary closed polygons as faces. On these faces, the coordnates would degenerate to standard D mean value coordnates. We plan to address ths topc n a future paper. Acknowledgements We d lke to thank John Morrs for hs help wth desgnng the control meshes for the deformatons. Ths work was supported by NSF grant ITR References BEYER, W. H CRC Standard Mathematcal Tables (8th Edton). CRC Press. Fgure 7: Orgnal model and surroundng control mesh shown n black (top-left). Deformng the control mesh generates smooth deformatons of the underlyng model. Model Trs Verts Eval/s Horse control mesh (fg ) Armadllo control mesh (fg 7) Cow (fg 5) Bunny (fg 6) Fgure 8: Number of evaluatons per second for varous models. must be checked for coplanarty could greatly enhance the speed of the evaluaton. 5 Conclusons and Future Work Mean value coordnates are a smple, but powerful method for creatng functons that nterpolate values assgned to the vertces of a closed mesh. Perhaps the most ntrgung feature of mean value coordnates s that fact that they are well-defned on both the nteror and the exteror of the mesh. In partcular, mean value coordnates do a reasonable job of extrapolatng value outsde of the mesh. We ntend to explore applcatons of ths feature n future work. Another nterestng pont s the relatonshp between mean value coordnates and Wachspress coordnates. In D, both coordnate functons are dentcal for convex polygons nscrbed n the unt crcle. As a result, one method for computng mean value coordnates s to project the vertces of the closed polygon onto a crcle and compute Wachspress coordnates for the nscrbed polygon. However, n 3D, ths approach fals. In partcular, nscrbng the vertces of a trangular mesh onto a sphere does not necessarly yeld a convex polyhedron. Even f the nscrbed polyhedron happens to be convex, the resultng Wachspress coordnates are ratonal functons of the vertex poston v whle the mean value coordnates are transcendental functons of v. 
Fnally, we only consder meshes that have trangular faces. One COQUILLART, S Extended free-form deformaton: a sculpturng tool for 3d geometrc modelng. In SIGGRAPH 90: Proceedngs of the 7th annual conference on Computer graphcs and nteractve technques, ACM Press, DESBRUN, M., MEYER, M., AND ALLIEZ, P. 00. Intrnsc Parameterzatons of Surface Meshes. Computer Graphcs Forum, 3, FLEMING, W., Ed Functons of Several Varables. Second edton. Sprnger- Verlag. FLOATER, M. S., AND HORMANN, K Surface parameterzaton: a tutoral and survey. In Advances n Multresoluton for Geometrc Modellng, N. A. Dodgson, M. S. Floater, and M. A. Sabn, Eds., Mathematcs and Vsualzaton. Sprnger, Berln, Hedelberg, FLOATER, M. S., KOS, G., AND REIMERS, M Mean value coordnates n 3d. To appear n CAGD. FLOATER, M Parametrzaton and smooth approxmaton of surface trangulatons. CAGD 4, 3, FLOATER, M Parametrc Tlngs and Scattered Data Approxmaton. Internatonal Journal of Shape Modelng 4, FLOATER, M. S Mean value coordnates. Comput. Aded Geom. Des. 0,, 9 7. HORMANN, K., AND GREINER, G MIPS - An Effcent Global Parametrzaton Method. In Curves and Surfaces Proceedngs (Sant Malo, France), HORMANN, K Barycentrc coordnates for arbtrary polygons n the plane. Tech. rep., Clausthal Unversty of Technology, September. hormann/papers/barycentrc.pdf. KHODAKOVSKY, A., LITKE, N., AND SCHROEDER, P Globally smooth parameterzatons wth low dstorton. ACM Trans. Graph., 3, KOBAYASHI, K. G., AND OOTSUBO, K t-ffd: free-form deformaton by usng trangular mesh. In SM 03: Proceedngs of the eghth ACM symposum on Sold modelng and applcatons, ACM Press, LOOP, C., AND DEROSE, T A multsded generalzaton of Bézer surfaces. ACM Transactons on Graphcs 8, MACCRACKEN, R., AND JOY, K. I Free-form deformatons wth lattces of arbtrary topology. In SIGGRAPH 96: Proceedngs of the 3rd annual conference on Computer graphcs and nteractve technques, ACM Press, MALSCH, E., AND DASGUPTA, G Algebrac constructon of smooth nterpolants on polygonal domans. In Proceedngs of the 5th Internatonal Mathematca Symposum. MEYER, M., LEE, H., BARR, A., AND DESBRUN, M. 00. Generalzed Barycentrc Coordnates for Irregular Polygons. Journal of Graphcs Tools 7,, 3. SCHREINER, J., ASIRVATHAM, A., PRAUN, E., AND HOPPE, H Inter-surface mappng. ACM Trans. Graph. 3, 3, SEDERBERG, T. W., AND PARRY, S. R Free-form deformaton of sold geometrc models. In SIGGRAPH 86: Proceedngs of the 3th annual conference on Computer graphcs and nteractve technques, ACM Press, WACHSPRESS, E A Ratonal Fnte Element Bass. Academc Press, New York. WARREN, J., SCHAEFER, S., HIRANI, A., AND DESBRUN, M Barycentrc coordnates for convex sets. Tech. rep., Rce Unversty. WARREN, J Barycentrc Coordnates for Convex Polytopes. Advances n Computatonal Mathematcs. ABC. Parametric Curves & Surfaces. Overview. Curves. Many applications in graphics. Parametric curves. Goals. Part 1: Curves Part 2: Surfaces arametrc Curves & Surfaces Adam Fnkelsten rnceton Unversty COS 46, Sprng Overvew art : Curves art : Surfaces rzemyslaw rusnkewcz Curves Splnes: mathematcal way to express curves Motvated by loftsman Loop Parallelization - - Loop Parallelzaton C-52 Complaton steps: nested loops operatng on arrays, sequentell executon of teraton space DECLARE B[..,..+] FOR I :=.. FOR J :=.. I B[I,J] := B[I-,J]+B[I-,J-] ED FOR ED FOR A frequency decomposition time domain model of broadband frequency-dependent absorption: Model II A frequenc decomposton tme doman model of broadband frequenc-dependent absorpton: Model II W. Chen Smula Research Laborator, P. O. Box. 
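The 2D case mentioned in the conclusions (where mean value coordinates are defined for a closed polygon) can be sketched in a few lines. This is not the paper's 3D mesh algorithm; it is only an illustrative implementation of the standard 2D mean value weights, the function names and the sample square are my own, and points lying exactly on a polygon edge are not handled.

import math

def mean_value_coordinates_2d(polygon, x):
    """Mean value weights of point x with respect to a closed 2D polygon.

    polygon: list of (x, y) vertices in order; x: the query point.
    Returns one weight per vertex, normalized to sum to 1.
    """
    n = len(polygon)
    weights = []
    for i in range(n):
        prev_v = polygon[(i - 1) % n]
        curr_v = polygon[i]
        next_v = polygon[(i + 1) % n]

        r = math.dist(curr_v, x)
        if r < 1e-12:  # x sits exactly on a vertex
            return [1.0 if j == i else 0.0 for j in range(n)]

        def angle_at_x(a, b):
            # signed angle of the triangle (x, a, b) at x
            ax, ay = a[0] - x[0], a[1] - x[1]
            bx, by = b[0] - x[0], b[1] - x[1]
            return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

        alpha_prev = angle_at_x(prev_v, curr_v)
        alpha_curr = angle_at_x(curr_v, next_v)
        weights.append((math.tan(alpha_prev / 2) + math.tan(alpha_curr / 2)) / r)

    total = sum(weights)
    return [w / total for w in weights]

# Interpolate per-vertex values at a point, inside or outside the polygon:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
values = [0.0, 1.0, 2.0, 1.0]
w = mean_value_coordinates_2d(square, (0.25, 0.5))
print(sum(wi * fi for wi, fi in zip(w, values)))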
http://docplayer.net/355097-Mean-value-coordinates-for-closed-triangular-meshes.html
CC-MAIN-2018-17
refinedweb
5,343
60.45
Welcome to the discussion thread about this lecture section. Here you can feel free to discuss the topic at hand and ask questions.

Hi @filip can you take a quick look for me - the contract compiled and created the abi & wasm files. I am adding, modifying and removing dogs but the event is not firing to notify that something has changed.

S-Mac-Pro:src seamo$ cleos push action inline modify '[4, "toto", 7]' -p seamo
executed transaction: 9d01af002c3e178af0a4185056c33bacfd456eee4131e775a6f4943308980143 112 bytes 227 us
/esc for text #) # inline <= inline::modify {"dog_id":4,"dog_name":"toto","age":7}
warning: transaction executed locally, but may not be confirmed by the network yet ]
S-Mac-Pro:src seamo$ cleos push action inline modify '[4, "lolo", 6]' -p seamo
executed transaction: b7f21aea5f8113864de533c84a4ad597c1e42318174b147788f49b1708b8a8ba 112 bytes 215 us
/esc for text #) # inline <= inline::modify {"dog_id":4,"dog_name":"lolo","age":6}
warning: transaction executed locally, but may not be confirmed by the network yet ]
S-Mac-Pro:src seamo$

---- code ----

#include <eosio/eosio.hpp>
#include <eosio/print.hpp>

using namespace eosio;

CONTRACT hellotable : public contract {
  public:
    using contract::contract;
    hellotable(name receiver, name code, datastream<const char*> ds) : contract(receiver, code, ds) {}

    ACTION insert(name owner, std::string dog_name, int age){
      require_auth(owner);
      dog_index dogs(get_self(), get_self().value);
      // owner will need to have the resources available to pay for the contract storage.
      dogs.emplace(owner, [&](auto& row){ // lambda function to initialise the row(s) we use.
        row.id = dogs.available_primary_key();
        row.dog_name = dog_name;
        row.age = age;
        row.owner = owner;
      });
      send_summary(owner, "inserted dog");
    }

    ACTION erase(int dog_id){
      dog_index dogs(get_self(), get_self().value);
      // the next two lines are not necessary for the delete but if not used anyone could delete anyone's dog.
      auto dog = dogs.get(dog_id, "unable to find dog."); // this returns the data
      require_auth(dog.owner);
      auto iterator = dogs.find(dog_id); // this returns the position of the data (like a pointer to the data)
      dogs.erase(iterator); // iterator is used to modify and delete rows.
      send_summary(dog.owner, "erased dog");
    }

    ACTION modify(int dog_id, std::string dog_name, int age){
      // get the index of the table
      dog_index dogs(get_self(), get_self().value);
      // fetch the current data of our dog
      auto dog = dogs.get(dog_id, "unable to find dog."); // this returns the data
      // require auth the owner
      require_auth(dog.owner);
      // get the iterator to be able to find and modify the row in the table
      auto iterator = dogs.find(dog_id);
      dogs.modify(iterator, dog.owner, [&](auto& row){
        row.age = age;
        row.dog_name = dog_name;
      });
      send_summary(dog.owner, "modified dog");
    }

    // remove all dogs by a specific owner
    ACTION removeall(name owner){
      dog_index dogs(get_self(), get_self().value);
      auto owner_index = dogs.get_index<"byowner"_n>(); // this gives access to the second index
      auto iterator = owner_index.find(owner.value);
      while (iterator != owner_index.end()){ // modify dog row etc.. add, delete, print etc..
        owner_index.erase(iterator);
        iterator = owner_index.find(owner.value);
      }
      send_summary(owner, "removedall dog");
    }

    ACTION notify(name owner, std::string msg){
      require_auth(get_self()); // inline action call to notify
      require_recipient(owner); // notifies the user that action was complete - receipt to user (action made on behalf of the user)
    }

  private:
    TABLE dog{
      int id;
      std::string dog_name;
      int age;
      name owner;
      uint64_t primary_key() const {return id;}
      uint64_t by_owner() const {return owner.value;}
    };

    // triggers the notify function
    void send_summary(name owner, std::string message){
      action(
        permission_level{get_self(), "active"_n},
        get_self(),
        "notify"_n,
        std::make_tuple(owner, message)
      ).send();
    }

    typedef multi_index<"dogs"_n, dog, indexed_by<"byowner"_n, const_mem_fun<dog, uint64_t, &dog::by_owner>>> dog_index;
};

Hmmm, I copied your code and tried it myself. It worked for me. So I can't find any errors there. Did you add the eosio.code permission to your inline account? And are you sure you have the latest contract deployed to that account?

Hmmm indeed!!! Thanks a million for looking into it for me… I'll go back over it and see if I missed a step… I am on the latest version of EOS "server_version_string": "v1.8.2". I have moved on and about to start the DApp stuff now. Everything worked for me apart from the inline action stuff. I will rewrite & redo the commands to see what I missed. Thanks again!! I really appreciate the reply!!

Okay, created a folder and rewrote my project. Tried compiling it, stuck at Warning, action <notify> does not have a ricardian contract. 3 mins gone at the time of posting, still stuck. Tried restarting the process, same result. @filip any help will be appreciated. UPDATE 30 minutes gone.

Sorry for the slow response. I have been sick for the past week. That's just a warning, not an error. You are good to continue. We have no need of a ricardian contract. I explain what that is later on in the course.

@filip Still having problems getting past:
eosio-cpp -abigen -I include -R resource -contract hellotable -o hellotable.wasm src/hellotable.cpp
error: error reading '/Users/cherrybluemoon/src/hellotable.cpp'
1 error generated.
OR:
eosio-cpp -abigen -o hellotable.wasm src/hellotable.cpp

Hi @filip, I have a question: if a smart contract has permission to execute inline actions on behalf of a user account, how does the user know which actions will be executed? Wouldn't it be possible for the contract to steal tokens from the user, for example?

I might have misunderstood your question, but that's why we have the require statements? To create checkpoints in the code.
Ivo
https://forum.ivanontech.com/t/inline-actions-discussion/9013
CC-MAIN-2020-24
refinedweb
897
51.34
(For more resources on Microsoft Sharepoint, see here.) Site Directory options There are two main approaches to providing a Site Directory feature: - A central list that has to be maintained - Using a search-based tool that can provide the information dynamically List-based Site Directory With a list-based Site Directory, a list is provisioned in a central site collection, such as the root of a portal or intranet. Like all lists, site columns can be defined to help describe the site's metadata. Since it is stored in a central list, the information can easily be queried, which can make it easy to show a listing of all sites and perform filtering, sorting, and grouping, like all SharePoint lists. It is important to consider the overall site topology within the farm. If everything of relevance is stored within a single site collection, a list-based Site Directory, accessible throughout that site collection, may be easy to implement. But as soon as you have a large number of site collections or web applications, you will no longer be able to easily use that Site Directory without creating custom solutions that can access the central content and display it on those other sites. In addition, you will need to ensure that all users have access to read from that central site and list. Another downside to this approach is that the list-based Site Directory has to be maintained to be effective, and in many cases it is very difficult to keep up with this. It is possible to add new sites to the directory programmatically, using an event receiver, or as part of a process that automates the site creation. However, through the site's life cycle, changes will inevitably have to be made, and in many cases sites will be retired, archived, or deleted. While this approach tends to work well in small, centrally controlled environments, it does not work well at all in most of the large, distributed environments where the number of sites is expected to be larger and the rate of change is typically more frequent. Search-based site discovery An alternative to the list-based Site Directory is a completely dynamic site discovery based on the search system. In this case the content is completely dynamic and requires no specific maintenance. As sites are created, updated, or removed, the changes will be updated in the index as the scheduled crawls complete. For environments with a large number of sites, with a high frequency of new sites being created, this is the preferred approach. The content can also be accessed throughout the environment without having to worry about site collection boundaries, and can also be leveraged using out of the box features. The downside to this approach is that there will be a limit to the metadata you can associate with the site. Standard metadata that will be related to the site include the site's name, description, URL, and to a lesser extent, the managed path used to configure the site collection. From these items you can infer keyword relevance, but there is no support for extended properties that can help correlate the site with categories, divisions, or other specific attributes. How to leverage search Most users are familiar with how to use the Search features to find content, but are not familiar with some of the capabilities that can help them pinpoint specific content or specific types of content. This section will provide an overview on how to leverage search to provide features that help support users finding results that are only related to sites. 
Content classes SharePoint Search includes an object classification system that can be used to identify specific types of items as shown in the next table. It is stored in the index as a property of the item, making it available for all queries. The contentclass property can be included as part of an ad hoc search performed by a user, included in the search query within a customization, or as we will see in the next section, used to provide a filter to a Search Scope. Search Scopes Search Scopes provide a way to filter down the entire search index. As the index grows and is filled with potentially similar information, it can be helpful to define Search Scopes to put specific set of rules in place to reduce the initial index that the search query is executed against. This allows you to execute a search within a specific context. The rules can be set based on the specific location, specific property values, or the crawl source of the content. The Search Scopes can be either defined centrally within the Search service application by an administrator or within a given Site Collection by a Site Collection administrator. If the scope is going to be used in multiple Site Collections, it should be defined in the Search service application. Once defined, it is available in the Search Scopes dropdown box for any ad hoc queries, within the custom code, or within the Search Web Parts. Defining the Site Directory Search Scope To support dynamic discovery of the sites, we will configure a Search Scope that will look at just site collections and subsites. As we saw above, this will enable us to separate out the site objects from the rest of the content in the search index. This Search Scope will serve as the foundation for all of the solutions in this article. To create a custom Search Scope: - Navigate to the Search Service Application. - Click on the Search Scopes link on the QuickLaunch menu under the Queries and Results heading. - Set the Title field to Site Directory. - Provide a Description. - Click on the OK button as shown in the following screenshot: - From the View Scopes page, click on the Add Rules link next to the new Search Scope. - For the Scope Rule Type select the Property Query option. - For the Property Query select the contentclass option. - Set the property value to STS_Site. - For the Behaviorsection, select the Include option. - From the Scope Properties page, select the New Rule link. - For the Scope Rule Type section, select the Property Query option./li> - For the Property Query select the contentclass option. - Set the property value to STS_Web. - For the Behavior section, select the Include option. The end result will be a Search Scope that will include all Site Collection and subsite entries. There will be no user generated content included in the search results of this scope. After finishing the configuration for the rules there will be a short delay before the scope is available for use. A scheduled job will need to compile the search scope changes. Once compiled, the View Scopes page will list out the currently configured search scopes, their status, and how many items in the index match the rules within the search scopes. Enabling the Search Scope on a Site Collection Once a Search Scope has been defined you can then associate it with the Site Collection(s) you would like to use it from. Associating the Search Scope to the Site Collection will allow the scope to be selected from within the Scopes dropdown on applicable search forms. 
This can be done by a Site Collection administrator one Site Collection at a time or it can be set via a PowerShell script on all Site Collections. To associate the search scope manually: - Navigate to the Site Settings page. - Under the Site Collection Administration section, click on the Search Scopes link. - In the menu, select the Display Groups action. - Select the Search Dropdown item. - You can now select the Sites Scope for display and adjust its position within the list. - Click on the OK button when complete. Testing the Site Directory Search Scope Once the scope has been associated with the Site Collection's search settings, you will be able to select the Site Directory scope and perform a search, as shown in the following screenshot: Any matching Site Collections or subsites will be displayed. As we can see from the results shown in the next screenshot, the ERP Upgrade project site collection comes back as well as the Project Blog subsite. (For more resources on Microsoft Sharepoint, see here.) Site Directory page The initial configuration of the Site Directory Search Scope pointed to the standard results search page. While this may work fine in some cases, a custom results page will allow you to fine tune the user experience and also make additional searches or refinements a little easier. Creating the Site Directory page We will now add a custom page to the Search Center to support our Site Directory search page. Once added to the Search Center it will then be configured to be the default destination for the Site Directory search scope. To create the page: - Navigate to the default Search Center. - Click Site Actions | Show Ribbon. - Select the Page tab. - Select the View All Pages action. - Select the Documents tab. - Click the New Document action and select Page as shown in the next screenshot: - Set the Title field to the value Site Directory. - Provide a Description. - Provide a URL Name such as Site -Directory. - Ensure that for the Page Layout, (Welcome Page) Search results is selected. - Click on the Create button. Configure the Site Directory page settings We will now see a standard search results page and will need to make a few minor changes in order to be used to support the Site Directory requirements. To configure the page's settings: - Click on Site Actions | Edit Page. - From Search Box Web Part, select the Edit Web Part action. - Within the Miscellaneous section, change the Target search results page URL to Site-Directory.aspx so that it directs the request to our Site Directory page. - Click the OK button. - From the Search Core Results Web Part, select the Edit Web Part action. - Within the Location Properties section, set the Scope property to Site Directory. - Within the More Results Link Options, check the checkbox to Show More Results Link. - Click on the OK button. Adding a Site Directory tab With both the search query and result pages there is a control that will display contextual tabs that can be used to navigate to customized search pages. The All Sites and People tabs are added by default, but additional tabs can be configured. To make it easy for users to search the Site Directory from the Search Center, we will add a Site Directory tab. Please note, since the values are stored in a set of central lists within the Search Center, you only need to configure the tabs once for the regular search pages and once for the results pages. To add a new tab: - Click on the Add New Tab option under the existing tabs. 
- Set the Tab Name property to Site Directory. - Set the Page property to Site-Directory.aspx . - Set the Tooltip property to Click for relevant sites. - Click on the Save button. Common Searches The search system's query engine is extremely powerful, but most users are not familiar enough with how to format the queries for advanced searches. A great way to address this is by providing a list of common search keywords and saved queries. This will allow users to quickly and easily initiate a search and it will work with the Refinement Web Part to provide additional drill through capabilities. This Common Searches information can be saved in a simple link list within the Search Center. Like the search tabs feature, this provides an easy way for the Search Center administrator to maintain the configuration through the standard SharePoint UI. The standard link list template is sufficient, but if you want to potentially have different lists for different search tabs, then I recommend that you add a lookup field to the Tab Name field of the Tabs in Search Results list. A sample view of the list is displayed in the next screenshot: Defining Common Searches Adding a saved search is as simple as adding an entry to the link list. The key to this solution is in the formatting of the linked URL. There are three main parameters in the URL that you will frequently need to use. Simple saved query In its simplest form, keywords are passed to the results page in the URL's query string. This is the same result as a user passing in a simple keyword in the search box. It might look like this:. aspx?k=HR!. The URL can be separated into two parts with the first part being the path to the results page:, and the remaining part which identifies the keyword query that will be executed: k=HR. Advanced saved query Through the query language it is possible to specify additional keywords and logical operations. The following example will search for Blog subsites and apply a refiner to ensure that any returned sites are within the MySites area. The query would look like this: k=Blog&r=site%3D%22http%3A%2F%2Fintranet%2F my%2Fpersonal%22. The keyword part is set to k=Blog. The refiner part is set to r=site%3D%22http%3A%2 F%2Fintranet%2Fmy%2Fpersonal%22. Adding Common Searches to the Site Directory page To add the Common Searches list to the Site Directory page we will simply add a standard list view set to the summary view, which will present a bulleted list. Additional properties can be set to change the title and overall display if desired. Alternatively this can be displayed via a Client OM script or a Server OM Web Part if additional control is needed over the rendered display. Site Directory displayed The completed Site Directory page with the Common Searches listing is displayed in the following screenshot: A close up view of the Common Searches list view Web Part is displayed in the following screenshot: Related sites Web Part In addition to making it easy for the users to execute ad hoc site searches, it may also be valuable to dynamically display a listing of related web sites. To provide this feature, one approach would be to create a Web Part that allows the site owner to specify some related keywords, and then perform the Site Directory search and display a list of relevant sites. Creating the Web Part To add the additional Web Part: - Open the SPBlueprints.WebParts project in Visual Studio 2010. - Browse the installed templates and select Visual C# | SharePoint 2010. 
- Right-click on the project file and select Add | New Item. - From the template selection screen select the Web Part option. - Provide the name RelatedSites and click on the Add button. - Edit the RelatedSites.webpart file, and add in the custom properties as shown in the following: <property name="Title" type="string">Related Sites</property> <property name="Description" type="string">SPBlueprints - The Related Sites web part will search for sites with matching keywords.</property> <property name="SearchProxyName" type="string">Search Service Application</property> <property name="SearchScopeName" type="string">Site Directory</ property> <property name="DisplayLimit" type="int">5</property> <property name="KeywordList" type="string">sites</property> - Start by editing the RelatedSites.cs file and add in the following references: using System.Collections; using System.Data; using System.Text; using Microsoft.SharePoint.Administration; using Microsoft.Office.Server.Search; using Microsoft.Office.Server.Search.Query; using Microsoft.Office.Server.Search.Administration; - Next we will need to define the Web Part's properties starting with the Search Proxy Name property. This property will be used to manage the connection to the Search service application. private string _searchProxyName; [WebBrowsable(true), Category("Configuration"), WebDisplayName("Search Proxy Name"), WebDescription("Please provide the name of your Search Service Application."), Personalizable(PersonalizationScope.Shared)] public string SearchProxyName { get { return _searchProxyName; } set { _searchProxyName = value; } } - Next we will define the Search Scope Name property which can be used to target the desirable content for display. private string _searchScopeName; [WebBrowsable(true), Category("Configuration"), WebDisplayName("Search Scope Name"), WebDescription("Please provide the name of your Search Scope."), Personalizable(PersonalizationScope.Shared)] public string SearchScopeName { get { return _searchScopeName; } set { _searchScopeName = value; } } - Next we will define the Display Limit property used to determine how many records to display. private int _displayLimit; [WebBrowsable(true), Category("Configuration "), WebDisplayName("Result limit"), WebDescription("The number of items to display."), Personalizable(PersonalizationScope.Shared)] public int DisplayLimit { get { return _displayLimit; } set { _displayLimit = value; } } - Next we will define the Keywords property where the site administrator will actually set the keywords. private string _keywordList; [WebBrowsable(true), Category("Configuration"), WebDisplayName("Keywords"), WebDescription("Comma delimited list of keywords"), Personalizable(PersonalizationScope.Shared)] public string KeywordList { get { return _keywordList; } set { _keywordList = value; } } - The output will be built within a Literal control defined within the class, and instantiated within the CreateChildControls() method as shown in the following: protected Literal _output; protected override void CreateChildControls() { this._output = new Literal(); this._output.ID = "output"; this.Controls.Add(this._output); } - With all of the setup work complete, we can now define the Display() method that can be called from the OnLoad() method . The method starts by defining StringBuilder that we will use to build the output of the Web Part, and then checks to see if there are any keywords set. 
Since the keywords are stored within a single string property and are comma delimited, we will do a simple split command to load the values into an array. If there are no keywords, there will be no content to display. protected void Display() { StringBuilder messages = new StringBuilder(); string[] keywords = this._keywordList.Split(','); if (keywords[0] != "") { - Next we attempt to connect to the Search Proxy specified in the Web Part properties. There is a try /catch block here in order to handle issues related to connecting to the Search service application differently than errors returned as part of a search. try { SearchQueryAndSiteSettingsServiceProxy settingsProxy = SPFarm. Local.ServiceProxies.GetValue<SearchQueryAndSiteSettingsServicePro xy>(); SearchServiceApplicationProxy searchProxy = settingsProxy. ApplicationProxies.GetValue<SearchServiceApplicationProxy>(this. searchProxyName); // Query and Display of Web Part Catch { this.EnsureChildControls(); this._output.Text = "Error: Please specify a Search Service Application."; } - Now we can instantiate FullTestSqlQuery and prepare the data objects. FullTextSqlQuery mQuery = new FullTextSqlQuery(searchProxy); try { ResultTableCollection resultsTableCollection; DataTable results = new DataTable(); - The formatted query will be broken into two parts, with the first part being the same in all cases and then the addition of the dynamic keywords with a variable number of items. We will then define a simple for loop to append the query to include a dynamic part that covers each keyword. Since we are looking for matches for any of the keywords, the OR operator will be used, which will require that we set the scope predicate starting with the second keyword. The query can also be tailored to exclude other content in your environment as needed. mQuery.QueryText = "SELECT Title, Path, SiteName FROM SCOPE() Where "; for (int i = 0; i <= keywords.GetUpperBound(0); i++) { if (i > 0) mQuery.QueryText += " OR "; mQuery.QueryText += " ((\"scope\" = '" + _searchScopeName + "') AND Contains('" + keywords[i] + "'))"; } - The remaining FullTextSqlQuery properties can now be set and the query executed. The returned DataTable object can now be checked for results to see if the list needs to be rendered. mQuery.ResultTypes = ResultType.RelevantResults; mQuery.TrimDuplicates = true; mQuery.RowLimit = DisplayLimit; resultsTableCollection = mQuery.Execute(); if (resultsTableCollection.Count > 0) { ResultTable relevantResults = resultsTableCollection[ResultType. RelevantResults]; results.Load(relevantResults, LoadOption.OverwriteChanges); - The output can be as simple or as complex as needed. For this example, I will create a simple HTML bulleted list with a link to the site. A DIV container and the list will be defined, and then we will iterate through the rows, and write out each link. messages.AppendFormat(@"<div id='RelatedSites'><ul>"); foreach (DataRow row in results.Rows) { messages.AppendFormat(@"<li><a href='{1}'>{0}</a></li>", row["Title"].ToString(), row["Path"].ToString(), row["SiteName"]. ToString()); } messages.AppendFormat(@"</ul></div>"); } - With the display complete we can now render the output, complete the catch block to handle any exceptions, and dispose our Query object. 
this.EnsureChildControls(); this._output.Text = messages.ToString(); } catch (Exception ex) { this.EnsureChildControls(); this._output.Text = "Error: " + ex.Message.ToString(); } finally { mQuery.Dispose(); } Display Related sites Web Part Once deployed, the Related sites Web Part can be configured to set the desired keywords in a comma delimited list. The rendered screen is shown as follows: Summary This article leveraged the search features and configuration along with the Server OM to create a set of solutions that can be used to provide users with easy and intuitive ways to locate relevant sites. The customizations are grouped as follows: - Visual Studio 2010 - Web Part : Creating a custom Web Part that can display related sites based on a keyword property. - Browser based configuration - Configure Search Scopes: Create a Search Scope that automatically filters the content to show only site objects, and excludes any other type of content. - Search Results Page: A custom search results page that works with our custom search scope, and also includes some additional Web Parts to enhance the user's ability to find relevant sites. - Configure Core Results Web Part: The Core Results Web Part was configured to show our Site Directory and interactive search results. This article showed how you can develop effective solutions that provide easy ways for users to find the relevant sites and resources needed to ensure better collaboration and process efficiency. These solutions are very easy to implement and can deliver immediate value. Further resources on this subject: - Microsoft SharePoint : Creating Various Content Types [Article] - Microsoft SharePoint: Recipes for Automating Business Processes [Article] - Working with Client Object Model in Microsoft Sharepoint [Article] - Setting Up a Development Environment [Article]
https://www.packtpub.com/books/content/building-site-directory-sharepoint-search
CC-MAIN-2015-22
refinedweb
3,594
53.61
Buttonwood All bets are off Spreading the risk has spread the losses THERE is such a thing as a free lunch. That, at least, is what pension funds have been told in recent years. Diversify into new asset classes and your portfolio can improve the trade-off between risk and return because you will be making uncorrelated bets. Boy, did pension funds diversify. They bought emerging-market equities, corporate bonds, commodities and property, while giving money to hedge funds and private-equity managers with their complex strategies and high fees. The idea was to “be like Yale”, the university endowment fund run by David Swensen, a celebrated investor, which started to diversify into hedge funds and private equity in the 1980s. Compared with other institutional investors over the past 20 years, Yale had very little exposure to conventional equities. It also produced remarkably strong returns. But those who thought Yale had found the key to success have been disappointed. Every one of those diversified bets has turned sour this year. In retrospect, it looks like the strategy had two problems. The first was that all risky assets were boosted by the same factors: low interest rates and healthy global growth. That encouraged investors to use leverage, or borrowed money, to enhance returns. The result was what Jeremy Grantham of GMO, a fund-management group, describes as “the first truly global bubble”. As confidence has unravelled, investors have been forced to sell all those asset classes simultaneously, driving down prices across the board. The second, and related, problem is that some of the asset classes were quite small. Initially, this illiquidity was attractive since it seemed to offer more alluring returns. And as more investors became involved, their liquidity duly improved. But they still suffer from the “rowing boat” factor. When everyone tries to exit the asset class at once, the vessel capsizes. Furthermore, some of these asset classes were always likely to be driven by the same factors as stockmarkets. Private-equity funds, for example, give investors exposure to the same kinds of risks as quoted companies, only with added leverage. So was the whole idea of diversification a write-off from the start? The strategy's defenders say no. They argue that pension funds (and other institutional investors) had made too big a bet on equities in the 1990s. When the bet went wrong with the bursting of the dotcom bubble, funds went into deficit. They accept that, in a crisis, correlations head towards one; in other words, all asset classes (except government bonds) tend to fall together. But the diversifiers have three counter-arguments. The first is that any correlation less than one is still worth having. Hedge funds may have performed badly this year but their losses have been far lower than those of equity markets. Second, there is a difference between short-term correlations and long-term ones. If you take a five- or ten-year view, it still looks as if property, commodities and the rest offer some diversification benefits. They did so during the equity bear market of 2000-02, for example. Third, consultants like Colin Robertson of Hewitt Associates argue that diversification does work when it is applied in a sophisticated way. There is no point in diversifying if the investment does not offer a genuinely different source of return (much of private equity falls into this category) or if the asset is already overvalued. 
Yet even allowing for this, diversification has surely not offered the benefits most pension funds expected. Indeed, it may have had perverse results. In the old days, with equities trading at below-average valuations, funds would now be on a buying spree. They could afford to ignore the short-term risks because of the long-term nature of their liabilities. Pension funds thus acted as an automatic stabiliser for the market. This time round, that does not seem to be happening. One reason may be accounting changes which make pension-fund managers more focused on the short term. Another, however, may be the strategic drive to diversification. The Wall Street Journal has reported that CalPERS, America's largest public-pension fund, has been selling shares to meet commitments to put more money into private-equity firms. The final problem with diversification has been the cost. Investing in quoted shares via an index fund is very cheap—a fraction of a percentage point. But diversified asset classes cost more to trade and involve higher management fees, expenses that eat into pension-fund returns. So perhaps diversification has been a free lunch after all. Not for the pension funds, but for the fund managers. From the print edition: Finance and economics
http://www.economist.com/node/12516688/email
CC-MAIN-2013-48
refinedweb
801
55.95
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives "Qi is an award-winning Lisp-based." "Our mission is to gather some of the most talented functional programmers to build the Integrated Functional Programming Environment (IFPE). IFPE is intended to give the functional programmer complete integrated type-secure access to every feature of his computer, from Internet to email. The IFPE will be freely available under the GNU licence." There is also a quick "15-minute" intro for ML programmers, as well as one for Lisp programmers. I haven't used it yet, but I noticed it was only mentioned here once in a comment; and it looks like the sort of thing that drives people around here wild. every feature of his computer, from Internet to email Wow, that really runs the gamut! ;-) 1.. While it is easy to have optional currying from n-ary functions with macros, the reverse seems much more difficult, if possible at all (depending on the macro system). 2. Really unimportant, but CL newbies might be interested: "In Common Lisp, (FUNCTION (LAMBDA (X) X)) cannot be applied to an object directly, but needs to be used in a call (e.g. using FUNCALL). ((FUNCTION (LAMBDA (X) X)) 7) gives EVAL: #'(LAMBDA (X) X) is not a function name (FUNCALL (FUNCTION (LAMBDA (X) X)) 7) gives 7." ((lambda (x) (+ 1 x)) 2) => 3, however. (see the note at the end of 3. "In Qi an abstraction can always be used in place of a function." Does that mean there is only one namespace? 4. How does the pattern matching cope with, e.g., arrays? Accessing arrays is quite cumbersome already -- (get-array (value *ordinance_survey*) [536 567] unknown) --. 5. Typechecking aside, how does it compare with CL+predicate dispatching+pattern matching? I'm particularly interested in its support for macros. What good predicate/pattern matching extensions exist for Lisp? Also, are there any good libraries for static type/assertion checking? I quite like the syntax we used in Distel, which was Martin Björklund's idea. Whereas in Erlang you write: case Result of {ok, Value} -> Value; {error, Reason} -> exit(Reason) end in Distel you write: (mcase result (['ok value] value) (['error reason] (erl-exit reason))) In summary: 'foo foo ,foo I wrote the pattern-matcher as a simple interpreter. This is an example of fun code that you're allowed to write in Emacs Lisp but would get rotten fruit thrown at you in Common Lisp :-) lists Code generated by CLAW does run on Movitz but when I showed off my "Erlang OS" to my Erlang friends they threw rotten fruit at me and told me not to bother them until it runs e.g. Yaws :-) 1. "Predicate Dispatching in the Common Lisp Object System by Aaron Mark Ucko Submitted to the Department of Electrical Engineering and Computer Science on May 11, 2001, in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Science and Engineering Abstract." The code is in Appendix A (and also available for download at the given URL). I haven't had time to play with it yet, since I haven't needed to. It's probably good enough for most uses, given that the paper also demonstrates a non trivial use. 2. As for pattern matching, I think cl-unification can do the." It is accompanied by a couple control flow macros: ( "In order to make the use of the UNIFICATION library easier, a few utility macros are provided. 
The macros MATCH, MATCHING, and MATCH-CASE can be used to unify two (or more) objects and then to build a lexical environment where the variables present in the to objects (or templates) are bound to the values resulting from the application of UNIFY. * MATCH is a "single shot" macro. It does one unification and executes forms in an appropriate lexical environment. * MATCH-CASE is equivalent to CASE. It tries to match a single object (or template) against a set of clauses. The forms associated to the first clause for which there is a successful unification, are then executed within an appropriate lexical environment. * MATCHING is equivalent to COND. Each clause contains a head" Regarding static typing, keep in mind that CL errs on the expressive side of the typing spectrum: it allows expressions which can't be shown to be well-typed at compile time. A static typing library would therefor have to implement only a subset of CL (TypeL is trying to do something like that). Nevertheless, SBCL and CMUCL can often infer the type of expressions and issue warnings if type assertions are wrong/not respected.. Well, since Qi is supposed to be better than Haskell, and Haskell can have optional and keyword arguments, I'd bet it wouldn't be too hard to implement. I stand corrected :) Qi looks quite interesting as a Lisp program, i.e. as an example of a Lisp-hosted general purpose language. Take a look at it bootstrapping itself in Sources/Qi 6.1 in Qi.txt. Sources/Qi 6.1 in Qi.txt Is it just me, or does the type system resemble that of Epigram(or vice versa)? It uses sequent calculus notation to define types BTW, I was unable to find any sequents on that site :-( Probably have to check once more.
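For readers coming from Python rather than Lisp, the mcase / MATCH-CASE style of dispatch discussed in this thread corresponds roughly to structural pattern matching (the match statement, Python 3.10+). This is only an analogy I am adding for comparison, not something from the original discussion:

def handle(result):
    match result:
        case ("ok", value):
            return value            # like the ['ok value] clause
        case ("error", reason):
            raise RuntimeError(reason)  # like the ['error reason] clause

print(handle(("ok", 42)))   # 42
handle(("error", "boom"))   # raises RuntimeError("boom")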
http://lambda-the-ultimate.org/node/657
CC-MAIN-2022-21
refinedweb
900
61.36
This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: Django 1.0 Django’s cache framework¶ A fundamental trade-off¶ CACHE_BACKEND setting in your settings file. Here's an explanation of all available values for CACHE_BACKEND. Memcached¶ By far the fastest, most efficient type of cache available to Django, Memcached is an entirely memory-based cache framework originally developed to handle high loads at LiveJournal.com and subsequently open-sourced by Danga Interactive. It's used by sites such as Facebook and Wikipedia to reduce database access and dramatically increase site performance. Memcached is available for free at . It runs as a daemon and is allotted a specified amount of RAM. All it does is provide an fast interface for adding, retrieving and deleting arbitrary data in the cache. All data is stored directly in memory, so there's no overhead of database or filesystem usage. After installing Memcached itself, you'll need to install the Memcached Python bindings, which are not bundled with Django directly. Two versions of this are available. Choose and install one of the following modules: - The fastest available option is a module called cmemcache, available at . -_BACKEND, separated by semicolons. In this example, the cache is shared over Memcached instances running on IP address 172.19.26.240 and 172.19.26.242, both on port 11211: CACHE_BACKEND = 'memcached:/): CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11212;172.19.26.244:11213/' A final point about Memcached is that memory-based caching has one. Database caching¶ backend uses the same database as specified in your settings file. You can't use a different database backend for your cache table. Database caching works best if you've got a fast, well-indexed database server. Filesystem caching¶'re on Windows, put the drive letter after the file://, like this:. Each cache value will be stored as a separate file whose contents are the cache data saved in a serialized ("pickled") format, using Python's pickle module. Each file's name is the cache key, escaped for safe filesystem use. Local-memory caching¶:///' Note that each process will have its own private cache instance, which means no cross-process caching is possible. This obviously also means the local memory cache isn't particularly memory-efficient, so it's probably not a good choice for production environments. It's nice for development. Dummy caching (for development)¶ CACHE_BACKEND like so: CACHE_BACKEND = 'dummy:///' Using a custom cache backend¶ While Django includes support for a number of cache backends out-of-the-box, sometimes you might want to use a customized cache backend. To use an external cache backend with Django, use a Python import path as the scheme portion (the part before the initial colon) of the CACHE_BACKEND URI, like so: CACHE. CACHE_BACKEND arguments¶ Each cache backend may take arguments. They're given in query-string style on the CACHE_BACKEND setting. Valid arguments are as follows: timeout: The default timeout, in seconds, to use for the cache. This argument defaults to 300 seconds (5 minutes). max_entries: For the locmem, filesystem and database backends, the maximum number of entries allowed in the cache before old values are deleted. This argument defaults to 300. cull_frequency: The fraction = "locmem:///?timeout=30&max_entries=400" Invalid arguments are silently ignored, as are invalid values of known arguments. 
The per-site cache¶_CLASSES setting, as in this example: MIDDLEWARE_CLASSES = ( _CLASSES below if you'd like the full story. Then, add the following required settings to your Django settings.. Additionally, the cache middle.. The per-view cache¶ def my_view(request): ... my_view = cache_page(my_view, 60 * 15) Or, using Python 2.4's decorator syntax: = ('', (r'^foo/(\d{1,2})/$', my_view), ) then requests to /foo/1/ and /foo/23/ will be cached separately, as you may expect. But once a particular URL (e.g., /foo/23/) has been requested, subsequent requests to that URL will use the cache. Specifying per-view cache in the URLconf¶ = ('', (r'^foo/(\d{1,2})/$', my_view), ) Here's the same thing, with my_view wrapped in cache_page: from django.views.decorators.cache import cache_page urlpatterns = ('', (r'^foo/(\d{1,2})/$', cache_page(my_view, 60 * 15)), ) If you take this approach, don't forget to import cache_page within your URLconf. Template fragment caching¶. For example: {%. The low-level cache API¶.) The cache module, django.core.cache, has} Finally, you can delete keys explicitly with delete(). This is an easy way of clearing the cache for a particular object: >>> cache.delete('a'). Upstream caches¶. - Your Django Web site number of HTTP headers exist to instruct upstream caches to differ their cache contents depending on designated variables, and to tell caching mechanisms not to cache particular pages. We'll look at some of these headers in the sections that follow. Using Vary headers path (e.g., "/stories/2005/jun/23/bank_robbed/"). vary_on_headers view decorator, like so: from django.views.decorators.vary import vary_on_headers # Python 2.3 syntax. def my_view(request): # ... my_view = vary_on_headers(my_view, 'User-Agent') # Python 2.4+ decorator syntax. upstream.utils.cache import patch_vary_headers def my_view(request): # ... response = render_to_response('template_name', context) patch_vary_headers(response, ['Cookie']) return response patch_vary_headers takes an HttpResponse instance as its first argument and a list/tuple of case-insensitive header names as its second argument. For more on Vary headers, see the official Vary spec. Controlling cache: Using other headers¶ Other problems with caching are the privacy of data and the question of where data should be stored in a cascade of caches. A user usually faces two kinds of caches: his or her own browser cache (a private cache) and his or her., 3,600¶ Django comes with a few other pieces of middleware that can help optimize your apps' performance: - django.middleware.http.ConditionalGetMiddleware adds support for modern browsers to conditionally GET responses based on the ETag and Last-Modified headers. - django.middleware.gzip.GZipMiddleware compresses responses for all moderns browsers, saving bandwidth and transfer time. Order of MIDDLEWARE_CLASSES¶ If you use caching middleware, it's important to put each half in the right place within the MIDDLEWARE_CLASSES adds Cookie - GZipMiddleware adds Accept-Encoding - LocaleMiddleware adds.
http://docs.djangoproject.com/en/dev/topics/cache/
crawl-002
refinedweb
1,027
58.18
> On July 1, 2015, 1:59 p.m., Stephan Erb wrote: > > For sake of transparency: Turns out not everyone thinks this is a great > > idea. For details, see: > > > > Bill Farner wrote: > Would it be reasonable to impose namespacing of labels, and expect the > downstream consumer to use globbing if they want to consume data that spans > namespaces? Yeah, I guess namespacing should solve the issue. Unless of course people start to use way too many labels, but I guess Mesos/Aurora will run into problems long before the monitoring solution does. - Stephan ----------------------------------------------------------- This is an automatically generated e-mail. To reply, visit: ----------------------------------------------------------- On June 30, 2015, 9:36 p.m., Stephan Erb wrote: > > ----------------------------------------------------------- > This is an automatically generated e-mail. To reply, visit: > > -----------------------------------------------------------
https://www.mail-archive.com/reviews@aurora.apache.org/msg01300.html
CC-MAIN-2017-17
refinedweb
186
51.55
I have read from here and here, and made multiple spiders running in the same process work. However, I don't know how to design a signal system to stop the reactor when all spiders are finished. My code is quite similar to the following. After all the crawlers stop, the reactor is still running. If I add the statement crawler.signals.connect(reactor.stop, signal=signals.spider_closed) to the setup_crawler function, the reactor stops when the first crawler closes. Can anybody show me how to make the reactor stop when all the crawlers are finished?

What I usually do, in PySide (I use QNetworkAccessManager and many self-created workers for scraping), is to maintain a counter of how many workers have finished processing work from the queue. When this counter reaches the number of created workers, a signal is triggered to indicate that there is no more work to do and the application can do something else (like enabling an "export" button so the user can export its result to a file, etc). Of course, this counter has to be inside a method and has to be called when a signal is emitted by the crawler/spider/worker. It might not be an elegant way of fixing your problem, but, have you tried this anyway?

Further to shackra's answer, taking that route does work. You can create the signal receiver as a closure which retains state, which means that it keeps a record of the number of spiders that have completed. Your code should know how many spiders you are running, so it should be a simple matter of checking when all have run, and then running reactor.stop(). e.g.

Link the signal receiver to your crawler:

crawler.signals.connect(spider_finished, signal=signals.spider_closed)

Create the signal receiver:

def spider_finished_count():
    spider_finished_count.count = 0

    def inc_count(spider, reason):
        spider_finished_count.count += 1
        if spider_finished_count.count == NUMBER_OF_SPIDERS:
            reactor.stop()

    return inc_count

spider_finished = spider_finished_count()

NUMBER_OF_SPIDERS being the total number of spiders you are running in this process. Or you could do it the other way around and count down from the number of spiders running to 0. Or more complex solutions could involve keeping a dict of which spiders have and have not completed etc. NB: inc_count gets sent spider and reason which we do not use in this example but you may wish to use those variables: they are sent from the signal dispatcher and are the spider which closed and the reason (str) for it closing. Scrapy version: v0.24.5
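The answer's last suggestion (keeping a dict of which spiders have and have not completed) is easy to sketch as well. The helper name and the hard-coded spider names below are my own choices, not part of the original answer, and it assumes reactor and signals are imported exactly as in the question:

def make_spider_tracker(spider_names):
    remaining = {name: True for name in spider_names}  # True = still running

    def on_spider_closed(spider, reason):
        remaining[spider.name] = False
        if not any(remaining.values()):  # every spider has closed
            reactor.stop()

    return on_spider_closed

spider_finished = make_spider_tracker(['spider_a', 'spider_b'])
# connect it exactly like the counter version:
# crawler.signals.connect(spider_finished, signal=signals.spider_closed)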
http://www.dlxedu.com/askdetail/3/c935c86e339cd226b5e7f300d623f161.html
CC-MAIN-2018-47
refinedweb
419
60.85
I have written the following code:

class FigureOut:
    first_name = None
    last_name = None

    def setName(self, name):
        fullname = name.split()
        self.first_name = fullname[0]
        self.last_name = fullname[1]

    def getName(self):
        return self.first_name, self.last_name

f = FigureOut()
f.setName("Allen Solly")
name = f.getName()
print(name)

('Allen', 'Solly')

Python will return a tuple in this case since the return specifies comma separated values. Multiple values can only be returned inside containers. You can look at the byte code generated from a function returning values like yours by using dis.dis. For comma separated values, it looks like:

def foo(a, b):
    return a, b

import dis
dis.dis(foo)

  2           0 LOAD_FAST                0 (a)
              3 LOAD_FAST                1 (b)
              6 BUILD_TUPLE              2
              9 RETURN_VALUE

As you can see the values are first loaded on the stack and then a BUILD_TUPLE (grabbing the previous 2 elements placed on the stack) is generated. Python knows to create a tuple due to the commas being present. You could alternatively specify another return type, for example a list; for this case a BUILD_LIST is going to be issued following the same semantics as its tuple equivalent:

def foo(a, b):
    return [a, b]

import dis
dis.dis(foo)

  2           0 LOAD_FAST                0 (a)
              3 LOAD_FAST                1 (b)
              6 BUILD_LIST               2
              9 RETURN_VALUE

To summarize, one actual object is returned; if that object is of a container type, it in essence can contain multiple values, giving the impression of multiple results.
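One thing the answer implies but never shows is that the returned tuple can be unpacked directly, and that a named container can make the call site clearer. A small sketch; the Name type and get_name helper are my own additions, not from the original question, and f is the FigureOut instance from the code above:

from collections import namedtuple

first, last = f.getName()   # tuple unpacking of the two returned values
print(first, last)          # Allen Solly

Name = namedtuple("Name", ["first", "last"])

def get_name(fullname):
    parts = fullname.split()
    return Name(first=parts[0], last=parts[1])  # still one object, but with named fields

n = get_name("Allen Solly")
print(n.first, n.last)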
https://codedump.io/share/VWLuu1zMwzQh/1/how-does-python-return-multiple-values-from-a-function
CC-MAIN-2017-17
refinedweb
246
53.92
05 December 2017 0 comments Python, Web development, PostgreSQL The documentation about how to use synonyms in Elasticsearch is good but because it's such an advanced topic, even if you read the documentation carefully, you're still left with lots of questions. Let me show you some things I've learned about how to use synonyms in Python with elasticsearch-dsl. I'm originally from Sweden but moved to London, UK in 1999 and started blogging a few years after. So I wrote most of my English with British English spelling. E.g. "centre" instead of "center". Later I moved to California in the US and slowly started to change my own English over to American English. I kept blogging but now I would prefer to write "center" instead of "centre". Another example... Certain technical words or namings are tricky. For example, is it "go" or is it "golang"? Is it "React" or is it "ReactJS"? Is it "PostgreSQL" or "Postgres". I never know. Not only is it sometimes hard to know which is right because people use them differently, but also sometimes "brands" like that change over time since inception, the creator might have preferred something but the masses of people call it something else. So with all that in mind, not only has the nature of my documents (my blog post texts) changed in terminology over the years. My visitors are also coming both from British English and American English. Or, suppose that I knew the perfect way to phrase that relational database that starts with "Postg...". Even if my text is always spelled one particular way, perfectly, my visitors will most likely refer to it as "postgres" sometimes and "postgresql" sometimes. The simple solution, match all! Let's jump straight into the code. People who have used elasticsearch_dsl should be familiar with most of this: from elasticsearch_dsl import ( DocType, Text, Index, analyzer, Keyword, token_filter, ) from django.conf import settings index = Index(settings.ES_INDEX) index.settings(**settings.ES_INDEX_SETTINGS) synonym_tokenfilter = token_filter( 'synonym_tokenfilter', 'synonym', synonyms=[ 'reactjs, react', # <-- important ], ) text_analyzer = analyzer( 'text_analyzer', tokenizer='standard', filter=[ # The ORDER is important here. 'standard', 'lowercase', 'stop', synonym_tokenfilter, # Note! 'snowball' comes after 'synonym_tokenfilter' 'snowball', ], char_filter=['html_strip'] ) class BlogItemDoc(DocType): oid = Keyword(required=True) title = Text( required=True, analyzer=text_analyzer ) text = Text(analyzer=text_analyzer) index.doc_type(BlogItemDoc) This code above is copied from the "real code" but a lot of distracting things that aren't important to the point, have been removed. The magic sauce here is that you create a token_filter and you can call it whatever you want. I called mine synonym_tokenfilter and that's also what the instance variable is called. Notice the list of synonyms. It's a plain list of strings. Specifically, it's a list of 1 string reactjs, react. Let's see how Elasticsearch analyzes this: First with the text react. $ curl -XGET '' { "tokens" : [ { "token" : "react", "start_offset" : 0, "end_offset" : 5, "type" : "<ALPHANUM>", "position" : 0 }, { "token" : "reactj", "start_offset" : 0, "end_offset" : 5, "type" : "SYNONYM", "position" : 0 } ] } Note that the analyzer snowball, converted reactjs to reactj which is wrong in a sense, because there's not plural "reacts", but it ultimately doesn't matter much. At least not in this particular case. 
Secondly, analyze it with the text reactjs: $ curl -XGET '' { "tokens" : [ { "token" : "reactj", "start_offset" : 0, "end_offset" : 7, "type" : "<ALPHANUM>", "position" : 0 }, { "token" : "react", "start_offset" : 0, "end_offset" : 7, "type" : "SYNONYM", "position" : 0 } ] } Same tokens! Just different order. Now, the real proof is in actually doing a search on this. Look at these two screenshots: It worked! Different ways of phrasing your search but ultimately found all the documents that matched independent of different people or different authors might prefer to spell it. Try it for yourself: how it would look like before, when synonyms for postgres and postgresql had not been set up yet: One immediate thought I have is what a mess I've been in blogging about that database. Clearly I struggled to pick one way to spell it consistently. And here's what it would look like once that synonym has been set up: Go is a programming language. That term, too, struggles with a name ambiguity. Granted, I rarely hear people say "golang", but it's definitely a written word that turns up a lot. The problem with setting up a synonym for go == golang is that "go" is common English word. It's also the stem of the word "going" and such. So if you set up a synonym, like I did for react and reactjs above, this is what happens: This is now the exact search results as if I had searched for go. But look what it matched! It matched "Go" (good) but also "Going real simple..." (bad) and "...I should go" (bad). If someone searches for the simple term "go" they probably intend to search for the Go programming language. All that snowball stemming is critical for a bunch of other non-computer-term searches so we can't remove the stemming. The solution is to use what's called "Simple Contraction". And it looks like this: all_synonyms = [ 'go => golang', 'react => reactjs', 'postgres => postgresql', ] That basically means that a search for go is a search for golang. And a document that uses the word go (alone) is indexed as golang. What happens is that the word go gets converted to golang which doesn't get stemming converted down to any other forms. However, this is no silver bullet. Any search for the term go is ultimately a search for the word golang and the regular English word go. So the benefit of all of this was that we got rid of search results matching on going and gone. The case for go is similar to the case for react. Both of these words are nouns but they're also verbs. Should people find "reacting to events" when they search for "react"? If so, use react, reactjs in the synonyms list. Should people only find documents related to noun "React" when they search for "event handing in react" ? If so, use react => reactjs in the synonyms list. It's up to you and your documents and what your users tend to search for. AVKO.org publishes a list of all British to American English synonyms. You can download the whole list here. Unfortunately I can't find a license for this file but the compiled synonyms file is part of this repo which is licensed under MIT. I download this list and keep it in the repo. 
Then when setting up the analyzer and token filters I load it in like this: synonyms_root = os.path.join( settings.BASE_DIR, 'peterbecom/es-synonyms' ) american_british_syns_fn = os.path.join( synonyms_root, 'be-ae.synonyms' ) with open(american_british_syns_fn) as f: for line in f: if ( '=>' not in line or line.strip().startswith('#') ): continue all_synonyms.append(line.strip()) Now I can finally enjoy not having to worry about the fact that sometimes I spell it "license" and sometimes I spell it "licence". It's all the same now. Brits and Americans, rejoice on common ground! I don't have a big problem with this on my techy blog, but you can also use the Simple Contraction technique to list unambiguously bad spellings. Add dont => don't to the list of synonyms and a search for dont is a search for don't. Last but not least, the official Elasticsearch documentation is the place to go. This blog post hopefully phrases it in more approachable terms. Especially for Python peeps.
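To tie the pieces together, here is a minimal sketch (not the author's exact code) of feeding the combined all_synonyms list — the Simple Contraction entries plus the lines read from be-ae.synonyms above — into the same token filter and analyzer shown earlier; the names mirror the snippets in this post:

from elasticsearch_dsl import analyzer, token_filter

# all_synonyms was built above: contraction rules plus the be-ae.synonyms lines
synonym_tokenfilter = token_filter(
    'synonym_tokenfilter',
    'synonym',
    synonyms=all_synonyms,   # instead of the hard-coded ['reactjs, react']
)

text_analyzer = analyzer(
    'text_analyzer',
    tokenizer='standard',
    filter=[
        'standard',
        'lowercase',
        'stop',
        synonym_tokenfilter,
        'snowball',
    ],
    char_filter=['html_strip'],
)

# After changing analyzers, re-create the index (or close it and update its
# settings) and re-index the documents so existing content picks up the new synonyms.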
https://www.peterbe.com/plog/synonyms-with-elasticsearch-dsl
CC-MAIN-2020-40
refinedweb
1,252
65.42
If you want to know whether a material is used by any objects in the scene, you can check the UsedBy property. Here’s a Python snippet that finds unused materials in the current material library: from siutils import si if Application.Version().split('.')[0]>= "11": si = si() # win32com.client.Dispatch('XSI.Application') from siutils import log # LogMessage from siutils import disp # win32com.client.Dispatch from siutils import C # win32com.client.constants matlib = si.ActiveProject.ActiveScene.ActiveMaterialLibrary for mat in matlib.Items: if mat.UsedBy.Count == 0: log( '%s <Not used>' % mat.Name ) Hi Steve, how are you? Just one question. Is there an easy way to make this script check if materials are being used in other passes? As far as I see it only checks current pass… Thank you! Easy? I guess easy enough, but kinda ugly…I think you would need to cycle through all the passes, making each one current and then checking the UsedBy count. Or loop over the paritions and see which ones have a material??? Thanks for pointing that out…I forgot about how some things “disappear” when you change the current pass. I noticed the Material Manager Ok, thank you!! Thant’s just what I was trying. Cycling though the passes. I thought there could be a “hidden” bit in the sdk to find if it’s being used in any other pass. Cheers! Sorry to buzz you again… but I’ve notices that when you try to delete a material that’s not being used in the current pass but it is used in any other pass it alerts you saying you’re going to delete a shared property, so… SI knows about it. Hmmm… There must be a way to access that info. Don’t you think? 😉 Maybe you could check the Owners, to see if there is a partition owner? I notice that Owners.Count = 2, even for a material applied to a partition in the non-current pass. You got it! It seems it always has an Owner, which is the Material Library it belongs to, but if it has more than one, those are the partitions (even if they’re empty), groups, etc. where it’s been applied. Thank you!
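Following where this thread ended up, here is a rough, untested sketch (same siutils imports as the snippet above) that flags materials not applied anywhere in any pass, based on the behaviour described here: the material library is always the first owner, and any additional owners are partitions, groups, or objects the material has been applied to.

matlib = si.ActiveProject.ActiveScene.ActiveMaterialLibrary
for mat in matlib.Items:
    # one owner == only the material library itself, i.e. unused in every pass
    if mat.Owners.Count <= 1:
        log('%s <Not used in any pass>' % mat.Name)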
https://xsisupport.com/2012/05/31/checking-if-a-material-is-being-used-by-somebody-anybody/
CC-MAIN-2019-09
refinedweb
373
76.42
I? Well after writing that little diatribe I was able to create a fairly nice piece of code to help me do what I want. It's called: for_each_cursor. # # This provides a way to run a function on all the cursors, one after another. This maintains # all the cursors and then calls the function with one cursor at a time, with the view's # selection state set to just that one cursor. So any calls to run_command within the function # will operate on only that one cursor. # # The called function is supposed to return a new cursor position or None, in which case value # is taken from the view itself. # # After the function is run on all the cursors, the view's multi-cursor state is restored with # new values for the cursor. # def for_each_cursor(self, function, *args, **kwargs): view = self.view selection = view.sel() # copy cursors into proper regions which sublime will manage while we potentially edit the # buffer and cause things to move around key = "tmp_cursors" cursors = [c for c in selection] view.add_regions(key, cursors, "tmp", "", sublime.HIDDEN) # run the command passing in each cursor and collecting the returned cursor for i in range(len(cursors)): selection.clear() regions = view.get_regions(key) cursor = regions* selection.add(cursor) cursor = function(cursor, *args, **kwargs) if cursor is not None: # update the cursor in its slot regions* = cursor view.add_regions(key, regions, "tmp", "", sublime.HIDDEN) # restore the cursors selection.clear() selection.add_all(view.get_regions(key)) view.erase_regions(key) ** Looks cool! Keep in mind that messing with the cursors breaks cmd+u (soft undo) functionality though. Re: the other questions.memberlist.php?mode=viewprofile&u=2 jps hasn't visited the site in almost a month. I would say "no", "community", and "no". It blows my mind how many people have decided that Jon has checked out. Can a man not go on holiday? Yeah, he definitely can! I think a lot of people would just really appreciate maybe a one-line message about what's going on. You know I like Sublime Text as much as anyone. I started back before Sublime Text X even, and I've written my fair share of plugins (though nothing like Package Control - thanks for that!). But along with that, I also remember when Jon seemed to be fully into the project, and it hasn't seemed that way in a long time. Even if he just posted this: "Guys, I'm really burned out. I'll be back full steam in six months." people wouldn't have to wonder anymore. I think Jon is still working on ST. I think he will probably post something if he decides that he is ever done entirely with ST. I'm not really worried in that sense. I think the general lack of "official" technical support and Jon's general distance from the community are to blame for people's worries. I feel Jon doesn't have an affinity for or doesn't have time to interface with his community much, and that just makes people nervous. I have always felt if he had an official spokes person in the forums that was in the know of the progress of sublime, could hand out official statements, support, post teasers of what's to come, etc. it would help a lot. I think more so in these recent times, there is an expectation of more lively interaction with companies we buy products from. We expect twitter feeds, facebook statuses, etc. I know that I personally would not care to do all of that, but if I had a company, I would probably have someone who likes all of that doing it for me. 
I personally use products that only see an update once every 3 to 4 months...if your lucky. I don't think Jon having a span of 2 months with no release is a big deal. I think if Jon just interacted with the community more, there would be less flack about long times in between dev releases. I think that is the real problem, not that sublime is still alive or dead, people just want the appearance of it being alive...whether it is or not . I think in this case it also has a lot to do with the expectation created when the project first started. We used to get new builds almost daily; now we're at two months and counting. Again, I think if Jon just said something like "Hey, I'm moving to a longer-term release process. Look for builds every few/every six months, rather than the old nightly frequency", then people would be less worried.
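For anyone who wants to try the for_each_cursor helper posted earlier in this thread, here is a minimal usage sketch. It assumes the helper is defined on (or mixed into) the same sublime_plugin.TextCommand subclass; the command name and per-cursor callback are made up for illustration.

import sublime_plugin

class UppercaseWordAtCursorsCommand(sublime_plugin.TextCommand):
    # for_each_cursor from the post above is assumed to be available on self
    def run(self, edit):
        self.for_each_cursor(self.uppercase_word)

    def uppercase_word(self, cursor):
        # only this cursor is selected, so built-in commands act on it alone
        self.view.run_command('expand_selection', {'to': 'word'})
        self.view.run_command('upper_case')
        return None  # let for_each_cursor read the updated cursor back from the view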
https://forum.sublimetext.com/t/run-command-vs-internal-api-and-the-future-of-sublime/12654/1
CC-MAIN-2016-44
refinedweb
779
74.29
Hi, On Wed, Sep 10, 2008 at 1:35 PM, JonY <10walls at gmail.com> wrote: > Ramiro Polla wrote: [...] >>> Index: libavdevice/vfwcap.c >>> =================================================================== >>> --- libavdevice/vfwcap.c (revision 15290) >>> +++ libavdevice/vfwcap.c (working copy) >>> @@ -19,6 +19,10 @@ >>> * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA >>> */ >>> >>> +#if !defined (WINVER) || (WINVER< 0x0500) >>> +#warning vfwcap requires Windows 2000/XP >>> +#define WINVER 0x0500 >>> +#endif >> >> I think there are other parts of FFmpeg that expect Windows to be >> 2000/XP already (GetProcessTimes() IIRC). Besides this shouldn't be in >> compile-time since FFmpeg specifies all the CFLAGS to be used. >> >> I haven't tested with Windows 98 for over a year now, and I don't think >> anyone cares about it anymore... I don't even know if it will refuse to >> run because of missing functions or just not work properly. > > Good point, I haven't heard of any FFmpeg users still on 9X systems. > Should configure add -DWINVER=0x0500 for MinGW builds? I tested the executable on Windows 98. It runs ok, but -benchmark prints out random numbers and vfwcap doesn't work. I'm thinking maybe -DWINVER=0x0500 should be default for MinGW, and there should be a --target-os=win98 which just sets it appropriately. This way GetProcessTimes() isn't found (and the av_gettime() fallback is used), and there should either be a check for HWND_MESSAGE (which would disable vfwcap) or the implementation could create and use an invisible window instead of using HWND_MESSAGE (only under win98 though). Either way I'm not interested in implementing any of the above ATM. Ramiro Polla
http://ffmpeg.org/pipermail/ffmpeg-devel/2008-September/058168.html
CC-MAIN-2015-32
refinedweb
266
73.27
In a previous post, I provided a template in Groovy that would allow NiFi users to port their ExecuteScript Groovy scripts into the faster InvokeScriptedProcessor (ISP) processor. ISP is faster than ExecuteScript because the script is only reloaded when the code or other config changes, versus ExecuteScript which evaluates the script each time the processor is invoked. Since that post, I've gotten a couple of requests (such as this one) for an ISP template written in Jython, so users that have ExecuteScript processors using Jython scripts can benefit from the ISP performance gains. Ask and ye shall receive :) The following Jython script is meant to be pasted into an InvokeScriptedProcessor's Script Body property, and there is a comment indicating where to add imports and the ExecuteScript code: #//////////////////////////////////////////////////////////// #// imports go here #//////////////////////////////////////////////////////////// from org.apache.nifi.processor import Processor,Relationship from java.lang import Throwable class E(): def __init__(self): pass def executeScript(self,session, context, log, REL_SUCCESS, REL_FAILURE): #//////////////////////////////////////////////////////////// #// Replace 'pass' with your code #//////////////////////////////////////////////////////////// pass #end class class JythonProcessor(Processor): REL_SUCCESS = Relationship.Builder().name("success").description('FlowFiles that were successfully processed are routed here').build() REL_FAILURE = Relationship.Builder().name("failure").description('FlowFiles that were not successfully processed are routed here').build() log = None e = E() def initialize(self,context): self.log = context.logger def getRelationships(self): return set([self.REL_SUCCESS, self.REL_FAILURE]) def validate(self,context): pass def onPropertyModified(self,descriptor, oldValue, newValue): pass def getPropertyDescriptors(self): return [] def getIdentifier(self): return None def onTrigger(self,context, sessionFactory): session = sessionFactory.createSession() try: self.e.executeScript(session, context, self.log, self.REL_SUCCESS, self.REL_FAILURE) session.commit() except Throwable, t: self.log.error('{} failed to process due to {}; rolling back session', [self, t]) session.rollback(true) raise t #end class processor = JythonProcessor() Like the Groovy version, you just need to add your imports to the top of the file, and paste your ExecuteScript Jython code into the executeScript() method, replacing the "pass" line. As always, please let me know how/if this works for you, and if you have any comments, questions, or suggestions. The script is available as a Gist also. Cheers! Hi Matt. Thank you very much for the code, I will try it in different ways and I will give you good feedback. Thanks for your time. Hello good morning Matt. After several adaptations were made to the Jython code, it was inserted into your template. As a result, data can be processed faster and bottlenecks avoided. Really thank you very much for your help, your template is clear and effective. Greetings awesome, thanks Matt! Hi Matt, thanks for the helpful posts. I got a question. I add a global variable to my python file which counts how many times the functions is used for specific method (simply let's say a counter). When the InvokeScriptedProcessor is stopped, I expect it to write the final value of the counter to a file for me. But it appears always to be empty outside of onTrigger method! 
when I print it inside onTrigger, it shows the counter, but outside of the class it's always empty. And the point is that I want only the final result that's why I want it when the processor is stopped. I tried to add the counter as an attribute of the class, and update it each time by the global variable. but again it appears empty. I don't know what is the good way to return such data from onTrigger when the InvokeScriptedProcessor is stopped? I don't think the script engine keeps track of global variables, so you would want it to be a member of the class that implements the Processor interface (i.e. the one with onTrigger). Note that when the script is reloaded (if any properties change on the main dialog, including the code itself), a new instance of the class will be created, so the member variable holding the count will be re-initialized. If you need to keep track of data in between runs of onTrigger, you can also use the processor's State Management capabilities, check out part 3 of my ExecuteScript Cookbook (the same technique applies to InvokeScriptedProcessor):
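For completeness, here is a rough Jython sketch of the state-manager approach the cookbook describes — persisting the counter in processor state instead of a plain member variable, so it survives script reloads. Treat the exact calls as an assumption to verify against your NiFi version; this is not code from the template above.

from java.util import HashMap
from org.apache.nifi.components.state import Scope

# Inside executeScript(self, session, context, log, REL_SUCCESS, REL_FAILURE):
state_manager = context.getStateManager()
old_state = state_manager.getState(Scope.LOCAL)
count = int(old_state.get('count') or 0)

# ... handle the flow file(s), then bump the counter ...
count += 1

new_state = HashMap()
new_state.put('count', str(count))
state_manager.setState(new_state, Scope.LOCAL)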
https://funnifi.blogspot.com/2017/11/invokescriptedprocessor-template.html
CC-MAIN-2018-22
refinedweb
694
55.34
leetcode 1606. Find Servers That Handled Most Number of Requests (python) 2022-02-01 08:09:48 【Wang Daya】 「This is day 24 of my participation in the November writing challenge; event details: 2021 Last Writing Challenge」 Description: You have k servers, numbered 0 to k-1, that handle many requests at the same time. Each server has unlimited computing power but cannot handle more than one request at a time. Requests are assigned to servers as follows: if all servers are busy, the request is dropped (never handled); otherwise, if the (i % k)th server is available, assign the request to that server; otherwise, assign the request to the next available server (wrapping around the list of servers and starting from 0 if necessary). For example, if the ith server is busy, try to assign the request to the (i+1)th server, then the (i+2)th server, and so on. Example 3: Input: k = 3, arrival = [1,2,3], load = [10,12,11] Output: [0,1,2] Explanation: Each server handles a single request, so they are all considered the busiest. Example 4: Input: k = 3, arrival = [1,2,3,4,8,9,10], load = [5,2,10,3,1,2,2] Output: [1] Example 5: Input: k = 1, arrival = [1], load = [1] Output: [0] Note: 1 <= k <= 10^5 1 <= arrival.length, load.length <= 10^5 arrival.length == load.length 1 <= arrival[i], load[i] <= 10^9 arrival is strictly increasing. Analysis: There are k servers, numbered 0 to k-1, handling many requests concurrently. Every server has unlimited computing power but can only process one request at a time. The ith request (indexed from 0) is assigned by the rules above: if every server is busy it is dropped; if the (i % k)th server is free it takes the request; otherwise the request goes to the next available server, wrapping around to 0 when needed — for example, if the ith server is busy, try the (i+1)th, then the (i+2)th, and so on. We are given a strictly increasing array of positive integers arrival, where arrival[i] is the arrival time of the ith request, and another array load, where load[i] is the load of the ith request (the time it needs to complete). The goal is to find the busiest servers: a server is among the busiest if it successfully handled the most requests of all servers. Return a list of the IDs of the busiest servers, in any order. The statement is long, but once understood the approach is fairly simple. The main idea is to maintain two lists: free, the idle servers, and buzy, the servers currently executing a request. For every new request, if free is empty the request is discarded; otherwise look for an idle server starting from index (i % k), and if none is found there, wrap around and search again from 0. Two details matter for performance: initialize free and buzy with classes that keep themselves sorted, and use built-in functions (or binary search) to look up an idle server in free. If these two steps are not handled well it is easy to exceed the time limit — my solution only just passes and is still quite slow.
Answer

import bisect
from sortedcontainers import SortedList

class Solution(object):
    def busiestServers(self, k, arrival, load):
        """
        :type k: int
        :type arrival: List[int]
        :type load: List[int]
        :rtype: List[int]
        """
        free = SortedList([i for i in range(k)])
        buzy = SortedList([], key=lambda x: -x[1])
        count = {i: 0 for i in range(k)}
        for i, start in enumerate(arrival):
            while buzy and buzy[-1][1] <= start:
                pair = buzy.pop()
                free.add(pair[0])
            if not free:
                continue
            id = self.findServer(free, i % k)
            count[id] += 1
            free.remove(id)
            buzy.add([id, start + load[i]])
        result = []
        MAX = max(count.values())
        for k, v in count.items():
            if v == MAX:
                result.append(k)
        return result

    def findServer(self, free, id):
        idx = bisect.bisect_right(free, id - 1)
        if idx != len(free):
            return free[idx]
        return free[0]

Running results

Runtime: 5120 ms, faster than 10.00% of Python online submissions for Find Servers That Handled Most Number of Requests.
Memory Usage: 38.6 MB, less than 50.00% of Python online submissions for Find Servers That Handled Most Number of Requests.

Original link: leetcode.com/problems/fi…
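A quick local sanity check against the examples quoted above (this assumes the sortedcontainers package is installed, e.g. via pip install sortedcontainers):

if __name__ == '__main__':
    s = Solution()
    print(s.busiestServers(3, [1, 2, 3], [10, 12, 11]))                          # [0, 1, 2]
    print(s.busiestServers(3, [1, 2, 3, 4, 8, 9, 10], [5, 2, 10, 3, 1, 2, 2]))   # [1]
    print(s.busiestServers(1, [1], [1]))                                         # [0]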
https://en.pythonmana.com/2022/02/202202010809464446.html
CC-MAIN-2022-27
refinedweb
783
59.03
On Tue, 24 Apr 2001, giacomo wrote: > > > On Tue, 24 Apr 2001, Berin Loritsch wrote: > > > Donald Ball wrote: > > > > > > hey guys. it seems that when you load a aggregate part: > > > > > > <map:part > > > > > > that the namespace specified becomes the default namespace for the > > > included document, not just for the newly created element. that can break > > > stylesheets. i'd say that the namespace should only apply to the element. > > > others? > > > >. > > Well, I don't know how to do that at the moment but feel free to > implement it if you know how :/ . Sorry, it will be simple if we add a attribute "prefix" to the map:part element, right (will add it soon)? BTW: Should we rename the "ns" attribute to "namespace"? Giacomo --------------------------------------------------------------------- To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org For additional commands, email: cocoon-dev-help@xml.apache.org
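To make the proposal concrete, a hypothetical sitemap fragment (attribute names as discussed in this thread, not taken from the actual Cocoon source) might look like this, with ns setting the namespace of the wrapping element only and the new prefix attribute controlling the prefix used for it:

<map:aggregate element="page">
  <!-- hypothetical: the prefix attribute proposed above -->
  <map:part src="cocoon:/news" element="news"
            ns="http://example.org/2001/news" prefix="n"/>
</map:aggregate>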
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200104.mbox/%3CPine.LNX.4.31.0104242258210.1846-100000@lap1.otego.com%3E
CC-MAIN-2014-42
refinedweb
140
65.83
Make own component from Quasar component Hello, Maybe my question is not really Quasar specific but I did not find any answer till now. I want to make my own component from Quasar components. For example I have this in my vue file: <q-select filter <q-select filter As you can see I use two q-select component with different arrays but with the same class definitions (filter and filter-placeholder). I would like to make a new component where these two class style added and I would like to use the new component in my vue like this: <m-select : <m-select : So I would write only the difference. I tried everything: mixins and extends as well but nothing works. This is my “almost” solution: <template> <q-select filter filter-placeholder=“select”/> </template> <script> import { QSelect } from ‘quasar’ export default { name: ‘m-select’, components: { QSelect }, mixins: [QSelect] } </script> But this is not working because I have warnings that I need to use required props “value” and “options”. What is the solution? Is it possible in Vue.js? Thanks - spectrolite I’m sure this is not only possible but very powerful. My way of doing this is quite messy so I’m joining my voice to yours, hoping that @rstoenescu will find the time to share a preferred/optimal way of extending quasar locally. IMHO this will also help foster more component PRs. - benoitranque Some kind of @extendsingle file component would be ideal. Pretty sure Raz already does this internally, so its a matter of learning how it is done… The q-side-link is an extension of q-item. From the source, looks like Raz uses Vue mixins. This may be what you are looking for - s.molinari Yeah, Quasar uses mixins quite a bit. Scott I am in need of making a pin pad component for easy entry of user pins. Seems like it would make use of a combination of existing Quasar components. Maybe then some tutorial on using the mixins and/or making deriving “quasar” components from exiting ones. I agree if with @spectrolite about the commmunity contributing compenents. As the library of Q components grows Quasar will gain more momentum and thus be able to attract more funding. It’s a win for @rstoenescu to show teach us how to fish. As far as I know, there is no way in Vue.js to really inherit from components, because Vue uses composition over inheritance. One option to write reusable components is mixins. A Mixin is basically a Vue component which is merged into your component. For example, you always want to print out foo barin a dozen of your components. Instead of manually writing created () => { console.log('foo bar') }in each component you could define a mixin: export const FooBarMixin { created: function () { console.log('foo bar') } } And in each of your components you would write the following: import FooBarMixin from 'FooBarMixin' export default { // Your normal component stuff mixins: [FooBarMixin] } Now your component is merged with the mixin and prints out foo bar. But you can’t use mixins to extend the functionality of a whole single file component. If you use a Quasar component as a mixin, all the internal methods and data would be merged into your component, but the template would not. So you had to manually copy the template from that Quasar component into your component, which would not make it update safe. But how do you achieve the outcome @losika asked for? The solution is simple. Here you do not want to extend a Quasar component, instead, you want to wrap it, to hide some of the implementation details. 
So let’s write such a wrapping component: <template> <q-select : </template> <script> import { QSelect } from 'quasar' export default { props: ['value', 'options'], methods: { handleChange (newVal) { this.$emit('input', newVal) } }, components: { QSelect } } </script> Note that we are passing valueto the QSelectcomponent and we are listening for the changeevent on the QSelect. This is because v-model="foo"is just syntactic sugar for :value="foo" @input="foo = $event.target.value". Also note, that in order to pass additional props that are defined on QInputto the inner component, each of them has to be explicitly defined on the wrapping components and passed to the QInput. So often when you just want to use Quasar components to build another component you are using composition and not inheritance. For example @dgk in your example, you do not need to change something on existing components, but you want to build a new component based on Quasar components. So let’s say we build up your pin pad based on QBtn: <template> <div> <div v- <div v- <q-btn @ {{ (row-1)*3 + col }} </q-btn> </div> </div> </div> </template> <script> import { QBtn } from 'quasar' export default { data () { return { pin: '' } }, methods: { handleClick (digit) { this.pin += digit } }, components: { QBtn } } </script> Hopefully, that clarifies a bit when to use mixins and when to just build your component on top of other components. If I am missing something please let me know :slight_smile: - spectrolite Thank you @spectrolite :slight_smile: If I have the time to, I can open up a new topic in the Show&Tell channel where I could post this and also elaborate on some other things like using Stylus variables. - spectrolite @a47ae I’m sure @rstoenescu would greatly appreciate it, and most of it would probably end up in the docs at some point. Go for it ! This is still work in progress and I will update this If I have time to, but this should be a good starting point: @rstoenescu I am really glad that all of you find this guide helpful and would love if this could be added as an official doc page. But I am on vacation until next week, so I do not have time to look into it, but maybe you could already take the existing post and next week I will update it and maybe rewrite some stuff. :slight_smile: Thanks :slight_smile: Will write you as soon as I am back! It works like a charm! Thank you very much. I have to correct myself. I works when the parent component declared as single file component (like QSelect, QBtn, etc. you can see in source code). But if the parent component created in js file (like QList) then it does not work for me. What works is the following: import { QList } from ‘quasar’ export default { name: ‘m-list’, mixins: [ QList ], props: { noBorder: { default: true }, separator: { default: true } } } It there a better solution for that? And another problem is with refs. a47ae’s solution does not work on QModal. First of all you have to copy methods without that it will complain on ‘open’ mehtods: <template> <q-modal noBackdropDismiss noEscDismiss v-bind: </template> <script> import { QModal } from ‘quasar’ export default { name: ‘m-modal’, components: { QModal }, methods: Object.assign({}, QModal.methods) } </script> But after that the problem is with $refs.content. The error is “Cannot set property ‘scrollTop’ of undefined”. You can see that in QModal’s setTimeout function there is a reference to this.$refs.content what returns undefined. Is there any way to copy refs?
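Since the template markup in the post above lost its attribute bindings when the page was archived, here is a minimal reconstruction of the wrapper idea as a sketch — not the author's exact code. The value/options props and the change-to-input re-emit follow the explanation above; the baked-in filter attributes come from the original question.

<template>
  <q-select filter filter-placeholder="select"
            :value="value" :options="options"
            @change="handleChange"/>
</template>

<script>
import { QSelect } from 'quasar'

export default {
  name: 'm-select',
  props: ['value', 'options'],
  components: { QSelect },
  methods: {
    handleChange (newVal) {
      // re-emit as 'input' so the wrapper itself works with v-model
      this.$emit('input', newVal)
    }
  }
}
</script>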
http://forum.quasar-framework.org/topic/673/make-own-component-from-quasar-component
CC-MAIN-2018-09
refinedweb
1,192
62.17
PnP IconPicker control Icon Picker control in the SPFx web part. This control will allow us to search and selects an icon from office-ui-fabric-react icons. Icon Picker Control import { IconPicker } from '@pnp/spfx-controls-react/lib/IconPicker'; Step – Add state property to hold selection icon Create Interface to Store State, add below code below last import statement export interface IControlsState { icon:any; } Modify component class as below export default class Controls extends React.Component<IControlsProps, IControlsState> { constructor(props: IControlsProps, state: IControlsState) { super(props); this.state = {icon: ""}; } public render(): React.ReactElement<IControlsProps> { .... .... .... Modify> <br></br> <IconPicker buttonLabel={'Select Office UI Fabric Icon'} onChange={(iconName: string) => { this.setState({icon: iconName}); }} onSave={(iconName: string) => { this.setState({icon: iconName}); }} /> <br></br> Selected Icon is: <br></br> <Icon style={{fontSize:'24px'}} iconName={this.state.icon} </div> ); } So if you look at above code, what we are doing is. - Added IconPicker control - On selection of icon from Icon picker control, set the state of selected icon name. - Display this icon by using Icon control of Office UI fabric and setting the iconName property from state object. For now, let us see this web part in action. Run gulp serve gulp serve Open workbench in context of any SharePoint online site. On Page load. Click on Select Office UI Fabric Icon. On selecting Icon and click save. Conclusion – This control can be very useful when we want to provide the end-user capability of selecting icons and using them for the user interface. Thanks for reading, hope you enjoyed it.
https://siddharthvaghasia.com/2020/03/24/pnp-iconpicker-control-in-spfx/
CC-MAIN-2020-40
refinedweb
256
50.43
Hi John, I added one line that shows the problem. If you > do a set_xticks() first, then the usual ([]) doesn't > blank them. Here's code that should suppress the xticks > for all but the bottom-most graphs in each column. It > works if you comment out the set_xticks line (HERE HERE > HERE). Hi Charles, As an aside, you'll be interested in these super secret undocumented methods of the Subplot class, which I implemented so I wouldn't have to do the if i % COLS == 1 tricks that I came to know and love in matlab. I too find myself making lots-o-subplots and blanking out the labels Subplot methods: def is_first_col(self): def is_first_row(self): def is_last_row(self): def is_last_col(self): which enables you to write your loop for i in range(1,NUMPLOTS+1): ax = subplot (ROWS, COLS, i) ax.set_xticks((0,1,2)) title('Simple ' + str(i)) if ax.is_first_col(): ax.set_ylabel('voltage (mV)') else: ax.set_yticklabels ([]) if ax.is_last_row(): ax.set_xlabel('time (s)') else: ax.set_xticklabels ([]) Now onto your problem. Thanks for the example script. I am not sure which version of matplotlib you are working with, but it appears there is a clear bug in Axis.set_ticklabels in the part which reads for s, label in zip(self._ticklabelStrings, self._ticklabels): label.set_text(s) label.update_properties(override) when len(self._ticklabelStrings) is less than len(self._ticklabels), the label text doesn't get updated. Duh! Something like this should work better. At least with matplotlib-0.32a it handles your example def set_ticklabels(self, ticklabels, *args, **kwargs): """ Set the text values of the tick labels. ticklabels is a sequence of strings. Return a list of AxisText instances """ ticklabels = ['%s'%l for l in ticklabels] self._ticklabelStrings = ticklabels override = {} override = backends._process_text_args(override, *args, **kwargs) Nnew = len(self._ticklabelStrings) existingLabels = self.get_ticklabels() for i, label in enumerate(existingLabels): if i<Nnew: label.set_text(self._ticklabelStrings[i]) else: label.set_text('') label.update_properties(override) return existingLabels > Is this comment better for -users or -devel? Or bug > tracking? I think users, since my answer includes a *possible* fix which others may find useful. Let me know how this works for you; I'll take a closer look on Thurs when I have some breathing room again. JDH
https://discourse.matplotlib.org/t/ticklabels/114
CC-MAIN-2021-43
refinedweb
380
61.02
The goal of my script is to get input from a user for how much time they want before shutdown and also the message they want to appear during the shutdown process. My problem is I cannot figure out exactly how to put the variables into the shutdown command and have it execute properly. import os time = (input("How much time till shutdown?")) message = input("What is your shutdown message?") shutdown = "shutdown /f /r /t", time "c", message os.system(shutdown) You need to assemble (by concatenating) the string shutdown so that it matches exactly what you want, including the quote marks around the comments. For this purpose it is good to use single quotes for the string literals used in the concatenation so that unescaped double quotes can be freely used inside the strings. Something like: time = input("How much time till shutdown? ") message = input("What is your shutdown message? ") shutdown = 'shutdown /f /r /t ' + time + ' /c "' + message +'"' print(shutdown) A typical run: How much time till shutdown? 60 What is your shutdown message? Goodbye shutdown /f /r /t 60 /c "Goodbye"
https://codedump.io/share/1cwKUaZWBvw5/1/using-a-variable-inside-the-shutdown-command
CC-MAIN-2017-43
refinedweb
183
73.47
#include <hallo.h> * David Weinehall [Fri, Aug 13 2004, 08:28:17AM]: > > I would say that even cosmetical changes are ok for an NMU, provided that the > > maintainer doesn't object to them (But I don't have interest in filing such > > bugs, let alone interest to NMU). > > Just for the record, Ryan has just uploaded v0.2.34 (the latest upstream > version, afaik) of esd, so at least I am very pleased... Please try Fine. After the gqview and esd now, this is almost suitable for the best-packaging reference guide... :( Regards, Eduard. -- "The quiet ones are the ones who change the universe. The loud ones only take the credit." L.M. ItB
https://lists.debian.org/debian-devel/2004/08/msg00819.html
CC-MAIN-2017-04
refinedweb
114
81.63
Get the highlights in your inbox every week. CRI-O: All the runtime Kubernetes needs CRI-O: All the runtime Kubernetes needs Learn about CRI-O, a lightweight alternative to using Docker as the runtime for Kubernetes. Subscribe now.. The Container Runtime Interface (CRI) Initially, Kubernetes was built on top of Docker as the container runtime. Soon after, CoreOS announced the rkt container runtime and wanted Kubernetes to support it, as well. So, Kubernetes ended up supporting Docker and rkt, although this model wasn't very scalable in terms of adding new features or support for new container runtimes.The Container Runtime Interface (CRI) was introduced to fix this problem. The CRI consists of the image service and the runtime service. The idea behind the CRI was to decouple the kubelet (the Kubernetes component responsible for running a set of pods on a local system) from the container runtime using a gRPC API. That enables anyone to implement the CRI as long as they implement all the methods. Also, there were problems when trying to update Docker versions to work with Kubernetes. Docker was growing in scope and adding features to support other projects, such as Swarm, which weren't necessary for Kubernetes and were causing instability with Docker updates. What is CRI-O? The CRI-O project started as a way to create a minimal maintainable runtime dedicated to Kubernetes. It emerged from Red Hat engineers' work on a variety of tools related to containers, like skopeo, which is used for pulling images from a container registry, and containers/storage, which is used to create root filesystems for containers supporting different filesystem drivers. Red Hat has also been involved as maintainers of container standardization through the Open Container Initiative (OCI). CRI-O is a community driven, open source project developed by maintainers and contributors from Red Hat, Intel, SUSE, Hyper, IBM, and others. Its name comes from CRI and OCI, as the goal of the project is to implement the Kubernetes CRI using standard OCI-based components.Basically, CRI-O is an implementation of the Kubernetes CRI that allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. It currently supports runc and Clear Containers, but in principle any OCI-conformant runtime can be plugged in. CRI-O supports OCI container images and can pull from any compliant container registry. It is a lightweight alternative to using Docker as the runtime for Kubernetes. The scope of the project is tied to the CRI. Currently the only supported user of CRI-O is Kubernetes. Given this, the project maintainers strive to ensure that CRI-O always works with Kubernetes by providing a stringent and comprehensive test suite. These end-to-end tests are run on each pull request to ensure it doesn't break Kubernetes, and the tests are constantly evolving to keep pace with changes in Kubernetes. Components CRI-O is made up of several components that are found in different GitHub repositories. OCI-compatible runtimes CRI-O supports any OCI-compatible runtime, including runc and Clear Containers, which are tested using a library of OCI runtime tools that generate OCI configurations for these runtimes. Storage The containers/storage library is used for managing layers and creating root filesystems for the containers in a pod. OverlayFS, device mapper, aufs, and Btrfs are implemented, with Overlay as the default driver. Support for network-based filesystem images (e.g., NFS, Gluster, Cefs) is on the way. 
Image The containers/image library is used for pulling images from registries. It supports Docker version 2 schema 1 and schema 2. It also passes all Docker and Kubernetes tests. Networking The Container Network Interface (CNI) sets up networking for the pods. Various CNI plugins, such as Flannel, Weave, and OpenShift-SDN, have been tested with CRI-O and are working as expected. Monitoring CRI-O's conmon utility is used to monitor the containers, handle logging from the container process, serve attached clients, and detect out of memory (OOM) situations. Security Container security separation policies are provided by a series of tools including SELinux, Linux capabilities, seccomp, and other security separation policies described in the OCI specification. Pod architecture CRI-O offers the following setup: The architectural components are broken down as follows: - Pods live in a cgroups slice; they hold shared IPC, net, and PID namespaces. - The root filesystem for a container is generated by the containers/storage library when CRI CreateContainer/RunPodSandbox APIs are called. - Each container has a monitoring process (conmon) that receives the master pseudo-terminal (pty), copies data between master/slave pty pairs, handles logging for the container, and records the exit code for the container process. - The CRI Image API is implemented using the containers/image library. - Networking for the pod is setup through CNI, so any CNI plugin can be used with CRI-O. Status CRI-O version 1.0.0 and 1.8.0 have been released; 1.0.0 works with Kubernetes 1.7.x. The releases after 1.0 are version matched with major Kubernetes versions, so it is easy to tell that CRI-O 1.8.x supports Kubernetes 1.8.x, 1.9.x will support Kubernetes 1.9.x, and so on. Try it yourself - Minikube supports CRI-O. - It is easy to set up a Kubernetes local cluster using instructions in the CRI-O README. - CRI-O can be set up using kubeadm; try it using this playbook. How can you contribute? CRI-O is developed at GitHub, where there are many ways to contribute to the project. - Look at the issues and make pull requests to contribute fixes and features. - Testing and opening issues for any bugs would be very helpful, for example by following the README and testing various Kubernetes features using CRI-O as the runtime. - Kubernetes' Tutorials is a good starting point to test out various Kubernetes features. - The project is introducing a command line interface to allow users to play/debug the back end of CRI-O and needs lots of help building it out. Anyone who wants to do some golang programming is welcome to take a stab. - Help with packaging and documentation is always needed. Communication happens at #cri-o on IRC (freenode) and on GitHub issues and pull requests. We hope to see you there. Learn more in Mrunal Patel's talk, CRI-O: All the Runtime Kubernetes Needs, and Nothing More, at KubeCon + CloudNativeCon, which will be held December 6-8 in Austin, Texas.
https://opensource.com/article/17/12/cri-o-all-runtime-kubernetes-needs
CC-MAIN-2019-39
refinedweb
1,088
55.84
| Join Rate It (22) Last post 08-01-2007 8:01 AM by Johnny be good. 173 replies. Sort Posts: Oldest to newest Newest to oldest Below is a C# and VB.NET class that demonstrates using System.Net.Mail to send an email. Download C# System.Net.Mail HelperDownload VB.NET System.Net.Mail Helper Calling the function from code MailHelper.SendMailMessage("fromAddress@yourdomain.com", "toAddress@yourdomain.com", "bccAddress@yourdomain.com", "ccAddress@yourdomain.com", "Sample Subject", "Sample body of text for mail message") MailHelper.cs using public class MailHelper{ /// <summary> /// Sends an mail message /// </summary> /// <param name="from">Sender address</param> /// <param name="to">Recepient address</param> /// <param name="bcc">Bcc recepient</param> /// <param name="cc">Cc recepient</param> /// <param name="subject">Subject of mail message</param> /// <param name="body">Body of mail message</param> public static void SendMailMessage(string from, string to, string bcc, string cc, string subject, string body) { // Instantiate a new instance of MailMessage MailMessage mMailMessage = new MailMessage(); // Instantiate a new instance of SmtpClient SmtpClient mSmtpClient = new SmtpClient(); // Send the mail message mSmtpClient.Send(mMailMessage); }} MailHelper.vb Imports Public ' Set the sender address of the mail message mMailMessage.From = New MailAddress(from) ' Set the recepient address of the mail message mMailMessage.To.Add(New MailAddress(recepient)) ' Check if the bcc value is nothing or an empty string If Not bcc Is Nothing And bcc <> String.Empty Then ' Set the Bcc address of the mail message mMailMessage.Bcc.Add(New MailAddress(bcc)) End If ' Check if the cc value is nothing or an empty value If Not cc Is Nothing And cc <> String.Empty Then ' Set the CC address of the mail message mMailMessage.CC.Add(New MailAddress(cc)) End If '.Normal ' Instantiate a new instance of SmtpClient Dim mSmtpClient As New SmtpClient() ' Send the mail message mSmtpClient.Send(mMailMessage) End SubEnd Class Web.config <? IdiosMachi wrote:the String class has a new method in 2.0 called IsNullOrEmpty that is pretty handy, and would save a little code for your CC/BCC conditions. Nice. Thanks for pointing that out. That's one of those little tidbits I missed in .NET 2.0. deokule2003 wrote:I want to send thousands emails. Shall I use this function in loop or any optimization is required to given code? That depends on what thousands is equal to. You might be better off purchasing a commercial componenet to handle that and/or creating a multi-threading Windows service. HTH,Ryan Strongtypes, Please I used the mail Helper.Vb that you put out and to be quite honest, I dont know how it works. I copied the code in to a new webpage and I cant see anything, how do I know its working. Please am sorry that I might be sounding a little dumb but hey!!! just lost. I will appreciate all the help. Most importantly I want the mail to respond onclick a submit button, how do I achieve that. PLEASE Thanks Hi, you put the class MailHelper in the App_Code subfolder, create a new file or download the codefile, and in the button's click eventhandler you can call it like StrongTypes explained in his original post. Grz, Kris. The file download comes with the files as it should be placed in your project. Also, when you add a class file in Visual Studio, it will prompt you to place it in the App_Code folder. Otherwise, create a folder in the root of the project named App_Code and place it there. How you call the helper is in my original thread.
http://forums.asp.net/t/971802.aspx
crawl-001
refinedweb
592
65.73
Details on how to use the multiple subdomains presently activated for Wikibooks. (From discussions moved from the Staff Lounge). Subdomains introducedEdit In case you hadn't noticed, Wikibooks now has subdomains. I set up redirects from and to Wikibooks portal, and then changed the main page of this wiki (which is now officially the English wiki) to be Main page. So if someone types en.wikibooks.org, it will take them to the English site, but if someone types wikibooks.org, it will take them to the old portal. All other URLs under the wikibooks.org domain are redirected to the corresponding page in en.wikibooks.org, so hopefully no links to us from outside will be broken. Linking to other Wikimedia projectsEdit To link to other projects from wikibooks, you can use the following prefixes, which link to the same language of a different project: - w: wikipedia - wikt: wiktionary - q: wikiquote - b: wikibooks - m: meta-wiki Note that any links you created to Wikipedias using syntax such as [[fr:Accueil]] will be broken -- they are now interlanguage links which will be extracted from the text and displayed in the sidebar. To link to a wikipedia of a different language, use w:fr:Accueil. -- Tim Starling 14:13, 20 Jul 2004 (UTC) QuestionsEdit (see also the Talk page.) - So now we have to change all of the links that used to say 'en:' to 'w:en:', manually. Is there a way you could do this with a bot or something? Could we change all of them at once? - SamE 15:24, 20 Jul 2004 (UTC) - It might be best to contact someone who operates a bot. -- Tim Starling 01:44, 21 Jul 2004 (UTC) - How do you create a new subdomain? Not that I want to, just that everything is en: right now, even the large German Wikibooks. Could you publicize this? - Go to the subdomain, there are instructions. - I went to the subdomain ( ) and got an "does not exist" error. The I went to and there was the new domain. Now ( ) works too and is redirected to die Hauptseite. But I still found no instructions. Did I anything wrong? I reregistered my nick, which worked. Do I guess right, and we have to migrate the pages manually?--berni 08:54, 21 Jul 2004 (UTC) PortalEdit - And could you also put the portal back at? Thanks for all your work, Tim. - SamE 00:58, 21 Jul 2004 (UTC) - As for putting the portal back: well, I considered attempting to serve the page from the English but with a wikibooks.org domain name, just using a rewrite rule, but making rewrite rules cross into different virtual hosts is tricky. Putting in a redirect to en was definitely the easiest solution. The other solutions would be: - Write a PHP script to serve the page from the en.wikibooks wiki when requests come in for wikibooks.org or - Create a static HTML page based on the current portal and put it in the appropriate location. Easy to set up but hard to maintain. - Create a separate self-contained wiki for the sole purpose of serving the portal page - Is the URL of the portal really that important? -- Tim Starling 01:44, 21 Jul 2004 (UTC) The portal should be at or with a redirection from one to the other. Many people think that a web site has to start with www. Yann 12:16, 24 Jul 2004 (UTC) Moving pages across subdomains, and TranswikiEdit - I just wondered if there is the posibility to move a page completely to an other subdomain (including history and discussion). This would be a nice feature. 
I also do not know if there are legal problems, if we just copy the content.--berni 09:08, 21 Jul 2004 (UTC) - The m:transwiki system was designed to move content from one project or language to another while maintaining some record of the authors. I don't think the ability to move pages into a different domain works yet, so that's probibly the best method to use. Also note that the transwiki log is only for pages that spend some time in the transwiki namespace at the target wiki, therefore if a page is going from the main namespace at en directly to the main namespace at de, it doesn't need to be registered in the log. Gentgeen 15:59, 21 Jul 2004 (UTC) - Ok. As far, as I understood, I have to copy & paste the main article, and maybe the discussion-page. Then I've to go over the history to find all authors, who contributed to that page und write them on the discussion page (or if they are very few, I put them in the summary, when creating the new page. The I'll list the original page at the votes for deletion. Am I right with this process or did I overlook something important? --berni 08:13, 22 Jul 2004 (UTC) - For people who are familar with Java I've written a short Javaprogramm, that reads a page at en.wikibooks.org and gives a sorted list of the authors, that contributed to that page (it only works, if there arent more then 10000 changes to that page). See Usage: Call it with the name of the page e.g. use "java ExtractAuthorBot Wikibooks:Staff_lounge" to get a list of persons who contributed to this page. --berni 09:05, 22 Jul 2004 (UTC) - I wrote some article for computer games and they'll be moved to ja.wikibooks but I can't know how to move them completely to ja.wikibooks. How to do it? Is it necessary to do something before click "Create wiki" button? shows "Wiki does not exist". PiaCarrot 10:35, 22 Jul 2004 (UTC) - This is an unclear issue. As far as I know, it isn't possible to import history yet. User:Chriss suggested to keep the articles in the english wikipedia as long, as this is not cleared.--berni 14:07, 22 Jul 2004 (UTC)
http://en.m.wikibooks.org/wiki/Wikibooks:Subdomains
CC-MAIN-2014-52
refinedweb
1,007
73.17
Re: WPF--a threat to WinForms? - From: raylopez99 <raylopez99@xxxxxxxxx> - Date: Mon, 25 Aug 2008 02:55:17 -0700 (PDT) On Aug 24, 7:31 pm, JDeats <Jeremy.De...@xxxxxxxxx> wrote: WinForms is less threatened by WPF because of the nature of WinForms applications. In my opinon Enterprise desktop applications already have a very rich framework with WinForms and quite effective set of controls to get the job done. From an architecture perspective I don't see WPF having a lot to offer over WinForms to most enterprise desktop application developers, but Enterprise web applications certainly could benefit from the paradigm WPF offers, it's a very dramatic, empowering shift for a web developer. WPF on the desktop just gives developers wanting to unify the web and desktop experience an option. Over time it will probably replace WinForms, but I could see WinForms sticking around for a few generations. JDeats--you'll be interested by the below extract on the benefits of WPF over WinForms. But the problem I have is that I just fired up VS2008 and wrote a 'hello world' program, and found to my chagrin: WPF has no "lightning bolt" to pick from an array of Event Handlers (*noted by others as well, see below*), rather, you simply double click on a control and a 'default' event handler arises (such as double clicking on a button will give a "_Click" handler). This is because the controls in WPF do not have as rich a set of EventHandlers as in WinForms. For example, with a label, you cannot use it for debugging by writing "label1.Text = myObject.ToString()" for example-- the .Text extension method does not exist for label. A minor point, but I only spent five minutes in WPF and have concluded it's still beta-ware--apparently a better programming interface called "Blend" was released about three months ago by MSFT, in an attempt to overcome this, but I think the 'steep learning curve' associated with WPF is because of the rewrite of the library, which is not like WinForms or MFC. Also you need to have a dual monitor IMO to program WPF--they back more information than can fit in one screen. Of course dual monitors are always a good idea in programming. *See here: RL From the excellent book: C#3.0 in a Nutshell by Albahari et al. (O’Reilly), p. 166. Windows Presentation Foundation (WPF) WPF is a rich client technology new to Framework 3.0. Framework 3.0 comes preinstalled on Windows Vista-and is available as a separate download for Windows XP SP2. The benefits of WPF are as follows: It supports sophisticated graphics, such as arbitrary transformations, 3D rendering, and true transparency. Its primary measurement unit is not pixel-based, so applications display correctly at any DPI (dots per inch). It has extensive dynamic layout support, which means you can localize an application without danger of elements overlapping. Rendering uses DirectX and is fast, taking good advantage of graphics hardware acceleration. User interfaces can be described declaratively in XAML files that can be maintained independently of the "code-behind" files-this helps to separate appearance from functionality. Here are its limitations: The technology is less mature than Windows Forms or ASP.NET. Its size and complexity make for a steep learning curve. Your clients must run Windows Vista-or Windows XP with Framework 3.0 or later. The types for writing WPF applications are in the System. Windows namespace and all subnamespaces except for System. Windows . Forms. 
Windows Forms Windows Forms is a rich client API that-like ASP.NET-is as old as the .NET Framework. Compared to WPF, Windows Forms is a relatively simple technology that provides most of the features you need in writing a typical Windows application. It also has significant relevancy in maintaining legacy applications. It has a number of drawbacks, though, compared to WPF: Controls are positioned and sized in pixels, making it easy to write applications that break on clients whose DPI settings differ from the developer's. The API for drawing nonstandard controls is GDI+, which, although reasonably flexible, is slow in rendering large areas (and without double buffering, flickers horribly). • Controls lack true transparency. Dynamic layout is difficult to get right reliably. The last point is an excellent reason to favor WPF over Windows Forms- even if you're writing a business application that needs just a user interface and not a "user experience." The layout elements in WPF, such as Grid, make it easy to assemble labels and text boxes such that they always align-even after language changing localization—even without messy logic and without any flickering. Further points made on the next page: WPF is faster, since GDI+ hardware accelerators never became popular with manufacturers, who chose DirectX. Thus GDI+ is actually “considerably slower” than GDI, let alone WPF. On positive side, Windows Forms relatively simple to learn and has third-party support. RL . - References: - WPF--a threat to WinForms? - From: raylopez99 - Re: WPF--a threat to WinForms? - From: JDeats - Prev by Date: Delete a row in the database - Next by Date: Re: WPF--a threat to WinForms? - Previous by thread: Re: WPF--a threat to WinForms? - Next by thread: Re: WPF--a threat to WinForms? - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2008-08/msg02459.html
crawl-002
refinedweb
882
54.12
2010/4/20 Eric Blake <eblake redhat com>: > On 04/20/2010 12:56 PM, Matthias Bolte wrote: >> Otherwise compiling with -Werror will fail. >> --- >> >> >> static int >> -remoteAuthenticate (virConnectPtr conn, struct private_data *priv, int in_open, >> +remoteAuthenticate (virConnectPtr conn, struct private_data *priv, int in_open >> +#if !HAVE_SASL && !HAVE_POLKIT >> + ATTRIBUTE_UNUSED >> +#endif >> + , >> virConnectAuthPtr auth >> #if !HAVE_SASL && !HAVE_POLKIT >> ATTRIBUTE_UNUSED > > I'd rather use mine instead as it occupies fewer lines and fewer > #ifdefs. Paolo Bonzini confirmed off-list that gcc treats > ATTRIBUTE_UNUSED as _maybe_ unused, not _must be_ unused. > > > So I rebased mine on top of yours and pushed it. > Ah, okay. Yes that's more compact and better readable. Matthias
https://www.redhat.com/archives/libvir-list/2010-April/msg00858.html
CC-MAIN-2017-09
refinedweb
105
57.37
Join devRant Search - "hate my job" - -."31 - Boss: we have a project, we will need an application on Mac with objective-c. Me: But I am a Java developer, I never touched a Mac or objective-c ! Boss: it's ok, use Google, you will find some useful stuff there.. Me: But.. Boss: we have a week for a demo18 -.9 - - The thing that I hate the most about my job: Manager: We need to get this done. Me: okay. (after some scouring online) this open source library looks like a perfect fit for the requirement. Manager: oh sweet. *some eons later* Me: dude, I developed this general purpose utility and I think this might be helpful to other developers and something that we could open source. Manager: uh no. Company policy. Me: but we make use of open source libraries all the time. Manager: that's different.4 - seem uncommen because of all the 'hate my job rants' but I love my current dev job and that makes it for me ideal 😄2 -.15 -.47 - Front End programming is the worst of all worlds. I am a Full Stack developer that during every interview says "i can do front end stuff if needed". Something gets lost in translation and becomes "I do only front end stuff". I don't mind front end development, but i hate urgent nitpicking that happens every time. Everyone else on the team works by regular tasks and deliveries (sprints and release dates), but my work consists of being the brush of the creative mind of someone else, that could not figure out how to make a good design before sending it out to me. I am not a designer, a designer job is a creative one, i am just a brush that the team uses to complain why this button looks wrong on this not designed platform.10 - Boss comes in and gives me some js code for syncing data (he hacked it together the other day, really messy with like 5 callback lamdas stacked into each other) Boss: Make it faster and more reliable and add some progress indicator So i look at the code and he literally pulls all the data as one json (20+ MiB). Server needs multiple minutes to generate the response (lots of querys), sometimes even causing timeouts.... So i do what everyone would do and clean up the code, split the request into multiple ones, only fetching the necessary data and send the code back to my boss. He comes in and asks me what all this complexity is about. And why i need 5 functions to do what he did in one. (He didn't -.-). He says he only told me to "make it faster and show progress" not "to split everything up". So I ask him how he wants to do this over HTTP with just one request... His response: "I don't care make it work!". Sometimes i hate my job -.-11 - My boss is a grumpy 25 year oldish "Mr. I know it all". We all hate him for that attitude. Just joining recently, the code base which I got introduced to was totally new and I was overwhelmed. Boss told me to write an Sql query to wipe the table data. I being reckless wrote a query to wipe the table only and submitted it to my boss. Few hourse later we were informed by our peers that a certain url was not working. On further investigating we found out that my boss carelessly copy and pasted my query and executed it which wiped an important table clean. Now he doesn't talk to me straight and I can't look him in the eye because obviously I burst into laughter. Job well done☺️2 - - -!13 - - I did it! I FUCKING DID IT! I got the new job, where I am paid better and won't get abused! The culture is better, pay is better! My struggle now! Do I do finger guns to my current boss after telling him? 
I hate that asshole.13 - .8 - - - -*8 - Spent the entire day trying to solve a bug yesterday. Came to work today, solved in 5 minutes. This is why I love and hate my job.1 - - One thing I HATE about my job is dealing with customers, unfortunately they seem to be quite a vital part of the business3 - -!"1 - I hate fucking searching for new job! But I hate my company also! And I hate autocomplete trying to suggest ducking! No I am never going to write fucking ducking!fuck!4 -, I hate my job, they give me many tasks (many of tasks are not on my field) with no trainings and no time, plus a salary that is not that good. I get angry, so I talked with the boss about leaving yhe company. Well, our discussion ended up by convincing me to stay with the double of the salary. So, I accepted the offer. Don't know whether it's the right decision or not..9 - - The more I work here the clearer it gets: I just fucking can't make websites anymore. I totally can't work on graphics, I can't transform a PSD into responsive HTML. I fucking despise CSS, computers having different resolutions, having different browsers, doing mobile, doing iOS/safari which is always something extra. I'm tired of not getting the appropriate resources and then people asking me why it just doesn't look the same. BECAUSE IT'S NOT MY FUCKING JOB! I MAKE STUFF WORK, I DON'T MAKE IT LOOK PRETTY, I HATE PRETTY THINGS12 - Holy sweet god of FPS. I tought that 40 fps in Skyrim on HIGH is good on linux but then something weird happened when i forgot to change to my dedicated GPU. I got 60 fps on my integrated GPU. Wait whaaaaaat ?????? It seems that Integrated GPU is preforming better somehow. I know this is software issue since its impossible that vega 8 is better then RX560X. So then for fun i booted with 4.21 with all latest AMD patches and holy shit. Never have i seen such an huge FPS boost. 60 fps on high. And thats capped due to vsync. Turned to ULTRA. 60 FPS. I double checked its not integrated and nope its dedicated. IDK wtf have you done AMD but continue doing this so that people who hate AMD will see that AMD LOVES OPENSOURCE. Not the way Microsoft loves Open-source oh god no. I mean they really love Open-source. #include <stdio.h> #include <devRant.h> int main() { If (devRant.containsAmdDevs() == true) { printf("Good job AMD devs"); } }72 - - I hate being a fucking tech support dude. Everyone thinks it is my job to fix their device. Some girl asked me to replace her iPhone 6 plus screen a few days ago. I reluctantly said yes. I bought a screen. And I started the process. I opened the box for the new screen and it was just the screen with no digitizer. That was completely my fault. I was an idiot. I immediately buy the correct one on amazon and tell the girl, I'm sorry you won't have a phone for two days. As soon as the new package comes in, I will do the repair. 3 Days Later: Today. Her: Has it come in yet? Me: No, I'm going to call Amazon Amazon: We're sorry, the thing you asked for was out of stock, you'll have to buy it again. He was very nice, and he gave me free shipping, but this was not my fault! Her: I have to wait 2 more days? That's like a whole week without a phone! I had to do this for free and pay $40 for the new part. I am never telling anyone I am a developer again. I feel so fucking bad, and she's mad. And I can't do anything about it.6 - ...12 - !( - - Why I hate my job: 18 out of 21 developers are Chinese daily smokers barely speak English. 
Why I love my job: We build software/hardware to predict future earthquakes and save lives and hundred million or even billions dollars in damages. And of course make China super rich by selling - I left for lunch early to drive five miles away to an abandoned parking lot so that I could cry about an email I received... this week has been fantastic.11 - - When you can't rant about the stupid shit at work because your fucking idiot coworkers are on devRant and you can't afford to lose the job quite yet...2 - - I hate hearing people say that it shouldn't take you that long. As if they know how long it takes to do your job. Easy fix or change my ass!7 -8 - - My productivity has become 2x or more lately not because I like my job, but because I hate it so much that I want to get rid of it as soon as possible. Every fucking element about this job can make me vomit.3 - I moved to US a month ago and now I work as a steward in a hotel but I hate this job so fucking much. The fact that I know how to code and work in this shitty job feels like I lose chromosomes every second I work there. I just really wish I could work on what I love and code a lot. When I'm at work I listen to programming podcasts (I listen to this app podcasts too) and just think about the internship I'm doing. Code is always running in my head and it all feels like I waste my time even doe I need it for now until I can have enough money to move to another state. I hope this situation changes for the better.7 - Got a call from a recruiter today Recruiter: I'm trying to fill a full stack position in Charlotte. Me: not interested R: why M: I hate NC R: what can I do to make you reconsider M: I want 120k R: Ok, well please pass this opportunity along if you know someone who is looking I *actually* just moved from there. Guess someone didn't read my job history. Convo was seriously less than a minute.9 - -.20 - - Mom: "so... you do things with the computer, but what do those things do?" or "I have no clue what you just said, but good luck with that" Father: "couldn't you have gotten a real job? Like machining or warehouse management, where actual work can be seen rather than sitting on your &ss all day, typing on that keyboard" - he changed his view real quick when I showed him my paycheck. ($3/ hr more than him) Brother: "cool" Ex-gf: "I hate you" (more because of my starting pay)3 - - Useless. Was freelancing while still in high school and after graduating got a job immediately after moving out to other city. Was studying just because my parents wanted me to. Was studying there for a half the year, then dropped out and focussed just on my job. After a year moved to the Netherlands to study, because my parents didn’t like that I dropped out. Guess what? I dropped out after a half the year and got a good paying job there. Perhaps the only thing I got from studies was some friends I am still keeping in touch with. And also it gave me good pretext to end up where I am now, otherwise probably I would have stayed in my home country, that I must say, hate living in.1 - - I hate my stupid non confident ass. I was just negotiating for a pay for a project that I would work at after my day job, because I'm familiar with it and they really can't get a better person to finish it. And I get shy when talking to the boss and totally lowball it and now I'm working for peanuts. Fuck. : - Ms Word doesn't support attachments in mail merge and the fucking CEO wants me to add in the support. I hate my job now. Time to resign.. 
- just calmly got up from my desk and walked into a meeting room to scream. I hate this fucking job - - My biggest regret is the same as my best decision ever made. The company I work for specializes in performing integrations and migrations that are supposed to be near impossible. This means a documented api is a rare sight. We are generally happy if there even IS an (internal) api. Frequently we resort to front-end scraping, custom server side extensions and reverse-engineered clients. When you’re in the correct mindset it’s an extreme rush to fix issues that cannot be fixed and help clients who have lost most hope. However, if your personal life is rough at the moment or you are not in a perfect mental state for a while it can be a really tough job. Been here for 3+ years and counting. Love and hate have rarely been so close to each - - Client: can you put the instagram icons on our websites. Me: yes, could you send me the links? Silence for 2 weeks. Waiting for a bollocking now and I just know it's going to be my fault. Why are people such wankers. I fucking hate my job, the part that involves interacting with wankers with huge egos and no clue about anything.2 - -.2 -.4 - Referring to my previous rant... Now I got a job as a java developer. 😑😑😑 Karma bites you in the ass.3 - - - Working on Sunday because deadline is next week. P.S. We got project yesterday.... Wish me luck.....2 - - My rant is that I low key hate devRant. I'm 23, I'm an average software engineer, with some expertise in machine learning and with a decent job. But seeing all your cool stories, skills and rants makes me feel like I don't know shit and everyone else is just more driven, skillful and passionate, taking care of a 1000 pet projects at a time and dominating their work routine. Oh impostor syndrome, how I've missed you! P.S.: I still love your rants, keep them coming.3 - - fucking hate hr trainings. What a waste of time. Let me do my job. If you are sending me to training, at least make it relative to my field.1 - - - Does anyone else love being a developer but find it so hard to find a company they love to work for - - My workplace has been forcing me to work everyday for almost a month now. I've been working at least 8-10 hours from Saturday to Thursday, and 2-3 hours on Friday as well. I'm so exhausted. I can't sleep properly. All I do is work. I have no time left to do things that I want to enjoy. I tried coding today but I'm too exhausted to do it. I was literally at 0 productivity today. I hate seeing my computer now. I don't know how to overcome this especially during the current lockdown situation. The work I do is not valued or appreciated and it's mentally breaking me honestly. I don't know what I want anymore. For sure another job but I need at least a temporary fix till the lockdown is over. For those who know me or read through my profile, yup it's the same company. The reason I haven't left them even after all this is because this is a really tough time for me financially and I have no other sources of income and right now at my place there are no job opportunities. So the only option is to continue with the existing work place.7 - Why does your kind have to create multiple files, classes, and variables to do one simple task? It makes no fucking sense. Your code was literally written to confuse anyone that has to work on it. Every fucking time I see this shit, my head explodes. It makes no sense. What the fuck is this? What other form of retardation do you have? 
Then you have the essay writing galore of code and the never-ending typos that change every goddamn file as if it's not hard enough to trace where the random crap jumps into. Complimentary? Complomentory? Complmntary? Carrier, carrie, carrer? Fuck you, you fucking piece of shit. Not only do you import entire files and each script runs random methods and variables you can't possibly trace, you- you fuck. I can't even grep the fuck out of this crap. Admit it, you did this for job securiy because you belong in a goddamn asylum, you sick demented fuckhole. The time you spent abstracting shit into oblivion could have been used fixing typos or writing consistent variables or consistent fucking I don't fucking know anymore. I hate this shit'm on my train, leaving after my last day of work. I didn't hate that job, quite the opposite actually, but Im sure the new job will give me more opportunities to grow professionally. Now I'm just sad I'll miss all of the familiar faces, and all the usual things I was doing. So many emotions and I don't even know where to start. Oh and I'm drunk too - Why I hate my job some days, the whole app crashes if you expand the details, then hit edit but is fine if you hit edit then expand the details. You gotta be kidding me feel like I am getting years older every day in this job.. Could someone help me find a mobile developer position (Android I prefer)..2 -. -.10 - At my job we have these days where we have to BUG-BASH, meaning we do any stupid thing on the software to test it and find bugs. I hate it. I didn't sign to be a QA.2 - I hate my brain. When I want to program it's always when I'm outside doing chores, on a trip or when I'm at my non-dev job. When I finally get home and ready for working on projects my brain feels like watching Schwarzenegger films instead. Fuck you brain.1 - - Reasons i hate Christmas #147 I have job interview tomorrow (27th) and i my hair looks like bum of a vulture with diarrhea Near me is at least 15 barbershops. NOT. A. SINGLE. ONE. OPEN. UNTIL TOMORROW BECAUSE OF CHRISTMAS Why the fuck has this holiday of manipulation and lies and misery and shit to have so huge impact on normal flow of life for so many days! I HATE CHRISTMAS15 - - - Is it weird that I'm excited to get to test my code for my side project that I'm working on? It feels like I should hate this since I'm going to graduate next year and my career will be doing this as a job. Really, though, I'm glad to make sure my code is designed properly. It gives me confidence in my programming skills. BTW, if anyone is trying to use a build tool in Python there are NO guides to get started that I've seen! I had to go through trial and error to get pybuilder running - Manager X: (logs a support ticket) "Agent is unable to access system using the password provided." Me: "You're going to have to narrow it down a little, we have over 1000 active agents." I hate the support side of my job... - I hate those mother fucking, Cock sucking, dick farting retarded faggots, who get the opportunity of a new job/internship just because they have a certain "relative" in the said company/organisation. I mean its ok that you are getting an opportunity, but just don't act all-knowing-god-tier while you don't even know how to print a statement in c++ and got it. How many more relationships should I increase of mine so that I get into the same position like them. One of my friends got the internship just because his girlfriend's brother works for the firm. 
Now that's just super barbaric unless he gave a blowjob to the gf's brother. Their Fucking assholes need to be drilled by a giant pile drivers.5 - When your company forces you to program forms on the webpage with the sole intention of capturing your data then blasting you with emails, phone calls, and texts :( sometimes I really do hate my job2 - - - - Me: -Lack of experience -slow learner/fast learner -not really a team player -always keep a positive attitude -but when I started doing smthing, I'll finish it. -willing to learn I wonder if anyone would still hire me to their company.. Let me know.. I fucking hate my workplace and the owner. You hire me for doing smthing else, and you always told me to do smthing else that is not even related to my job. I'm not your fucking ass cleaner. = = you shit on that thing, you clean it yourself. Fucking fucking fuck! - !!rant I just hate job ads which have a pseudo-language (Java or C for ex) code snippet inviting you for an interview. Oh my God they are so fucking LAME. I actually pass on these job offers.1 -. - .2 - I recently made 2 years of experience but don’t know if I should still apply for an entry level position. I still feel uneasy with my skills, but don’t love where I’m working. What would you guys do if you were in my shoe?10 - - When you almost cry before you sleep coz you hate your job and feel when you wake up your gonna lose another day of your life but your not getting any good offers.. - - The most scary thing happens to me is that I wrote a code in staging without any bugs and breaks in production... fuck4 - I hate my coworker. I'm currently working in IT, but both my former full-time programming and my IT work has taught me how to dig for things and find them. He has learned this, and is CONSTANTLY bringing me things that have NOTHING to do with my job because he's too fricking LAZY to do it himself. "Hey, there's a credit memo on this Amazon statement. I'd like to know what it was for, thanks." SO LOG ONTO AMAZON AND LOOK FOR IT WITH YOUR OWN TWO EYEBALLS. I've got my own work to do without doing your AP detective work for you. THAT'S PART OF YOUR JOB. But unfortunately I REALLY hate conflict and so I just do it for him, seething the whole time and knowing I've just reinforced the behavior. EDIT: Before anyone says it, no it is not because he's stuck. If someone is at the end of their rope I'm glad to help them. But I've taken to asking him "so what have you tried?" And every single time he says "nothing." It's gotten to the point he'll literally say, "Hey can you do this for me? I haven't looked at it at all or tried anything." But he just doesn't catch on.5 - IBM Websphere stack (Rational, Portal, etc)...I had to use it in my first job in a bank. A very disgusting pack of shit software...From these days i hate IBM with passion. - I am learning to hate working for a large company! Tuition reimbursement I earned is getting denied because of the freaking beuracracy! The ONE perk I actually had left at my job and they took that away too! FML3 - fucking hate my job! This site sucks ass and I have no motivation to work on it! Would love to get a new job with a fresh sleek site, but unfortunately my autism kicks in bad during technical interviews. Oh fuck me!5 - Holy shit sometimes I hate my job. Current assignment: translate a C++ COM class to C#. Requirement: interface should not change. I ask the other team using this interface to ask me any questions so I can address their concerns. 
First FUCKING thing they ask for: a diagram explaining how to access the interface. For. Fuck. Sake. The goddamn thing is not changing. At all. I have said that to every stakeholder every time. It's a changed reference and a tweak to some calls to make them .Net calls. Why am I redocumenting something that was documented years ago?2 - - How to cope with getting cockblocked by coronavirus before job change? I signed a contract for a job in a foreign country. I was excited for the advantages like better work/life balance, finally getting to linux dev env, friendlier company. But now, I can not even apply for work permit because of restrictions. Due to already having signed contract already, I completely lost my touch with my current job. I hate it so much that I am having unpaid leaves even though I could do nothing since we are working half team at the same time. Dont tell me to “learn new skills”, I tried, it does not work for me. I am not in the mood for learning. New company is great that they reassured me I would not lost the opportunity, I would join them whenever I can. So I dont fear losing job but uncertainty kills me. European travel ban was up to 15 May, prolonged to 15 june, which prevents me to apply for work visa. I guess this was the last straw that broke camel’s back.14 - Recruiter logic: I know that developers receive a lot of messages from recruiters, so I'm sending you the third mail within a week to make sure you don't miss my special deluxe job offer! I hate these recruitment spam bots...2 - Im curious to know how people organise their workspace on windows. I learned coding with linux. I scored my first real job and they use mac. All good. But at home i game and sometimes i feel like coding/gaming at the same time. But fark i hate the terminal. How do you make it more ho my zsh/ linux terminal comfortable?7 - - For me is the same Friday than Monday. I do not know if it is good or bad. Maybe it is good because I do not hate my job or bad because I do not make so much fun on weekends. 😗😗 - I talked to the client how functionality should look like on UI, draw a mockup, designed and made changes to db schema, created REST api, made documentation how to use it, told frontend developer to make changes on frontend application according to the documentation and mockups. Still no one have fucking clue how to do it. Fucking testers can’t write anything, only clicking. So I sent curl code how the fucking request should look like exactly then resolved bugs they reported as won’t fucking fix because I will not be also making fucking frontend. Probably they even don’t know what curl is. What a fucking fuck. And that’s what I am mostly doing from Monday till Friday to keep this project going. It’s cause client are nice guys and we are doing something good, not some fucking ai, blockchain, big data, financial scam everyone is wanking around. And friends are asking, why I drink. - - I feel guilty for being sick. It's a stupid and unreasonable feeling. While it's easy to think "Well, I'm sick, what the hell am I supposed to do?" I hate the feeling of being a burden to anybody or holding back a deployment because my body decided to fail at the wrong time. I could have gotten sick during those days when I was waiting for the other team to resolve an issue on their side. I could have gotten sick then and I wouldn't miss anything but no, I got sick after they resolved the issue and the ball is in my court. Now I'm holding the progress back. 
Now with a high temperature, I'm still trying to test and make the changes on my side after work hours when I was on sick leave earlier. I can't do that either because now their environment has issues that wouldn't allow me to test. It's like if I died today, my soul wouldn't rest until this product is deployed. It's pathetic but it's the truth. But I guess it's time to surrender now. There's nothing else I can do. It's beyond my control. It's just a job, isn't it? Isn't it?10 -. - - How can someone like everyone at their workplace yet somehow hate what they do so much? I need to get out of this fucking city. I need to leave this fucking useless pointless css with a sprinkle of php job. I just feel do frustrated.3 - Never got one as is, but went so close to it than I could smell the smell of death out if it. Short story: it's due to my hate of Drupal 8, but I just don't know if I was badly introduced to it through a car wreck of a project or if I simply just hate it and it's insanly hard way to do simple stuff. In November I went to the point where development was no longer a pleasure, and I was doing lots and lots of small mistakes that almost got my ass fired (made a rant about it). Nothing was enjoyable, I stop going to the gym, ate badly, saw no one excepted my roommate... The day they switched me to write test scenarios with Behat, the sun started to shine again. Now that I'm back on Drupal knowing all this, I know that I'll have to leave the company once I have my diploma, because there's no point to stay in a place doing something you don't enjoy while you get tons of job proposals on LinkedIn To all the people who are deep down in it: stay strong, save your ass as soon as possible and find something else, but keep some time to heal. - - - Rookie here in need of help. Is it possible to become a backend dev within 4 months? I have been learning frontend on and off for a couple of years because I hate my job as a salesman. But I always imagined myself more as backend developer.20 - - For me the worst job would be to develop front-end stuff as the sole dev in a design company. Imagine having to go to great lengths just to have everything done perfectly down to half pixels. I've had to develop a couple of projects for an external design company and their lead designer was an absolute cunt about quarter pixels. I'm glad they fired him and working with them had become somewhat sane again... Some things in front-end are either impossible or near impossible to get perfectly and nobody will pay for those wasted days anyway. Oh and by the way: Please get rid of IE. I fucking hate it almost as much as my ex's mom. - I hate applying for jobs. Because I spend time adding information to the job searching sites only to be forwarded to the individual company to enter my details once again. Why can't they all just accept the details I have, at least carry over the basics.1 - / - When people tell me they're computer illiterate, they either know their exact Windows version to the Service Pack and can utilize the task manager effectively or they don't know that their computer desktop is different from their browser homepage. There is no in-between. - - I hate when they give new people that don't know the software the job to update requirements. We used to have 2 use cases that touched a functionality. Now we have three. The requirement was added for the third case. He held us up bitching that that the newly added requirement for Case 3 didn't include Case 1 and 2. Dude. 
That shit has been in the software for 4 years. Those requirements were written by requirements guys that are better than you. Don't waste my time with semantics. Only I'm allowed to waste my time on semantics. -. - 😥😫 I am like useless crap. Got stuck with single relatively easy task for 2 weeks, all it requires is understand how to create objects in enterprise system we work with, it worked out, but then occasionally I need to understand how to nest stuff. My mind says: "OK, I'M DONE" - You are done, my task is not, you moron! I cannot concentrate with only subpar motivations I have like "I need this job in future", "I have nice boss and colleagues", "this must be easy, just divide the task to pieces". Now I have chop off some means of communication to not have embarassing dialog about my work. I hate it, this must stop now, I always tell myself to find other opportunities to solve my case, however I easily got addicted to games again + SoundCloud + YT + new side project. OH MY. I want to finish this fault of the work already, just one night! Ah, I forgot... It's highly unlikely that I will be energized at night. I was when I walked to office - I could differ home from work, I had good morning routine, I did my job well enough. Now, when I speak of that, it sounds more like another excuse to not work, doesn't it?2 - So, this is a story of me leaving my current job. I am in a maintenance PHP project. I usually love PHP but I hate the way this project is done, therefore I hate this project Now, see the attitude change in people when they come to know I will no longer be there: > 7:49 AM : *gets a mail without context with some photographs* > 9:00 AM : *I leave for my doctor's visit which is once in 3 months* > 10:00 AM: I see, still no email with context, well, I'll go back to sleep > 12:00 PM: I see, *gets an email from the manager*, so you want this news to be updated with these new images At this point, I deliberately postponed the task, because I am salty because you are sending images with no context. > 3:00 PM: Okay, this is done. *send e-mail, WhatsApp, and hangout to the manager that task is done* > 3:08 PM: Post a rant on devRant!5 - Right now everything is a CMS. And I hate it. It lets people who don't know what they're doing, think they know what they're doing and make a mess for the real developers. All I can hope for us some huge educational effort so that if you want to use WordPress you can, but know that there are much better options. Every change has to be approved ahead of time anyway and its literally my job to keep this website running. I don't need you poking around breaking stuff.3 -. - My manager, while apparently trying to blast us over taking too much time to understand a product (that no one in the team knows about completely): I don't understand why you guys don't understand the severity of it. How will you support the product if you don't even know it? There's no comments or anything also, just code! You guys should be able to grasp it! I'm sorry, what now? (The part about no comments is true, by the way) -* - Sometimes I think to keep development as a hobby for my side projects and not as a full time job. Hate how development/programming has to compromised for businesses. Hoping some of you will get what I'm trying to - First day at the new building during my Monday hospital IT job. 
My boss went on about opening portals in time, aliens, all kinds of shit, then I get a call, "hey, we need help carrying a body to the morgue, get up here", turns out i'm lookout and forward-runner (lookout to keep patients away from the room during cart loadup, forward runner for doors) I hate Mondays...5 -! - - How do you know it's the right time to switch jobs? I don't hate my current job, but I don't love it anymore either. Considering starting to passively look for a new opportunity, but I don't want to make the wrong move.😕5 - I fucking hate my current job. I switched to this one a few months ago, since my last one was getting stale after being there for nearly a decade. I'm currently considering to return to my last position. Did somebody face a similar situation? Got a new position, only to realize that it sucks?3 - I see that a lot of people hate Mondays. For me Monday is the best day of the week, which is also the most productive one. And I honestly love my job. So wish you a good Monday guys :)1 -
https://devrant.com/search?term=hate+my+job
CC-MAIN-2020-24
refinedweb
6,318
81.73
Created on 2020-01-28 13:52 by Ananthakrishnan, last changed 2020-02-19 18:22 by mark.dickinson. This issue is now closed. can we add an lcm and gcd function that can work as: lcm(4,6) # returns 12 gcd(4,6) # returns 2 There is math.gcd(): You can use numpy.lcm(): Is it common to need lcm()? Do you have examples of applications which need lcm()? It's trivial to implement lcm(): I suggest to reject this feature request, since I never needed lcm() in 10 years of Python, whereas I use gcd() time to time. Uses for lcm are common enough that it is provided by Excel and the C++ boost. You can use it for working out problems like: - if event A happens every 14 days, and event B happens every 6 days, then A and B will occur together even lcm(14, 6) days. By the way, the "trivial" implementation given in the Stackoverflow link has a bug: if both arguments are zero, it raises instead of returning zero. I wish that gcd took an arbitrary number of arguments, I often need to find the gcd of three or more numbers, and this is a pain: gcd(a, gcd(b, gcd(c, gcd(d, e))))) when I could just say gcd(a, b, c, d, e) and have it work. Likewise of lcm. (For that matter, the gcd of a single number a is just a.) reduce(gcd, [a, b, c, d, e]) I created this issue as i came across the following question: There are n students in class A,and m students in class B.each class divides into teams for a competition.What is the biggest possible team size that can be divided,such that each team has same number of members. We can solve this type of problems easily if we have lcm() in math library.And there are lots of real life applications where we have to use lcm. Should i proceed with adding a pull request for adding a 'lcm' function in python's math module. I must say that the problem (with two classes divided into teams) seems to me to be exactly one that can be solved with gcd, and lcm itself is mostly useless for it. some problems that needs lcm function: 1:find the least number which when divided by 'a','b','c','d' leaves remainder 'e' in each case. 2:person A exercises every 'n' days and person B every 'm' days. A and B both exercised today. How many days will it be until they exercise together again? 3:The LCM is important when adding fractions which have different denominators we have to use the lcm function when, 1) an event that is or will be repeating over and over. 2) To purchase or get multiple items in order to have enough. 3) To figure out when something will happen again at the same time. All these shows lcm function should be included in the math library. So can i proceed with adding pull request to add lcm function in python's math module. -1 Given that we had gcd(), I don't see any value to adding *lcm()* as well. Once you have gcd(), getting the least common multiple is trivial. Also, it is rare to ever need a lcm() function. I don't think I've ever seen it in real code. I agree with Raymond that it's really seldom needed. However, I'd like to point out that the "trivial" implementation might not be so trivial after all: as Steven said, it mishandles (0,0) case. And even Tim fell into that trap, so it can't be said it's easily avoided. I agree that this case doesn't really appear in "real world" tasks, but that's not really an excuse: imagine a factorial that didn't work for 0. Also, a bit more often used case: seeing the code for lcm of 2 arguments, people might (and do; I've seen it) generalize to 3 or more arguments in a way that seems logical and is often correct, but not always (a*b*c//gcd(a,b,c)). 
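Since the two pitfalls above keep coming up — lcm(0, 0) has to return 0, and the tempting a*b*c//gcd(a,b,c) generalization is simply wrong — here is a minimal sketch of a multi-argument lcm built the way Steven suggests, by folding a two-argument version with reduce. It is only an illustration of the idea, not the signature that was eventually committed:

    from functools import reduce
    from math import gcd

    def lcm(*args):
        # Pairwise least common multiple; any zero argument forces the result to 0.
        def lcm2(a, b):
            if a == 0 or b == 0:
                return 0
            return abs(a // gcd(a, b) * b)
        # Folding pairwise keeps the result correct for three or more arguments.
        return reduce(lcm2, args, 1)

    # lcm(2, 2, 3) == 6, whereas the naive 2*2*3 // gcd(2, gcd(2, 3)) gives 12.
    # gcd itself can be folded the same way: reduce(gcd, [a, b, c, d, e])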
And one more tidbit: the usual formula for lcm doesn't really work for the case of fraction inputs. I know that currently math.gcd doesn't handle fractions, but it could. It's imaginable that that feature will some day be added (just like pow recently started supporting modular inverses), and then suddenly lcd implementations will silently give the wrong result for fractions. A smart man;-) once said that the main criteria for inclusion in stdlib is that the function is often needed by users, _and_ it's often implemented wrong. I think lcm doesn't really satisfy the first criterion, but it does satisfy the second. I agree with Vedran Čačić.As the modules are not made for one person but it is for the ease of coding.There are so many functions which i don't use but used by other people.We are using functions to make coding easy and if lcm function is added many people will find it usefull. +0 from me. Another use is computing the Carmichael function for composite numbers (like an RSA encryption modulus, in which context the Carmichael function is routinely used). But only +0 instead of +1 because it's so easy to build from gcd(). I don't agree it's tricky at all. While lcm(0, 0) undoubtedly should return 0 in a general-purpose library function, in my own code I've never supplied that. Because in every application I've ever had for it, I would rather get an exception if I ever passed two zeroes - that would always have been a mistake. +0 from me as well; agreed with everything that Tim said (except that I've never had a need for the Carmichael function; my RSA implementations do the inefficient thing based on (p-1)(q-1)). This is somewhat reminiscent of comb and perm: lcm is often taught as a natural counterpart to gcd, so despite the fact that it's less fundamental and has less utility, people often expect to see the two together. @Ananthakrishnan: do you want to put together a PR? I'll commit to reviewing it if you do. Yes,I want to put together a PR. Great. For clarity, here's a Python function giving the behaviour I'd expect from a 2-arg lcm: from math import gcd def lcm(a, b): if a == 0: return 0 return abs(a // gcd(a, b) * b) If a < b, what is better, a // gcd(a, b) * b or b // gcd(a, b) * a ? Or there is no difference? I'd guess a // gcd(a, b) * b would be faster, on the basis that division is slower than multiplication. But I doubt it's worth worrying about for this implementation, given that the gcd call is likely to be the bottleneck as a and b get large. And I in turn agree with everything Mark said ;-) But I'll add that while the analogy to comb/perm is a good one, I think the case for lcm is stronger than for perm. Not only is lcm more common in real life (although, no, not at all common overall!), perm was added at the same time as prod, and perm is essentially a special case of prod. Perhaps add, is_prime() as well. Conceptually, it is in the same family of functions. Unlike lcm(), it is one that I would use and would improve Python's value as a teaching tool. Raymond, there was a thread a while back about adding an `imath` module, to package the number-theoretic functions that frequently come up. _All_ these things should really go into that instead, IMO. `math` started as a thin wrapper around C's libm, and has been losing its once-exclusive focus on functions for working with Python floats. I think that focus was valuable. In that older thread, I suggested a `probable_prime(n)` predicate function, and posted my pure-Python Miller-Rabin implementation. 
Simple (as such things go), but I wouldn't aim to compete with (say) gmp. I don't think `is_prime(n)` is suitable for Python. Proving that a large integer absolutely is prime is either quite slow or quite complicated. In practice, even professionals in critical applications are happy to settle for probabilistic assurances, because an arbitrarily tiny probability of error can be obtained very much faster than a deterministic proof. Anyway ;-) , ya, I like the idea, but I'd sure like it to go into a module where it obviously belongs. Also a function, e.g., to generate primes, and ... I dislike the idea of adding a is_prime() function in Python since users will likely stress Python with huge numbers and then complain that Python is too slow. Correct is_prime() is very slow for large numbers, and statistically approach is not 100% correct... Or we may have to add both approaches. But then you have to pick an algorithm which would fit "most use cases". I concur with Tim, it would be better to put such opinionated code on PyPI ;-) -- I'm -0 on adding math.lcm(). For example, I failed to find any request to add such function on python-ideas archives. This issue seems to be the first user request. I'm not opposed to add it. Some people seem to like the idea of the completeness of the stdlib (gcd & lcm go together). So if someone wants to add it, go ahead and propose a PR :-). :-] is_probable_prime is another matter, but there is an enormous amount of bikeshedding about the API of that one. And yeah, I managed to leave out 2. Speaking about "often implemented wrong"... :-)) I'd have to hear back from Raymond more on what he had in mind - I may well have been reading far too much in the specific name he suggested. Don't much care about API, etc - pick something reasonable and go with it. I'm not overly ;-) concerned with being "newbie friendly". If someone is in a context where they need to use probabilistic solutions, there is no substitute for them learning something non-trivial about them. The usual API for a Miller-Rabin tester supports passing in the number of bases to try, and it's as clear as anything of this kind _can_ be then that the probability of getting back True when the argument is actually composite is no higher than 1 over 4 to the power of the number of bases tried. Which is also the way they'll find it explained in every reference. It's doing nobody a real favor to make up our own explanations for a novel UI ;-) BTW, purely by coincidence, I faced a small puzzle today, as part of a larger problem: Given that 25 is congruent to 55 mod 10, and also mod 15, what's the largest modulus we can be certain of that the congruence still holds? IOW, given x = y (mod A), and x = y (mod B) what's the largest C such that we can be certain x = y (mod C) too? And the answer is C = lcm(A, B) (which is 30 in the example). On Fri, Jan 31, 2020 at 12:15:37PM +0000, Vedran Čačić wrote: > > Vedran Čačić <vedgar@gmail.com> added the comment: > >. :-] Lots of things are fast and still absolutely correct. And I think that you probably are underestimating just how slow trial division is for testing primes. I can imagine you've probably tested it on "large numbers" like a trillion-trillion (1e18) and thought that's acceptably fast. Try it on numbers like this: P =') Q = P*(313*(P-1)+1)*(353*(P-1)+1) Q is a 397 digit Carmichael Number. Its smallest factor is P, which has 131 digits. 
If your computer is fast enough to perform a thousand trillion (1e15) divisions per second, your trial division will take more than 3e107 years to complete. That's 10 million trillion trillion trillion trillion trillion trillion trillion trillion trillion times longer than the universe has existed. In a practical sense, your algorithm is never going to terminate. The sun will burn out and die first. Upgrading your CPU is not going to help. My old and slow PC can prove that Q is a composite number, using pure Python, in less than six seconds. And I'm sure that a better programmer than me would be able to shave off some of that time. > is_probable_prime is another matter, but there is an enormous amount > of bikeshedding about the API of that one. What is there to bikeshed about the API? The API should be a function that takes a single integer, and returns False if that number is certainly composite[1] otherwise True. I will argue that any other details -- including whether it is probabilistic or not -- are *quality of implementation* issues and not part of the API. For the same reason, the name should be just "is_prime" (or "isprime") and not disturb naive users by calling it "probably prime". Experts will read the documentation and/or the source code, and everyone else can happily ignore the microscopic[2] chance of a false positive with really huge numbers. (Sometimes ignorance really is bliss.) We can argue on technical grounds just what the implementation should be, but that's not part of the API, and the implementation could change: - Miller-Rabin, or Baillie–PSW, or AKS, or something else; - whether probabilistic or deterministic; - what error rate (if any) we are willing to accept; etc. If we choose the fast, simple Miller-Rabin test, with just 30 random iterations the error rate is less than approximately one in a trillion trillion. If we tested a million numbers ever second, it would take over 18,000 years (on average) to find one false positive. If that isn't good enough, increase the number of iterations to 50, in which case you would expect one false positive every 20,000 trillion years. In comparison, it is estimated that cosmic rays cause memory bit flips as often one error per 4GB of RAM per day. This probabilistic algorithm is more reliable than your determinstic computer. I don't have much time for worries about Miller-Rabin being "probabilistic". When I was young and naive I worried about it a lot, and maybe that was a legitimate worry for a naive Miller-Rabin implementation that did, say, five iterations (probability of a false positive about 0.1%). But with 30 or 50 rounds, I am confident that nobody will ever experience such a false positive, not if they spend their entire lifetime doing nothing but calling `is_prime`. Let's be real: if the banks are willing to trust the transfer of billions of dollars to probabilistic primality testing, why are we worring about this? (By the way: for smallish numbers, under 2**64, no more than twelve rounds of Miller-Rabin are sufficient to deterministically decide whether a number is prime or not. Likewise for Baillie–PSW, which is also completely deterministic for numbers up to 2**64.) [1] For ease of discussion, we'll count zero and one as composite. [2] More like nanoscopic or picoscopic. Tim: Considering that congruence is _defined_ as x=y(mod m) :<=> m|y-x, it's really not so surprising. :-) Steven: It seems that we completely agree about inclusion of is_probabilistic_prime in stdlib. 
And we agree that it should be called isprime (or is_prime if Raymond names it;). About the bikeshedding, see Tim's comment. :-P About absolutely correct algorithms: first, what exactly is your claim? If it's "I can write an algorithm in pure Python that can for every number of 397 digits mathematically exactly determine whether it is prime in under 6 seconds", I'd very much like to see that algorithm. (I guess it must be publicly available, since we're speaking about inclusion in Python stdlib.) I really don't have much expertise in number theory that I can be convinced it doesn't exist, but I very much doubt it. [Steven] > ... Try it on numbers like this: > ... > Q = P*(313*(P-1)+1)*(353*(P-1)+1) > > Q is a 397 digit Carmichael Number. Its smallest factor is P, > which has 131 digits. > ... > My old and slow PC can prove that Q is a composite number, > using pure Python, in less than six seconds. > > And I'm sure that a better programmer than me would be > able to shave off some of that time. The pure-Python Miller-Rabin code i posted in the aforementioned thread is typically at least 100 times faster than that. But it's not deterministic. Because it tries (pseudo)random bases, it all depends on how many it needs to try on each run before finding a witness to that Q is composite. It usually (at least 75% of runs) finds one on the first try. BTW, it doesn't much matter that this is "pure Python" - for ints of this size the bulk of the time regardless is spent in CPython's C-coded bigint arithmetic. I expect that your code must be doing more than _just_ Miller-Rabin, and in the Q case is paying through the nose for "optimizations" that all fail before getting to Miller-Rabin.. "Real" Miller-Rabin picks bases at random, relying only on properties that have been proved independent of the argument size. [Vedran] > Tim: Considering that congruence is _defined_ as > x=y(mod m) :<=> m|y-x, it's > really not so surprising. :-) Didn't say it was ;-) Was just noting the odd coincidence that I just happened to stumble into a real use for lcm(), not previously mentioned in this report, while doing something else. > [Steven] > > ... Try it on numbers like this: > > ... > > Q = P*(313*(P-1)+1)*(353*(P-1)+1) > > > > Q is a 397 digit Carmichael Number. Its smallest factor is P, > > which has 131 digits. [Tim] > The pure-Python Miller-Rabin code i posted in the aforementioned > thread is typically at least 100 times faster than that. This is exactly the sort of argument about quality of implementation which isn't, or shouldn't be, part of the argument about the API, IMO. Just as the choice of Timsort over Quicksort or Bubblesort *wink* isn't part of the list.sort() API, let alone the implementation choices in Timsort such as MIN_GALLOP. I'm happy to have a discussion about implementation, here or off-list, I'm sure I will learn a lot. But briefly, the Q I quoted above was carefully designed (not by me, I hasten to add!) to be a preudoprime to the first 307 prime bases, so it's something of a worst-case scenario for my version. > BTW, it doesn't much matter that this is "pure Python" - for ints of > this size the bulk of the time regardless is spent in CPython's > C-coded bigint arithmetic. A fair point, thank you. >, they just want an answer True or False and are prepared to trust that the function author knows what they are doing. 
If someone cares about the small details like how many bases to try, they might also care about details like: - which specific bases are used; - whether to use Miller-Rabin at all; - how many trial divisions to do first; - which specific primality test to use; etc. What if the implementation shifts away from Miller-Rabin to (say) Baillie-PSW? Then your maxsteps parameter becomes meaningless. Do we deprecate it or leave it there in case the implementation changes again to something that has a concept of number of steps? I think that if someone cares sufficiently about the details, then they probably won't be satisfied with a single isprime function, but may want is_fermat_prime, is_miller_rabin_prime, is_lucas_prime etc. > > . Correct. The first twelve primes make up such a minimal set. [Tim] >> The pure-Python Miller-Rabin code i posted in the >> aforementioned thread is typically at least 100 >> times faster than that. [Steven] > This is exactly the sort of argument about quality of > implementation which isn't, or shouldn't be, part of > the argument about the API, IMO. I wasn't at all talking about API at that point. I was backing the argument _you_ were making, that trial division is a hopelessly inefficient implementation, compared to what's possible with probabilistic methods, regardless of API. You were in fact underselling that argument, because it's straightforward to get an order or two of magnitude faster than you demonstrated. > the Q I quoted above was carefully designed (not by me, I hasten > to add!) I know the paper it came from. > to be a preudoprime to the first 307 prime bases, so it's > something of a worst-case scenario for my version. Which is why I have no problem picturing how this "should be" approached: the original Miller-Rabin (which uses random bases, not succumbing to the "premature optimization" catastrophe magnet) has no problem with Q (or with anything else!), and hasn't really been improved on for general-purpose use since it was published. It's a darned-near perfect mix of simplicity, brevity, clarity, robustness, generality, and speed. "Good enough" by a long shot on all counts. >>, Then they can accept the default. In what conceivable sense is that a burden? > they just want an nswer True or False and are prepared > to trust that the function author knows what they are > doing. But the function author can't possibly know what the _user_ needs this for. In some apps degree of certainty is far more important than speed, while in other apps it's the opposite. Be realistic here? Your argument here makes sense for always-right functions, but you're not willing to accept one of those for this specific purpose (& neither am I). > If someone cares about the small details like how > many bases to try, It's not a "small detail" where it matters: it is THE way to trade off computation time against confidence in the results. It's a _fundamental_ aspect of how Miller-Rabin works. > they might also care about details like: > > - which specific bases are used; > - whether to use Miller-Rabin at all; > - how many trial divisions to do first; > - which specific primality test to use; Then they should go use a special-purpose library ;-) Letting them fiddle the single most important parameter isn't down in the weeds, it's a top-level control knob. 
My answers to all the above: - bases are picked via randrange(2, n-1) - yes, because no better general-purpose algorithm is known - none - although I'll allow that there may be a speed advantage in some apps if a gcd is done with a relatively small primorial first (more efficient than one-at-a-time trial divisions by small primes) - Miller-Rabin > What if the implementation shifts away from Miller-Rabin > to (say) Baillie-PSW? It can't ;-) I would absolutely document that Miller-Rabin _is_ the algorithm being used, with the random-bases implementation choice. Then the curious can search the web for a mountain of good information about it. > Then your maxsteps parameter becomes meaningless. Do we > deprecate it or leave it there in case the implementation > changes again to something that has a concept of number > of steps? All moot, given that I have no interest in hiding the specific algorithm in use. YAGNI. > I think that if someone cares sufficiently about the details, > then they probably won't be satisfied with a single isprime > function, but may want is_fermat_prime, is_miller_rabin_prime, > is_lucas_prime etc. Again, go to a special-purpose library if that's what they want. And, again, I don't agree with your characterization of the Miller-Rabin maxsteps parameter as a "detail". It's a fundamental aspect of what the algorithm does. Which casual users can certainly ignore, but at their own risk. > ... > Correct. The first twelve primes make up such a minimal set. And if you don't care about picking the fixed bases from a prefix of the primes, you only need 7 bases for a 100% reliable test through 2**64: 2, 325, 9375, 28178, 450775, 9780504, and 1795265022. Or so this table claims: But I don't care here. Using a fixed set of bases is begging for embarrassment (for every fixed finite set of bases, there are an infinite number of composites they'll _claim_ are prime). There are no systemic failure modes when using random bases. If someone wants to continue the discussion on is_prime(), I suggest to open a separated issue. New changeset f2ee21d858bc03dd801b97afe60ee2ea289e2fe9 by ananthan-123 in branch 'master': bpo-39479:Add math.lcm() function: Least Common Multiple (#18547)
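As a footnote to the is_prime() side discussion: Tim's actual code is not reproduced in this thread, but a bare-bones sketch of the random-base Miller-Rabin tester being described — the only knob is the number of rounds, and bases are drawn with randrange(2, n - 1) — could look like the following. It is illustrative only; the error bound in the comment is the 1/4-per-round figure quoted above.

    from random import randrange

    def probable_prime(n, maxsteps=30):
        # False: n is certainly composite.
        # True: n is prime with probability at least 1 - 1/4**maxsteps.
        if n < 2:
            return False
        small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
        if n in small_primes:
            return True
        if any(n % p == 0 for p in small_primes):
            return False
        # Write n - 1 as d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(maxsteps):
            a = randrange(2, n - 1)
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # this base witnessed that n is composite
        return True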
https://bugs.python.org/issue39479
CC-MAIN-2020-45
refinedweb
4,166
70.43
I am converting a database from Postgres to MySQL. Since I couldn't find a tool that does the trick by itself, I'm going to convert all Postgres sequences to auto-increment ids in MySQL, set to the correct autoincrement value. So, how do I list all sequences in a Postgres DB (version 8.1) with details about the table each one is used by, the next value, etc., using a SQL query? Note that I cannot use the information_schema.sequences view from the 8.4 release.

The following query gives the names of all sequences:

    SELECT c.relname FROM pg_class c WHERE c.relkind = 'S';

Typically a sequence is named ${table}_id_seq, so simple regex pattern matching gives you the table name. To get the last value of a sequence, use:

    SELECT last_value FROM test_id_seq;

Run psql -E, and then \ds.

After some pain, I got it. The easiest way to accomplish this is to list all tables:

    select * from pg_tables where schemaname = '<schema_name>'

then, for each table, list all columns with their attributes:

    select * from information_schema.columns where table_name = '<table_name>'

then, for each column, test whether it has a sequence:

    select pg_get_serial_sequence('<table_name>', '<column_name>')

and then get the details about that sequence:

    select * from <sequence_name>

Partly tested, but it looks mostly complete:

    select * from (
        select n.nspname, c.relname,
               (select substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
                  from pg_catalog.pg_attrdef d
                 where d.adrelid = a.attrelid and d.adnum = a.attnum and a.atthasdef) as def
          from pg_class c, pg_attribute a, pg_namespace n
         where c.relkind = 'r' and c.oid = a.attrelid and n.oid = c.relnamespace
           and a.atthasdef and a.atttypid = 20
    ) x
    where x.def ~ '^nextval'
    order by nspname, relname;

Credit where credit is due: it's partially reverse engineered from the SQL logged by \d on a known table that had a sequence. I'm sure it could be cleaner too, but hey, performance wasn't an issue.
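For completeness, here is a catalog query in the same spirit that ties each serial-style sequence to the table and column that own it, through pg_depend. It is untested here and written from the modern system catalogs, so treat it as a sketch rather than something verified against 8.1:

    SELECT t.relname AS table_name,
           a.attname AS column_name,
           s.relname AS sequence_name
      FROM pg_class s
      JOIN pg_depend d    ON d.objid = s.oid AND d.deptype = 'a'
      JOIN pg_class t     ON t.oid = d.refobjid
      JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = d.refobjsubid
     WHERE s.relkind = 'S'
     ORDER BY table_name, column_name;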
http://codeblow.com/questions/list-all-sequences-inside-a-postgres-db-8-1-with-sql/
CC-MAIN-2017-43
refinedweb
331
60.92
Rainbow is a code syntax highlighting library written in Javascript. It was designed to be lightweight (~2.5kb), easy to use, and extendable. It is completely themable via CSS. You can see rainbow in action at. You can also build/download custom packages from there. Include some markup for code you want to be highlighted: def openFile(path):file = open(path, "r")content = file.read()file.close()return content Include a CSS theme file in the <head>: Include rainbow.js and whatever languages you want before the closing </body>: By default dist/rainbow.min.js comes with some popular languages bundled together with it. Rainbow 2.0 introduced support for node.js. All of the existing API methods should work, but there is also a new Rainbow.colorSync method for synchronous highlighting. npm install --save rainbow-code var rainbow = ;var highlighted = rainbow;console; Rainbow 2.0 should work in the following browsers: For older browsers you can download the legacy 1.2.0 release. Currently supported languages are: In your markup the data-language attribute is used to specify what language to use for highlighting. For example: var testing = true; Rainbow also supports the HTML5 style for specifying languages: var testing = true; And the Google prettify style: var testing = true;. A SASS mixin was added to simplify defining styles that should only apply for a specific language. Using it looks like this: You can pass a single language or a list of languages. Rainbow has four public methods:: Rainbowcolor; Each time this is called, Rainbow will look for matching pre blocks on the page that have not yet been highlighted and highlight them. You can optionally pass a callback function that will fire when all the blocks have been highlighted. Rainbowcolor {console;}; The second option is passing a specific element to the color method. In this example we are creating a code block, highlighting it, then inserting it into the DOM: var div = document;divinnerHTML = '<pre><code data-var foo = true;</code></pre>';Rainbowcolordiv {document}; The final option is passing in your code as a string to Rainbow.color. Rainbowcolor'var foo = true;' 'javascript' {console;}; If you want to prevent code on the page from being highlighted when the page loads you can set the defer property to true. Rainbowdefer = true; Note that you have to set this before DOMContentLoaded fires or else it will not do anything. As of right now there is one extra option for color. This option allows you to have an extra class added to every span that Rainbow renders. This can be useful if you want to remove the classes in order to trigger a special effect of some sort. To apply a global class you can add it in your markup: var hello = true; Or you can pass it into a Rainbow.color call like this: Rainbowcolor'var hello = true;'language: 'javascript'globalClass: 'animate'; Rainbow.extend is used to define language grammars which are used to highlight the code you pass in. It can be used to define new languages or to extend existing languages. A very simple language grammer looks something like this: Rainbow; Any pattern used with extend will take precedence over an existing pattern that matches the same block. It will also take precedence over any pattern that is included as part of the generic patterns. 
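The inline snippets in this section lost their punctuation during extraction, so here is a hedged reconstruction of the calls being described. The callback shapes and the (code, language) argument order for colorSync are assumptions pieced together from the surrounding prose rather than copied from the original README:

    // In the browser Rainbow is a global. Highlight every un-highlighted
    // <pre><code data-language="..."> block on the page, with an optional callback:
    Rainbow.color(function() {
        console.log('all blocks highlighted');
    });

    // Highlight a specific element before inserting it into the DOM:
    var div = document.createElement('div');
    div.innerHTML = '<pre><code data-language="javascript">var foo = true;</code></pre>';
    Rainbow.color(div, function() {
        document.body.appendChild(div);
    });

    // Or pass a string plus a language name and receive the highlighted markup:
    Rainbow.color('var foo = true;', 'javascript', function(highlighted) {
        console.log(highlighted);
    });

    // In node (Rainbow 2.0+), after `npm install --save rainbow-code`:
    var rainbow = require('rainbow-code');
    var highlighted = rainbow.colorSync('var foo = true;', 'javascript');
    console.log(highlighted);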
For example if you were to call Rainbow; This would mean that function will be highlighted as <span class="keyword magic">function</span>, but return, continue, and break will still be highlighted as just <span class="keyword">return</span>, etc. By default languages are considered to be standalone, but if you specify an optional third parameter you can have your language inherit from another one. For example the python language grammars inherit from the generic ones: Rainbow; If you wanted to remove the default boolean values you should be able to do something like this: Rainbow;. The simplest way to define a grammar is to define a regular expression and a name to go along with that. Rainbow; This allows you to take a more complicated regular expression and map certain parts of it to specific scopes. Rainbow; For more complicated matches you may want to process code within another match group. Rainbow; Sometimes a language supports other languages being embedded inside of it such as JavaScript inside HTML. Rainbow supports that out of the box. Here is an example of how you would highlight PHP tags inside of HTML code. Rainbow; You should be able to nest sub-patterns as many levels deep as you would like.. You cannot match part of a subgroup directly to a scope. For example if you have: matches:1: 'name.attribute'2: 'value'pattern: //g This will result in code name="value" being highlighted as: name="value" You see the value class never gets applied. To achieve what you really want you would have to use a subpattern like this: name: 'name.attribute'matches:matches:1: 'value'pattern: /\"\"/gpattern: //g This means the entire block is wrapped with name.attribute scope and then within that any part in double quotes will be wrapped as value. That means the entire block will behighlighted as name="value" In this example you could avoid subpatterns completely by using a regex like this to begin with: /\"\"/g The addAlias function allows you to map a different name to a language. For example: Rainbow; This allows you to highlight javascript code by using the language js instead of javascript. This method notifies you as soon as a block of code has been highlighted. Rainbow; The first parameter returns a reference to that code block in the DOM. The second parameter returns a string of the language being highlighted. This method allows you to remove syntax rules for a specific language. It is only really useful for development if you want to be able to reload language grammars on the fly without having to refresh the page. // Remove all the javascript patternsRainbow; Rainbow is compiled using rollup and buble. Gulp is used for all build related tasks. git clone git@github.com:ccampbell/rainbow.gitcd rainbownpm install The build command is used to build a custom version of rainbow.js. If you run gulp build The lint command will check all the javascript files for things that do not match the styleguide from the .eslintrc file. The pack command will run a buble + rollup build and save the resulting file to dist/rainbow.js The sass command will compile all the rainbow themes. The watch command will look at sass files and src js files and build the css or js (using gulp sass and gulp pack) every time you make a change. If you are looking for line number support you can try one of the following: You can check out demos and build custom packages at rainbowco.de.
https://www.npmjs.com/package/rainbow-code
CC-MAIN-2017-51
refinedweb
1,147
63.29
Hi. I am very new to using Panda3D and Blender and I was wondering does the YABEE exporter not export materials as a colour of the mesh? I am trying to load a maze into panda which is supposed to be a green colour but when I load up the maze from the folder where my model is, it only shows a white colour and not a green one. Like I said I am very new to using Blender and Panda so any help would be appreciated as I’m sure it is something very simple which I am missing. Hi, welcome to the community! Since you are using YABEE, I will assume in this post that you are talking about Blender 2.79. You need lighting in order for a material to show up. When you assign a light, the colour will appear. If you do not wish to use lighting, there is an option in the material panel called “Shadeless”. When you click it, the material gets written out as the object color, and will show up even in the absence of lighting (and in fact lighting will not affect the object whatsoever). Hi! I am actually using Kergalym’s - YABEE exporter which works with blender 2.8, but I’m actually using it with blender 2.9. It seems to be working okay to export the model itself, but could that be an issue? When you say you need lighting for the material to show up, do you mean in the blender scene on within the Panda3D scene? Thank you Ah, that’s an unofficial fork of the YABEE exporter. I do not know much about it. You need to apply a light in Panda3D. The .egg format does not support lights. I am trying to add a light into the scene by using a directional light (I am looking at the panda3D manual) but it is giving me an error saying DirectionalLight is not defined this is how I am adding the light into the scene dlight = DirectionalLight('dlight') dlnp = render.attachNewNode(dlight) render.setLight(dlnp) am I missing an import? If so which one? It’s simple. from panda3d.core import DirectionalLight But you need to know what’s in which package. The green material is now showing thank you all for the help!
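For anyone who finds this thread later, here is a minimal sketch of the whole fix in one place. It is only an illustration: the model path "models/maze.egg" and the class name are placeholders, not taken from the posts above. It loads the YABEE-exported model, then adds a directional light plus a dim ambient fill so materials exported without "Shadeless" actually show their colour:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import DirectionalLight, AmbientLight

class MazeDemo(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Placeholder path to the egg file exported from Blender via YABEE
        self.maze = self.loader.loadModel("models/maze.egg")
        self.maze.reparentTo(self.render)

        # Key light: gives the green material something to reflect
        dlight = DirectionalLight("dlight")
        dlight.setColor((1, 1, 1, 1))
        dlnp = self.render.attachNewNode(dlight)
        dlnp.setHpr(0, -60, 0)
        self.render.setLight(dlnp)

        # Soft ambient fill so faces pointing away from the key light
        # are not completely black
        alight = AmbientLight("alight")
        alight.setColor((0.2, 0.2, 0.2, 1))
        self.render.setLight(self.render.attachNewNode(alight))

MazeDemo().run()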
https://discourse.panda3d.org/t/meshes-showing-a-white-colour-when-loaded-into-panda3d/26772
CC-MAIN-2021-04
refinedweb
389
71.14
Completely wild starter: type safe subtypes of primitives using erasure Disclaimer: this is more of a brainstorming contribution rather than a well thought out proposal. It is inspired by threads on complex numbers and such like, and by the erasure mechanism of generics. Starter: Add support in the language for type safe sub types of primitives, these erase at compile time (after type checking) to their primitive supertype. Example Definition Note: this is not about Complex numbers, I am just using that as one possible use case. (the only one I can think of without actually thinking about it)

/*
 file: Complex.java
 dunno really, it needs to go somewhere, buts its not a
 class or interface, but hey we've just added enum and
 @interface as new top level elements.
*/
primitive Complex extends int;

Example Explanation Complex becomes a type usable in method and constructor arguments and return types, and as field and local variable type. But not as a type parameter bounds because its a subtype of a primitive. We sort of do an erasure type thing after type checking to erase it back to an int. If a method takes a Complex, you must pass it a Complex, not an int. So how do you get one, its not a class, so you don't construct one. I guess you would cast to it.

Complex = (Complex)5;

Why? Well thats a good question. I guess I see this as maybe an enabling technology. Can others see where this would be useful? The use case I am thinking of (which by itself is completely insufficient to justify this feature) is as follows. use Case 1 of 1 People are talking about putting immutable objects on the stack rather than the heap. They want complex numbers etc on the stack. One approach might be to give people the tools to make their own stack, where they manage their own objects, similar in a way to the RT for java stuff. Imagine a complex number class. You construct an instance of it which is really a fixed size array of complex numbers (actually 2 arrays, one for each component). Constructor arg specifies the size. All the complex math operator methods take and return a Complex which is a subtype of int, and is the index where that Complex number is in the internal arrays. You can then do a whole heap of complex number operations, BUT all the complex numbers you use are actually (in the JVM) ints, and can therefore be on the stack if they are local variables. So Complex looks something like this

class ComplexLib {
    private float real[], imaginary[];
    int size;

    ComplexLib(int size) {
        real = new float[size];
        imaginary = new float[size];
        this.size = 0;
    }

    Complex create(float r, float i) {
        size++;
        real[size] = r;
        imaginary[size] = i;
        return (Complex)size;
    }

    Complex add(Complex a, Complex c) {
        /* this is probably wrong - its years since I
           forgot about complex arithmetic */
        return create(real[a] + real[c],
                      imaginary[a] + imaginary[c]);
    }

    void assign(Complex lhs, Complex rhs) {
        real[lhs] = real[rhs];
        imaginary[lhs] = imaginary[rhs];
    }
}

private Complex doSomething(ComplexLib env) {
    Complex a, b, c;
    a = env.create(0, 1);
    b = env.create(1, 0);
    c = env.define(); // allocates a new one, value is 0,0
    env.assign(c, env.add(a, b));
    return c;
}

Yeah, Yeah, I know, heaps of details to get right.
Enter and Exit stack frames; Should you have to cast a Complex to an int before using as an array index? (probably not since Complex is a subtype of int, just not the other way around. but that would mean you could << a complex and & two of them - :( and accidentally add them : probably means you must explicitly up cast as well as down cast); bla bla bla. Maybe this just boils down to type safe array indices? Anyway that's it. Yes I know the complex use case would be enhanced by the anti operator overloading operator definition thingy here Remember, anything this can do, can be done today, just use the primitive, all this would add is a level of type safety at compile time, just like generics did compared with casting from Object. Time is limited so the discussion (if any) is probably more valuable if it looks at use cases where this could be useful, rather than arguments of syntax and rules. That can come later IFF we can find hundreds of places where this makes life significantly easier or safer. So have you got a problem where this suddenly make a better solution feasible? Flame throwers on, aim, FIRE Sure, In a sentence it achieves much better memory usage, and reduces GC overheads. One of the complaints (see other topics in this forum), is a performance issue when using heavyweight objects for small immutable things. Some people were asking for escape analysis, others want immutable classes stored on the stack. If java contained everything that everyone wanted, it would be monstrous. What needs to be done, is to acknowledge the problems, then see if there is some form of base enabling technology that allows all those people to solve their own problems. One meta solution enabler, for a whole range of problems. Good languages are like that, they are a toolkit for solving problems, not a collection of solutions. While I acknowledge the sorts of problems the complex math people (for example) are talking about, the JVM changes and such like needed for the proposed solutions just seems untenable. SO.. Is there a way to achieve something pretty much equivalent, but coming at the problem from a different angle? Can they have their own stack and manage it in the API, in such a way as to avoid massive tiny object allocation and deallocation, and the overhead of having an object on the heap (which about doubles the space required to store a Complex number). It seems to me that having these type safe subtypes of primitives, might just be building block to enable such solutions. Sure, if you aren't worried about performance and memory usage (and that is probably true for a large number of applications), the Complex Class is fine, but there are applications where those overheads are unacceptable. IF java can provide a mechanism to enable library writers to optimize heavily in these cases, then that is a good thing, because it has all sorts of uses. If java just has complex numbers built in, that is no use to the rest of us. If the compiler stops you passing a [code]primitive kgMass extends float[/code] to a method expecting a [code]primitive poundMass extends float[/code] Then that is way better than if they are all just floats. At least one space mission may not have failed if their language had made (and enforced) this type of distinction. I suspect that the performance costs of the indirection would exceed the gains in reduced GC. Both escape analysis and (lightweight) immutable classes would also benefit the wider community beyond those using Complex. 
> Both escape analysis and (lightweight) immutable classes would also benefit the wider > community beyond those using Complex. Exactly, But this is not about Complex, it is about subtyping primitives. But we are very unlikely to get escape analysis. We may get lightweight immutable objects some time (it has been talked about for years). So the question is, is there a mechanism that allows us to build our own when we need it? I think this is possible but any solution is fragile because an arbitrary primitive might be used where only a subset with special meaning are valid. Typedefs/subtypes of primitives, might be sufficient to make these sorts of solution feasible, for a whole range of applications. > I suspect that the performance costs of the indirection > would exceed the gains in reduced GC. There is indirection involved with objects as well. Footprint is certainly reduced. So... basically, what you want is "typedef"? Why should complex be a subtype of int? As a subtype of double they would be more logical and there would not be a problem with array indeces. Monika. No, In the example Complex is actually like a reference to a complex, an index into an array. But becuase of that there is no GC, till the whole ComplexLib instance was GCed, the representations (references) of the complex numbers, are then completely on the stack, not the heap. It was just an example, Either of the 64 bit primitives could be used to hold 2 floats to represent a complex as well. Well, your use case is really not a representative one. What you're trying to do is disguise an int into an opaque handle which gives you access to the "hidden" data held somewhere and let compiler type check that handle. All that and a lot more (GC, inheritance, ...) is already there when you use a reference type. A reference type variable is actually a handle (pointer) to some data. And it is also type checked. Your use case could be rewritten to this: [code] public class Complex { float real; float imaginary; Complex(float r, float i) { real=r; imaginary=i; } Complex add(Complex a, Complex c) { return new Complex(a.real+c.real, a.imaginary + c.imaginary); } void assign(Complex lhs, Complex rhs) { lhs.real=rhs.real; lhs.imaginary=rhs.imaginary; } } [/code] ... notice the reduction of code? Can you tell me what you have achieved with you use case that is not achieved with the modified example above?
https://www.java.net/node/643716
CC-MAIN-2014-10
refinedweb
1,637
60.95
Index Links to LINQ There is an extensive collection of Visual Studio 2005 C# snippets available for download. In this post I'll take a look at these snippets, and show how you can use simple XML syntax to parse the raw code that lies behind the snippets. The code used in this program can be downloaded and run in Visual Studio 2005. Snippets provide a useful means of learning about the C# language. They are a bit like an interactive tutorial, or an interactive version of a 101 tips document. Snippets are not a cure all. For one thing, you can't use them unless you know they exist, and it is difficult to remember all the existing snippets and the shortcuts that make them available. I encountered this problem when I first loaded the nearly 300 snippets from the download into the IDE. There was no easy way to discover what snippets were available or what shortcuts would allow me to access them. To help mitigate this problem, I wrote a simple program to parse the snippets and display them in a single XHTML file. This file is relatively easy to view and relatively easy to search. You can download and install the 300 snippets with a few clicks of the mouse. After running the installation program, you will want to enter the IDE, and press Ctrl-k, Ctrl-b to access the Snippet Manager. The manager should also be available on the Tools menu. If it is not, then right click on the menu bar, choose Customize from the popup, find the Tools category, and drag the Snippet Manager on to the tools menu. In the Snippet Manager, press the Add button, browse to the place where the snippets were installed, and press the Open button. By default, the snippets should be found in a directory called MSDN located beneath your (My) Documents directory. You will probably need to restart the IDE before the snippets are available for use. Access snippets by typing code in the IDE. For instance, type in the letters cw, and press tab. The following code should be printed out in the editor: Console.WriteLine(); I'm cheating a bit here, because the cw snippet ships with Visual Studio. To make things more interesting, let's try the first snippet in the list of snippets that were just installed. Create a default Console Application. Type the new snippet, called osUser. When you type it in and press tab, it resolves to code that looks like this. string userName = Environment.UserName; You can use both this snippet, and the cw snippet, in a single program: 1: using System; 2: using System.Collections.Generic; 3: using System.Text; 4: 5: namespace ConsoleApplication1 6: { 7: class Program 8: { 9: static void Main(string[] args) 10: { 11: string userName = Environment.UserName; 12: Console.WriteLine(userName); 13: Console.ReadLine(); 14: } 15: } 16: } Voila. Using Visual Studio and snippets you were able to create a whole program, and only had to type the single line of code found on line 13. Let's try another snippet, with the mellifluous name of appActNa. This is a very strange little snippet, but I'm showing it to you precisely because it is bizarre and unexpected, and also because it is a bit tricky to use. In particular, you have to change your project's references before the code generated from this snippet will compile. Many snippets ask you to take this step, so it is worth taking a moment to see how it works. Remove the code you created in the last section and type the snippet appActNa directly into the Main method of your console application. 
This snippet produces the following code: Microsoft.VisualBasic.Interaction.AppActivate("Untitled - Notepad"); One can be forgiven for assuming that this code must be some kind of mistake. After all, we are in C#, not Visual Basic. It turns out, however, that one can run an assembly called Microsoft.VisualBasic from inside a C# program without ever writing VB code. To start figuring out what is going on, run the Snippet Manager or reference the snippet HTML file I mentioned earlier. Here is the entry for appActNa: As you can see, the description of the snippet specifies that you must reference a file called Microsoft.VisualBasic.dll. To follow the instructions laid out here, bring up the Solution Explorer (Ctrl-Alt-L), and right click on the References node. Choose Add Reference from popup menu, and Browse down to Microsoft.VisualBasic. Click the OK button to add the reference to your project. Now bring up an empty copy of NotePad. By default, the message bar at the top should read "Untitled - Notepad." If that is the case, then running your program should bring your copy of notepad to the foreground. In this section we are going to look at snipped called dtTableAdaptPartial. When triggered, it produces this code: 1: namespace NorthwindDataSetTableAdapters 2: { 3: public partial class CustomersTableAdapter 4: { 5: 6: } 7: } The Snippets themselves are stored in XML files with an extension of .snippet. Here is the relatively simple one used to define dtTableAdaptPartial: Listing One: The XML used to define a snippet 1: <?xml version="1.0"?> 2: <CodeSnippets xmlns=""> 3: <CodeSnippet Format="1.0.0"> 4: <Header> 5: <Title>Extend a TableAdapter w/Partial Classes</Title> 6: <Author>Microsoft</Author> 7: <Description>Extends a TableAdapter using partial classes.</Description> 8: <Shortcut>dtTableAdaptPartial</Shortcut> 9: <SnippetTypes> 10: <SnippetType>Expansion</SnippetType> 11: </SnippetTypes> 12: </Header> 13: <Snippet> 14: <Declarations> 15: <Literal> 16: <ID>Namespace</ID> 17: <Type>String</Type> 18: <ToolTip>DataSet the TableAdapters function upon</ToolTip> 19: <Default>NorthwindDataSet</Default> 20: </Literal> 21: <Literal> 22: <ID>TableAdapter</ID> 23: <Type>String</Type> 24: <ToolTip>The name of the TableAdapter you wish to expand</ToolTip> 25: <Default>CustomersTableAdapter</Default> 26: </Literal> 27: </Declarations> 28: <Code Language="csharp"> 29: <![CDATA[namespace $Namespace$TableAdapters 30: { 31: public partial class $TableAdapter$ 32: { 33: $end$ 34: } 35: }]]> 36: </Code> 37: </Snippet> 38: </CodeSnippet> 39: </CodeSnippets> To save space, I shortened the text in the Description and in the first of the two ToolTip fields. There is no need to go through the XML schema of the code in Listing One line by line. Certain obvious features, such as the Title (line 5), Description (line 7) and Shortcut (line 8) elements, all tend to stand out. Most of the XML is self explanatory. For instance, the Shortcut element defines the bit of code that you type into the IDE in order to trigger the snippet. In this case, that code would be dtTableAdaptPartial. Looking down to line 28, you see the XML that defines the code that will be generated in the IDE. Again, the syntax is relatively self explanatory, except for the bits that have dollar signs around them. This is the syntax that snippets use to reference the Literals found in the Declaration section which starts on line 14. For instance, the $TableAdapter$ declaration begins on line 21. 
After inserting the snippet in the IDE, you can tab back and forth between the $Namespace$ and $TableAdapter$ objects. These are the elements you will typically replace with variables specific to your own program. For instance, your TableAdapter is probably not called the CustomersTableAdapter, but something specific to your program such as MyTableAdapter. Holding your mouse over these objects brings up the tooltips defined in lines 18 and 24. The value $end$ is used to designate the place in your code where the cursor should be located after the snippet is inserted. The code to parse one of these XML files is quite simple. Some of the "tricky" parts can even be generated from the XML snippets found in the xml section at the bottom of our HTML page. Listing Two: Simple code for parsing the XML in a snippet file. 4: using System.Xml; 5: 6: namespace ReadXmlSnippets 7: { 8: class ParseSnippet 9: { 10: private CreateHtml createHtml; 11: private XmlDocument doc = null; 12: 13: public ParseSnippet(CreateHtml createHtml) 14: { 15: this.createHtml = createHtml; 16: } 17: 18: private void ShowNode(string elementName) 19: { 20: XmlNodeList nodes; 21: nodes = doc.GetElementsByTagName(elementName); 22: createHtml.WriteTableDelimiter(nodes[0].FirstChild.Value, 1); 23: } 24: 25: public void Parse(string directory, string fileName) 26: { 27: createHtml.StartTableRow("LightYellow"); 28: doc = new XmlDocument(); 29: doc.Load(fileName); 30: ShowNode("Shortcut"); 31: ShowNode("Description"); 32: createHtml.EndTableRow(); 33: } 34: } 35: } The CreateHtml class referenced here is custom code that helps you generate valid XHTML with a minimum of fuss. It has simple methods in it such as the following, which is used to designate the end of XHTML file: 1: public void StopHtml() 2: { 3: AppendToTextFile("</body>"); 4: AppendToTextFile("</html>"); 5: } The complete code for the CreateHtml.cs file is included in the sample codee associated with this post. The code on lines 28 and 29 of Listing Two shows how to create an XmlDocument, and how to load a snippet into it. The XmlDocument class provides many utility methods that make it easy for you to load and parse an XML file. For instance, up on line 21 you can see the GetElementsByTagName method, which is used to retrieve a particular node from an XML file. (You can use the xmlFind snippet to generate most of the code in the ShowNode method that contains the call to GetElementsByTagName.) Since there are nearly 300 snippet files in the download from the C# site, one needs some reasonable way to locate each file that needs to be parsed. The code found in Listing Three shows the simple solution to this problem. Listing Three: Simple code for iterating over the directories that contain the snippet files. 
4: using System.Collections; 8: class XmlSnippets 10: private CreateHtml createHtml = 11: new CreateHtml(@"..\..\snippets.html"); 12: private string saveDirectory = ""; 13: private string snippetDirectory; 14: 15: private String GetMyDocumentsDir() 16: { 17: return Environment.GetFolderPath( 18: Environment.SpecialFolder.Personal); 19: } 20: 21: public void RunSearch() 22: { 23: snippetDirectory = GetMyDocumentsDir() + 24: @"\MSDN\Visual C# 2005 Code Snippets\"; 25: List<FileData> files = new List<FileData>(); 26: DirSearch(snippetDirectory, ref files); 27: 28: ShowCollection(files); 29: Console.ReadLine(); 30: } 31: 32: private void ShowCollection(List<FileData> files) 33: { 34: createHtml.StartHtml(); 35: createHtml.StartTable(); 36: 37: int count = 0; 38: ParseSnippet parseSnippet = new ParseSnippet(createHtml); 39: foreach (FileData fileData in files) 40: { 41: if (!fileData.Directory.Equals(saveDirectory)) 42: { 43: createHtml.WriteSingleRow( 44: fileData.Directory.Substring(snippetDirectory.Length)); 45: saveDirectory = fileData.Directory; 46: } 47: parseSnippet.Parse(fileData.Directory, fileData.FileName); 48: count++; 49: Console.WriteLine(fileData.FileName); 50: } 51: 52: createHtml.EndTable(); 53: createHtml.StopHtml(); 54: } 55: 56: private void DirSearch(string sDir, ref List<FileData> files) 57: { 58: try 59: { 60: string[] dirs = System.IO.Directory.GetDirectories(sDir); 61: 62: foreach (string foundDirectoryName in dirs) 63: { 64: foreach (string foundFileName in 65: System.IO.Directory.GetFiles(foundDirectoryName, 66: "*.*")) 67: { 68: files.Add(new FileData(foundDirectoryName, 69: foundFileName)); 70: } 71: DirSearch(foundDirectoryName, ref files); 72: 73: } 74: } 75: catch (System.Exception excpt) 76: { 77: Console.WriteLine(excpt.Message); 78: } 79: } 80: } 81: } The code shown here is designed to run unchanged inside of the IDE without the user having to enter any parameters. This means there are several compromises that you may want to customize in your own versions of this code. I'm careful not to hardcode in the path to the snippets, but instead use a method, shown in lines 15 through 19, that retrieves the user's My Document directory. This should work, so long as you installed the snippets into the default directory. In line eleven I place the the HTML file that is output by the program in the root of the project directory. The code in the DirSearch method, found on line 56, is nearly straight snippet code accessed by typing filSearchDir. It uses the GetDirectories and GetFiles methods of the System.IO.Directory namespace. This article explains a few facts about snippets, and shows how you can use the XML API's that ship with Visual Studio to simply and easily parse a file. You can learn more about Code Snippets by reading the MSDN documentation. If you would like to receive an email when updates are made to this post, please register here RSS You've been kicked (a good thing) - Trackback from DotNetKicks.com Good Article Orcas and LINQ development has the full attention of nearly everyone on the C# team these days, and the njxihdsodkklsnjchsgwnxmloiponcjh nmxzkhuwhsjhydbnxhjjskomxbzueihsknxhnck nxklsio ncjksnhqhdjuwinxhd njcxkji.mckjso This is index to the various technical posts I have created for this blog. As I add to each section, Wow—looks like I'm going to keep going on the "lazy" theme that I've started over the past few weeks, For those of you who develop with Visual Studio 2005 would know, Code Snippets have you a TON of time. 
It can be helpful to start from the beginning when working with new technologies. This post explains
Matthew Manela, an engineer on our Online Tools team, has created and released a free, community-based
Behind the big noise the small things can get lost. At least the small things
http://blogs.msdn.com/charlie/archive/2006/10/25/snippets-n-xml.aspx
crawl-002
refinedweb
2,207
55.74
Unanswered: sencha touch 2.1 and cordova 2.3.0; stuck on the loading screen on iOS based devices

Hi, when I try to load my application in the iPad/iPhone browser I get stuck on the loading screen. No warning/error messages are issued. The Ext.application launch method is not called but the script is loaded. This happens only when cordova-2.3.0.js is included inside index.html. Can anybody else confirm this?

Experiencing the same.

Not sure if this is helpful, but I'm experiencing the exact same issue on this end. Here's the error I'm receiving. Perhaps related to configuration, but no idea what's going on yet. TypeError: 'undefined' is not a function (evaluating 'cordova.require('cordova/exec').nativeEvalAndFetch(function(){cordova.fireDocumentEvent('resign');})') Using Safari web inspector to troubleshoot.

Don't include it in index.html, but in app.json.

Also having this problem. Posted a question on StackOverflow. The app will load in desktop browsers (at least Chrome and Safari for Mac). It also works in a native (PhoneGap) wrapper. But all mobile browsers I've tried -- Mobile Safari (iOS 6 on iPhone and simulator), Chrome on iOS, the browser on Android version 2.3 -- hang on the loading page, without launching the app. My cordova.js file is declared in app.json, not in index.html. One other note -- the Ext object is loaded (I think) fully, my app's namespace is loaded with all my views, controllers, models, and stores, but the app and config properties are not present. I'm using ST 2.1 and Cordova 2.3.0.

Removing the cordova.js reference from app.json solved my problem. Strange, because earlier versions of Cordova would load fine alongside ST 2 in mobile browsers. Also strange that it works in desktop browsers. Is there a way to automate this for different builds? Like, package would include cordova.js but production would not?
http://www.sencha.com/forum/showthread.php?255145-sencha-touch-2.1-and-cordova-2.3.0-stuck-on-the-loading-screen-on-iOS-based-devices
CC-MAIN-2014-10
refinedweb
368
62.54
How to change the robot_description parameter for the joint_trajectory_controller

Dear ROS community, I am simulating an ABB IRB2400 robot arm in Gazebo and controlling it with MoveIt. This is working so far. My goal is to simulate two arms, so I moved them into different namespaces and gave their descriptions different names. The problem now occurring is that the joint_trajectory_controller can't find the 'robot_description' parameter on the parameter server. The errors I get are:

[ERROR] [1500973451.323217096, 0.454000000]: Could not find parameter robot_description on parameter server
[ERROR] [1500973451.323364757, 0.454000000]: Failed to parse URDF contained in 'robot_description' parameter
[ERROR] [1500973451.323468398, 0.454000000]: Failed to initialize the controller
[ERROR] [1500973451.323531930, 0.454000000]: Initializing controller 'arm_controller' failed
[ERROR] [WallTime: 1500973452.325133] [1.452000] Failed to load arm_controller

I looked in the wiki and the code, and as far as I know there is no way to tell the joint_trajectory_controller to look for the robot description under a different name than 'robot_description'. Am I wrong? Is there a simple way I am overlooking? If there is no way implemented, I would like to add one, but I could use some enlightenment on how to do that in a good way. Entering the robot description in the yaml file describing the controller would be the way I would choose, but I have not yet found the part of the code which parses the arguments from the yaml into the actual controller.

I managed to implement the use of a different robot description. It was quite easy after I understood how everything works. I copied the joint_trajectory_controller package and renamed it. Then you need to register it under that name. To get a different description I added a parameter to the server.

Could you please elaborate on what was required to "register it under that name"?
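One low-tech alternative that avoids patching the controller (a sketch only, not from this thread; all parameter names below are placeholders) is to copy each arm's URDF onto the robot_description key inside that controller's namespace before the controller is spawned, so the stock joint_trajectory_controller finds it where it already looks:

#!/usr/bin/env python
# Mirror a URDF stored under a custom key to 'robot_description' inside a
# controller namespace. All parameter names here are placeholders.
import rospy

def mirror_description(src_param, dst_param):
    urdf = rospy.get_param(src_param)   # e.g. /arm1/arm1_description
    rospy.set_param(dst_param, urdf)    # e.g. /arm1/robot_description
    rospy.loginfo("Copied %s -> %s", src_param, dst_param)

if __name__ == "__main__":
    rospy.init_node("mirror_robot_description")
    mirror_description(rospy.get_param("~source", "/arm1/arm1_description"),
                       rospy.get_param("~target", "/arm1/robot_description"))

Whether this is acceptable depends on the setup; it simply trades the renamed-controller approach for a tiny helper node (or an equivalent <param> line in the launch file).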
https://answers.ros.org/question/267332/how-to-change-the-robot_description-paramter-for-the-joint_trajectory_controller/
CC-MAIN-2022-05
refinedweb
304
58.48
It seemed to work when I tried it. To be honest, I haven't seen the rest of your code, so that might be it, but what I did was this: public String checkString() { String pdphrase = null; try { ...

Uhm, you might want to line out your text, it's hard to read. But what I get out of this is: you have a JSP page. You have 2 buttons (on and off). The button On will trigger the servlet for the On...

Uhm, did you get some code to get you started? Because you could probably do this in a lot of ways. And what do you mean with "Users, Accounts --> Different types, Visas, Checkings, Savings,...

A way you could do it is: use a FileReader to read the file: FileReader fr = new FileReader("Uvas.txt"); then you could use a BufferedReader to read the text in the file:

Well, this doesn't really work, but that is because the If statement isn't right^^. The thing is, I'm used to looping through arrays, and not through ArrayLists (if it's the same then I'm sorry). Here...

Yeah, so let's say you got an application with a button. And if you press the button "System.exit();" will be called; that means that your application will end (terminate) if you press the button.

When I copy your code and paste it into Netbeans, Netbeans gives errors at these parts: aminosSize = aminos.size(); aminosAry = new aminosAry[aminosSize]; The errors say that; It cannot...

I guess you should just: import java.awt.Graphics; After you've imported the graphics, you can make a method which draws your Oval. Like this: public void DrawtheOval(Graphics g) {...
http://www.javaprogrammingforums.com/search.php?s=da2efccee14cf07f2b083c8988c57b1a&searchid=1419619
CC-MAIN-2015-11
refinedweb
315
93.34
Api.AI

This component is designed to be used with the "webhook" integration in api.ai. When a conversation ends with a user, api.ai sends an action and parameters to the webhook. api.ai requires a public endpoint (HTTPS recommended), so your Home Assistant instance should be exposed to the Internet. api.ai will return fallback answers if your server does not answer, or takes too long (more than 5 seconds). api.ai can be integrated with many popular messaging, virtual assistant and IoT platforms, e.g. Google Assistant (Google Actions), Skype, Messenger. See the complete list here.

Using Api.ai it is easy to create conversations like:

User: Which is the temperature at home?
Bot: The temperature is 34 degrees
User: Turn on the light
Bot: In which room?
User: In the kitchen
Bot: Turning on kitchen light

To use this integration you should define a conversation (intent) in Api.ai, configure Home Assistant with the speech to return and, optionally, the action to execute.

Configuring your api.ai account

- Log in with your Google account.
- Click on "Create Agent"
- Select name, language (if you are planning to use it with Google Actions check the supported languages here) and time zone
- Click "Save"
- Go to "Fulfillment" (in the left menu)
- Enable Webhook and set your Home Assistant URL with the Api.ai endpoint
- Click "Save"
- Create a new intent
- Below "User says" write one phrase that you, the user, will tell Api.ai, e.g.: Which is the temperature at home?
- In "Action" set some key (this will be the binding with the Home Assistant configuration), e.g.: GetTemperature
- In "Response" set "Cannot connect to Home Assistant or it is taking too long" (fallback response)
- At the end of the page, click on "Fulfillment" and check "Use webhook"
- Click "Save"
- On the top right, where it says "Try it now…", write, or say, the phrase you have previously defined and hit enter
- Api.ai has now sent a request to your Home Assistant server

Take a look at "Integrations", in the left menu, to configure third parties.

Configuring Home Assistant

When activated, the apiai component will have Home Assistant's native intent support handle the incoming intents. If you want to run actions based on intents, use the intent_script component.

Examples

Download this zip and load it in your Api.ai agent (Settings -> Export and Import) for example intents to use with this configuration:

# Example configuration.yaml entry
apiai:

intent_script:
  Temperature:
    speech: The temperature at home is {{ states('sensor.home_temp') }} degrees
  LocateIntent:
    speech: >
      {%- for state in states.device_tracker -%}
        {%- if state.name.lower() == User.lower() -%}
          {{ state.name }} is at {{ state.state }}
        {%- elif loop.last -%}
          I am sorry, I do not know where {{ User }} is.
        {%- endif -%}
      {%- else -%}
        Sorry, I don't have any trackers registered.
      {%- endfor -%}
  WhereAreWeIntent:
    speech: >
      {%- if is_state('device_tracker.adri', 'home') and is_state('device_tracker.bea', 'home') -%}
        You are both home, you silly
      {%- else -%}
        Bea is at {{ states("device_tracker.bea") }} and Adri is at {{ states("device_tracker.adri") }}
      {% endif %}
  TurnLights:
    speech: Turning {{ Room }} lights {{ OnOff }}
    action:
      - service: notify.pushbullet
        data_template:
          message: Someone asked via apiai to turn {{ Room }} lights {{ OnOff }}
      - service_template: >
          {%- if OnOff == "on" -%}
            switch.turn_on
          {%- else -%}
            switch.turn_off
          {%- endif -%}
        data_template:
          entity_id: "switch.light_{{ Room | replace(' ', '_') }}"
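A quick way to confirm the webhook endpoint is reachable before pointing Api.ai at it is to post a request by hand. The following Python sketch is only an illustration: the URL, the api_password parameter and the payload shape are assumptions standing in for whatever your installation actually exposes, not values taken from these docs:

import json
import requests

WEBHOOK_URL = "https://example.duckdns.org/api/apiai_webhook"  # placeholder URL

payload = {
    "result": {
        "action": "GetTemperature",   # the Action key defined in the Api.ai intent
        "parameters": {},
    }
}

resp = requests.post(
    WEBHOOK_URL,
    params={"api_password": "YOUR_PASSWORD"},   # placeholder credential
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
    timeout=5,   # Api.ai falls back if the answer takes longer than ~5 seconds
)
print(resp.status_code)
print(resp.text)

If a sensible response comes back, the public endpoint and certificate are at least reachable within Api.ai's timeout.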
https://home-assistant.io/components/apiai/
CC-MAIN-2017-34
refinedweb
530
58.99
Fast 2-view Hartley-Sturm.

#include <vnl/vnl_double_3x3.h>
#include <vnl/vnl_double_4x4.h>
#include <vnl/vnl_double_4.h>
#include <vgl/vgl_fwd.h>

FManifoldProject is a class which allows repeated fast application of the manifold projection ("Hartley-Sturm") correction to points in two views.

Modifications:
AWF 030897 Moved to MViewBasics
210598 AWF Return squared error, as sqrt(|x - p|^2 + |x' - p'|^2) is meaningless.
AWF Handle affine F.
P. Torr added in a check for multiple solutions; this might be necessary to flag the instance when a particular correspondence might have several possible closest points all near to each other, indicating high structure variability and high curvature in the F manifold. These points should be treated with care, but are interesting as they are in loci of high information.
22 Jun 2003 - Peter Vanroose - added vgl_homg_point_2d interface

Definition in file FManifoldProject.h.
http://public.kitware.com/vxl/doc/release/contrib/oxl/mvl/html/FManifoldProject_8h.html
crawl-003
refinedweb
155
51.34
#include <complex.h>

double complex cproj(double complex z);
float complex cprojf(float complex z);
long double complex cprojl(long double complex z);

Link with -lm.

These functions project z onto the Riemann sphere: any value with an infinite real or imaginary part is projected to positive infinity on the real axis, and all other values are returned unchanged.

For an explanation of the terms used in this section, see attributes(7).

C99, POSIX.1-2001, POSIX.1-2008.

In glibc 2.11 and earlier, the implementation does something different (a stereographic projection onto a Riemann Sphere).

This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manual.cs50.io/3/cproj
CC-MAIN-2021-04
refinedweb
101
60.21
For further information on IPC namespaces, see ipc_namespaces(7). Only a privileged process (CAP_SYS_ADMIN) can employ CLONE_NEWIPC. This flag can't be specified in conjunction with CLONE_SYSVSEM.

For further information on network namespaces, see network_namespaces(7).

Calls to sigaction(2) performed later by one of the processes have no effect on the other process. Since Linux 2.6.0, the flags mask must also include CLONE_VM if CLONE_SIGHAND is specified.

- CLONE_STOPPED (since Linux 2.6.0)

In contrast to the glibc wrapper, the raw clone() system call accepts NULL as a stack argument (and clone3() likewise allows cl_args.stack to be NULL).

EPERM: CLONE_NEWUSER was specified in the flags mask, but either the effective user ID or the effective group ID of the caller does not have a mapping in the parent namespace (see user_namespaces(7)).

- EPERM (since Linux 3.9): CLONE_NEWUSER was specified in the flags mask, and the limit on the number of nested user namespaces would be exceeded. See the discussion of the ENOSPC error above.

Versions: The clone3() system call first appeared in Linux 5.3.

Conforming to: These system calls are Linux-specific and should not be used in programs intended to be portable.
https://dashdash.io/2/clone3
CC-MAIN-2021-39
refinedweb
194
77.43
Override change_partner_id method in Opportunities

Hi, I'm working on adding a new field to the opportunities view and I want to override the on_change event of the parent_id field to use my new custom field. I've changed the view crm.crm_case_form_view_oppor of the crm.lead object as follows:

Original: <field ... >
Changed: <field ... >

But now, I can't find the onchange_partner_id method! I noticed that in crm.lead there are 2 onchange methods for partner_id:

- on_change_partner (I used this in the lead view)
- onchange_partner_id (defined in base_stage)

The first one is called when changing the partner_id in the lead form, the second one when changing the partner_id in the opportunity form. The methods seem to be almost identical, except that the second one does an address_get for 'contact'. In the Opportunity view the method receives the parameters (parent_id and email) and the context; however, in base_stage the method doesn't seem to receive the context, and I can't override it in order to use my new custom field:

def onchange_partner_id(self, cr, uid, ids, part, email=False):
    """ This function returns value of partner address based on partner
        :param part: Partner's id
        :param email: Partner's email ID
    """
    data = {}
    if part:
        addr = self.pool.get('res.partner').address_get(cr, uid, [part], ['contact'])
        data.update(self.onchange_partner_address_id(cr, uid, ids, addr['contact'])['value'])
    return {'value': data}

I want to override this method in the Opportunity view. Has anyone dealt with this task? Thank you all.

Well, after more than a year I have evolved like a pokemon into a new level of evolution, and I came back to my old question to answer the myself of the past. The solution was quite simple. The base_stage object is a set of utilities for all the models that have inherited the mailgate.thread model, like crm.lead does in:

class crm_lead(base_stage, format_address, osv.osv):
    """ CRM Lead Case """
    _name = "crm.lead"
    _description = "Lead/Opportunity"
    _order = "priority,date_action,id desc"
    _inherit = ['mail.thread', 'ir.needaction_mixin']

This is the simple solution:

class crm_lead_x(osv.osv):
    _inherit = "crm.lead"

    def onchange_partner_id(self, cr, uid, ids, part, email=False):
        # Override onchange_partner_id used in crm.lead but defined in base_status/base_stage
        values = super(crm_lead_x, self).onchange_partner_id(cr, uid, ids, part, email=email)
        if part:
            partner = self.pool.get('res.partner').browse(cr, uid, part, context=False)
            values['value']['new_partner_field'] = partner.new_partner_field
        return values

Enjoy it, myself of the past! :-)

Thanks for your answer. Just yesterday I added 3 new fields to res.partner and I want to use these 3 new fields in crm.lead (Opportunities form view). I want these 3 new fields to show their information automatically when I select a customer: when you choose a customer on the Opportunities form view, the Email and Phone appear automatically. I read your question and answer, and I think it will help me do this. I have also changed my plan: add the 3 new fields to crm.lead, and when these fields' information is changed in crm.lead it is also changed in res.partner automatically. Maybe it is still hard for me, but at least it will improve my skill. :-) Sorry for my poor English. Thanks again.
zhanghao: I'm happy that my long-lost question & answer could help you :-)
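For anyone on a newer Odoo release, the same override can be sketched with the decorator-based onchange API introduced in Odoo 8.0. This is only an illustration: new_partner_field is the hypothetical custom field from the posts above and is assumed to exist on res.partner as well, exactly as in the old-API answer:

# Hypothetical new-API sketch (Odoo 8.0+); new_partner_field is the custom
# field from the discussion above, not a standard Odoo field.
from openerp import api, fields, models

class CrmLead(models.Model):
    _inherit = "crm.lead"

    new_partner_field = fields.Char("New partner field")

    @api.onchange("partner_id")
    def _onchange_partner_new_field(self):
        # Copy the custom value from the selected partner onto the lead
        if self.partner_id:
            self.new_partner_field = self.partner_id.new_partner_field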
https://www.odoo.com/forum/help-1/question/override-change-partner-id-method-in-opportunities-42039
CC-MAIN-2017-22
refinedweb
593
59.19
FAQs on Fluidized Bed Filtration Related Articles:: BioFiltration, Nutrient Control and Export, Related FAQs on: Biological Filtration, Ammonia, Nitrites, Nitrates, Phosphates, Denitrification/Denitrifiers, Wet-Dry Filters, Bio-Balls, According to Bob at Aquatic Specialties, this FB filter can handle 8-10,000 feeder comets. Sea Storm Filters 11/23/11 I am trying to locate a Sea Storm filter like the one you have pictured on your site, the larger square clear acrylic one. <Can you link me to this picture?> My dad was the original creator of that particular model. Any help in trying to locate one would be greatly appreciated. <Will this help?> Thanks, <You're welcome. James (Salty Dog)> John Fluidized Bed Design/Fluidized Bed Filters 10/28/10 Hello from Japan. <Hello James> I have been toying with the idea of building a fluidized bed reactor for a 50 gallon tank. Currently it is filtered by a trickle filter with bioballs, Beckett skimmer, coil denitrator (feeding 1.8ml of ethanol), and Rowaphos in an old Coral Sea calcium reactor which is now converted to work as a fluidized bed <filter>. Even with this, I manage to have algae grow on the glass, requiring removal every 4 days. Lighting is 3 150W MH units (2 20000 Kelvin and 1 14000 Kelvin unit) that are on for 10 hours a day. Water temp is 23.5 C. No measureable ammonia or nitrite. Calcium is around 400 ppm and dKH is 12 ( I use Dr. Farley's bionic solution <Not familiar with that product.> plus a small calcium reactor powered by a not so reliable yeast generator). I have read that FSB reactors when designed properly can provide both aerobic and anaerobic filtration, but that it requires height. Is there any rule of thumb as to how much depth is required to establish anaerobic bacteria? <Fluidized Bed Filters are designed so that all of the sand/media is tumbling. In that regard, it would be impossible to establish anaerobic bacteria in a fluidized bed filter as these bacteria only grow in areas of little to no oxygen. Fluidized bed filters never gained much popularity among marine aquarists as there are more efficient means of providing denitrification; notably with live rock and wet/dry filters. You may want to read our FAQs on this subject by going here.> Regards, <Ditto. James (Salty Dog)> James Miller> Fluidized Bed Filter Removal 1/13/10 Crew, <Hi Gary> My tank is just about 5 days into its cycle, no spikes yet. <OK> Can you tell me if I should remove my Fluidized Bed Filter, as I have been reading on the database some not so good things about them, and if so with only 24 pounds of live rock will my tank be sustainable. This question is nothing to do with the cycle, its just a general question of the pros and cons of a bed filter. <With your Live Rock/Sand, the use of the bed filter isn't necessary. I would get another 15-20 pounds of live rock if room permits.> 40 Gallon JuweL Vision 180 with internal filter removed. 24 Pounds of Live Rock. 20 Pounds Coral Reef Live Sand. Lifeguard FB 300 Fluidized bed filter. Protein Skimmer with a needle wheel venturi pump flow rate: 1850 L/H. Wave Maker 6000L/H Powerhead. Wave Maker 3000L/H Powerhead. MaxiJet 600L/H Powerhead. <Mmm, if your above figures are correct, you have an awful lot of flow for a 40 gallon tank.> Thanks in advance <You're welcome. 
James (Salty Dog)> Gary Re Fluidized Bed Filter Removal 1/14/10 James (Salty Dog), <Gary> I've checked my pump flow rates and they are as stated 3000L/H and 6000L/H, they are both Sun Sun models and after looking at reviews and forums some people prefer them over Koralia, anyway back to the point do you think I should remove one of them or not, the thing is I bought big to get a good flow around my Live Rock because I have read you can never get as strong as the sea anyway in terms of flow. To be honest they don't actually seem as powerful as stated my sand isn't blowing all over or my Live Rock, I would like your opinion though as I value it. <If they produce a similar flow (diffused) as in the Koralia, I feel from personal experience that even then it's a bit much. I have a 60" x 18" tank and equipped it with three Koralia 3's and I had a sand storm. I might add that I have a shallow tank (18") which would make the flow more turbulent. Taller tanks would likely calm down the flow some. As long as your fish aren't hanging on the rocks for dear life, and your corals don't look like my hair riding in a convertible, then go for it. I'm guessing from your information that you are using a wavemaker of some type whereas the pumps alternate.> I have a stock list I would like you to look over and tell me if is viable with the set up I have with the amount of Live Rock I have at the moment, although this obviously won't be happening for some time yet. 2 x A. Ocellaris or A. Percula 1 x Royal Gramma 1 x Dwarf Angel (Centropyge argi) <Centropyge, and not appropriate for your size tank.> 1 x Watchman Goby 1 x Pistol Shrimp <I would add more live rock before adding anymore fish.> Thanks. <You're welcome. James (Salty Dog)> Use of aragonite in fluidized sand filter, shark set -up f' as well 9/1/08 Hello Guys, <Brian> I have a question regarding the use of aragonite sand in a fluidized sand filter for a marine Elasmobranchs pond setup. <Mmmm> Are there any drawbacks to the use of aragonite sand in fluidized sand filters as opposed to the sand that comes with the units (ie silica sand)? <Yes... mainly the pumping action (energy) it takes to keep this asymmetrical, different size media in suspension, turning... otherwise issues from channeling... from insufficient water movement> Is it a good idea to use aragonite as opposed to the supplied sand? <IMO/E, no> I greatly appreciate your time in reading and/or answering my question. Your website is a tremendous asset. Thank you, Brian <It would be a good idea to have a "monster size" DSB composed of aragonite, for buffering and anaerobic activity... but the FB is an area/processor of the forward reactions of nitrification. Bob Fenner>.> Coris Wrasse comp. and Fluidized Bed Filter 11/6/07 Hello Again Crew Two question if I could? I have a Red Coris Wrasse and was wondering if I could put another Coris Wrasse with him? <Mmm, maybe... the same species... C. gaimard? If they're small, likely so... let's say, under four inches overall length or so> I noticed in the LFS they keep 4 or 5 together. Mine is 4 inches and doesn't bother anyone else in my tank. I love them so colorful and bullet proof. Never has gotten sick and has lived through ick and velvet outbreaks in the past and never got it where everyone else has. I was thinking about getting a yellow Coris wrasse to go with him. <Oh! this is actually a Halichoeres species. H. chrysus... 
will likely get along if there's room...> The LFS always tells me things will get along and then I get home and someone getting beat up. Second question. I was thinking about adding a fluidized bed filter. I read on a company site if you slow down the flow you can turn it into a nitrate reducer. <Mmm, not likely> I know they sell ones that are similar in design for reducing nitrates and was wondering your thoughts on it? The FBF seem like a great filter and Am not sure why I haven't seen or heard more about them? Is there a drawback I missed compared to a wet dry? Thanks Crew! <A few... these FB devices are engineered to be more like wet-dries... with their media in constant upheaval, agitation... I encourage you to keep reading re marine filtration methods for now... consider adding a sump/refugium... with a DSB there... instead. Bob Fenner> Re: Yellow Watchman Goby... Actually BGA, FBS- 09/17/07 I have your book "The Conscientious Marine Aquarist" I read like crazy, I have only been in the hobby for 2 years. The octopus got me really hooked. It is a shame I can no longer acquire them in Brazil because of environmental import restrictions. I loved her. My first aquarium was a 30 gallon with live rock and sump. This new 75g one is much harder because of the Cyano problem. I had almost no problems with the pus and it is supposed to produce a ton of waste. I now have just 4 tiny fish in 75G. I know compared to where they used to live it is a drop but I thought the larger system would be more stable. Do you think I should ditch the FBF? <I would do this for a month and see what happens> It is home made. I just have the plumbing from the overflow flowing into a small bucket full of oolite Aragonite (maybe 5 pounds) which then overflows into the rest of the sump. I can remove it easily enough. I had the Cyano before I installed it, and it has only been in there a few months. BTW, I take water readings like crazy and read every resource I can get my hands on. This is my leisure activity and I can't get enough. <Mmm, you have read here: and the linked files above? BobF> Filtration For 55-Gallon FOWLR - 09/10/07 Good Day, <<Hello>> I have a 55gal saltwater FOWLR tank. Currently there is about 25 lbs of live rock, 1-inch of sugar-fine sand, a Whisper 60 filter, a Berlin Airlift 60 protein skimmer, and an Aquaclear 70 powerhead. <<Hmmm...equipment/filtration choices could be better, and there is need for more/better water movement here>> For fish, I have two 1-inch Yellow Tailed Damsels. I plan on removing the 2 damsels and placing them into a separate tank. I would like to slowly add a Saddle Puffer, a Blenny (either Bicolor or Lawnmower), a smaller butterfly (not sure which yet.. will research) and a smaller Mono (that has been slowly acclimated to full marine conditions and will be moved to a larger tank within 2 years). <<A few comments... The Mono is a true marine fish and always best kept in full-strength marine systems...and yes, will need a bigger tank than you have now. A Butterfly may work out, but a swift and agile Dwarf Angel species may do better as the Toby is likely to get "nippy"... Centropyge loricula gets my vote. And obviously, you are going to need to augment/get better filtration>> I have a few questions. <<Okay>> First, it's about my protein skimmer. There are 2 sizes of wooden air stones. 
The smaller size fits in the tubing but doesn't produce nearly enough bubbles, while the larger sized produces enough bubbles, but doesn't fit in the tubing, and the air goes up both chambers. I've tried taking out one level of the tubing but it doesn't seem to work that well. My air pump is older but I've checked for any kinks in the tubing and I'm sure it's working fine. It's a large sized pump. Any suggestions to remedy the air-problem? <<This sounds like a result of poor engineering/design re the Berlin skimmer. You could try carving "custom" air stones from Basswood (can be found at most "hobby" stores) to maximize available space, but your time/effort/monies would be better spent upgrading from this less than adequate skimmer. My suggestion for your current setup would be the AquaC Remora or Remora-Pro>> Second, I would like to add a second filter to increase the anticipated filtration needed with the added livestock. <<Indeed>> I have read many articles, and it seems like the Aquaclear 50 or 70 would be preferred over adding a second Whisper 60 filter, but I'm not sure. <<Maybe...but like the Whisper filter it will require diligent care to prevent buildup/decomposition of nitrogenous waste while providing little in the way of biological filtration. I think you would be better served with a fluidized-bed filter as these don't accumulate detritus like the other filters and are able to "ramp up" quickly with changing bio-loads>> I have a smaller tank setup with an Aquaclear, and like the functionality as well as the options for filtration. Should I bother adding another filter? <<I think some additional biological filtration will be needed/have benefit, yes>> If so, should I add the Aquaclear or the Whisper (or any other suggested filter?) <<The fluidized-bed filter...as stated>> Third, if I do add the Aquaclear, should I keep the 3 stages of filtration as is, or would trying "3 stages" of activated carbon work? <<Mmm, yes...if you go with this type filter, carbon and Poly-Filter will serve better than merely "trapping" particulates with a mechanical filter material>> Thank you (all) for the incredible amount of knowledge and information I've gained through this site!!!! Eric <<We're all quite happy to share. Do also consider adding a sump for the additional water volume and space for ancillary equipment...as well as a vegetable refugium with DSB for organics/nitrate removal along with a slew of other benefits. Oh yeah...don't forget to add more water movement in the display...the fish will appreciate it and the system as whole will benefit. Regards, EricR>> Re: Filtration For 55-Gallon FOWLR - 09/10/07 Eric, <<Eric>> Thank you for the quick response. <<Quite welcome>> I've been around saltwater aquariums for only about 6 months or so, and my roommate (who introduced me to freshwater a year ago) has had freshwater for about 5 years... <<Thank you for this...is always helpful to know the experience level of those I'm trying to assist so I can ascertain the depth of explanation required. And while we're sharing...I set up my first fish-only marine system in 1977...began with freshwater keeping some years before this>> and neither of us have really heard of Fluidized-Bed filters, even though I read WWM about 10 hours per week since the start of my saltwater tank... <<I see...some helpful reading here (), as well as here for more general marine filtration (). Do be sure to follow/read among the associated links in blue as well>> So I will definitely read about them in WWM. 
<<Ah...very good>> In the meantime, can you make some quick suggestions as to a good brand (just like you did with the Skimmer?) <<Mmm, not a lot of choices really...try a keyword search on the NET for QuickSand and/or Rainbow fluidized-bed filters>> And if I get an excellent skimmer such as the ones mentioned below, will I still need to add an additional filter, such as the Fluidized-Bed or Aquaclear/Whisper? <<Maybe not...will depend much on your stocking level and species selection. But the fluidized-bed filter is a "good to have" device for a FOWLR tank that is fairly inexpensive and easy to install...worth doing anyway, in my opinion>> I will definitely purchase more powerheads to increase water movement. <<Excellent...do also read here ()>> I have also been thinking... (a lot about my stocking plan) I figure it's better for the fish - stress, health (and my pocketbook) if I go very slowly and plan out my fish purchases first. <<I am much in agreement>> But about the Centropyge loricula - (actually, I was leaning toward the Lemonpeel angel... as a consideration instead of the butterfly) <<The Lemonpeel is a much more delicate species...definitely not for the novice/for this 55g system>> Let's say I don't purchase the puffer, would the Chaetodon auriga be a more hardy addition than a pygmy angel? <<This is a very good "aquarium suitable" species of butterfly...and I think this would be the better long-term solution for you/your tank over the Toby>> I agree with you about the fin nipping of the puffer, and that's another area for research. <<Indeed>> Further, will the Mono be OK with the puffer? <<Probably...for a time...is very quick>> Do you foresee any issues with the pygmy angel or the butterfly? (He was my first fish, and is in full marine conditions now, but is a bit skittish, and VERY fast). <<Will be okay as far as compatibility I think...but as you already know, the Mono needs a bigger tank>> Thanks again for all the help. Eric <<Happy to assist. Eric Russell>> R2: Filtration For 55-Gallon FOWLR - 09/12/07 Eric, <<Eric>> Once again, thanks for the help. <<My pleasure>> I've been reading about the fluidized-bed filters and I'm not really sold on it yet. <Oh?>> From what I've been reading, it looks like it's mostly for highly fluctuating bioloads and/or large tanks with heavy bioloads -- mostly in the beginning stages of cycling, correct? <<Mmm...is not just for "the beginning stages." And as for highly fluctuating bioloads...that is why I recommend them as this is a prevalent condition with heavily stocked systems and/or systems with messy feeders...both being very common to FO and FOWLR systems>> Either way, I'll read more. <<Okay>> This is actually a pretty exciting time, as I'm reading and doing a lot of research about fish. <<Excellent>> I've probably seriously thought about a dozen different fish that I'd like to add. <<Do feel free to bounce your choices off me once you get down to your short-list>> I'm not sure yet. <<And no need to rush it>> As for the question about the skimmer, the reading I've done in regard to the skimmers suggests I should ditch the skimmer I currently have and upgrade to a better model. <<Is my opinion as well>> Luckily the skimmer came included with a great deal on a 30 gal tank I currently have set up, so it won't be much of a monetary loss. <<I see>> As for the skimmer, I'm thinking of an AquaC Remora hang on variety. <<Is an excellent choice>> Those are pretty expensive though. Are there sites that I could find those on for cheap? 
(Besides Ebay, Craigslist, etc.) <<Hmm, not that I am aware...but the couple-hundred dollars spent on a quality skimmer for your system is cheap by comparison>> Thanks again, Eric <<Regards, Eric> A question on Fluidized Beds - 07/01/07 Hi Guys, I bought a used 60 gallon recently and it comes with a Sea Storm Fluidized bed filter. Its driven off a separate power head and when active the sand rises to above 1/4th to 1/3rd of the column. <Am familiar with this...> Along with the sand media, there seems to be some small gravel in there which probably crept in there over time. My question is, how do I determine the proper flow rate for the filter? * How high should the sand media rise? Currently its churning but does not go up very high. Stays about 1/3rd of the tube. Is that ok? Or should it rise higher? <No worries, as long as it IS rising... and NOT being pushed all the way to the top, out...> * If it must go higher should I use a more powerful powerhead or get the gravel bits out? Cheers! <I would not change the pump mechanism... is fine as it is. Please read here: re FB use. Bob Fenner> FBF (Fluidized Bed Filter) emptying into sump with DSB (Deep Sand Bed) 9/17/06 Great site lots of info keep up the great work please! My question has to do with using a DSB after a DIY FBF. I've read that you feel a FBF produces a lot of nitrates <Some can/do...> and I was curious as to the use of a DSB in a 150 gallon Rubbermaid sump as a way to process the nitrates. <Can be made to work> I have a 125 gallon FO tank with a 1/2 inch of sand and very little live rock it houses a sohal tang a queen angel and a niger trigger and a 150 gallon reef tank with 200 lbs of live rock a DSB several soft and stony corals and 5 tangs 2 dwarf angels 2 small damsels a six line wrasse and a neon Dottyback. They are connected by the 150 gallon sump along with a 80 gallon sump and a 30 gallon refugium I process wastes with two 5 foot DIY skimmers and a DIY denitrator and I keep calcium, alk and ph in check with a DIY calcium reactor and a DIY Nilsen reactor. <Great to have the drive, knowledge, skill for such DIY projects> Water changes 10% are done monthly and all parameters are stable. My thoughts were to run the water from the 125 tank through the FBF then into the 150 sump with a DSB, the water from the 150 tank would dump directly into the 150 sump from there it would dump into the 80 gallon sump where the protein skimmers denitrator and calcium reactor are located. the Nilsen reactor is connected to the automatic top-off system. My goal is the best water quality I can possibly produce with the livestock I have your input would be appreciated Don <This is the order I also would process your water in... And I also think the addition of the FBF will be "worth" it. Bob Fenner> Fluidizing ChemiPure? 4/28/06 Love the site! <Glad you enjoy it! We're thrilled to bring it to you! Scott F. here today.> I was interested to know your thoughts on removing Chemi-Pure from the bag and using it in my Phosban Reactor? The manufacturers do not recommend removing it from the bag, but I cannot see the harm in this situation (if run properly through the reactor, low flow, etc.) Do you foresee any problems in doing this? Regards, Andrew. <Good question, Andrew, and the answer is kind of unclear. While I'm sure that this media would fluidize nicely, I really don't know if there is any advantage to be gained from using the media in this manner. 
On the other hand, by making sure that the media is thoroughly exposed to the water column, you may be using it more efficiently. My best recommendation is to contact the manufacturer, Boyd Enterprises, and get the answer from them. This is a great product, and if it can be used in this manner, it would that much better! Please let us know what you find! Regards, Scott F.> acheive> Re: Sequential fluidized bed reactors in a marine aquarium 3/2/06 Bob, <Jer> I just want to thank you for the great site and kindly answering my emails. If I decide to experiment with this, I will let you know what happens. Jery <Do appreciate this, thank you. Bob Fenner> Overkill? Fluidized Bed filtr. SW - 02/27/06 Bob, <David> I'm amazed at the amount of data on the WWM site. A big thanks to everyone that is a contributor. <Welcome> I'm a "gadget nerd", and an engineer in the wastewater industry. I've been a FWA for over 30 years, and finally have some money to burn on a reef setup (I've got a funny story about doing "revised" budgets - 3 times - on the system I'm working on). Nonetheless, I think/feel that a system can be setup to "take care of itself", inasmuch as possible. I understand that the biological process is not "steady state", and that there will be fluctuations in the biological loading of the system. This, in turn, impacts the water chemistry, which WWM has done a wonderful job in explaining, especially the importance of buffering the water. <Simple... there are some folks that have gone "advanced"... Randy Holmes Farley, Craig Bingman, others...> I'm an experimenter, and am playing with a setup. I am putting in an 80 gallon or so system (main tank), plus a 39 gallon refugium, of which I'll probably get beneficial use of about 25 gallons, or so. I am using a skimmer, directly from the tank (goes to a sump, first), which will have a return to a baffle/overflow to the refugium. Here, I am going to "split" the flow, by part of the water going under the "sand" in the refugium, and part flowing directly into the refugium (this flow will be adjustable, by the use of an adjustable "flap gate"). I am concerned about flow patterns within this refugium area, as I want to build a layer at the bottom of this compartment where I want anaerobic processes to happen. The top of this layer will have some LR, mangroves, algae, and various critters. These two flow paths can then either be converged into a single compartment, for return to the tank, OR one/both can be run through a fluidized bed. The reason I am considering a fluidized bed (FB), is that I am not sure the bed in the refugium will be sufficient. At low livestock levels (and minimal feeding), I think the FB may be overkill. I have also thought of putting the FB prior to the refugium, and leaving the option to either have it flow through the refugium, or bypass it altogether. I do want to slowly increase my LS levels. The reason I am considering the FB at all is that my refugium has limited space. It is 36Lx12Dx24tall. This gives me a smaller surface area for the sand in it, hence the "increased" SA by use of the FB. The bacteria levels will self-adjust, based upon the nutrient loading of the tank (with gradual changes being the rule-of-thumb). Am I nuts on this? I've read WWM, and haven't found this specific question. Thanks David P. <Fluidized bed filtration can be very useful in situations of vacillating and high bioloads. You can always subtend or remove it... 
Bob Fenner> Fluidized Bed Filters 2/28/05 Hey guys, I am setting up a new semi reef tank and I was wondering how I should set it up, filtration wise. The tank is 60 gallons and I am planning on only including about 30 pounds of live rock. I own a Fluval 404, a UV filter, and a Fluidized bed filter. I know you don't like FB but I was wondering if it would work considering the low amount of live rock I am going to use? Thanks, Steve <If you choose a very high quality, open structured rock, 30 pounds will be plenty for your tank if you keep reasonable stocking levels. In my opinion, a better rule of thumb is to fill 1/3 or so of the tank volume with rock. Rock from the Marshall Islands and Kaelini as well as some other locations provides much more volume per pound. I personally am not a fan of canister filters, UV OR FB's for reef tanks. Most of us don't maintain UV units well enough for them to be effective. Canister filters and fluidized beds encourage nitrate accumulation by optimizing ammonia and nitrite processing AWAY from the live rock where the resulting nitrate can be processed. Fluidized beds are phenomenal for very large bioload systems, but for the average reefer (even with relatively small amounts of live rock) they are probably not helpful. Best Regards. AdamC.>> Converting Fluidized Bed filter to Calcium Reactor Hello from Calgary to the Crew << Hello up there, Blundell here this afternoon. >> Your service to the hobby is incredible and you have improved my personal experience greatly. Thanks in advance. I have a 66 gal reef setup that is 2 months old that I inherited from a friend, it had been running for 6 months previously. I have hence nursed most of it back to health. A Rainbow fluidized bed filter came with it and it has been running the whole two months, from what I have seen on WWM the FBF is not necessary and potentially detrimental now that the live rock seems to be actively developing a health population of coralline algae and life in general. If I were to replace the sand media in the FBF with media for a Calcium reactor and add the required CO2 system to the input of the FBF could the FBF be converted to a Calcium reactor. << I would search around on the internet for DIY calcium reactor plans. Most people have used a pvc base to make them. Your idea could work, but rather than converting the FBF over, it may be easier (and possibly cheaper) to just make one from scratch. That is what I would do. >> Hopefully I am making sense. Thanks Lonnie << Blundell >> Fluidized Bed Filters Hi guys, I've read your review on the Saltwater Aquariums Dummy's guide and you had mentioned that FDBs are not appropriate for home use. I understand FDBs are heavily used in commercial applications but little information has been found on hobby usage. Can you elaborate on why I shouldn't use a FDB on my system. <Other than being almost always a superfluous bit of gear (their only use is for accelerated nitrification) these devices overdrive the forward reactions of nitrogen cycling... they often result in nitrate accumulation> I have a 125g which is going from reef to fish only. It will have 150lbs live rock and lots of non aggressive fish. Can you please point out the issue of my filter setup being an FDB and an airstone for oxygenation powered through a UPS. <Mmm, "point out the issue"? You don't need the FDB... try turning it off once your system is established... zero effect. Bob Fenner> Thanks for your time, Jackson Fluidized Bed Filtration Dear Mr. 
Fenner, I am setting a 500 gallon(2200 litres) fish only aquarium and want to use a large fluidized sand filter for bio filtration. Why am I told this is risky? <Mmm, perhaps "risky" in that it's ill-advised to use "just" this type of filtration in such a size, type system... In other words, you would be better advised to employ other means; mechanical/physical, perhaps biological and the chance (space) to add chemical means in your design... in addition to the fluidized bed...> and could you please explain what IMHO means? <Oh, an acronym for In My Humble Opinion. Bob Fenner> thanks, George Fluidized beds Hi, A friend of mine gave me a Sea Storm Fluidized bed that had worked great for him. <even a blind squirrel finds a nut sometimes> I cleaned the equipment and I used the same power head that he used. When I attached the filter and then started it, the filter pumped well but shot all of the expensive sand that I bought into my sump tank. What can I do to correct this problem. I do not see any ways to adjust the flow rate or stop the sand from coming through. <the cleaning likely improved the pump performance. Go to the local Pet or Hardware store and get a plastic valve or adjustable clamp for your tubing to adjust the PH flow> Thanks, Brion Pechin <best regards, Anthony>> Die off problems IV Hello again, I am pondering removing my fluidized bed filter but am wondering what sort of biological filtration I could use in its place? I have ordered another 30lbs of live rock as you suggested. <With the additional liverock you should have plenty of biological filtration.> In my previous freshwater tanks there was no biological filter that I was aware of. Can the crushed coral substrate and the live rock serve as a biological filter or do I need something else? <Liverock will work fine.> Any input and advice you could provide would be greatly appreciated. The main reason I am considering this is because I am having another large algae bloom and I am wondering if this is due to the bed filter reseeding itself after I have added new sand. <Usually due to nutrients.> Also I want to increase the water flow to the tank which seems suppressed due to the resistance from the bed filter at the end of the filtration line. <Yes, these filters greatly reduce flow and increasing circulation would be a good idea. -Steven Pro> Jame!> Re: Aquariums and Filters Thanks. Yes, I am interested only in freshwater (for now anyway). How about fluidized bed filters? I know nothing about them, but read that they are supposed to be the best biological filters for the money. <Only where such "rapid response" systems are called for... i.e. in high, fluctuating bio-loads. Otherwise largely unnecessary... and in some cases, more source of troubles than they're worth> As another alternative, I was thinking about the possibility of Marineland's wet/dry filter, the Tide Pool I or II. The Tide Pool I is supposed to be good up to 80 gals. Any thoughts? <Nice units. Amongst their best products. Bob Fenner> Bob, What is your opinion on fluidized beds? >> Very useful nitrification devices for facilities with large, and varying bioloads... like fish hatcheries, wholesale facilities... Generally unnecessary in any hobbyist set-up... often guilty of overpowering the forward reactions of ammonia conversion. Bob Fenner Bob, I appreciate the quick reply thank you very much. I was wondering if you could take time to answer one more question for me. 
I usually don't use biological filters on my reef tanks (besides lots of live rock and live sand) but this tank will have a high bio load, so do you think I should stick with the fluidized bed or just use the live rock as a bio filter? Oh ya, one more thing: compliments on your book, purchased yesterday; it was well worth the money!!!!! Thanks Bob <Thank you for the kind words of encouragement. If I had a fluidized bed filter lying about (I do...) I would hang it on to the system for if/when there might well be biological filter-needing anomalies (which we do have), like big additions, removals of live rock... otherwise, as you hint, the rock and sand are more than sufficient for boosting nitrification. Bob Fenner>
https://www.wetwebmedia.com/fluidbedfaqs.htm
CC-MAIN-2020-40
refinedweb
5,841
71.65
In a previous post we described Algorithmia, a cloud service for discovering, invoking and deploying algorithms. In this short article we look at Algorithmia as a tool to deploy trained machine learning models. We used a tool called Gensim to build a model of scientific documents and then create an Algorithmia service that uses the model to predict the topic categories of scientific articles.

A Review of Word Vectors and Document Vectors

This technology has been around for a while now, so this review is more of a history lesson than a deep technical review. However, we will give you links to some of the important papers on the subject. If you want to do document analysis through machine learning, you need a way to represent words in vector form. Given a collection of documents you can extract all the words to create a vocabulary. If the size of the vocabulary is 100,000 words, you can represent each word as a "one-hot" vector in which the i-th word in the vocabulary is a vector of zeros except for a 1 in the i-th position. Each document in your collection can then be represented as the sum of the vectors corresponding to the words in that document. If you have M documents, then the collection is represented by a sparse matrix of size M x 100,000. Using this "bag of words" representation, there are a variety of traditional techniques, such as Latent Semantic Analysis, that can be used to extract the similarities between documents. About five years ago, a team from Google found a much better way to create vectors from words, so that words that are used in similar semantic contexts are nearer to each other as vectors. Described in the paper by Tomas Mikolov et al., the method, often referred to as Word2Vec, can be considered a map m() from our 100,000-dimensional space of words to a dense space of much smaller dimension, say 50, with some remarkable properties. In particular, there are the now-famous linear analogy relationships. For example, "man is to king as woman is to queen" is expressible (approximately) as

m(king) - m(man) + m(woman) ≈ m(queen)

There is an excellent set of technical explanations of why Word2Vec works on Quora, and we won't go into them here. One of the best papers that address this issue is by Goldberg and Levy. Le and Mikolov have shown that the basic methods of Word2Vec generalize to paragraphs, so that we now have a map p() from a corpus of paragraphs to vectors. In other words, given a corpus of documents D of size N, for any doc d in D, p(d) is a vector of some prespecified length that "encodes" d. At the risk of greatly oversimplifying: the paragraph vector is a concatenation of a component that is specific to the paragraph's ID with word vectors sampled from the paragraph. (As with Word2Vec, there are actually two main versions of this model. Refer to the Le and Mikolov paper for details.) It turns out that the function p can be extended to arbitrary documents x so that p(x) is an "inferred" vector in the same vector space. We can then use p(x) to find the documents d such that p(d) is nearest to p(x). If we know how to classify the nearby documents, we can make a guess at the classification of x. That is what we will do below.
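As an aside (this is not part of the original post's code), the word-analogy property mentioned above is easy to try for yourself in Gensim with any pre-trained set of word vectors. The sketch below assumes a recent Gensim with its downloader module available; the model name is just one publicly available vector set and is only illustrative.

import gensim.downloader as api

# Load a pre-trained set of word vectors (any Word2Vec-style vectors will do;
# this particular name assumes the gensim downloader module is available).
wv = api.load("glove-wiki-gigaword-50")

# "man is to king as woman is to ?" -- the top hit is typically "queen".
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))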
Using Doc2Vec to Build a Document Classifier

Next we will use a version of the paragraph vectors from Gensim's Doc2Vec model-building tools and show how we can use it to build a simple document classifier. Gensim is a product of Radim Řehůřek's RaRe Technologies. An excellent tutorial for Gensim is this notebook from RaRe. To initialize Gensim Doc2Vec we do the following.

import gensim

model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=55)

This creates a model that, when trained, will have vectors of length 50. Words that occur fewer than 2 times are ignored, and training will run for 55 iterations. Next we need to ready a document corpus. What we will use is 7000 science journal article abstracts from the Cornell University archive arXiv.org. We have recorded the titles, abstracts and the topic classifications assigned by the authors. There are several dozen topic categories, but we partition them into five major topics: physics, math, computer science, biology and finance. We have randomly selected 5000 for the training set, and we use the remainder plus another 500 from recently posted papers for testing. We must first convert the text of the abstracts into the format needed by Doc2Vec. The files are "sciml_train" and "sciml_test". The function below preprocesses each of the document abstracts to create the correct corpus.

import smart_open

def read_corpus(fname, tokens_only=False):
    with smart_open.smart_open(fname, encoding="iso-8859-1") as f:
        for i, line in enumerate(f):
            doc = gensim.utils.simple_preprocess(line)
            if tokens_only:
                yield doc
            else:
                # For training data, add tags
                yield gensim.models.doc2vec.TaggedDocument(doc, [i])

train_corpus = list(read_corpus("sciml_train"))
test_corpus = list(read_corpus("sciml_test", tokens_only=True))

We next build a vocabulary from the words in the training corpus. This is a dictionary of all the words together with the counts of the word occurrences. Once that is done we can train the model.

model.build_vocab(train_corpus)
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter)

The training takes about 1 minute on a simple 4-core server. We can now save the model so that it can be restored for use later with the Python statement model.save("gensim_model"). We will use this later when building the version we will install in Algorithmia. The model object contains the 5000 vectors of length 50 that encode our documents. To build our simple classifier we will extract these into an array mar of size 5000 by 50 and normalize each vector to unit length. (The normalization will simplify our later computations.)

import numpy as np

mar = np.zeros((model.docvecs.count, 50))
for i in range(model.docvecs.count):
    x = np.linalg.norm(model.docvecs[i])
    mar[i] = model.docvecs[i] / x

An interesting thing to do with the mar matrix is to visualize it in 2-d using the t-distributed stochastic neighbor embedding (t-SNE) algorithm. The result is shown in the figure below. The points have been color coded based on topic: 1 (deep purple) = "math", 2 (blue gray) = "Physics", 3 (blue green) = "bio", 4 (green) = "finance" and 5 (yellow) = "compsci". There are two things to note here. First, the collection is not well balanced in terms of numerical distribution. About half the collection is physics, and there are only a small number of bio and finance papers. That is the nature of academic science: lots of physicists publishing papers and not so many quantitative finance or quantitative bio papers in the open literature. It is interesting to note that the physics papers divide clearly into two or three clouds. (It turns out these separate clouds could be classed as "astrophysics" and "other physics".) Computer science and math have a big overlap, and bio has a strong overlap with cs because these are all "quantitative bio" papers.
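The post shows the resulting scatter plot as a figure but not the code that produced it. A typical way to generate such a two-dimensional projection of the mar matrix is scikit-learn's TSNE; the sketch below assumes scikit-learn and matplotlib are installed, and the labels array is a hypothetical variable holding the topic number for each training document (the parameter choices are arbitrary).

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Project the 5000 x 50 matrix of normalized document vectors down to 2-d.
embedding = TSNE(n_components=2, random_state=0).fit_transform(mar)

# 'labels' is assumed to hold the topic number (1-5) for each training document.
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="viridis", s=5)
plt.colorbar()
plt.show()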
The classification algorithm is very simple. Our model has a function infer_vector(doc) that uses stochastic methods to map the doc into the model vector space. Using that inferred vector we can compute the nearest k documents to it in the model space with the function below.

def find_best(k, abstract):
    preproc = gensim.utils.simple_preprocess(abstract)
    v = model.infer_vector(preproc)
    v0 = v / np.linalg.norm(v)
    norms = []
    for i in range(5000):
        norms.append([np.dot(v0, mar[i]), i])
    # Sort by similarity, largest first, so the top k really are the nearest documents.
    norms.sort(reverse=True)
    return norms[0:k]

The dot product of two normalized vectors is their cosine similarity. Because infer_vector is stochastic in nature, our final version of the classifier calls find_best ten times and computes an average ranking. (The details are in this notebook and an HTML version.) Selecting one of the more recent abstracts and subjecting it to the classifier gives the result pictured below. The analysis gives the abstract a score of 80 for computer science and 20 for bio. Note that the title contains the detailed arXiv category, so we see this is correct, but reading the article it could also be cross-listed as bio. On the other hand, there are many examples that easily confuse the system. For example, the one below is classified as quantitative biology in arXiv, but the system can't decide if it is math, cs or physics. In general we can take the highest ranking score for each member of the test set and then compute a confusion matrix. The result is shown below. Each row of the table represents the percent of the best guesses from the system for the row label. One interesting observation here is that in the cases where there is an error in the first guess, the most common mistake was to classify an abstract as mathematics.

Moving the model to Algorithmia

Moving the model to Algorithmia is surprisingly simple. The first step is to create a data collection in the Algorithmia data cloud. We created one called "gensim" and it contains the three important files: the gensim model; topicdict, the dictionary that translates arXiv topics to our major topics; and the arXiv topics associated with each of the training documents. The Algorithmia collection is shown below. We also loaded the training document titles, but they are not necessary. The main difference between running a trained model in Algorithmia and a "normal" algorithm is the part where you load the model from the data container. The skeleton of the Python code now includes a function load_model() which you write and a line that invokes this function, as shown below. When your algorithm is loaded into the microservice, it first calls load_model() before invoking the apply(input) function. For all subsequent invocations of your algorithm while it is running in that microservice instance, the model is already loaded. (The full source code is here.)
import Algorithmia
import gensim
from gensim.models.doc2vec import Doc2Vec

client = Algorithmia.client()

def load_model():
    file_path = 'data://dbgannon/gensim/gensim_model'
    file_path = client.file(file_path).getFile().name
    model = Doc2Vec.load(file_path)
    # similarly load train_sites and topicdict
    # and create mar by normalizing model data
    return model, mar, topicdict, train_sites

model, mar, topicdict, train_sites = load_model()

def find_best_topic(abstract):
    # body of find_best_topic
    ...

def apply(input):
    out = find_best_topic(input)
    return out

Deploying the algorithm follows the same procedure as before. We add the new algorithm from the Algorithmia portal and clone it. Assuming SciDocClassifier.py contains our final version of the source, we execute the following commands.

git add SciDocClassifier.py
git commit -m "second commit"
git push origin master

Returning to the Algorithmia portal, we can go to the project source editor. From there we need to add the code dependencies. In this case, we wanted exactly the same versions of gensim and NumPy we used in our development environment. As shown below, that was easy to specify. The final version has been published as dbgannon/SciDocClassifier and is available for anyone to use. Once again, our experience with Algorithmia's tools has been that they are easy to use and fun to experiment with. There are many algorithms to try out, and a starter free account is all you need.
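For completeness, here is roughly what calling the published classifier looks like from the Algorithmia Python client. Treat this as a sketch rather than the exact published interface: the API key is a placeholder and the algorithm path may need a version suffix.

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")          # placeholder key
algo = client.algo("dbgannon/SciDocClassifier")      # version tag may be required

abstract = "We study the convergence of stochastic gradient descent for ..."
response = algo.pipe(abstract)   # runs apply(input) in the hosted microservice
print(response.result)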
https://esciencegroup.com/2017/11/06/
CC-MAIN-2021-39
refinedweb
1,927
55.44
Fonto 7.11.0 (June 26, 2020)

New Functionality

Fonto now has a notification center that gives updates on the state of the application. This version includes notifications for document save errors, long-running searches that complete and Content Quality errors that occur. We aim to add more systems to this notification center in the future. Currently it is not possible to plug in your own custom notifications, but this is also something we identified as an improvement for a future release. Fonto now allows you to drag and drop content within Fonto, between different Fonto tabs and even to and from other applications. Simply select some content and drag to move it around. The Fonto Development Tools now offer more help when setting up new instances of Fonto Editor by enabling common configuration settings, preparing schema configuration, and providing guidance in the form of comments and links to documentation throughout the generated files. The getting started guide has been updated to reflect these changes. Make sure to update to the latest version of FDT using npm install -g @fontoxml/fontoxml-development-tools. Fonto is now able to work with CALS tables with multiple <tgroup> elements. Note that most operations that affect a <tgroup> element or its descendants will only be enabled when the cursor is inside of the <table> element if it has a single <tgroup> child. The new doubleClickOperation CVK option / widget argument can be used to configure an operation to run when an element or widget is double-clicked. This is supported in all places where the clickOperation could already be used. The new addParentOnCopy and addDescendantsOnCopy CVK options can be used to control what other content ends up on the clipboard when content is copied in Fonto. The reload icon shown in the left border of a sheetframe when a document is out of sync now remains in view when scrolling through the document. This icon is only added when you do not have sheetframe headers configured. We aim to add this feature for sheetframe-header-enabled apps as well in a later release. We now support pasting nested ordered and unordered lists from Word. We now support pasting text formatted with the Symbol font from Word, and will automatically convert such symbols to their Unicode equivalents. We have changed the sidebar tabs to be more visible and made it more clear that they can be expanded. The CMS browser can now be configured to pass extra "query" parameters to requests to POST /browse. DITA URLs in <topicref>s can now contain anchors as well. This allows pointing them to topics that are not the root element in the target document. The bottom of the last sheet frame can now be scrolled to the middle of the viewport to make its content more readable. As usual, we are also releasing new versions of our other products. Please refer to the corresponding release notes for more details.

Resolved issues

Updating the hierarchy, for instance by adding, removing or moving structure view items, will no longer cause all documents with errors to be reloaded. To achieve this, the Document Loader was modified to default to not reloading a document that previously had an error. Please refer to the Upgrade from 7.10 to 7.11 page for details on how to modify your custom loading strategy / hierarchy management code to take advantage of this. Fixed an issue where the editor could crash in Chrome on Windows when scrolling past certain objects using the page up or page down key.
Pasting content containing IDs for which the unique-id-configurations strategy has been set to "always-regenerate" will now regenerate those IDs. Resolved an issue where certain cursor positions were not reachable in read-only documents. Resolved an issue where selecting and typing over certain elements, including certain placeholders, would not remove them. Resolved an issue where toggling an inline formatting element with the cursor at its end could also remove different nested inline formatting elements. Resolved an issue where automatic creation of lists would not work if a space was typed in front of the trigger characters. Resolved an issue where certain tables with empty spanning cells were visualized in a confusing way. Resolved an issue where Fonto would always try to add attributes to tables with the default values for those attributes. This should improve performance when working with tables lacking these attributes. Resolved an issue where the selection would not be scrolled into view when the editor loads if the initially focused document is not one of the topmost documents. Resolved a rare error that could occur when loading a document containing certain XHTML tables. Resolved an error when using the execute-update-script operation while there is no selection. An issue in the project browser modal has been resolved where elements wouldn't be linkable if the contextNodeId of a structure view item is not linkable. An issue in the project browser is fixed where clicking an unloaded document would crash the editor. Resolved an issue where the project browser would not preselect anything if only a documentId was supplied. This now defaults to the root element of the document specified by the documentId. Fixed being unable to click at the location of the collapse icon in a structure view (seen in the outline sidebar and the find and replace filter modal) when that item has no child items. The column sizing popover should now stay open when dragging the mouse outside of the popover. A number of edge cases where Popovers are not positioned correctly have been fixed. Fixed a bug where sometimes the context menu in the editor would show a grey border at the bottom instead of only between the different sections. The FxDocumentContextualOperationsWidget now passes the hierarchyNodeId property to its operations, similar to how these contextual operations would be invoked in other places. This property is required by certain operations to distinguish between multiple instances of the same document in the hierarchy. Although selecting text in the XML source sidebar is not supported, trying to do so would sometimes briefly show a selection. Now you don't even see a selection cursor or a temporary flash of selected text when you attempt to select the XML. The source sidebar is meant for debugging only. It does not provide an accurate XML serialization of the content. It may show incorrect prefixes, omit namespace declarations and will add additional whitespace that is not present in the document. Certain edge cases where Popovers are not positioned correctly have been fixed. The Find & Replace add-on now correctly scrolls find results in a JIT-loaded context into view when the search is cancelled. The Search filter is no longer available when there is only one document loaded. This did not make sense since there was only one item to select. The spinner in the Quick Navigation modal is now replaceable with the standard spinner replace component.
Resolved a regression introduced in 7.10 that caused the isHighlightedTabQuery option for FxEditorMasthead tabs to stop working. Resolved an error that could occur if the FxImageLoader component could not load an image. Resolved an issue where Fonto would sometimes steal focus from the parent frame when hosted in an iframe. Modifying the given selection range in a custom mutation invoked with an overrideRange will now correctly update that overrideRange for subsequent operation steps. Some SVG icons used in the table context menu unintentionally showed a tooltip because of an unexpected title element in them. This has now been resolved. The attribute editor will now correctly parse out the default value from the schema if there is any specified. Fixed an issue where automatic numbering using addReducer would sometimes not update correctly if the path to the previous entry remained the same but its value changed. Fixed an issue where automatic numbering would not update if the query itself was invalidated due to another dependency. Resolved an issue where reducers would not invalidate when returning an attribute node from the accumulator function. Updating automatic numberings (such as those described in Create a numbering for nodes) now triggers fewer UI updates. For instance, we avoided an unnecessary recomputation in automatic numbering when the path to the numbered node from the context node changed but the numbered node remained unchanged. Resolved an issue where unloaded documents would not be numbered correctly and where reloading documents would not update the numbering as expected. We've made several other improvements to the addReducer API, fixing issues and improving its performance, especially for larger documents. Clicking on a contextual operations menu item from within an outline item now correctly removes the hover background styling of the outline item. Resolved an issue where query results or the view could fail to update after attribute changes if the attribute was read using undefined or the empty string as namespace URI. Resolved an issue where pressing delete in a table sometimes merged an adjacent list into it. Resolved an issue in Safari where in some cases, when typing in certain empty elements such as table cells, not all characters typed would be inserted into the document. Resolved an issue where retrying a lock release would cause an operation error. Schema Compiler: Fixed a degenerate case where there are a lot of values in an enumeration restriction.

Other improvements

The default fonts used by the editor are now loaded and processed as early as possible to fix small visual glitches in the editor. Form components now stop click propagation in order to support nesting them in other clickable components. The preview modal for crosslinks now contains a link to the edit operation for the link being previewed. The expand button for tables was frequently in the way of content. It has therefore been removed from the tables and added to the default table toolbar. The option to expand / unexpand a table is also still present in the default context menu for tables. Clicking outside of an expanded table will now automatically unexpand it. We've made the descriptions for operations in the table context menu more consistent. We've made the look of the preview popovers for web and cross links more consistent. FDS form fields that are disabled now allow selecting and copying text from them where appropriate.
We've updated the examples in our guides to use current best practices for React code, in particular regarding the use of function-style components and hooks. We made a number of smaller improvements that should improve the overall responsiveness of Fonto. A bunch of smaller UX improvements were made to the scoped search modal; these should ensure a consistent experience. We have changed some icons to a solid variant to make them more visible. This is especially noticeable in the sidebar. We've updated the FontoXPath XPath and XQuery engine to version 3.12.0. Most notably, this release brings some performance improvements and support for the fn:matches function. Please refer to the FontoXPath releases page for more details. The Fonto Development Tools now suggest available product versions when a non-existing one is entered.

Deprecated APIs

The showWhen CVK option for configureAsSheetFrame and configureAsMapSheetFrame is now deprecated. From a UX and design perspective, the background and border of a sheet frame should always be visible, regardless of the current cursor position / focused document. Use of this API will now output a deprecation warning. Remove the showWhen option if you encounter this warning. The Drop Button component is now deprecated. Please use the newly introduced Drop Anchor component instead.
https://documentation.fontoxml.com/latest/fonto-7-11-0-june-26-2020-7d461d26007a
CC-MAIN-2021-17
refinedweb
1,960
53.21
Last week I shared the general setup of my development environment. Today I will go a bit into Conan and how I use it. I have written about my current project Fix, and what it is about. For the project I will need a few libraries. In order to not have to install them manually, I use Conan. These are the libraries I currently use:

- My unit tests are written using Catch. Up to now I had used Boost.Test, CppUnit, and Google Test for a two hour Coding Dojo.
- I use mock objects in my unit tests. I could probably write them myself (currently it's only one), but I went for a mocking library called Trompeloeil.
- For the web server and the application framework I went for Poco. Poco also has some file system functionality which I use in the persistence layer for now.
- The JSON based REST API is implemented using "JSON for modern C++", which is a really convenient library.

These four libraries are all I use for now. Catch, Trompeloeil and the JSON library are header only. They would be fairly easy to install on any system I work on, but I still want to use Conan, just for the fun of it.

Using Conan

Using Conan is pretty straightforward. If you want to use a library (or package), there has to be a recipe for it on the server. A server can be either the public one on conan.io or a private server you can set up for yourself. I use the public server, since the recipes for most libraries are already there.

Specifying packages

The simplest way to specify which packages a project depends on is to have a conanfile.txt with a [requires] section. For Fix it looks as follows:

[requires]
Poco/1.7.3@lasote/stable
catch/1.5.0@TyRoXx/stable
nlJson/2.0.2@arnemertz/stable
trompeloeil/v17@rollbear/stable

[generators]
cmake

You see how the different packages are specified: a name, a version number of the package, the name of the package maintainer, and a specifier telling whether the package is in a stable, testing or other phase. The package version often, but not always, corresponds to the library version it stands for. The [generators] section simply tells Conan to write files for CMake so it knows where to find the libraries and so on.

Building the packages

When we call conan install path/to/conan/file with the above conanfile.txt, Conan will try to get or build the packages. There may or may not be already a binary package for your settings available on the server. In my case, the default settings of my environment are:

arch=x86_64
build_type=Release
compiler=clang
compiler.libcxx=libstdc++11
compiler.version=3.8
os=Linux

The only thing that changes from case to case currently is the build_type, which I mostly set to Debug. For that case I have to add -s build_type=Debug to the parameters of conan install. The packages that are available on conan.io often have binaries compiled with GCC, but not with Clang. That is not a problem, because in that case conan install simply uses the recipe to download the sources of the package and build it for the settings you use. The downloaded or compiled binaries are then stored in a cache on your machine, so the compilation for a given set of settings will be done only once. In my case I have two binaries for each package in the Conan cache, because I alternate between Debug and Release builds.

Writing your own recipe

You might have noticed that I am the maintainer of the package for the JSON library. The reason is simple: there was no package for that library available on the server.
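To give an idea of how little a recipe for a header-only library involves, a minimal conanfile.py might look roughly like the sketch below. This is an illustration only, not the recipe actually published on conan.io; the class name, URL and download location are placeholders, and the exact recipe API has changed across Conan versions.

from conans import ConanFile
from conans.tools import download

class NlJsonConan(ConanFile):
    name = "nlJson"
    version = "2.0.2"
    url = "https://example.org/conan-nlJson"   # placeholder recipe repository
    license = "MIT"

    def source(self):
        # Fetch the single released header (placeholder URL).
        download("https://example.org/json/2.0.2/json.hpp", "json.hpp")

    def package(self):
        # A header-only package just copies the header into the package folder.
        self.copy("json.hpp", dst="include")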
Since I wanted to get the library via Conan regardless, I had to write my own recipe and publish it to conan.io. The JSON library is header only and therefore fairly simple to build. The recipe only needs to specify where to download the headers from; you can find it on GitHub. Even for more complex packages, it is very simple to get started with your own recipes. There is good documentation for the process in the Conan docs. I want to keep my recipe up to date and adopt new versions of the library as soon as possible. Therefore I want to be notified whenever there is a new release of the JSON library. It is available on GitHub, so I tried GitHub notifications first, but I did not find the granularity to get notifications only on new releases, which made the feature rather noisy. Currently I'm trying Sibbell – we'll see how that turns out.

CMake integration

The integration of Conan and CMake is seamless. If you have run conan install with the settings you want to use, all that is left to be done is integrating the generated conanbuildinfo.cmake file and a setup command into your CMakeLists.txt.

cmake_minimum_required(VERSION 2.8.12)
project( fix )

include(build/conanbuildinfo.cmake)
conan_basic_setup()

...

Now, the only thing left is to link the libraries provided by Conan, which are listed in a handy variable:

target_link_libraries( ${PROJ_NAME} ${CONAN_LIBS} )

CMake will take care of include directories and everything else. Simply include the headers and use the library.

#include "Poco/Util/ServerApplication.h"

class FixServer : public Poco::Util::ServerApplication
{
    //...
};

Conclusion

It is extremely simple to get started using Conan, and it brings all the benefits we know from package managers in other languages.
https://arne-mertz.de/2016/08/conan-for-third-party-libraries/
CC-MAIN-2022-05
refinedweb
933
64.51
Hi, I am trying to read two strings per line and store them in strings so I can use them in my program. For example, if this is my input file:

cat:yellow
dog:blue
chicken:red

I want to store cat as the animal1 string, dog as the animal2 string, yellow as the colour1 string, and so on. Not sure how to do this. What I've got so far reads line by line, which is not what I want.

#include <fstream>
#include <iostream>
#include <string>
using namespace std;

int main ()
{
    string line;
    //the variable of type ifstream:
    ifstream myfile ("example.txt");
    //check to see if the file is opened:
    if (myfile.is_open())
    {
        //while there are still lines in the
        //file, keep reading:
        while (! myfile.eof() )
        {
            //place the line from myfile into the
            //line variable:
            getline (myfile,line);
            //display the line we gathered:
            cout << line << endl;
        }
        //close the stream:
        myfile.close();
    }
    else cout << "Unable to open file";
    return 0;
}

Would be great if someone could help me with this, thanks.
https://www.daniweb.com/programming/software-development/threads/372619/reading-data-from-an-input-file
CC-MAIN-2018-13
refinedweb
166
79.5