Dataset columns: text (string, 454 to 608k chars), url (string, 17 to 896 chars), dump (string, 91 distinct values), source (string, 1 distinct value), word_count (int64, 101 to 114k), flesch_reading_ease (float64, 50 to 104).
Target audience This article has been written for readers who have experience with Java, Hibernate and Spring Boot. All examples use MySql but you could also use other relational databases that you are comfortable with. Introduction The Java ecosystem gives you a lot of tools to magically update your database schemas, but are all of these tools reliable enough to be used with a production database? In this article - the first in a series - we will focus on industry best practices and Hibernate's auto-schema generation feature. We will explain what we've learned from it and where it is suitable to be used. In a subsequent article, we will discuss how database schema changes can be made with a database migration tool such as Liquibase. All code samples are available in our dedicated GitHub repository. Setup Let's first create a new database schema called addressBook using the MySql command-line client: >mysql -u santa -p Enter password: ****** mysql> CREATE DATABASE addressBook; Query OK, 1 row affected (0.12 sec) Let's now open our Java application, which uses Spring Boot and MySql. The configurations for MySql can be found inside application.yml: spring: jpa: database: mysql hibernate: ddl-auto: update datasource: url: jdbc:mysql://localhost:3306/addressBook username: santa password: secret The first 3 lines explain how to connect to MySql. Our password is hardcoded for simplicity's sake, but in real life we would store it in a secret. ddl-auto: update shows that our MySql schema should be updated at application startup (to be discussed in the next paragraph). Generating a database schema from scratch At this stage, our database schema has just been created. Our application only has a single entity class called User. @Entity @Data public class User { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; private String firstName; private String lastName; private LocalDate dateOfBirth; } Note: the @Data annotation comes from Lombok and auto-generates our getter/setter methods. As seen in the previous section, we have configured database schema auto-update as follows: spring.jpa.hibernate.ddl-auto: update Let us run our JUnit test suite: mvn clean test In the logs, we can see that the following database query has been run: create table user ( id integer not null auto_increment, date_of_birth date, first_name varchar(255), last_name varchar(255), primary key (id)) engine=InnoDB How are tests run with Hibernate? At startup, Hibernate parses all classes that have been decorated with the @Entity annotation. It then scans the User class and generates an SQL table creation query. The table name, column names, types, and so on are based on the information found in the User class (class name, attribute names and types, annotations, etc.). Starting the application one more time The addressBook database schema has been generated and it contains the User table. What behaviour should we expect when we start the application one more time? When Hibernate runs the tests again, it compares the class User against the table user. It then sees that class and table are in sync and it does not make any further changes. Which SQL? While SQL looks similar when working with various database providers, there is no such thing as completely interoperable SQL. There are subtle differences in how each database handles dates, string concatenation, etc. Hibernate elegantly abstracts these differences as "dialects". Inside our pom.xml we have configured the mysql jdbc driver as a dependency for MySql 8.
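The article does not show the pom.xml snippet itself, but a typical declaration of that driver in a Spring Boot project looks like the following (Spring Boot's dependency management supplies the version, so none is given; this is a sketch rather than the repository's actual file):

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>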
Spring Boot then assumes that we use the default MySql 8 dialect and configures Hibernate accordingly as shown in the startup logs: HHH000400: Using dialect: org.hibernate.dialect.MySQL8Dialect Adding a change to an existing database Let us now add the Address entity to our model. @Data @Entity public class Address { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; private String streetAddress; private String zipCode; private String city; //... } We are also adding a relationship from User to Address as follows: @Entity @Data public class User { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; private String firstName; private String lastName; private LocalDate dateOfBirth; @OneToMany(cascade = CascadeType.ALL) @JoinColumn(name = "user_id", foreignKey = @ForeignKey(name="FK_USER_ID")) private List<Address> addressList = new ArrayList<>(); } When the application starts (still in auto-update mode), Hibernate creates the address table as follows: create table address ( id integer not null auto_increment, city varchar(255), street_address varchar(255), zip_code varchar(255), user_id integer, primary key (id) ) engine=InnoDB alter table address add constraint FK_USER_ID foreign key (user_id) references user (id) The address table and its relationship to user have been added as expected. While Hibernate's auto-update works fine most of the time, it is quite magical and error-prone. From our experience, it is easy to rename a class or a field and to then forget about the fact that a new table or column will be generated the next time the application is deployed. In the next section we will discuss best practices and safeguards when making a change in your production database schema. Schema auto-update in production? In their official documentation, the Hibernate team recommends the following: Although the automatic schema generation is very useful for testing and prototyping purposes, in a production environment, it's much more flexible to manage the schema using incremental migration scripts. Here is the approach that we commonly use: - For Unit tests, we use H2. The whole database is created in memory at startup time and deleted after all tests have been run (create-drop). - When running a local web application (on localhost), we run update and copy from the logs all the update scripts that have been generated (such as for the Address table in our example). We will reuse those scripts for our staging and production environments. - In staging and production environments, we use the following setup: spring.jpa.hibernate.ddl-auto=validate At startup time, Hibernate validates that the database schema is compatible with our JPA/Hibernate mapping. If any class or attribute is not mapped properly, Hibernate throws an exception and the application does not start. We try to replicate the behaviour that we will have in production, therefore we update our schema manually using the scripts collected in our local dev environment. In staging and production, we always back up our database and plan for a restore procedure. In MySql, that can be done with the mysqldump command. Note: You can see that the suggested processes are the same for staging and production environments. Breaking our application's staging environment should not be a big deal. However, it is an opportunity to do a dry run before updating our database schema in production. Adding a conflicting change A conflicting change is a change that involves renaming a table or column.
Let's imagine, for example, that the name Address, which is already being used in our existing schema, is not specific enough, and that we would like to rename the Address class to PostalAddress as follows: @Data @Entity public class PostalAddress { //... } Hibernate's auto-update feature does not work well with conflicting changes. If we restart our application in update mode, it creates a brand new postal_address table and leaves the existing address table, and its data, untouched. Let's disable auto-schema update and use validate instead as explained in the previous paragraph: spring.jpa.hibernate.ddl-auto: validate When starting the application, Hibernate would detect that our classes are not in sync with the database schema and would throw the following exception: Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: missing table [postal_address] at org.hibernate.tool.schema.internal.AbstractSchemaValidator .validateTable(AbstractSchemaValidator.java:121) In order to avoid this issue we need to stop the application and run the following script before the new version of the application is deployed: mysqldump --defaults-file="/var/.../extraparams.cnf" ... >mysql -u santa -p Enter password: ****** mysql> use addressBook; -- choose the database schema to be used mysql> RENAME TABLE address to postal_address; We have now made our table name change and deployed the updated version of the application. The above assumes that we are able to take our application offline for a few minutes. It is extremely hard to make a conflicting change to a database while it's running in production. Conclusion We have seen that Hibernate's auto-update is a great development tool but should not be used in staging and production. In staging and production, we have seen that you can run your SQL queries manually. In our follow-up blog (to be published by March 1st 2020), we will discuss how to use Liquibase as a database schema migration tool for your staging and production environments. Thanks for reading our blog! Michael Isvy. (Thanks to my colleagues Nicolas Guignard, Liew Min Shan and many others for reviewing this article!) Discussion (2) Thank you for the excellent overview Hi, can you link up your liquibase article?? i cannot find it for some reason!!
https://dev.to/onlinepajak/database-schema-changes-with-hibernate-and-spring-boot-3f5k
CC-MAIN-2022-21
refinedweb
1,470
52.8
Discover xamarin forms gridview, include the articles, news, trends, analysis and practical advice about xamarin forms gridview on alibabacloud.com Xamarin. Forms Exploration -- use Xamarin. Forms to create a cross-platform user interface, xamarin. forms -- Xamarin. Forms is a cross-plat Start from scratch to learn how to build and create a project in the Xamarin. Forms (2) Environment, xamarin. formsI. Build an environment in Windows: 1. download and install jdk, Android SDK, and NDK. Of course, you also need VS2013 update 2 (both VS2010 and VS2012) and later.. latest SDK,: B. android plat Xamarin. Forms + Prism (2) -- basically use NavigationService, xamarin. forms This article describes and uses the NavigationService In the Prism framework. 1. Open VS and you can see the installed templates on the left: 2. After creation, you can see the code in App. xaml. cs in the pcl project. The current logic of Xamarin. forms uwp app is deployed on mobile devices of mobile phones for testing, real machine debugging (device portal deployment), xamarin. formsuwp I recently learned xamarin. There is a lumia 930 in your hand, so try to deploy the uwp app on your mobile phone and debug it on a real machine. Current Environment: Xamarin. Forms + Prism (3) -- a simple reminder for UI use, xamarin. formsprism This time we will introduce two useful tip plug-ins, such as success, waiting, and error messages. Preparation: 1. Create a Prism Xamarin. Forms project; 2. Right-click the solution and add the Xamarin. Forms + Prism (1) -- Development Preparation, xamarin. formsprism This essay is mainly used to record the third-party technologies or skills used in Xamarin. Forms APP development in my project. Preparation: 1. VS2017 (recommended) or VS2015; 2. JDK 1.8 or above; 3 About how xamarin. forms DisplayActionSheet, xamarin. formsmvvm Xmarin. forms has encountered a tricky problem, that is, how to use DisplayActionSheet in ViewModel in MVVM, but I use the XAML mode, that is, only on the background page, only the DisplayActionSheet exclusive to Page can be used. I found the materials for Xamarin. Forms cross-platform development-Part 2: in-depth analysis,Original English: /# The first part of this article establishes t I hereby declare that this blog post is transferred from: What is xamarin forms? Xamarin forms is a library that efficiently creates cross-platform user interfaces. Xamarin forms Walkthrough: Use Xamarin. Forms to develop applications of the nature (VB ), Overview Xamarin, a cross-platform development framework using mono and. net core, has been developing over the past few years. After being acquired by Microsoft, Xamarin provides a free version of Xamarin Xamarin.ios and Xamarin.droid proved that C # code can is used to develop mobile apps, and majority of business logic Writt En in C # can is shared on both mobile platforms development. However, the development of User Interface is still heavily depending on platform ' s specific code, such as storyboard on I OS and XML for Android. Xamarin Form is an approach-to-build native UIs for IOS, Android and Windows from a single, shared C # code base.Xamarin I. Build an environment in Windows: 1. download and install JDK and Android SDK; 2. download xamarin. visual Studio, which can be installed online from the official website or offline installation package 3.0.54. 3. After downloading and running the installation program, follow the prompts to install it step by step. Mac environment setup: official online Installation 2. Create an xamarin. . 
Please be refer to the documentation is your not familiar with it.3.2 Xamarin Forms LabsXlabs is an open source project, aims to provide a powerful and cross platform set of services and controls tailored T o Work with Xamain Forms.3.3 Propertychanged.fodyThe add-in injects inotifypropertychanged code into properties.3.4 Microsoft.Net.HttpThis package includes \ System.Threading.Tasks.dll] To solve conflict and get rid of Warning.1> GTS. Mobile.ios, C:\Users\kkh\Computas\CargoNet\GTS. Mobile\gts. Mobile.ios\bin\iphon Through this article you will learn about the simplest and most common layout stacklayout in Xamarin forms. As with several other layouts, the results are relatively poor, and the two layouts stacklayout and grids are currently used most in the project. Before the previous colleague in writing Xamarin Android, chat to me that he wrote Axml layout i 1. Install NuGetXamarin.FormsXlabs.forms2. MainActivity.cs (Android)public class Mainactivity:xformsapplicationdroid {//3. ViewModel (Portable)CamaraViewModel.csTake it from here.. View (Portable)Photo.xamlApp.cs (Portable) Initial view points to photoMainPage = new Photo ();Complete Camara access using . Net Language Smobiler development using the Gridview control to design more complex forms, smobilergridview At the beginning, Smobiler is a development platform that uses the. Net language to develop apps in the VS environment. It may be more convenient than Xamarin. 1. Target Style The following operations are required to achieve the effect: 1. Drag a . Net language APP development platform -- scycler Learning Log: Use the Gridview control to design complicated forms, scyclergridviewFirst, scycler is a development platform that uses the. Net language to develop apps in the VS environment. It may be more convenient than Xamarin. 1. Target Style The following operations are required to achieve the effect:1. Dra First words: Smobiler is a development platform that uses. NET language to develop apps in the VS environment, perhaps more convenient than Xamarin. Target styleWe want to achieve the effect in the following actions:1. Drag a GridView control to the form interface from "Smobiler components" on the toolbar2. Modify the properties of the GridView control A.load eve First of all: Smobiler is a development platform that uses. NET language in the VS environment to develop the app, perhaps more easily than Xamarin. One, the target style To achieve the effect shown in the above illustration, we need to do the following: 1. Drag a GridView control to the form interface from "Smobiler components" on the toolbar 2. Modify the properties of the
https://topic.alibabacloud.com/zqpop/xamarin-forms-gridview_172505.html
CC-MAIN-2019-39
refinedweb
991
50.84
Let a deceptively hard problem django-registration has to deal with: usernames. And while I could write this as one of those “falsehoods programmers believe about X” articles, my personal preference is to actually explain why this is trickier than people think, and offer some advice on how to deal with it, rather than just provide mockery with no useful context. Aside: the right way to do identity Usernames — as implemented by many sites and services, and by many popular frameworks (including Django) — are almost certainly not the right way to solve the problem they’re often used to solve.. A better approach is the tripartite identity pattern, in which each identifier is distinct, and multiple login and/or public identifiers may be associated with a single system identifier. Many of the problems and pains I’ve seen with people trying to build and scale account systems have come down to ignoring this pattern. An unfortunate number of hacks have been built on top of systems which don’t have this pattern, in order to make them look or sort-of act as if they do. So if you’re building an account system from scratch today in 2018, I would suggest reading up on this pattern and using it as the basis of your implementation. The flexibility it will give you in the future is worth a little bit of work, and one of these days someone might even build a good generic reusable implementation of it (I’ve certainly given thought to doing this for Django, and may still do it one day). For the rest of this post, though, I’ll be assuming that you’re using a more common implementation where a unique username serves as at least a system and login identifier, and probably also a public identifier. And by “username” I mean essentially any string identifier; you may be using usernames in the sense that, say, Reddit or Hacker News do, or you might be using email addresses, or you might be using some other unique string. But no matter what, you’re probably using some kind of single unique string for this, and that means you need to be aware of some issues. Uniqueness is harder than you think You might be thinking to yourself, how hard can this be? We can just create a unique column and we’re good to go! Here, let’s make a user table in Postgres: CREATE TABLE accounts ( id SERIAL PRIMARY KEY, username TEXT UNIQUE, password TEXT, email_address TEXT ); There’s our user table, there’s our unique username column. Easy!? This is a simple thing that a lot of systems get wrong. In researching for this post, I discovered Django’s auth system doesn’t enforce case-insensitive uniqueness of usernames, despite getting quite a lot of other things generally right in its implementation. There is a ticket for making usernames case-insensitive, but it’s WONTFIX now because making usernames case-insensitive would be a massive backwards-compatibility break and nobody’s sure whether or how we could actually do it. I’ll probably look at enforcing it in django-registration 3.0, but I’m not sure it’ll be possible to do even there — any site with existing case-sensitive accounts that bolts on a case-insensitive solution is asking for trouble. So if you’re going to build a system from scratch today, you should be doing case-insensitive uniqueness checks on usernames from day one; john_doe, John_Doe, and JOHN_DOE should all be the same username in your system, and once one of them is registered, none of the others should be available. 
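One way to enforce that rule at the database level, assuming the Postgres table above, is a functional unique index on the lower-cased username (a sketch; Postgres' citext type is another common option):

CREATE UNIQUE INDEX accounts_username_lower_idx ON accounts (LOWER(username));

With this in place, once john_doe is registered, attempts to insert John_Doe or JOHN_DOE fail with a uniqueness violation.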
But that's just the start; we live in a Unicode world, and determining if two usernames are the same in a Unicode world is more complex than just doing username1 == username2. For one thing, there are composed and decomposed forms which are distinct when compared as sequences of Unicode code points, but render on-screen as visually identical to each other. So now you need to talk about normalization, pick a normalization form, and then normalize every username to your chosen form before you do any uniqueness checks. You also need to be considering non-ASCII when thinking about how to do your case-insensitive checks. Is StraßburgJoe the same user as StrassburgJoe? What answer you get will often depend on whether you do your check by normalizing to lowercase or uppercase. And then there are the different ways of decomposing Unicode; you can and will get different results for many strings depending on whether you use canonical equivalence or compatibility. If all this is confusing — and it is, even if you're a Unicode geek! — my recommendation is to follow the advice of Unicode Technical Report 36 and normalize usernames using NFKC. If you're using Django's UserCreationForm or a subclass of it (django-registration uses subclasses of UserCreationForm), this is already done for you. If you're using Python but not Django (or not using UserCreationForm), you can do this in one line using a helper from the standard library: import unicodedata username_normalized = unicodedata.normalize('NFKC', username) For other languages, look up a good Unicode library. No, really, uniqueness is harder than you think Unfortunately, that's not the end of it. Case-insensitive uniqueness checks on normalized strings are a start, but won't catch all the cases you probably need to catch. For example, consider the following username: jane_doe. Now consider another username: jаne_doe. Are these the same username? In the typeface I'm using as I write this, and in the typeface my blog uses, they appear to be. But to software, they're very much not the same, and still aren't the same after Unicode normalization and case-insensitive comparison (whether you go to upper- or lower-case doesn't matter). To see why, pay attention to the second code point. In one of the usernames above, it's U+0061 LATIN SMALL LETTER A. But in the other, it's U+0430 CYRILLIC SMALL LETTER A. And no amount of Unicode normalization or case insensitivity will make those be the same code point, even though they're often visually indistinguishable. This is the basis of the homograph attack, which first gained widespread notoriety in the context of internationalized domain names. And solving it requires a bit more work. For network host names, one solution is to represent names in Punycode, which is designed to head off precisely this issue, and also provides a way to represent a non-ASCII name using only ASCII characters. Returning to our example usernames above, this makes the distinction between the two obvious. If you want to try it yourself, it's a one-liner in Python.
Here it is, applied to the version which includes the Cyrillic 'а': >>> 'jаne_doe'.encode('punycode') b'jne_doe-2fg' (if you have difficulty copy/pasting the non-ASCII character, you can also express it in a string literal as j\u0430ne_doe) But this isn't a real solution for usernames; sure, you could use Punycode representation whenever you display a name, but it will break display of a lot of perfectly legitimate non-ASCII names, and what you probably really want is to reject the above username during your signup process. How can you do that? Well, this time we open our hymnals to Unicode Technical Report 39, and begin reading sections 4 and 5. Sets of code points which are distinct (even after normalization) but visually identical or at least confusingly similar when rendered for display are called (appropriately) "confusables", and Unicode does provide mechanisms for detecting the presence of such code points. The example username we've been looking at here is what Unicode terms a "mixed-script confusable", and this is what we probably want to detect. In other words: an all-Latin username containing confusables is probably fine, and an all-Cyrillic username containing confusables is probably fine, but a username containing mostly Latin plus one Cyrillic code point which happens to be confusable with a Latin one… is not. Unfortunately, Python doesn't provide the necessary access to the full set of Unicode properties and tables in the standard library to be able to do this. But a helpful developer named Victor Felder has written a library which provides what we need, and released it under an open-source license. Using the confusable_homoglyphs library, we can detect the problem: >>> from confusable_homoglyphs import confusables >>> s1 = 'jane_doe' >>> s2 = 'j\u0430ne_doe' >>> bool(confusables.is_dangerous(s1)) False >>> bool(confusables.is_dangerous(s2)) True The actual output of is_dangerous(), for the second username, is a data structure containing detailed information about the potential problems, but what we care about is that it detects a mixed-script string containing code points which are confusable, and that's what we want. Django allows non-ASCII in usernames, but does not check for homograph problems. Since version 2.3, though, django-registration has had a dependency on confusable_homoglyphs, and has used its is_dangerous() function as part of the validation for usernames and email addresses. If you need to do user signups in Django (or generally in Python), and can't or don't want to use django-registration, I encourage you to make use of confusable_homoglyphs in the same way. Have I mentioned that uniqueness is hard? Once we're dealing with Unicode confusables, it's worth also asking whether we should deal with single-script confusables. For example, paypal and paypa1, which (depending on your choice of typeface) may be difficult to distinguish from one another. So far, everything I've suggested is good general-purpose advice, but this is starting to get into things which are specific to particular languages, scripts or geographic regions, and should only be done with care and with the potential tradeoffs in mind (forbidding confusable Latin characters may end up with a higher false-positive rate than you'd like, for example). But it is something worth thinking about.
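Putting the normalization and confusables advice together, a minimal signup-time check might look like the following sketch (the function name and error handling are mine, not django-registration's API):

import unicodedata
from confusable_homoglyphs import confusables

def clean_username(username):
    # Normalize to NFKC, per Unicode TR36, before any comparison
    normalized = unicodedata.normalize('NFKC', username)
    # Reject mixed-script strings containing confusable code points
    if confusables.is_dangerous(normalized):
        raise ValueError('username mixes scripts with confusable characters')
    # Use a case-folded form for the uniqueness check itself
    return normalized.casefold()

The value returned here is what you would compare against (or store alongside) existing usernames when checking uniqueness.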
The same goes for usernames which are distinct but still very similar to each other; you can check this at the database level in a variety of ways — Postgres, for example, ships with support for Soundex and Metaphone, as well as Levenshtein distance and trigram fuzzy matching — but again it's going to be something you do on a case-by-case basis, rather than just something you should generally always do. There is one more uniqueness issue I want to mention, though, and it primarily affects email addresses, which often get used as usernames these days (especially in services which rely on a third-party identity provider and use OAuth or similar protocols). So assume you've got a case for enforcing uniqueness of email addresses. How many distinct email addresses are listed below? johndoe@example.com johndoe+yoursite@example.com john.doe@example.com The answer is "it depends". Most MTAs have long ignored anything after a + in the local-part when determining recipient identity, which in turn has led to many people using text after a + as a sort of ad hoc tagging and filtering system. And Gmail famously ignores dot (.) characters in the local-part, including in their custom-domain offerings, so it's impossible without doing DNS lookups to figure out whether someone's mail provider actually thinks johndoe and john.doe are distinct. So if you're enforcing unique email addresses, or using email addresses as a user identifier, you need to be aware of this and you probably need to strip all dot characters from the local-part, along with + and any text after it, before doing your uniqueness check. Currently django-registration doesn't do this, but I have plans to add it in the 3.x series. Also, for dealing with Unicode confusables in email addresses: apply that check to the local-part and the domain separately. People don't always have control over the script used for the domain, and shouldn't be punished for choosing something that causes the local-part to be in a single script distinct from the domain; as long as neither the local-part nor the domain, considered in isolation, is mixed-script confusable, the address is probably OK (and this is what django-registration's validator does). There are a lot of other concerns you can have about usernames which are too similar to each other to be considered "distinct", but once you deal with case-insensitivity, normalization, and confusables, you start getting into diminishing-returns territory pretty quickly, especially since many rules start being language-, script-, or region-specific. That doesn't mean you shouldn't think about them, just that it's difficult to give general-purpose advice. So let's switch things up a bit and consider a different category of problem. You should have reservations about some names Many sites use the username as more than just a field in the login form. Some will create a profile page for each user, and put the username in the URL. Some might create email addresses for each user. Some might create subdomains. So here are some questions worth asking: what happens when someone registers a username that collides with one of those auto-created URLs, email addresses or subdomains? If you think these are just silly hypotheticals, well, some of them have actually happened. And not just once, but multiple times. No really, these things have happened multiple times. You can — and should — be taking some precautions to ensure that, say, an auto-created subdomain for a user account doesn't conflict with a pre-existing subdomain you're actually using or that has a special meaning, or that auto-created email addresses can't clash with important/pre-existing ones.
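Before moving on to reserved names, here is a rough sketch of the email-address normalization described a little earlier. The dot-stripping is a Gmail-ism and the + convention varies by provider, so this is illustrative rather than something to apply blindly:

def normalize_email_for_uniqueness(address):
    local, _, domain = address.rpartition('@')
    local = local.split('+', 1)[0]   # drop any +tag suffix
    local = local.replace('.', '')   # treat dots in the local-part as insignificant
    return '{}@{}'.format(local.lower(), domain.lower())

Under this scheme, all three of the example addresses above collapse to the same key, johndoe@example.com.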
But to really be careful, you should probably also just disallow certain usernames from being registered. I first saw this suggestion — and a list of names to reserve, and the first two articles linked above — in this blog post by Geoffrey Thomas. Since version 2.1, django-registration has shipped a list of reserved names, and the list has grown with each release; it's now around a hundred items. The list in django-registration breaks names down into a few categories, which lets you compose subsets of them based on your needs (the default validator combines all of them, but lets you override with your own preferred set of reserved names): - Hostnames used for autodiscovery/autoconfig of some well-known services - Hostnames associated with common protocols - Email addresses used by certificate authorities to verify domain ownership - Email addresses listed in RFC 2142 that don't appear in any other subset of reserved names - Common no-reply email addresses - Strings which match sensitive filenames (like cross-domain access policies) - A laundry list of other potentially-sensitive names like contact and others. The validator in django-registration will also reject any username which begins with .well-known, to protect anything which uses the RFC 5785 system for "well-known locations". As with confusables in usernames, I encourage you to copy from and improve on django-registration's list, which in turn is based on and expanded from Geoffrey Thomas' list. It's a start The ideas above are not an exhaustive list of all the things you could or should do to validate usernames in sites and services you build, because if I started trying to write an exhaustive list, I'd be here forever. They are, though, a good baseline of things you can do, and I'd recommend you do most or all of them. And hopefully this has provided a good introduction to the lurking complexity of something as seemingly "simple" as user accounts with usernames. As I've mentioned, Django and/or django-registration already do most of these, and the ones that they don't are likely to be added at least to django-registration in 3.0; Django itself may not be able to adopt some of them soon, if ever, due to stronger backwards-compatibility concerns. All the code is open source (BSD license) and so you should feel free to copy, adapt or improve it. And if there's something important I've missed, please feel free to let me know about it; you can file a bug or pull request to django-registration on GitHub, or just get in touch with me directly.
https://www.b-list.org/weblog/2018/feb/11/usernames/
CC-MAIN-2018-09
refinedweb
2,717
55.78
#include <db.h> int DBcursor->dup(DBC *DBcursor, DBC **cursorp, u_int32_t flags); The DBcursor->dup() method creates a new cursor that uses the same transaction and locker ID as the original cursor. This is useful when an application is using locking and requires two or more cursors in the same thread of control. The DBcursor->dup() method returns a non-zero error value on failure and 0 on success. The flags parameter must be set to 0 or the following flag: The newly created cursor is initialized to refer to the same position in the database as the original cursor (if any) and hold the same locks (if any). If the DB_POSITION flag is not specified, or the original cursor does not hold a database position and locks, the created cursor is uninitialized and will behave like a cursor newly created using the DB->cursor() method. The DBcursor->dup()
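A minimal usage sketch in C, assuming an already-open database handle dbp; error handling and the positioning calls are omitted, and the variable names are illustrative:

DBC *cursor, *dup_cursor;
int ret;

/* Open a cursor on the open database handle. */
ret = dbp->cursor(dbp, NULL, &cursor, 0);

/* ... position the cursor with cursor->get() ... */

/* Duplicate it, keeping the same position and locks. */
ret = cursor->dup(cursor, &dup_cursor, DB_POSITION);

/* Both cursors must eventually be closed. */
dup_cursor->close(dup_cursor);
cursor->close(cursor);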
http://docs.oracle.com/cd/E17276_01/html/api_reference/C/dbcdup.html
CC-MAIN-2014-52
refinedweb
149
51.31
Python as alternative to Matlab for engineering calculations Posted December 30, 2013 at 03:20 PM | categories: python | tags: | View Comments Updated December 30, 2013 at 09:15 PM Table of Contents For the past year I have been seriously exploring whether Python could be used as a practical alternative to Matlab in engineering calculations, particularly in chemical engineering undergraduate and graduate courses. Matlab is very well suited for these calculations, and I have used it extensively in teaching in the past. For example, there is my Matlab blog ( ), and my cmu Matlab package that contains a very nice units package ( ). Matlab is widely used and recognized as a standard software package in engineering. My university has a site license for Matlab, so it doesn't cost me or my students anything to use. Matlab is easy to install, and has almost everything we need out of the box. So why try using something else then? Here are the main reasons: - Not everyone has access to a "free" Matlab license, and Matlab may not be available to my students when they leave the University. Python offers a free, always available option to them. - There are several recent Python distributions that are easy to install, and contain almost everything we need for engineering calculations. - I have been increasingly integrating code into my lecture notes, and this is not easy with Matlab, but it is easy with Python. - I use Python exclusively in my research, and although Matlab and Python are similar, they are different enough that switching between them is bothersome to me. I do not like teaching students to use tools I do not regularly use, and I believe I can provide them with more value by teaching with tools I have a high level of proficiency in. 1 The significance of an open-source alternative Many people will be able to use Matlab or some other proprietary software that someone has paid for the license to use. Some people, however, will not have that option for a variety of reasons. Maybe the company they work for will not pay for the license, maybe they are unemployed, or entrepeneurs in a small startup that cannot afford it, maybe they are students at a University without a site license,… For these people Python is a viable option that is always available. That makes me happy. 2 Easy to install Python distributions An important development in using Python as an alternative to Matlab is the development of many "one-click" installers. Ten years ago it took me about 2 weeks to download and build a Python environment suitable for scientific and engineering calculations. That has kept me from trying to use Python in teaching in the past. Today, I can download a package and install one in about 10 minutes! More importantly, so can my students. My favorite distribution is the Enthought Canopy distribution ( ). This distribution comes with all the essential python modules, and an integrated editor with IPython. It is available for Windows, Macs and Linux. They offer free academic licenses. Another good alternative is the Anaconda distribution ( ) by Continuum Analytics. It is also available for Windows, Macs and Linux. I have not used this one, but it looks like it would be very good. Anaconda comes with the Spyder editor. They offer free academic licenses. Python(x,y) is available for Windows ( ) and comes with the Spyder editor. WinPython ( ) is also available for Windows, and comes with the Spyder editor. 
The point here is that there are many options available now that make installing a Python distribution as easy as installing packages like Matlab. Enthought Canopy also provides a "desktop environment" similar to Matlab with an editor, documentation browser, package manager and console that is pretty easy to use. 3 Python + numpy/scipy/matplotlib does almost everything you need Python by itself is not suitable for typical engineering calculations. You need the numerical, scientific and plotting libraries that provide that functionality. These are provided in numpy, scipy and matplotlib, which are included in the distributions described above. Typical chemical engineering calculations involve one or more of the following kinds of math problems: - Linear algebra - Root finding (nonlinear algebra) - linear regression - Nonlinear regression - Integration and ordinary differential equations - statistics - plotting All of these are doable out of the box with the Python distributions discussed above. You can find many examples of using these, and more on my PYCSE blog ( ) and . In short, almost every example I put in the Matlab blog has been done in Python. The only ones I did not do yet are some of the interactive graphics with the steam tables. I have not had time to work those out in detail. I have found it convenient to augment theses with a package I wrote called pycse ( ). - an ode integrator with events similar to the one in Matlab - some numerical differentiation functions - linear and nonlinear regression with confidence intervals - some boundary value problem solvers - a publish function to convert python scripts to PDF via LaTeX This package is still a work in progress. Notably, there is not a really good units package in Python that works as well as my Matlab units package does. Two that come close are quantities and pint . Both have some nuances that make them tricky for regular use, and both have some challenges in covering all the functions you might want to use them for. 4 Python from the educator perspective Make that my perspective. I have developed an approach to using code in my lectures where I use the code to reinforce the structure of the problems, and to analyze the solutions that result. Doing that means I need to have code to show students, and the output, and sometimes to run the code to illustrate something. I also like these examples integrated into my lecture notes, so they have the right context around them. I have found that Emacs+org-mode+python allows me to easily integrate notes, equations, images, code and output in one place, and then export it to a PDF which I can annotate in class. This ensures that the code and output stay synchronized, that the code is always right where it needs to be, in the right context, and that I can annotate actual code in class, and not pseudocode. This heavily influenced my decision to use Python because it leverages what I already know and want to do. In fact, using it makes me even better at what I already know and helps me learn more about it. That makes me happy! Not everyone will be a content developer like this, but that is what I like to do. Python makes that process fun, and worth doing for me. 5 Final thoughts In my opinion Python is and is becoming a more viable alternative to other packages like Matlab for scientific and engineering calculations. I have used it exclusively for about a year solving all kinds of engineering problems that I used to solve in Matlab. Python is different, for sure. 
The main differences in my opinion are: - Python is less consistent in syntax than Matlab. For example, there are two ODE solvers in scipy with incompatible syntax. That is a result of the fact that you install a Python distribution made of packages written by many different people with different needs. - There is duplicated functionality between numpy and scipy. - Some functionality in scipy is provided by external "scikits" ( ). - Support for boundary value problems and partial differential equations is not as good in Python as it is in Matlab 1. At the undergraduate level, this is not a big deal. It is not like the Matlab functions are that easy to use! - Data regression in Python is not as complete as in Matlab. - indexing in Python starts at 0, and uses [], whereas in Matlab it starts at 1 and uses () - You have to import most functions into Python. In contrast, Matlab has them all in one big namespace. It is certainly doable to use Python for many scientific and engineering calculations. This past Fall I took the plunge, and taught a whole core course in chemical reaction engineering using Python! It was a Master's level course with 59 graduate students in it. I have also taught a graduate elective course in Molecular Simulation using Python. I still have some polishing to do before I would teach this to undergraduates, but I think it is definitely worth trying! Footnotes: Copyright (C) 2013 by John Kitchin. See the License for information about copying.
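To make the "doable out of the box" claim above concrete, here is a small sketch of one of the listed problem types: integrating an ordinary differential equation and plotting the result using only numpy, scipy and matplotlib. The equation (a first-order decay) is just an illustration, not an example from the course:

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dCdt(C, t, k=0.5):
    # first-order decay: dC/dt = -k * C
    return -k * C

t = np.linspace(0, 10, 100)
C = odeint(dCdt, 1.0, t)

plt.plot(t, C)
plt.xlabel('time')
plt.ylabel('concentration')
plt.savefig('decay.png')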
http://kitchingroup.cheme.cmu.edu/blog/2013/12/30/Python-as-alternative-to-Matlab-for-engineering-calculations/
CC-MAIN-2017-39
refinedweb
1,427
61.97
NAME sigpause - atomically release blocked signals and wait for interrupt SYNOPSIS #include <signal.h> int sigpause(int sigmask); /* BSD */ int sigpause(int sig); /* Unix95 */ DESCRIPTION Don’t use this function. Use sigsuspend(2) instead. The function sigpause() is designed to wait for some signal. It changes the process’ signal mask (set of blocked signals), and then waits for a signal to arrive. Upon arrival of a signal, the original signal mask is restored. RETURN VALUE If sigpause() returns, it was interrupted by a signal and the return value is -1 with errno set to EINTR. HISTORY The classical BSD version of this function appeared in 4.2BSD. It sets the process’ signal mask to sigmask. When the number of signals was increased above 32, this version was replaced by the incompatible Unix95 one, which removes only the specified signal sig from the process’ signal mask. The unfortunate situation with two incompatible functions with the same name was solved by the sigsuspend(2) function, that takes a sigset_t * parameter (instead of an int). On Linux, this routine is a system call only on the Sparc (sparc64) architecture. Libc4 and libc5 only know about the BSD version. Glibc uses the BSD version unless _XOPEN_SOURCE is defined. SEE ALSO kill(2), sigaction(2), sigblock(2), sigprocmask(2), sigsuspend(2), sigvec(2)
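Since the page's main advice is to use sigsuspend(2) instead, here is a minimal sketch of the usual pattern it recommends; the choice of SIGINT and the empty critical section are only illustrative:

#include <signal.h>
#include <stdio.h>

static void handler(int sig) { (void) sig; }

int main(void)
{
    sigset_t block, oldmask;
    struct sigaction sa;

    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);

    /* Block SIGINT while doing critical work. */
    sigemptyset(&block);
    sigaddset(&block, SIGINT);
    sigprocmask(SIG_BLOCK, &block, &oldmask);

    /* ... critical section: SIGINT is held pending here ... */

    /* Atomically restore the old mask and wait for a signal. */
    sigsuspend(&oldmask);
    printf("interrupted by a signal\n");

    /* Restore the original mask for the rest of the program. */
    sigprocmask(SIG_SETMASK, &oldmask, NULL);
    return 0;
}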
http://manpages.ubuntu.com/manpages/dapper/man2/sigpause.2.html
CC-MAIN-2014-15
refinedweb
218
57.57
One of the things so rarely covered in advanced deep learning books is the specifics of shaping data to input into a network. Along with shaping data is the need to alter the internals of a network to accommodate the new data. The final version of this example is Chapter_3_3.py, but for this exercise, start with the Chapter_3_wgan.py file and follow these steps: - We will start by changing the training set of data from MNIST to CIFAR by swapping out the imports like so: from keras.datasets import mnist #remove or leavefrom keras.datasets import cifar100 #add - At the start of the class, we will change the image size parameters from 28 x 28 grayscale to 32 x 32 color like so: class WGAN(): def __init__(self): ...
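The book's Chapter_3_wgan.py is not reproduced here, but the kind of constructor change implied by step 2 typically looks like the sketch below. The attribute names (img_rows, img_cols, channels) are the ones commonly used in Keras GAN examples and are assumptions on my part, not quotes from the book:

from keras.datasets import cifar100   # was: from keras.datasets import mnist

class WGAN():
    def __init__(self):
        # CIFAR images are 32 x 32 RGB rather than 28 x 28 grayscale
        self.img_rows = 32
        self.img_cols = 32
        self.channels = 3
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        # ... rest of the constructor unchanged ...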
https://www.oreilly.com/library/view/hands-on-deep-learning/9781788994071/7cfa4b90-629a-476f-ae80-a5294aa0fecb.xhtml
CC-MAIN-2021-49
refinedweb
129
70.43
NAME tzset - initialize time conversion information SYNOPSIS #include <time.h> void tzset (void); extern char *tzname[2]; DESCRIPTION The tzset() function initializes the tzname variable from the TZ environment variable. This function is automatically called by the other time conversion functions that depend on the timezone. The third format specifies that the time zone information should be read from a file: :[filespec] If the file specification filespec is omitted, the time zone information is read from the file localtime in the system timezone directory. If filespec does not begin with a '/', the file specification is relative to the system timezone directory. FILES The system time zone directory used depends on the (g)libc version. Libc4 and libc5 use /usr/lib/zoneinfo, and, since libc-5.4.6, when this doesn't work, will try /usr/share/zoneinfo. Glibc2 will use the environment variable TZDIR, when that exists. Its default depends on how it was installed, but normally is /usr/share/zoneinfo. This timezone directory contains the files localtime local time zone file posixrules rules for POSIX-style TZ's Often /etc/localtime is a symlink to the file localtime or to the correct time zone file in the system time zone directory. CONFORMING TO SVID 3, POSIX, BSD 4.3 SEE ALSO date(1), gettimeofday(2), time(2), ctime(3), getenv(3), tzfile(5)
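A small illustration of the behaviour described above; the zone name is an arbitrary example:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    setenv("TZ", "Europe/Paris", 1);   /* any valid TZ value works */
    tzset();                           /* re-read TZ and fill in tzname */
    printf("tzname: %s / %s\n", tzname[0], tzname[1]);

    time_t now = time(NULL);
    struct tm *lt = localtime(&now);   /* localtime() calls tzset() itself */
    printf("local hour: %d\n", lt->tm_hour);
    return 0;
}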
http://manpages.ubuntu.com/manpages/karmic/pt/man3/tzset.3.html
CC-MAIN-2013-48
refinedweb
203
57.47
Introduction: Weather Station: ESP8266 With Deep Sleep, SQL, Graphing by Flask&Plotly Wouldn't it be fun to know the temperature, humidity, or light intensity on your balcony? I know I would. So I made a simple weather station to collect such data. The following sections are the steps I took to build one. Let's get started! Step 1: Weather Station With Light, Temperature and Humidity Sensors Here are the parts: 1. ESP8266: the Wemos brand costs $2.39 per piece on Aliexpress. I would recommend the Wemos brand because its ESP8266 is easier to program and update, and has 4MB of flash or more. 2. Wemos Charger-Boost Shield, $1.39 per piece. This is another benefit of using this brand: the shield boosts the Lithium battery voltage (nominal 3.7V) up to the 5V needed by the ESP8266. The board also comes with a charging option, with a max charging current of 1A. 3. Wemos also has several shields for temperature and humidity, but I am going to build from individual components: a photoresistor (or light-dependent resistor, LDR, cheap), a luminosity sensor such as the BH1780 or TSL2561 (about $0.87-0.89 per piece), a temperature sensor such as the DS18B20 (75c each), and a humidity and temperature combo such as the DHT22 ($2.35 here) or SHT21 ($2.20 here). Total cost for the sensors: ~$4. 4. Lithium battery. I salvaged one from a 7.4V Canon battery pack, which is two 3.7V batteries in series, or you can use an 18650 Lithium battery. Each 18650 costs about $5 apiece. I have a picture showing the tear-down of the camera battery pack. Be careful though: short-circuiting the cells when cutting through the plastic cover could generate extreme heat and burns. 5. PCB board, jumpers, wire, soldering, your time, and maybe some debugging skills. Let's wire the components together following the schematic above. Then, have a look at the tasks in the setup() function: it is simply a single run of tasks that ends with a sleep command: void setup() { ... }. Now, time to upload the code to the ESP8266 using the Arduino IDE. Step 2: MQTT: a Flexible Medium to Publish and Subscribe Data First, I'm growing fond of using MQTT to send and receive data across different sensors and clients in my home. That is because of the flexibility to send unlimited data categorized by a topic, and to have unlimited clients subscribe to one topic from an MQTT broker. Second, I'm not qualified to discuss MQTT in depth. I got to know MQTT sometime last year (2017) when following tutorials to set up a weather station and sensors using Node-RED. Anyhow, I will try my best to present some info. Another good place to start is Wikipedia. If you don't have time to read about the theory and want to set up an MQTT broker, I posted another tutorial just to do so. Look up this post, and scroll down to Step 4. To explain what Message Queuing Telemetry Transport (MQTT) is, in my understanding, I prepared a diagram as above. In a nutshell, MQTT is an ISO standard, and products such as mosquitto and mosquitto-client, the two packages I used to build an MQTT broker on a Raspberry Pi, have to comply with that standard. The MQTT broker then becomes a medium for publishers to push messages into and for subscribers to listen to a target topic. The combination of the Arduino PubSubClient library with ArduinoJson (thanks to their creators knolleary and bblanchon) gives tinkerers and developers an easy set of tools to get data from the sensors to a target device or end client. A minimal publishing sketch is shown below; after that, let's move on to creating the database and displaying some data.
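The article's full sketch is not included in this excerpt; the outline below is only my sketch of the publish-then-sleep pattern described above. The WiFi credentials, broker address and sensor-reading stubs are placeholders, and the JSON is assembled with Arduino String concatenation instead of ArduinoJson to keep it short. Remember that GPIO16 (D0) must be wired to RST for the ESP8266 to wake from deep sleep:

#include <ESP8266WiFi.h>
#include <PubSubClient.h>

WiFiClient espClient;
PubSubClient mqtt(espClient);

// Stubs standing in for the real sensor-reading code
int   readLdr()     { return analogRead(A0); }
int   readLux()     { return 0; }
float readDs18b20() { return 0.0; }
float readShtTemp() { return 0.0; }
float readShtHum()  { return 0.0; }

void setup() {
  WiFi.begin("my-ssid", "my-wifi-password");          // placeholders
  while (WiFi.status() != WL_CONNECTED) delay(100);

  mqtt.setServer("192.168.1.50", 1883);                // broker IP used by the Python listener
  if (mqtt.connect("weatherstation", "johndoe", "password")) {
    // Build the JSON payload the Python listener expects
    String payload = String("{\"ldr\":") + readLdr() +
                     ",\"tsl2561\":" + readLux() +
                     ",\"ds18b20\":" + readDs18b20() +
                     ",\"tsht21\":" + readShtTemp() +
                     ",\"hsht21\":" + readShtHum() + "}";
    mqtt.publish("balcony/weatherstation", payload.c_str());
    mqtt.disconnect();
  }

  ESP.deepSleep(5 * 60 * 1000000ULL);  // sleep 5 minutes, then reset via GPIO16 -> RST
}

void loop() {
  // never reached: each wake-up runs setup() again
}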
Step 3: Save Data to SQL and Display Them on a Web Server I used sqlite3 to create a database for the web server. Install sqlite3 on the Raspberry Pi with: sudo apt-get install sqlite3 Then create a database and a table by typing into the terminal: sqlite3 weatherstation.db CREATE TABLE weatherdata (id INT PRIMARY KEY, thetime DATETIME, ldr INT, tsl2561 INT, ds18b20 REAL, tsht21 REAL, hsht21 REAL); .exit (to exit the sqlite command line and return to the Linux terminal) To listen to the topic published by the weather station, I used the Paho library with Python:

#!/usr/bin/python3
# adapted from the Paho MQTT client examples
# Binh Nguyen, August 04, 2018
from time import localtime, strftime, sleep
import paho.mqtt.client as mqtt
import sqlite3, json

mqtt_topic = 'balcony/weatherstation'
mqtt_username = "johndoe"
mqtt_password = "password"
dbFile = "/path/to/database/weatherstation.db"
mqtt_broker_ip = '192.168.1.50'

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe(mqtt_topic)

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    theTime = strftime("%Y-%m-%d %H:%M:%S", localtime())
    payload = json.loads(msg.payload.decode('utf-8'))
    sql_cmd = """INSERT INTO weatherdata VALUES ({0}, '{1}', {2[ldr]}, {2[tsl2561]}, {2[ds18b20]}, {2[tsht21]}, {2[hsht21]})""".format('NULL', theTime, payload)
    writeToDb(sql_cmd)
    print(sql_cmd)

def writeToDb(sql_cmd):
    conn = sqlite3.connect(dbFile)
    cur = conn.cursor()
    cur.execute(sql_cmd)
    conn.commit()
    conn.close()

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.username_pw_set(username=mqtt_username, password=mqtt_password)
client.connect(mqtt_broker_ip, 1883, 60)
sleep(1)
client.loop_forever()

To display data, use another SQL command to query the database, such as: sql_command = """SELECT * FROM weatherdata ORDER BY thetime DESC LIMIT 1000;""" This SQL command is included in app.py, which uses the Flask framework and Plotly to make a web server and plot a graph. The complete code is hosted on GitHub. If the ESP8266 cannot read the DS18B20, it assigns a value of -127 as the temperature, which skews the relative range of the other readable temperatures. I cleaned up those values by setting to NULL any reading equal to -127: sqlite3 weatherstation.db sqlite3> UPDATE weatherdata SET ds18b20 = NULL WHERE ds18b20 = -127; To set up an environment for this mini web server, I used the shared libraries on the Raspberry Pi. A virtualenv is a better option if the web server is hosted on a powerful computer. Start the web server by: python3 app.py Press Control + C to stop the server. The web server page is set to auto-refresh every 60 seconds. You can change the interval in the index.html file: <meta http-equiv="refresh" content="60"> Battery performance: I did not measure the current in the normal state or the sleep state of the ESP8266. Many others did so; the first Google search turned up this page. The normal state of the ESP8266 consumes about 100mA, depending on the rate of transmission and WiFi activity. The deep-sleep state needs current in the microamp range, which is a thousand times less. With a 5-minute interval between sleeping and waking up, a single Lithium 18650 (2000mAh) could fuel my weather station for 12 days. The same battery is only enough to run the ESP8266 for less than a day in its normal working state.
The one I took from the camera battery pack (I did not know its capacity) was enough to run the weather station with deep sleep for 5-6 days. Thank you for spending time with me to this end.
https://www.instructables.com/id/Weather-Station-ESP8266-With-Deep-Sleep-SQL-Graphi/
CC-MAIN-2020-16
refinedweb
1,241
67.04
I? Thanks, Jeremy I? It's not pretty, but you can get a reference to DTE2 directly through com. See for how to accomplish this. Have you tried simply injectingby putting it as a constructor parameter in the component where you need it? Injecting didn't work for me: ReplaceWithAutoComplete constuctor is never called. Is there other ways to get DTE? I need to execute commands. Is there other way to do this? As of ReSharper 6.1 + 7.0, Action handlers cannot have dependencies injected. They require either a parameterless constructor, or a constructor with a single argument, which can be either the string action id that the handler is for, or an instance of ActionManager. You can get various information such as solution, project and text control from the IDataContext parameter to the Update and Execute method (look for classes called DataConstants that contain readonly instances of a DataConstant that acts as a key). But unfortunately, DTE isn't available that way. Instead, you need to call Shell.Instance.GetComponent<DTE>(). This can be called from your action handler's constructor, and should work fine (just checked on a quick plugin, and it gets a value correctly) Thanks Matt I tried to get DTE in Action constructor, but even parametless constructor is never called by Resharper. So, I am getting DTE in Execute method. Unfortunetly I am getting an exception: Could not find the component DTE in the chain of component containers. Here is my code: I feel like I am really close to solving this problem. Please help. What versions of ReSharper and Visual Studio are you using? I successfully retrieved the DTE object in an action handler constructor with ReSharper 7 + VS2012. The parameterless constructor must be called by ReSharper if Execute gets called. The only reason it isn't being called is if you have another constructor. I am using VS2010 SP1 + ReSharper 7.0.1.1098.2760. I referenced EnvDTE from: c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\EnvDTE.dll Runtime version: v1.0.3705 Version: 8.0.0.0 Also Execute gets called only if I remove parameterless constructor. Hmm. I can't reproduce this. It looks like I'm using the same version of ReSharper, VS and EnvDTE.dll as you, and it all seems to work. Could you post a simple repro? You can email it to me at resharper-plugins@jetbrains.com Matt I created new plugin project from scratch for demo purposes. I can get DTE in new project and parameterless constructor is also called during initialization. I guess there is something wrong with my project, so problem is on my side. Thank you for your help. It was a .NET framework version. I used 4.0. I switched to 3.5 and it fixed an issue. Sorry for the late reply. This does (kinda) make sense. ReSharper plugins should really be .net 3.5 dlls, as ReSharper is still a .net 3.5 product. That is, it runs on the 2.0 CLR, to still support older versions of Visual Studio. I guess by having the plugin as a .net 4 assembly, it's resolving the wrong reference for envdte (perhaps it's looking in the v4 GAC rather than the v2 GAC) and it can't find the component. Anyway, glad you got it sorted. Matt Matt, Shell was removed in Resharper 9, so I can't get DTE using Shell.Instance.TryGetComponent<DTE>(). Do you know how I can get DTE in Resharper 9? Viktar Nevermind, I found it at JetBrains.ReSharper.Resources.Shell namespace. Viktar Where you able to get the DTE Viktar? I tried that code in R#9.2 with no luck. 
JetBrains.ReSharper.Resources.Shell.Shell.Instance.TryGetComponent<DTE>() returns null for me. I didn't upgraded to 9.2 yet. Shell.Instance.TryGetComponent<DTE>() works for me with Resharper 9.1.3. What framework version your project targeting? I am using .NET 4.0. Previosly I used some other version and had a problem. Try to switch .NET Framework version, maybe now it has to be 4.5. Viktar No luck. I tried 4.0, 4.5, 4.5.1 and 4.6. I am using Visual Studio 2015 and testing in the VS hive. Another issue might be that since DTE is a COM interface, and if it's referenced with Embed COM Interop Types set to true, the System.Type used to identify the DTE object is different to the System.Type used to register it in the first place. Try rebuilding with Embed COM Interop Types set to false. Out of curiosity, what do you want to use DTE for? There might be a more ReSharper-y way of doing what you need. Thanks Matt, it works now! I'm using it to do code cleanup on TFS check in files. Not sure if this would be the best way to do this but these are my steps: This was a solution I came up in R#8.2, it works most of the time and hangs up VS sometimes but I couldn't find any other way to get the list of check ins from R# Not sure if in R#9.2 if there is a way to directly get the list of TFS check in files? NOTE: sorry to piggy back on someone elses post. I'm afraid I don't know anything about TFS check in. If there's a Visual Studio interface that can get the data you need, ReSharper should be able to get the interface.
https://resharper-support.jetbrains.com/hc/en-us/community/posts/205990799-How-to-get-at-EnvDTE-from-a-plugin-
CC-MAIN-2020-16
refinedweb
939
69.38
The next thing that should jump out at you for refactoring ought to be the three methods showCommits(), showForks() and showPulls() – they all just change the state to one of three strings, so this should be fairly easy to refactor. Well, yes: it is easy to refactor. But it also gives me a chance to show you two different ways of sending data to methods. Right now all you've used is either onClick={this.someMethod} or onClick={this.someMethod.bind(this)} – no interesting parameters have been passed. We want to send along either 'commits', 'forks' or 'pulls' depending on which button was clicked, which is easy enough. Update your render() code to this:

src/pages/Detail.js

return (<div>
  <button onClick={this.selectMode.bind(this, 'commits')}>Show Commits</button>
  <button onClick={this.selectMode.bind(this, 'forks')}>Show Forks</button>
  <button onClick={this.selectMode.bind(this, 'pulls')}>Show Pulls</button>
  {content}
</div>);

That makes all three buttons call the same method, so it's now just a matter of writing the selectMode() method so that it accepts a parameter and uses it to set the mode state:

src/pages/Detail.js

selectMode(mode) {
  this.setState({ mode });
}

Note: you don't need to use the new ES6 computed property name syntax here, because you've always been able to use variables as values. In fact, because the key and value are the same, we can just write mode. With selectMode() in place, you can go ahead and delete showCommits(), showForks(), and showPulls(). That code works, and it works well. But we could rewrite it slightly differently, and I'm going to show it to you because it's the kind of thing you'll find in real code, not because I'm saying I favor one approach over the other. There are two ways of doing it, and two camps of people who each are convinced their way is the One True Way, but again I suggest you try to be pragmatic. IMPORTANT WARNING: I am going to show you how this looks just so you're aware of it. You should keep using your existing code rather than switch to this alternative. The other way we could write these onClick handlers is by storing data in the buttons that describe what they should do. The selectMode() method can then read that data and act appropriately. To take this approach, we would need to modify the render() method to this:

return (<div>
  <button onClick={this.selectMode.bind(this)} data-mode="commits">
    Show Commits
  </button>
  <button onClick={this.selectMode.bind(this)} data-mode="forks">
    Show Forks
  </button>
  <button onClick={this.selectMode.bind(this)} data-mode="pulls">
    Show Pulls
  </button>
  {content}
</div>);

(Note: I split the button elements onto multiple lines to make them easier to read; you can write them on one line if you prefer.)

As you can see, that no longer passes a parameter string to the selectMode() method. Instead, the strings are stored inside data-mode parameters. To make selectMode() work with this relies on a JavaScript implementation detail: all event handlers are automatically passed an event object describing what happened. We haven't been using this so it's been silently ignored. But in code that uses this data-mode attribute approach we would – here's how the selectMode() method would need to look:

selectMode(event) {
  this.setState({ mode: event.currentTarget.dataset.mode });
}

As you can see, to read the data-mode attribute of whichever button was clicked, we just read the dataset.mode property of the event's currentTarget – that will automatically be the clicked button. There are good reasons to use both of these ways of calling methods.
Explicitly passing parameters makes your code a bit easier to read because you can see exactly what is being sent and what is being received, but having methods pull data from the event can be helpful to reduce code duplication. Again, be pragmatic! REMINDER OF IMPORTANT WARNING: Code much later in this book relies on you passing a string parameter to selectMode() rather than using the data-mode attribute approach. That's enough refactoring for now. If it were my own code, I'd probably try to harmonize the various rendering methods a little, but it's not something we can do here because you've probably chosen different JSON fields to me. Still, consider it an exercise: can you get down from four rendering methods to three, two or even one? You might need to clean the JSON before you use it for your components!
http://www.hackingwithreact.com/read/1/20/refactoring-our-state-code-passing-parameters-in-onclick
CC-MAIN-2019-04
refinedweb
749
64
These are chat archives for typelevel/general

General topics relating to the Typelevel project | See also individual project channels | Code of conduct

Either.left and Either.right methods to stdlib. opinions? seems like something Typelevel folks might have an opinion about. scala/scala#6328 please comment on the PR itself, I don't always follow this room.

Either alternative used to have? and do scalaz folks use Either these days or their own Either-like thing? .left and .right.

Left(3): Either[Int, String] vs Either.left[Int, String](3) seems like kind of a wash

@ Either.left(1)
res1: Either[Int, Nothing] = Left(1)

Either.left(1) should have the type Either[Int, A] as in the scala PR

cats.implicits._ because it makes IJ completion take like a year

Either.left and Either.right, no? The syntax I like is 1.left[String] adding extension methods on Any might be a harder sell… maybe not an impossible sell, I don't know

object Either {
  def left[A]: FromLeftPartiallyApplied[A] = new FromLeftPartiallyApplied[A]
  def right[A]: FromRightPartiallyApplied[A] = new FromRightPartiallyApplied[A]
}
class FromLeftPartiallyApplied[A](val dummy: Boolean = true) extends AnyVal {
  def apply[B](b: B): Either[B, A] = Left(b)
}
class FromRighttPartiallyApplied[A](val dummy: Boolean = true) extends AnyVal {
  def apply[B](b: B): Either[A, B] = Right(b)
}

Either.left[String](42) and get back a Either[String, Int]

dummy? dummy is doing there dummy at first because it got squished under the expanded gist :D

FromRighttPartiallyApplied in your paste has two t's

F[_], A

F[_], A is so bad, ends up [Nothing, Nothing] a lot

[link it]()

dummy: Unit = () the whole thing will explode :smile:

def f[a](a: a): a = a; val f.a = ()

newtype

newtype containing reference types with hidden constructors

@tpolecat I agreed that opaque types are simpler, but then Simon wrote If you think about it, defining a method on an opaque types basically entails "define a method inside an implicit class inside a type companion alongside the opaque type inside a package object". That's seems incredibly ad-hoc to me and that's not wrong

nix-channel --update if you dont pin? :trollface:
https://gitter.im/typelevel/general/archives/2018/02/15
CC-MAIN-2018-51
refinedweb
354
54.12
Ethan is Java Evangelist and Ed is chief technology officer for the KL Group. They can be contacted at [email protected] and [email protected], respectively. One of Java's biggest selling points has been its supposed immunity to one of the most challenging programming problems -- memory leaks. But some Java developers have observed their Java programs exhibit classic memory-leak behavior -- unbounded memory growth leading to poor performance and eventually crashing. What's going on? First, let's look at how dynamic memory management works in Java and understand what the garbage collector does. Objects are allocated on the heap using the new operator and accessed via references. Probably the easiest way to think about memory in Java is to picture the heap forming a directed graph, where objects form the nodes and the references between objects make the edges. The garbage collector sees the memory this way, as a graph of objects and references. The purpose of the garbage collector is to remove from memory objects that are no longer needed. This is a hard problem to solve -- the garbage collector can't tell whether you need a particular object, so it uses an approximation and looks for objects that are no longer reachable. Using the directed graph analogy, it looks for objects that can't be reached by any path starting from a root. Roots, fixed places that are always guaranteed to exist, are the starting points for the garbage collector. In Java, the roots include static fields in classes and locals on the stack. Anything that the garbage collector can't reach from one of the program's roots by any path is considered garbage. To illustrate this, look at Example 1 and Figure 1. The method has two local references on the stack, m1 and m2. There's also a variable created outside the scope of this method called global. m1 and m2 are, temporarily at least, two roots for the garbage collector. Two objects are created and two references, or edges, are created to those objects (from the locals on the stack). Another reference is added from m1 to m2 and a reference is added from the global object. When the method returns, m1 and m2 are no longer on the stack, so the first object that was created is no longer reachable. Because the garbage collector can no longer reach that object by some path it will, at some point in the future, clean up that object. It's important to note that garbage collection does not happen immediately, but on a periodic basis. Even though the object will stay in memory for some period of time until the garbage collector releases it, it remains unreachable and can't be reused. There are some common myths about garbage collection in Java that are worth cleaning up. The first one is that the garbage collector can't handle cycles -- it can. That is, if you have three objects -- A, B, and C -- with references from A to B, B to C, and C to A, and those are the only references to those objects, the garbage collector will clean those objects up. This is in contrast to other systems that use reference counting techniques (such as Microsoft's COM), which do have problems handling cycles in the object reference graph. The second myth, and this is really for people who've moved to Java from C++, is that the finalizer is the same as a C++ destructor -- it isn't. There are a number of subtle differences, but the most important one is that the finalizer is not guaranteed to be called, unlike a destructor in C++, which is explicitly called in order to remove an object. 
You can't reliably depend on the finalizer in Java. One interesting piece of trivia, however, is that if the finalizer is called, it's possible for it to resurrect the object, by making a reference to the object that's about to be garbage collected from another object, thus making it reachable again. While this is a bad thing to do in practice, the garbage collector is aware of the fact that it can, in theory, happen. Loiterers Now that we've talked about what the garbage collector is and what it does, let's look at what it means to have a memory leak in Java. As Figure 2 illustrates, there are three states that an object can be in: - Allocated objects are all objects that have been created but not yet removed by the garbage collector. - Reachable objects are all the allocated objects that can be reached from one of the roots. - Live objects are reachable objects that are being actively used by your program. The garbage collector takes care of objects that are allocated but unreachable. In contrast, these objects would be memory leaks in C++, memory that's permanently lost to the program. Tools like Rational's Purify and Numega's BoundsChecker are designed to help track down this kind of problem in C++, finding objects that are allocated but no longer reachable. In Java, the situation is different. The garbage collector takes care of the allocated but unreachable objects for you, so a Java memory leak is instead an object that's reachable but not live. Even though you have a reference to that object somewhere and there's a path to that object from some root, the object isn't needed by the program and could be disposed of -- if there wasn't a reference to it. So one contrast between memory leaks in C++ and Java is that in C++ once you leak an object, the problem can't be fixed by the program -- there are no remaining references to that object. In Java, the object itself can be reached, but the code that manages the object may not be accessible to you; for example, the reference to the unneeded object might be from a private field in a class for which you don't have the source code. On the other hand, if the reference itself is accessible, then there should be some action the program can take to remove all the references to the objects making it unreachable and eligible for garbage collection. Another difference, going back to the analogy of viewing the heap as a directed graph of objects and references, is that in C++ you have to manage both the nodes and the edges. Every time you add or remove objects or references, you're changing the collection of both nodes and edges. If you leave some edges hanging, by freeing an object without removing all the pointers to that object, you get a dangling pointer, which usually results in something like Windows' infamous GPF error. Conversely, if you leave a node hanging, by removing all the pointers without removing the node, you have a memory leak. In Java, you can only do the second of these two things, removing the edges. Ultimately, you only have control over the references, so you have to think about managing just the edges. If you don't remove references to objects, the garbage collector can't remove them. You have to assist the garbage collector by managing the edges. One thing that we have found in investigating memory leaks in Java is that they are rarer than they are in C++. In C++, it's easy to get a memory leak by not writing destructors for classes or not bothering to free memory on the heap. 
But in Java, the garbage collector does a lot of this work for you. The flip side to this is that the impact of memory leaks, the amount of memory that's being lost, tends to be much greater in Java. The reason is that when you have an object that's not being used any more, it's rarely the case that there's just a single object. That object will have references to other objects, which will have more references, and so on, forming a large subgraph of objects that are leaked, just because one reference wasn't properly cleared. For example, Swing or AWT programming containers (such as panels or frames) include other child components (buttons, text fields, and the like). The container can reach all of its children as it has references to them (to lay them out). At the same time, each component has a reference back to its parent. There is, therefore, a path from every object in the user interface to every other object. Compounding the problem, UI objects are often subclassed, adding additional references and objects into the subgraph. The result is that the memory leak is not just a small set of components, it can be a very large collection of objects that's leaking. Since there are many distinct differences between memory leaks in C++ and Java, it's confusing to use the same term to refer to both of them. Therefore, we refer to these unused objects in Java as "loiterers." The dictionary definitions of a loiterer are "to delay an activity with aimless idle stops and pauses" (which will happen as the garbage collector has more and more objects to check on each pass) and "to remain in an area for no obvious reason" (you're not using them, so why are they there?) -- both fairly apt descriptions of what's going on. Another good reason to use a different term is that the Java Virtual Machine and many of the libraries have native code in them, written in C++, and that code may have memory leaks in it, leading to confusion as to whether a leak is in Java code or C++ code that's underneath the Java. Lexicon of Loiterers To further clarify and understand how loiterers occur, we've identified four different patterns of loitering objects (and you may see a theme here): Lapsed Listeners. A lapsed listener is when an object is added to a collection but never removed. The most common example of this is an event listener, where the object is added to a listener list, but never removed once it is no longer needed. So the object's usefulness has lapsed because although it's still in the list, receiving events, it no longer performs any useful function. One of the side effects of this is that the collection of listeners may be growing without bound. You can keep adding listeners to a collection, but they are never removed. This causes the program to slow down as events have to be propagated to more and more listener objects, causing each event to take longer and longer to process. This is probably the most common memory-usage problem in Java--Swing and AWT are very susceptible to this problem and it can occur easily in any large framework. For example, see bug #4177795 in the Java Developer's Connection (at .java.sun.com/ developer/bugParade/index .html). In this case, instances of the javax.swing.JInternalFrame class were loitering if a menu bar had been added to them. 
Through a long series of events, it turned out that the hashtable that keeps track of all keystrokes registered for menu shortcuts was holding onto a reference to the menu, which was holding onto the internal frame, preventing any of these objects from being garbage collected, even after all the references from inside the program were removed. It's surprisingly easy to create this kind of problem. In contrast, this kind of problem rarely occurs in a C++ program. The memory would probably be freed without removing the pointer from the list, creating a dangling pointer. When the program walks through the list and tries to dispatch the event via the bad pointer, the program would probably crash. Whether it's better to leak memory or to crash is for you to decide. Another example of a lapsed listener in Java 2 is a method on java.awt.Toolkit called addPropertyChangeListener(). You can register a listener there to receive notification whenever any desktop properties change, such as the resolution of the desktop. Because the Toolkit class is a Singleton, there's only ever one instance of it that is created at the start of the application and survives for the lifetime of the entire application. Most listeners, however, are going to have much shorter life spans. If you have a reference from something that has a long life span to something that has a short life span, then the short-lived object is now going to live much longer, as the reference from the long-lived object will keep it around indefinitely. You have to remember to call removePropertyChangeListener() whenever the listener object is destroyed. This isn't really when the listener is literally destroyed, as the garbage collector does that -- it's when you decide that the listener object is no longer needed by the program. Some strategies you can use to avoid lapsed listeners are to make sure all the add and remove calls are paired. Doing this is as simple as using tools such as grep or the find command in your favorite editor to search for calls to addXXXListener and removeXXXListener. Furthermore, it's good practice to pair them close together in your code and not to have the add and remove listener calls spread far apart in separate methods or source code files. At some point in the future the calls are going to get decoupled and you're going to create a loitering object problem again. Another thing, shown in the example, is to pay attention to object lifecycles -- creating references from a long-lived object to a short-lived object ties both objects together, giving them the long-lived object's lifetime. Finally, you might want to consider a larger solution, such as implementing a listener registry or a publish/subscribe mechanism, to decouple listeners from event sources. You should be suspicious of any framework code that claims to clean up this sort of problem automatically, as it's probably built on a set of assumptions that, if broken, will cause the framework to fail and possibly cause more loiterers. Lingerers. The second type of loiterer is a lingerer -- an object that hangs on for a while, after the program is finished with it. Specifically, it occurs when a reference is used transiently by a long-lived object, but isn't cleared when finished with. The next time the reference is used it will probably be reset to refer to a different object, but in the meantime, the previous object loiters about.
In C++, this would again be a benign dangling pointer, where the object being referenced would have been manually freed and the bad pointer would have been retained, but you'd never notice, as the next time the pointer is used, it will be reset to point to some other valid object. An example of this might be a print service in an application (see Example 2). The print service can be implemented as a Singleton, as there isn't usually any need to have multiple print services in an application. The print service contains a field called target. When the program calls doPrint(), the print service prints the object referred to by target. The important thing is that when the print service is done printing, the target reference is not set to null. The object that was being printed can't be garbage collected now, as there's still a lingering reference to it from the printer object. You have to make sure that transient references are set to null once you've finished using them. One strategy for dealing with lingerers is to encapsulate state in a single object as opposed to having a number of objects maintaining state information. This makes changing state easier, as there's only one reference to deal with. In general, lingerers often occur when objects with multiple states hold on to references unnecessarily when they're in a quiescent or inactive state, so you have to carefully consider the state-based behavior of your objects. Another strategy is to avoid early exits in methods -- you should set up methods so that they do their setup first, the processing, and finally any necessary clean up. If you exit before the method has a chance to clean up, references may be left holding on to objects that are no longer needed. Laggards. The third type of loiterer is a laggard -- someone (or something) who is always behind, never quite keeping up. In terms of loiterers, a laggard occurs when an object changes its state, but still has references to some data from its old state. Laggards are typically functional errors in addition to memory problems, but they're often hard to find and may manifest themselves as memory problems before they're discovered as bugs. One way that laggards occur is when you change the lifecycle of a class; for example, when you change a class from having multiple instances to a Singleton, perhaps because it's too expensive to keep creating new objects of this class. Now the single object of this class changes its state over time, as opposed to before when new instances were created whenever a new state was required. Again, comparing the situation to C++, this problem would probably manifest itself as a dangling pointer in C++, where the objects from the old state would have been manually removed, leaving a bad pointer. An example of this might be an object that maintains information about files in a directory, including statistics and which has references to the largest, smallest, and "most complex" file (for some definition of "complex"). When you change directories, for some reason only the references to the largest and smallest files are updated -- the reference to the most complex file is a laggard, as it still points to the file in the previous directory. This is, of course, a bug, but it's subtle and may be difficult to detect. 
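The article does not reproduce the code for this scenario, so the sketch below is only an illustration of the pattern; the class and member names (DirectoryStats, mostComplexFile, changeDirectory(), and so on) are invented here and are not taken from the original listing.

import java.io.File;

// Illustrative only: after changeDirectory(), mostComplexFile still refers
// to a File from the previous directory, so that object stays reachable
// even though the program has moved on -- a laggard.
class DirectoryStats {
    private File largestFile;
    private File smallestFile;
    private File mostComplexFile;

    public void changeDirectory(File dir) {
        File[] files = dir.listFiles();
        if (files == null || files.length == 0) {
            largestFile = null;
            smallestFile = null;
            // Bug: mostComplexFile is neither cleared nor recomputed here.
            return;
        }
        largestFile = files[0];
        smallestFile = files[0];
        for (int i = 0; i < files.length; i++) {
            if (files[i].length() > largestFile.length()) {
                largestFile = files[i];
            }
            if (files[i].length() < smallestFile.length()) {
                smallestFile = files[i];
            }
        }
        // Bug: the laggard -- mostComplexFile is never updated for the new
        // directory, so it keeps pointing at a file from the old one.
    }
}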
Using a memory debugging tool, however, where you can see all the instances of each class, you should be able to see that there are more references to file objects than there are files in the directory because of the extra file being held on to by the laggard reference. Approaching this problem as a lingerer as opposed to a bug may make it show up much more quickly. You can deal with laggards by thinking carefully about your caching strategies: Is caching really necessary or is it acceptable to calculate certain values dynamically? It's useful to use a profiler to determine when and where caching is appropriate. Another technique is to encapsulate state transitions in a single method, so you don't have code scattered in multiple locations responsible for changing the state of an object. Keeping related code in a single locality makes it easier to maintain. Limbo. The fourth and final type of loiterer is a limbo. Things in limbo are caught in between two places, while occupying neither of them fully. Objects in limbo may not be long-term loiterers, but they can take up a lot of memory at times when you don't want them to. Limbos occur when an object being referenced from the stack is pinned in memory by a long running thread. The problem is that the garbage collector can't do what's referred to as "liveness analysis" where it would be able to find out that an object won't be used anywhere in the rest of a method, thus making it eligible for garbage collection. In Example 3, the method is supposed to read through a file, parse items out of it, and deal with certain elements in it. This might happen if you were looking for a specific piece of data in an XML file, for instance. So the first thing the method does is call readIt(), which might do something like read in the whole file, which would consume a lot of memory. Then the method findIt() goes through and searches for the particular information you're looking for, condensing all the information from the big object into something much smaller. From this point on you don't need big any more and you'd probably like to reuse the memory it's occupying. But when you call parseIt(), which may take a long time, the memory for big can't be reused because there's still a reference to it from the stack in method()'s stack frame -- big can't be garbage collected until method() returns. You need to help the garbage collector out by setting the reference to big to null, as shown in Example 3 in the line that's commented out. One way to deal with limbos is to be aware of long-running methods and watch where large allocations are occurring, to make sure that you're not creating large objects that are being held on the heap by a reference on the stack. Again, tools such as profilers and memory debuggers can help determine what methods take a long time to run and what objects are very large. Explicitly adding statements to set references to null in cases where large objects are being needlessly held can make a big difference. While it's not practical or necessary to null out every reference after you're done with it, it helps where appropriate. A blocked thread can also be a problem; for example, when a thread is blocked waiting on I/O, no object referenced from the stack in that thread can be garbage collected. Tools and Techniques There are a number of tools available to help you track down loiterers. One simple thing to do is to track the objects you're creating manually so that you can programmatically monitor memory usage. 
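In its simplest form that can be a static counter incremented in a class's constructor and decremented in its finalizer. The sketch below is only an illustration of the idea (the class name, message text, and use of a plain int are made up for this example and are not from the article); the ObjectTracker listing discussed next is a fuller version of the same approach.

// Hypothetical minimal instance counter. If the reported count keeps
// growing after the program should have released its Widget objects,
// something is still holding on to them.
public class Widget {
    private static int liveCount = 0;   // not thread-safe; debugging aid only

    public Widget() {
        liveCount++;
    }

    protected void finalize() {
        liveCount--;
    }

    public static void report() {
        System.out.println("Widget instances not yet collected: " + liveCount);
    }
}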
The problem with this is, of course, that you have to modify your code in order to see what's going on. An example of how to do this is demonstrated by ObjectTracker.java (Listing One), a class that lets you register objects to track them to see if you have the expected number of instances. Listing Two is an example of using ObjectTracker. To activate ObjectTracker, you have to define the ObjectTracker system property by adding the command-line flag "-DObjectTracker" when you run the Java VM. One important thing to note is that successful use of ObjectTracker relies on the Java VM assigning all objects unique hashcodes. Unfortunately, due to differences in implementation, this is only guaranteed to be true in Sun's JDK 1.1.x JVMs and not in JDK 1.2 or higher. ObjectTracker will appear to work in JDK 1.2.x (or the 1.3 beta), but may not accurately track large numbers of objects. A more industrial-strength solution is to use a full-blown profiler and/or memory debugger. There are a number of commercial products available, including JProbe () from KL Group (where we work). These types of commercial products can track all the objects in your program, let you browse the heap and, very importantly, see not only the objects but also the references between them -- this is important because in Java you have to worry about managing the references (the edges of the graph formed by objects on the heap), and not the nodes. Along the same line, there are some freeware tools available that make use of the profiling output available from JDK 1.2.x JVMs. The -Xrunhprof option (explained by running java -classic -Xrunhprof:help) can generate both time and memory usage information. This data can, in turn, be interpreted by tools such as HyperProf (http:// index.html/) although this seems to have been removed by the author for software patent reasons. The data format produced by -Xrunhprof is documented in the output file and on Sun's web site ( .java.sun.com/developer/onlineTraining/ Programming/JDCBook/perf3.html/). One final possibility is to use the Java Virtual Machine Profiling Interface directly (documented at products/jdk/1.2/docs/guide/jvmpi/jvmpi .html). It can be used to monitor a number of different internal activities inside the VM, such as object allocation and removal (see "What Is the Java VM Profiler Interface," by Andy Wilson, DDJ September 1999). The major drawback with using JVMPI is that it's a native interface and you'll have to create your own C-based shared object library or DLL to get the information. While this is definitely the most flexible approach, it's no small amount of work and it's probably cheaper in the long run to buy (or better yet, get your boss to buy) a commercial memory-debugging tool. A free tool that uses JVMPI is JUM ( jum.html). Conclusion While Java's garbage collection mechanism removes much of the difficulty of managing dynamic memory, problems can still occur. Most nontrivial Java programs will have some loitering objects present in them. Loitering objects are generally fewer than memory leaks in C++, but when they do occur they generally cause much larger problems. Removing loitering objects can be difficult because Java handles memory in a fundamentally different way than C++, which is what most developers are familiar with. You have to think about managing the edges, not the nodes on the heap. 
A thorough understanding of object lifecycles, including lifetimes and state values, is key and must be used to build good memory-management practices into development practices.

DDJ

Listing One

/*****************************************************************************/
import java.lang.reflect.*;
import java.util.*;

/*************************************************************
 * Utility class for identifying loitering objects. Objects are tracked by
 * calling ObjectTracker.add() when instantiated, and calling
 * ObjectTracker.remove() when finalized. Only classes that implement
 * ObjectTracker.Tracked can be tracked. As instances are created and
 * destroyed, they are reported to the stdout. Summaries by class can also be
 * reported on demand. To enable this functionality, add -DObjectTracker
 * when running your program. This will track all classes that implement
 * ObjectTracker.Tracked and call add/remove as indicated in the
 * previous paragraph.
 * For a finer degree of control, specify a list of filters
 * when setting the <code>ObjectTracker</code> property. For instance,
 * -DObjectTracker=+MySpecialClass,-ClassFoo will only report
 * on instances of classes whose name contains MySpecialClass
 * but not ClassFoo. Hence MySpecialClassBar will be tracked, while
 * MySpecialClassFoo will not be. See <A HREF="ObjectTracker.html#start()">
 * start()</A> for more details.
 * Limitations
 * Since you must add instrumentation to all the classes you want to track,
 * this is not nearly as useful as a Memory Profiler/Debugger like
 * JProbe Profiler. Also, since it cannot tell you which references
 * are causing the object to loiter, it doesn't help you remove the loiterers.
 * If you want to solve the problem, you really need to use a Memory
 * Profiler/Debugger like JProbe Profiler. The only thing ObjectTracker can
 * help with is testing whether an instance of a known class goes away.
 * Implementation Notes
 * The current implementation assumes that every object has a unique
 * hashcode. A false assumption in general, but does work in JavaSoft's Win32
 * VM for JDK1.1. This implementation will definitely not work in JavaSoft's
 * implementation of the Java 2 VM, including the HotSpot VM.
 *************************************************************/
public class ObjectTracker {
    // Property ObjectTracker turns this on when set
    private final static boolean ENABLED =
        System.getProperty("ObjectTracker") != null;

    // Classes are hashed by name into this table.
    private static Hashtable classReg;
    private static Vector patterns;

    /** Record info about an object. Class and ordinal number are stored. */
    private static class ObjectEntry {
        int ordinal;    // distinguishes between mult. instances
        String clazz;   // classname
        String name;    // name (may be null)

        public ObjectEntry(int ordinal, String clazz, String name) {
            this.ordinal = ordinal;
            this.clazz = clazz;
            this.name = name;
        }
        public String toString() {
            return clazz + ":#" + ordinal + " (" + name + ")";
        }
    } // ObjectEntry

    /** Records info about a class. Within each class, a table of objects is
     *  maintained, along with next ordinal to use to stamp next object
     *  of this class. */
    private static class ClassEntry {
        String clazz;       // class name
        Hashtable objects;  // list of ObjectEntry
        int ordinal;        // last instance of this class created

        public ClassEntry(String clazz) {
            this.clazz = clazz;
            objects = new Hashtable();
            ordinal = 1;
        }
        public String toString() {
            return clazz;
        }
        /** Get the name of the object by invoking getName().
         *  Uses reflection to find the method. */
        private String getName(Object o) {
            String name = null;
            try {
                Class cl = o.getClass();
                Method m = cl.getMethod("getName", null);
                name = (m.invoke(o, null)).toString();
            } catch (Exception e) {
            }
            return name;
        }
        public void addObject(Object obj) {
            // Store this object in the object table
            Integer id = new Integer(System.identityHashCode(obj));
            ObjectEntry entry = new ObjectEntry(ordinal, clazz, getName(obj));
            objects.put(id, entry);
            ordinal++;
            System.out.println(" added: " + entry);
        }
        public void removeObject(Object obj) {
            // Removes this object from the object table
            Integer id = new Integer(System.identityHashCode(obj));
            ObjectEntry entry = (ObjectEntry) objects.get(id);
            objects.remove(id);
            System.out.println(" removed: " + entry);
        }
        /** Dump out a list of all object in this table */
        public void listObjects() {
            if (objects.size() == 0) {
                // skip empty tables
                return;
            }
            System.out.println("For class: " + clazz);
            Enumeration objs = objects.elements();
            while (objs.hasMoreElements()) {
                ObjectEntry entry = (ObjectEntry) objs.nextElement();
                System.out.println(" " + entry);
            }
        }
    } // ClassEntry

    /** No constructor */
    private ObjectTracker() {}

    /** Determine is this class name should be tracked.
     *  @return true if this class should be tracked. @see start */
    private static boolean isIncluded(String clazz) {
        int i = 0, size = patterns.size();
        if (size == 0) {
            // always match if list is empty
            return true;
        }
        boolean flag = false;
        for (; i < size; i++) {
            String pat = (String) patterns.elementAt(i);
            String op = pat.substring(0, 1);  // + or -
            String name = pat.substring(1);
            if (name.equals("all")) {
                if (op.equals("+"))
                    flag = true;    // match all, unless told otherwise
                else if (op.equals("-"))
                    flag = false;   // match nothing, unless told otherwise
            } else if (clazz.indexOf(name) != -1) {
                // match if any of the filter names is a substring of
                // the class name
                if (op.equals("+"))
                    return true;
                else if (op.equals("-"))
                    return false;
            }
        }
        return flag;
    }

    /** Must be called before any objects can be tracked. Turns on object tracking
     *  if property <code>ObjectTracker</code> is set. In addition, the list of
     *  patterns assigned to this property is stored for future pattern matching
     *  by <code>isIncluded()</code>. This list of patterns must be supplied as a
     *  comma-separated list, each preceded by <code>+</code> or <code>-</code>,
     *  which indicates whether or not the pattern should cause matching classes to
     *  be tracked or not. If property <code>ObjectTracker</code> has no values,
     *  it is equivalent to <code>+all</code>. */
    public static void start() {
        if (ENABLED) {
            classReg = new Hashtable();
            patterns = new Vector();
            String targets = System.getProperty("ObjectTracker");
            StringTokenizer parser = new StringTokenizer(targets, ",");
            while (parser.hasMoreTokens()) {
                String token = parser.nextToken();
                patterns.addElement(token);
            }
        }
    }

    /** Add object to the tracked list. Will only be added if object's class has
     *  not been filtered out. @param obj object to be added to tracking list */
    public static void add(Tracked obj) {
        if (ENABLED) {
            String clazz = obj.getClass().getName();
            if (isIncluded(clazz)) {
                ClassEntry entry = (ClassEntry) classReg.get(clazz);
                if (entry == null) {
                    // first one for this class
                    entry = new ClassEntry(clazz);
                    classReg.put(clazz, entry);
                }
                entry.addObject(obj);
            }
        }
    }

    /** Removes object from tracked list. This method should be called
     *  from the finalizer.
     *  @param obj object to be removed from tracking list */
    public static void remove(Tracked obj) {
        if (ENABLED) {
            String clazz = obj.getClass().getName();
            if (isIncluded(clazz)) {
                ClassEntry entry = (ClassEntry) classReg.get(clazz);
                entry.removeObject(obj);
            }
        }
    }

    /** Print tracked objects, summarized by class. Also prints a
     *  summary of free/total memory. */
    public static void dump() {
        if (ENABLED) {
            Enumeration e = classReg.elements();
            while (e.hasMoreElements()) {
                ClassEntry entry = (ClassEntry) e.nextElement();
                entry.listObjects();
            }
            System.out.println("==================================");
            System.out.println("Total Memory: " + Runtime.getRuntime().totalMemory());
            System.out.println("Free Memory: " + Runtime.getRuntime().freeMemory());
            System.out.println("==================================");
            System.out.println("");
        }
    }

    /** All classes that want to use this service must implement this
     *  interface. This forces this class to implement Object's finalize
     *  method, which should call <code>ObjectTracker.remove()</code>. */
    public interface Tracked {
        /** All classes that use ObjectTracker must implement a finalizer. */
        void finalize();
    }
}

Listing Two

/***************************************************************************/
public class tester implements ObjectTracker.Tracked {
    private int[] junk = new int[5000];

    public static void main(String args[]) {
        ObjectTracker.start();
        for (int i = 0; i < 1000; i++) {
            tester t = new tester();
            t.doNothing();
            if (i % 100 == 0) {
                System.gc();
            }
        }
        ObjectTracker.dump();
    }
    public tester() {
        ObjectTracker.add(this);
    }
    public void finalize() {
        ObjectTracker.remove(this);
    }
    public void doNothing() {
    }
}
http://www.drdobbs.com/jvm/java-qa/184404011
CC-MAIN-2019-22
refinedweb
5,393
61.67
Instructions for filling SAHAJ (ITR - 1) form.This Return Form CANNOT be used by an individual or a Hindu Undivided Family whose total income for the assessment year 2015-16 includes Income from Business or Profession. Instructions for filling out FORM ITR-2A.1 ITR V ACKNOWLEDGEMENT AY 2015-16 Received with thanks from a return of income in ITR No. 2 for assessment year 2015 How to File ITR1 Online Income Tax Return Salaried individual AY 2017-18 in Steps: Video in Hindi - Продолжительность: 19:01 Gyan Master 92HOW TO GENERATE FORM16 OR FORM16A TDS CERTIFICATE ONLINE - Продолжительность: 7:20 GHANSHYAM SHARMA 362 736 просмотров. ITR 1, ITR 2A, ITR 3, ITR 3, ITR 4 ITR 7 Available in Excel, Java PDF 31-3-2017 [message] Change Bouncer A new column has been inserted in ITR Forms to report cash deposit in banks above download itr form 1 for ay 2015-16 2 lakhs during the demonetisation period, i.e ITR Forms in Excel format with formulas for Assessment Year 2015-16 2016-17.All ITR forms are compiled on a single worksheet for ease of filling and printing. The forms require minimum data / information input by you. Download Now 4.1 MB .pdf. ITR 1 PDF for AY 2015-16 2015-06-24.Overview. Version History. The ITR-1 PDF Utility for AY 2015-16 has been released by Income Tax Dept and can be downloaded from here. General Instructions These instructions are guidelines for filling the particulars in this Return Form.One copy of ITR-V, duly signed by the assessee, has to be sent by post to - Post Bag No. 1, Electronic City Office, Bengaluru— 560 100, Karnataka. Category Amount AS Fill your Gender, Male or Female Thu, 03 Mar 2005 23:59:00 GMT SAHAJ Instructions for SAHAJ AY2015 -16 Incomeinstructions for filling itr-1 sahaj a.y. 2017-18 general sahaj instructions 2011 - scribd instructions for sahaj income tax return ay 2016-17 itr 1 sahaj Safeguard Notifications. Instructions. Orders. Press Release. Preparation Software: Software for preparing ITR 1 ITR 4S in Java, Excel Online for AY 2015-16 are now available for e-Filing.Srinivasan Says: 06/25/2015 At 1:07 AM The ITR1 excel utility software when I fill up the date of birth. Download Income Tax Return Forms for A.Y. 2015-16. ITR-1 (Sahaj). Form. Instructions.Since I also have income from overseas salary, Which ITR form should I use for AY 2015-16? The Central Board of Direct Taxes (CBDT) has issued new ITR form for the salaried person by the name of SAHAJ for the Financial Year 2014-15.Every salaried Person has to fill Income Tax Return for the Salary of A.Y. 2015-16 for the duration of 1 April 2014 to 31 March 2015.Now the last date of Change 3 ITR 1 can no longer be filed by anyone who has income from sources outside India. Have any questions, comment or reach out to us supportcleartax.in.ITR Forms for Assessment Year 2015-2016 (Income Tax Return E-Filing Forms). Changes in ITR -2 for FY 2014-15 (AY 2015-16). Individuals who are not eligible to fill the ITR-1 SAHAJ form are those who have earned Income through the following means:[5]. a b "Instruction of SAHAJ Income Tax Return" (PDF). "Download Income Tax Return Form 1: ITR 1 SAHAJ FORM AY 2017-18 or FY 2016-17".Retrieved 2015-07-29. ITR-V should be duly filled. 7. Obligation to file return.In case of individuals, being resident in India, who are of the age of 60 years or more at any time during the financial year 2015-16.Instructions for filing ITR-4S-SUGAM for AY 2016-17. 
The codes while filing ITR4 for nature of business to be filled in Part-A- Nature of business are as undergowda k h August 27, 2015 at 4:44 pm. please verify tan details and the amount of credit available in your form 16 andIncome tax slab rates for AY 2015-2016 Financial Year 2014-2015. Would like reply if we can efile itr for the ay 15-16 now. If not then upto when expected.CBDT has also notified new ITR-1, ITR-2 and ITR-4S for the Assessment Year 2015-16.The key changes are as under New ITR-1 Sahaj has been notified on 23-06-2015 vide Notification No. 49/2015/ F.No.142/ 1/2015-TPL . Apart from the normal information required hitherto the following additionalDownload ITR1 Sahaj for FY 2014-15/AY 2015-16 Click Here >> Download ITR1 Sahaj Fill Instructions Click Here >>. shaik yaseen AY 16-17 , FY-2015-16 , Income Tax Forms , ITR-1 No comments Do read Follow Instructions for smooth going of Filing of Returns / IT Forms through Java Utility. For updates do follow us "Audit companion". nob bp income tax instructions for filing itr 1 for ay 2016-17 nob bp meaning how to fill itr 1 excel sheet itr 1 filling instructions 2017-18 itr 1 instructions5 Change In Filing of Income Tax Return For AY 2015-16, 6 The. These instructions are guidelines for filling the particulars in this Return. Instructions For Filling ITR-1 SAHAJ A.Y. 2017-18 General Instructions These Instructions Are Guidelines For Filling The Particulars In This Return Form. You can download Form 2A and respective instructions to fill the form from Income Tax Website. Additional InformationAY 2015-16 ITR 1 Vs 2A Vs 2 ITR 2A. Previous Next . 6. While filing Paper ITR (Manual filing of ITR) filling out the acknowledgement ITR-V is neccessary. Only one copy of ITR-V is required to be filed.Download Instructions for filing ITR-1 for the Financial Year 2014-15 (Assessment Year 2015-16). Instructions for filling ITR-1 SAHAJ A.Y. 2017-18 General Instructions These instructions are guidelines for filling the particulars in this Return Form.SAHAJ Instructions for SAHAJ AY2015 -16 Income Tax. Related Links. Income Tax Return Due Date AY 2015-16.no 9/2010,1,INSTRUCTION TO FILL ITR-2,1,Insurance,14,insurance claim,2,Insurance policy,9,intangible assets, 1,INTER BANK MOBILE TRANSFER,7,inter haed loss adjustment,2,interest,1,interest free laon,2,Interest from bank,3,interest Income Tax E Filing Process / Procedure Online for ay 2016 17. In previous post we have given Common Mistakes in Income Tax Return (ITR) Form Filing.Download ITR Forms, XML and Pre-fill XML User can download the ITR/XML submitted for three AY and also, download and use the AY2015 -16.These instructions are guidelines for filling the particulars in this Return Form. In case of any doubt, please refer to relevant provisions of the Income-tax Act, 1961 and the Income-tax Rules, 1962. Income Tax Slab for FY 2015-16 [AY 2016-17] There is no change download itr 1 for ay 2015-16 in income tax slabs except additional 2 increase . Instructions for Fillable Forms: You must have Java Runtime Environment Version 7 Update 13 (jre 1.7 is also known as jre version 7) or above File Description : Instructions for filling ITR 1 (SAHAJ) for A.Y.2015-16. Category : ITR 1. Downloads : 39. Posted By : May I Help You.Format, Download ITR 1 Instructions for AY 2015-16 OR FY 2014-15 etc, ITR 1 Instructions etc. These instructions are guidelines for filling the particulars in this Return Form.income has not been computed correctly in Form No. 
16, please make the correct computation and fill the same in this.To be mentioned in Item 1 of ITR 1 Return Form Total Salary Income. Related Questions. Which ITR (Income Tax Return) form should I fill for AY 2015-16?Is there anything wrong in this kind of ITR filing. Can I get income tax notice? Income Tax: Which ITR form (ITR 1 or 2) should I fill for this financial year? Income Tax Return Filing AY 2015 16 Which forms to apply.This video is a step by step guide for filling Income tax return (ITR ) for salaried person. This is an easiest way to file your Income tax return ( ITR-1) Online This can be printed on A4 paper in colour as per the instruction given by the Income Tax Department.Are you filing ITR 1 in hard copy ? spending lot of time in writing and filling ITR 1?Membership is required for download. Create An Account First. ITR 1 AY 2015-16.zip (5.39 MB, 736 These instructions are guidelines for filling the particulars in Income Tax Return ( ITR) 1. In case of any doubt, please refer to relevant provisions of the Income-tax Act Return under section 119(2)(b) [Applicable. from AY 2018-19]. 11 Whether original Select the applicable option from the below list General Instructions These instructions are guidelines for filling the particulars in this Return Form.Where the Return Form is furnished in the manner mentioned at 5(i), the acknowledgment/ ITR-V should be duly filled.years or more at any time during the financial year 2015-16) - Instructions to Form ITR-2 (AY 2015-16) Page 1 of 11 Instructions for filling out FORM ITR-2 These instructions are guidelines for filling the particulars in this Wed, 10 Jan 2018 06:41:00 GMT Instructions for filling out FORM ITR-2 ITR Validation Rule for Assessment Year 2015-16.Click here to downlaod validation of itr 1 ebook.Instructions to e-File Form 15CA and 15CB. Business Transactions Investment Transactions. Details of foreign travel not required. Not applicable for ITR 1 4S. 10. Key Changes in Forms. Particulars Old Forms for AY New Forms for AY 2015-16 New Forms for AYComplete Form using instructions. Error or difficulty? Review instructions/FAQs or call ITD center. Generate XML File. Dr Suresh Surana, Founder, RSM Astute Consulting Group, informs, "If an individual misses the deadline of August 5, 2016 for filing return pertaining to FY 2015-16 (AY 2016-17), he can file a belated return by March 31, 2018." Know Your ITR form number and Mode of filing Income Tax Return, download ITR forms, Acknowledgement form and Instruction to fill ITR itr 1 download for ay 2015-16 pdf . Income Tax Dept Instructions for filling ITR-1(pdf) also explains how to fill Sahaj (ITR-1) For Individuals having Income from Salary/ Pension2013-14 in excel format itr v acknowledgement ay 2014-15 pdf itr v form 2014-15 download pdf itr v acknowledgement ay 2015-16 pdf download. CBDT has notified new ITR Forms (Income Tax Return Forms) SAHAJ (ITR-1), ITR-2 and SUGAM (ITR-4S) for AY 2015-16. However the schema for xml eReturn generation is still awaited. Key changes observed in the notified forms is as follows: Aadhaar number to be mentioned in the ITRs. Category Amount AS Fill your Gender, Male or Female wo, 10 jan 2018 17:39:00 GMT SAHAJ Instructions for SAHAJ AY2015 -16 IncomeITR-1 SAHAJ A.Y. 
2017-18 General - Instructions for SAHAJ Income Tax Return AY 2016-17 General Instructions These instructions are guidelinespdf - Instructions to Form ITR-2 (AY 2015-16) Page 1 of 11 Instructions for filling out FORM ITR-2 These instructions are guidelines for filling the particulars in thisitr-1 sahaj a.y. 2017-18 general form fda 1572. instructional supplement this booklet: instructions for individual debtors (pdf) Now we are providing Instructions for ITR-1 SAHAJ For AY 2015-16. you can download these instructions from below download link. For example for FY 2015-16, the immediate year is 2016-17, where you file IT return for income generated during FY 2015-16.So till then I was using ITR-1 for filling the returns. But from now onwards is it required for me to use ITR-2 for this AY-2016-17? Instructions for filling out FORM ITR-2.Instructions to Form ITR-2 (AY 2016-17). 25. Winnings from lotteries, crosswords puzzles, races including.time during the financial year 2015-16-. Income (In Rs.) Which Return Form One Should Fill for his Income from ITR-1,ITR-2,ITR-3,ITR-4 or ITR-5 ?. ".
http://dfei8.ml/page3010-instructions-for-filling-itr-1-ay-201516.html
CC-MAIN-2018-22
refinedweb
2,045
64.61
gymnast

A configurable grid and layout engine for React.

gymnast is a configurable grid and layout engine for React.

📺 Examples

We have several examples on the website. Here is one of them:

import * as React from 'react'
import { Grid } from 'gymnast'

<Grid>
  <Grid size={5} margin={2}>Content Here</Grid>
  <Grid size={7}>More Content</Grid>
</Grid>

This will create 2 columns of sizes 5, 7, respectively. There are additional components to assist with layout; for a deeper dive into gymnast, check out the docs, the examples here or follow the Getting Started guide.

Install

gymnast is available as the gymnast package on npm. It is also available on the unpkg CDN. You can install it with:

yarn add gymnast

React and PropTypes are peer dependencies of the generated bundle.

⚙️ Dev Mode

Ensuring a layout adheres to the grid can be difficult. To simplify this task, gymnast includes an overlay Component to assist you. During development, import and append <Dev/> to your pages. It doesn't render anything by default but pressing CTRL+SHIFT+K will toggle it. Learn more about <Dev /> mode in the docs.

import * as React from 'react'
import { Dev } from 'gymnast'

export default function MyPage() {
  return (
    <>
      <Dev />
      {/* other components */}
    </>
  )
}
https://reactjsexample.com/a-configurable-grid-and-layout-engine-for-react/
CC-MAIN-2020-05
refinedweb
203
56.45
Subject: Re: [boost] SQL client library ? From: Jean-Louis Leroy (jl_at_[hidden]) Date: 2009-09-16 01:54:37 I am writing the library that implements the syntax I described. I will show working code soon (or right now upon request). I am sorely tempted to overload the comma operator for "select", "from" and "in". I know the usual caveats (low precedence, default behavior exists for UDTs). The alternatives for open lists are : use the preprocessor to crank out overloads like in the MPL ; build lists one at a time [i.e. select(t1.id)(t2.name)(t3.age).from(t1)(t2)(t3).where(t1.id.in(1)(2))]...and overload the comma operator on well-targeted types [e.g. template<typename T> operator ,(expression<T>, expression<T>)]. And perhaps make the comma operator available but in a namespace (gives choice but usually alternative syntaxes are a bad idea). Opinions ? J-L Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2009/09/156399.php
CC-MAIN-2019-47
refinedweb
174
60.72
Created on 2015-01-27 17:37 by mirkovogt, last changed 2015-08-28 21:36 by belopolsky. This issue is now closed. I trapped into a pitfall today with web programming using DateTime.isoformat() in the backend and Javascript on the frontend side. isoformat() string'yfies a DateTime-object according to ISO 8601, which states that, if no timezone is specified, the string is supposed to be interpreted as UTC implicitly. isoformat() doesn't append any TZ if the object is UTC - so far so good. However when interacting with JavaScript that leads to errors, as the ECMAScript ed 6 draft states (and most browser act like that): "date time strings without a time zone are to be treated as local, not UTC)."[1] That is not Python's fault - however considering the issues this behaviour could cause as well as following the philosophy 'explicit is better than implicit', I'd suggest explicitly marking UTC-strings as such by adding a trailing 'Z'. According ISO 8601 that is totally fine, optional though. [1] I wonder what the chances are that doing that would break other software with the opposite assumption (an assumption based on past Python behavior). Both implementations behave according the standard. If you assume otherwise you violate the standard - as JavaScript does. If that change would break software, that software was broken from the beginning. Actually I'm not sure anymore whether NOT specifying a timezone is valid at all. Even though the Spidermonkey documentation itself states, that it doesn't behave according to the ISO standard, several documents say that specifying the timezone is crucial (either "Z" or "+/-XX:XX"). I also found documents describing the standard which explicitly state that by default the local time is assumed, as JavaScript does. Either way - there's so much different information and therewith confusion out there, that I highly recommend always specifying the timezone and would consider behaving otherwise as a bug. Implementations usually don't throw an error but just assume something when the TZ designator is missing, which results in just different meanings. Needless to say that doesn't make the situation any better. Do I understand correctly that the request is to append '+00:00' to the result of dt.isoformat() when dt is naive? If so, -1. Python's datetime module does not dictate how naive datetime instances should be interpreted. UTC by default and local by default are both valid choices, but local by default is often preferable. There are at least two reasons for that: 1. Local time is the "naïve" time. Using UTC implies certain amount of sophistication. 2. Interpreting naive times as UTC is unnecessary because it is very easy to create aware instances with tzinfo=timezone.utc. When communicating with javascript and in general when writing software for the Internet, I recommend using aware datetime instances. For example, >>> from datetime import * >>> datetime.now(timezone.utc).isoformat() '2015-01-27T18:27:33.216857+00:00' from datetime import * In [4]: datetime.utcnow().isoformat() Out[4]: '2015-01-27T18:51:18.332566' When using utcnow() e.g. I would expect the result definitely being marked as UTC. > I highly recommend always specifying the timezone and would consider behaving otherwise as a bug. I agree, and the datetime module lets you do that already: >>> dt = datetime.now(timezone.utc) >>> dt.isoformat() '2015-01-27T18:53:40.380075+00:00' or if you prefer local timezone, >>> dt.astimezone().isoformat() '2015-01-27T13:53:40.380075-05:00' > When using utcnow() e.g. 
I would expect the result definitely being marked as UTC. Don't use utcnow(). This is a leftover from the times when there was no timezone support in datetime. The documentation for utcnow [1] already points you in the right direction, but we can consider formally deprecating it together with utcfromtimestamp. [1] I never said there is no way to result in an ISO 8601 string with UTC stated explicitly. I showed a case where I think it is correct to assume UTC is stated ( e.g. utcnow()), however the result of isoformat() doesn't do so. What is your specific proposal? As I explained, we cannot assume that naive timezone instances are in UTC because while sometimes they are (as in datetime.utcnow()), more often they are not (as in datetime.now()). So changing dt.isoformat() when dt is naive is wrong. Another alternative would be to return aware instances from utcnow(), but that would break a lot of code. s/timezone instances/datetime instances/ I got that - so marking utcnow() as deprecated seems like a good idea. But it's not just about utcnow() but also now(). now() also doesn't return any timezone stated by default - which I would still consider as a bug. However making now() require a TZ being specified would probably also break a lot of code. But then, using function called isoformat() - on whatever object, naive or not - I'd expect a timezone being specified - since that's what I finally think the ISO-standard actually says. I see that my initial proposal doesn't work out, but I'm still not happy with the current situation I'm however also not sure how to change properly without breaking existing code using those functions. Mirko, You may want to review #9527. I don't think we left any stone unturned in this area. Just to clarify my problem - then I'll just happily use datetime.now(tzutc()).isoformat() - There is datetime.now() which is supposed to be used (no utcnow() anymore) - datetime.now() might return a naive object, when no TZ is specified - *However* also the naive variant implements the class isoformat() which is described as "Return a string representing the date in ISO 8601 format" - ISO 8601 can and should be understood such as the TZ-designator is required (I think we agreed on that). - However isoformat() called on a naive object returns a string with no TZ designator I would at least suggesting adding a note for isoformat() about being called on naive datetime objects. There’s another minor bug here: UTC should append “Z”, not “+00:00”, which other timezones at that offset can do. Agreed about no timezone being “floating” time in many instances, e.g. the iCalendar format uses that. Why do you call it a bug? Specifying UTC as +00:00 is perfectly valid by ISO 8601 and some RFCs that are based on the ISO standard recommend against using the Z code. > ISO 8601 can and should be understood such as the TZ-designator is required (I think we agreed on that). No. There is no such requirement in ISO 8601 as far as I remember. Hm, RFCs are just RFCs and not standards, they can recommend whatever they want, and they can (and do) contradict each other. I’ve seen things (mostly related to eMail and PIM synchronisation) that require ‘Z’ for UTC proper. Additionally, +00:00 can be UTC, but it can also be British Winter Time, or DST of UTC-1. ‘Z’ is clear. The proper response to that comment probably is: It's called ISO8601 and not RFC8601. And unfortunately ISO stands for "International Standard". 
I'm a British citizen and I've never once heard the term "British Winter Time", so where does it come from? mirabilos was referring to Alexander's reference to RFCs that advise against using 'Z'. RFC are standards once they become formally accepted as such, and often they become de-facto standards before formal acceptance. Given that the method is supposedly conforming to a specific standard, it ought to do so...but in addition to the ISO standard there are other de-jure and de-facto standards and deviations to contend with. Concrete examples are required for decision, I think, if the base standard is ambiguous. It may be that a new method or a flag controlling the behavior needs to be introduced in order to satisfy specific wide-spread use cases, but those use cases need to be enough motivation to support such an enhancement. By my reading, so far there have been no such concrete wide spread use cases brought forward to motivate any change other than deprecating utcnow. ('now' must return naive datetimes to preserve backward compatibility. If you don't want to use naive datetimes, make sure you don't...the datetime module was originally directly supported only naive datetimes (timezone is recent), so some care is needed.) > RFCs are just RFCs and not standards RFCs have a standards track which includes steps such as "Proposed Standard", "Draft Standard", and "Internet Standard". Once they become Internet Standards, they get an additional designation as STD. For example, RFC 822 (which is relevant here) is an Internet Standard and also known as STD 11. RFC 3339 ("Date and Time on the Internet: Timestamps") is a Proposed Standard, but widely used and implemented. The premise of this issue is factually incorrect: > ISO 8601, which states that, if no timezone is specified, > the string is supposed to be interpreted as UTC implicitly. The opposite is true: "If no UTC relation information is given with a time representation, the time is assumed to be in local time." <> To get timezone specification included in isoformat() output - use aware datetime objects: >>> from datetime import * >>> datetime.now(timezone.utc).isoformat() '2015-01-27T18:27:33.216857+00:00'
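The thread keeps circling back to the same two recommendations: work with aware datetime objects, and decide explicitly between the "+00:00" offset and the "Z" suffix. A minimal sketch that combines them; the helper name and the zulu flag are my own, not part of the stdlib or the thread.

from datetime import datetime, timezone

def utc_iso(dt=None, zulu=False):
    """Return an ISO 8601 string with an explicit UTC designator."""
    # Work with aware datetimes only; naive values are refused, not guessed at.
    if dt is None:
        dt = datetime.now(timezone.utc)
    if dt.tzinfo is None:
        raise ValueError("naive datetime: attach a tzinfo before formatting")
    s = dt.astimezone(timezone.utc).isoformat()
    # Optionally use the compact 'Z' designator instead of '+00:00'.
    return s.replace("+00:00", "Z") if zulu else s

print(utc_iso())            # e.g. 2015-01-27T18:27:33.216857+00:00
print(utc_iso(zulu=True))   # e.g. 2015-01-27T18:27:33.216857Z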
https://bugs.python.org/issue23332
CC-MAIN-2021-17
refinedweb
1,567
64.51
How to containerize a python flask application? July 2, 2018

Containerization is one of the fastest growing and most powerful technologies in the software industry. With this technology, users can build, ship and deploy applications (standalone and distributed) seamlessly. Here are the simple steps to containerize a python flask application.

Step 1: Develop your flask application.

Here, for demonstration, I am using a very simple flask application. You can use yours and proceed with the remaining steps. If you are new to this technology, I would recommend you to start with this simple program. As usual with all the tutorials, here also I am using a "Hello World" program. Since we are discussing Docker, we can call it "Hello Docker". I will demonstrate the containerization of an advanced application in my next post.

import json
from flask import Flask

app = Flask(__name__)

@app.route("/requestme", methods = ["GET"])
def hello():
    response = {"message":"Hello Docker.!!"}
    return json.dumps(response)

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=9090, debug=True)

Step 2: Ensure the project is properly packaged and the dependencies are mentioned in the requirements.txt.

A properly packaged project is easy to manage. All the dependent packages are required in the code execution environment. The dependencies will be installed based on the requirements.txt, so prepare the dependency list properly and add it to the requirements.txt file. Since our program is a simple one-module application, there is not much to package. Here I am keeping the python file (myflaskapp.py) and the requirements.txt in a folder named myproject (not using any package structure).

Step 3: Create the Dockerfile. The file should be named "Dockerfile".

Here I have used the python 2 base image. If you use python:3, then python 3 will be the base image. So, based on your requirement, you can select the base image.

FROM python:2
ADD myproject /myproject
WORKDIR /myproject
RUN pip install -r requirements.txt
CMD [ "python", "./myflaskapp.py" ]

Ensure you create the Dockerfile without any extension. Docker may not recognize the file with a .txt extension.

Step 4: Build an image using the Dockerfile.

Ensure we keep the python project and the Dockerfile in the proper locations. Run the following command from the location where the Dockerfile is kept. The syntax of the command is given below

docker build -t [imagename]:[tag] [location]

The full command is given below. Here I am executing the build command from the same location as that of the Dockerfile and the project, so I am using 'dot' as the location. If the Dockerfile is located in a different location, you can specify it using the option -f or --file.

docker build -t myflaskapp:latest .

Step 5: Run a container from the image

docker run -d -p 9090:9090 --name myfirstapp myflaskapp:latest

Step 6: Verify the application

List the running containers

docker ps | grep myfirstapp

Now your application is containerized.

Step 7: Save the docker image locally.

The following command will save the docker image as a tar file. You can take this file to any other environment and use it.

docker save myflaskapp > myflaskapp.tar

Save the docker image to Dockerhub also. In this way you can ship and run your application anywhere.
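Once the container from Step 5 is running, it is worth poking the endpoint from the host to confirm the port mapping works. A small stdlib-only check; the URL assumes the 9090:9090 mapping used above, and the script itself is mine, not part of the original post.

import json
from urllib.request import urlopen

# The container from Step 5 publishes the Flask app on host port 9090.
with urlopen("http://localhost:9090/requestme", timeout=5) as resp:
    payload = json.loads(resp.read().decode("utf-8"))

print(payload["message"])  # expected: Hello Docker.!!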
https://amalgjose.com/tag/docker-container/
CC-MAIN-2019-35
refinedweb
545
67.96
Tree hash-storage files
Project description
Overview
Tree is a library for storing many files by their hash. To store a file, you only need to pass its binary content and Tree will keep it.
Example
Basic use of the tree hash storage:

from tree_storage import TreeStorage

tree = TreeStorage(path="/path/to/storage")

# To add a file to the Tree Storage
with open("/path/to/file", "rb") as file:
    tree.breed(file_byte=file.read(), mode='wb')
# After adding a file, the method returns the status of the write.
# If the write succeeds, the tree saves the hash of the last
# stored file in the attribute file_hash_name.

# To remove a file from the Tree Storage, call the cut method
# and pass it the hash name of the file you want to delete.
tree.cut(file_hash_name=tree.file_hash_name, greedy=True)

Installing
Download and install the latest released version from PyPI:
pip install tree-storage
Download and install the development version from GitHub:
pip install git+
Installing from source (installs the version in the current working directory):
python setup.py install
(In all cases, add --user to the install command to install in the current user's home directory.)
Install and update the Tree library using pip:
Documentation
Read the full documentation on.
License
This repository is distributed under the MIT license
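A small round-trip helper built only from the calls shown in the example above. The storage path, the error handling, and the assumption that breed() returns a truthy status on success are mine; the package description does not spell those details out.

from tree_storage import TreeStorage

def store_file(storage_path, file_path):
    """Store a file in the tree and return the hash it was filed under."""
    tree = TreeStorage(path=storage_path)
    with open(file_path, "rb") as handle:
        status = tree.breed(file_byte=handle.read(), mode="wb")
    if not status:
        raise IOError("tree storage rejected %s" % file_path)
    # On success the tree keeps the hash of the last stored file here.
    return tree.file_hash_name

# Usage sketch: store a file, then remove it again by its hash.
# file_hash = store_file("/path/to/storage", "/path/to/file")
# TreeStorage(path="/path/to/storage").cut(file_hash_name=file_hash, greedy=True)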
https://pypi.org/project/tree-storage/
CC-MAIN-2022-27
refinedweb
238
58.01
isotoma.recipe.gocaptain 0.0.9 Starting and stopping daemons The GoCaptain [1] buildout recipe produces a script to start and stop daemons, similar to those you find in /etc/init.d. By default it will inspect your system and either write a "simple" script, such as you might produce yourself or produce a LinuxStandard Base variation, that provides more tooling. In particular the LSB scripts will try multiple times to shut down your daemon, and will not start it if it is already running. This package also provides a simple way to produce these scripts from other buildout recipes - see isotoma.recipe.varnish for an example. The buildout recipe A simple example would be: [example] recipe = isotoma.recipe.gocaptain daemon = /usr/bin/example name = example description = example daemon for that thing i did that time pidfile = /var/tmp/example.pid args = -P ${example:pidfile} -w /var/tmp/example.log This will produce a script in bin/example that launches your daemon, and shuts it down again later, using the PID in the pidfile. Options The mandatory options this recipe accepts are: - daemon - The path to the daemon executable file - name - The name of the daemon, displayed in log messages - description - A longer description, shown on the console during start and stop - pidfile - A path to a file to store the PID of the new daemon in - args - The arguments for the daemon. These will be formatted in the output script as you provide them, with continuations provided as needed In addition you can provide: - template - A path to the template for your start/stop script. This will be used in preference to the templates provided with this package. Calling from other code If you wish to use this from one of your own recipes, I suggest you do something like: from isotoma.recipe import gocaptain gc = gocaptain.Automatic() f = open("/path/to/script", "w") gc.write(f, daemon="/usr/sbin/thing", args="-D -P /path/to/pid", name="my thing", description="thing") f.close() os.chmod(target, 0755) The Automatic module will select the Simple or LinuxStandardBase variants, by inspecting your system (very simplistic: Doug Winter - Keywords: buildout - License: Apache Software License - Categories - Package Index Owner: winjer, alex2 - Package Index Maintainer: johncarr, alex2 - DOAP record: isotoma.recipe.gocaptain-0.0.9.xml
http://pypi.python.org/pypi/isotoma.recipe.gocaptain/0.0.9
crawl-003
refinedweb
383
51.28
Plot benchmark results with matplotlib Vincent Bernat In the past week, I ran a lot of benchmarks using a Spirent Avalanche which is an appliance providing performance testing of network related products (a load balancer, a router, a web server, …). The reporting module does not provide a lot of flexibility and the plots are not the most beautiful ones. Fortunately, the results are also exported as CSV. matplotlib is a python plotting library which produces great figures without making things hard when they should be easy. It is a good replacement for GNU Plot and you don’t need a lot of Python knowledge to use it. The documentation includes a fine user’s guide that should get you started in less than 20 minutes. Quick introduction§ You can use matplotlib from IPython to experiment: $ ipython -pylab Python 2.6.7 (r267:88850, Jul 10 2011, 08:11:54) Type "copyright", "credits" or "license" for more information. Welcome to pylab, a matplotlib-based Python environment. For more information, type 'help(pylab)'. > In [1]: plot([1,1,4,5,10,11]) When you are ready, you can build a simple Python script: #!/usr/bin/env python from matplotlib.pylab import * plot([1,1,4,5,10,11]) savefig("my-plot.pdf") Grabbing results from CSV§ Data are contained into a file named realtime.csv. It contains the raw data as well as some general information about the benchmark (test name, description, paramaters…). We need to skip those. Fortunately, csv2rec() allows us to load data from a CSV file into a record array and skip the first rows if necessary. from matplotlib.pylab import * import sys, os import gzip skip = 0 for line in gzip.open(sys.argv[1]): if line.startswith("Seconds Elapsed,"): break skip = skip + 1 ava = csv2rec(gzip.open(sys.argv[1]),skiprows=skip) Now, the “Elapsed seconds” column can be accessed with ava['elapsed_seconds']. General structure§ I need 4 plots: - successful, unsuccesful and attempted transactions per second, - minimum, average and maximum response time per page, - Avalanche CPU usage, - incoming and outgoing bandwidth. The most important plot is the first one. The second one is less important and the last ones are here only to check we did not hit some bottleneck during the benchmark. We want to produce a page like this: matplotlib allows us to plot subfigures. We create 4 subfigures sharing the same X axis (which the number of seconds elapsed since the beginning of the benchmark). We save the result to PDF. # Create the figure (A4 format) figure(num=None, figsize=(8.27, 11.69), dpi=100) ax1 = subplot2grid((4, 2), (0, 0), rowspan=2, colspan=2) # [...] ax2 = subplot2grid((4, 2), (2, 0), colspan=2, sharex=ax1) # [...] ax3 = subplot2grid((4, 2), (3, 0), sharex=ax1) # [...] ax4 = subplot2grid((4, 2), (3, 1), sharex=ax1) # [...] # Save to PDF savefig("%s.pdf" % TITLE) Bandwidth plot§ Let’s start with the easiest plot: bandwidth usage. # Plot 4: Bandwidth ax4 = subplot2grid((4, 2), (3, 1), sharex=ax1) plot(ava['seconds_elapsed'], ava['incoming_traffic_kbps']/1000., 'b-', label='Incoming traffic') plot(ava['seconds_elapsed'], -ava['outgoing_traffic_kbps']/1000., 'r-', label='Outgoing traffic') grid(True, which="both", linestyle="dotted") ylabel("Mbps", fontsize=7) xticks(fontsize=7) yticks(fontsize=7) Here, in the suplot positioned in (3,1), we plot the number of seconds elapsed versus the incoming traffic with a blue line ( b-). We also plot the seconds elapsed versus the outgoing traffic with a red line ( r-). 
Here is the result: Most functions of matplotlib are exposed as a method of the object they refer to and as a global function. In the latest case, the function is applied to the latest created figure or plotting area. For example, plot() is called a function and therefore refer to the plotting area ax4. We could have written ax4.plot() instead. CPU plot§ The average CPU utilization data available in the CSV file needs to be normalized. We assume the Avalanche to be mostly idle on start. The plotting part is pretty similar to our previous case. # CPU max = np.max(ava['average_cpu_utilization']) order = 10**np.floor(np.log10(max)) max = np.ceil(max/order)*order cpu = (max - ava['average_cpu_utilization'])*100/max # Plot 3: CPU ax3 = subplot2grid((4, 2), (3, 0), sharex=ax1) plot(ava['seconds_elapsed'], cpu, 'r-', label="Avalanche CPU") grid(True, which="both", linestyle="dotted") ylabel("Avalanche CPU%", fontsize=7) xticks(fontsize=7) yticks(fontsize=7) Response time§ We would like to plot response time. We have three corresponding metrics: minimum response time, average response time, maximum response time. Because those metrics start from 0 ms up to several seconds, a logarithmic scale is used: # Plot 2: response time ax2 = subplot2grid((4, 2), (2, 0), colspan=2, sharex=ax1) plot(ava['seconds_elapsed'], ava['minimum_response_time_per_page_msec'], 'b-', label="Minimum response time") plot(ava['seconds_elapsed'], ava['maximum_response_time_per_page_msec'], 'r-', label="Maximum response time") plot(ava['seconds_elapsed'], ava['average_response_time_per_page_msec'], 'g-', linewidth=2, label="Average response time") legend(loc='upper left', fancybox=True, shadow=True, prop=dict(size=8)) grid(True, which="major", linestyle="dotted") yscale("log") ylabel("Response time (msec)", fontsize=9) xticks(fontsize=9) yticks(fontsize=9) This is also the first graphic with a legend. Transactions per second§ The most important plot is the number of transactions per second. # Plot 1: TPS ax1 = subplot2grid((4, 2), (0, 0), rowspan=2, colspan=2) plot(ava['seconds_elapsed'], ava['desired_load_transactionssec'], '-', color='0.7', label="Desired Load") plot(ava['seconds_elapsed'], ava['successful_transactionssecond'], 'g:', label="Successful") plot(ava['seconds_elapsed'], smooth(ava['successful_transactionssecond']), 'g-', linewidth=2) plot(ava['seconds_elapsed'], ava['attempted_transactionssecond'], 'b-', label="Attempted") plot(ava['seconds_elapsed'], ava['aborted_transactionssecond'], 'k-', label="Aborted") plot(ava['seconds_elapsed'][:-1], ava['unsuccessful_transactionssecond'][:-1], 'r-', label="Unsuccessful") legend(loc='upper left', fancybox=True, shadow=True, prop=dict(size=10)) grid(True, which="both", linestyle="dotted") ylabel("Transactions/s") The number of successful transactions is plotted twice: when the benchmarked equipment becomes overloaded, we get a lot of noise in this metric and it can be difficult to read. Therefore, we plot a smoothed version with the help of NumPy: import numpy as np def smooth(x, win=4): s = np.r_[x[win-1:0:-1],x,x[-1:-win:-1]] w = np.ones(win, 'd') y = np.convolve(w/w.sum(),s,mode='valid') return y[(win-1)/2:-(win-1)/2] np.r_() is just here to extend our data by the size of the window. np.ones() build a weight vector of the size of the window. If the window is 4, we get [0.25, 0.25, 0.25, 0.25]. We use this vector to apply a convolution to the original data. Here is the result: The original data is a green dotted line while the smoothed one is a green thick line. 
What about the three annotations? matplotlib allows us to put annotations on a figure. Here is how this is done: # Noticeable points count = 0 def highlight(index, reason): global count if index and index > 0: x,y = (ava['seconds_elapsed'][index], smooth(ava['successful_transactionssecond'])[index]) plot([x], [y], 'ko') annotate('%d TPS\n(%s)' % (y,reason), xy=(x,y), xytext=(20, -(count+4.7)*22), textcoords='axes points', arrowprops=dict(500ms") highlight(np.argmax(ava['average_response_time_per_page_msec'] > 100), ">100ms") np.argmax() returns the index of the first maximum value. The trick here is that when I write ava['average_response_time_per_page_msec'] > 100, I get an array with 1 when the value is more than 100 and 0 otherwise. Therefore, np.argmax() will return the first index where the value is superior to 100 ms. The highlight() function will add a point ( plot([x], [y], 'ko')) on the smoothed successful transactions par second plot and add an annotation with some fancy arrow. Look at this benchmark of nginx as SSL termination for a complete output of this script. The comments on this website are powered by Disqus and require the use of Javascript. Please enable Javascript or try with a different browser. You can leave a comment or read others' ones. The comment system is powered by Disqus and its use relies on cookies.
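The arrowprops argument and the first highlight() calls were garbled somewhere between the original post and this copy, so here is a self-contained sketch of the same annotation pattern. The fake data, the offset values and the arrow styling are my own choices, not a reconstruction of the author's exact code.

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(200)
tps = np.minimum(x * 5, 600) + np.random.normal(0, 10, x.size)  # fake benchmark curve

fig, ax = plt.subplots()
ax.plot(x, tps, "g-", linewidth=2, label="Successful")

# Mark and annotate the first sample where the curve crosses 500 TPS.
idx = int(np.argmax(tps > 500))
ax.plot([x[idx]], [tps[idx]], "ko")
ax.annotate("%d TPS\n(>500)" % tps[idx],
            xy=(x[idx], tps[idx]),
            xytext=(20, -60), textcoords="offset points",
            arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.2"))

ax.legend(loc="upper left")
plt.savefig("annotated.png")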
https://vincent.bernat.im/en/blog/2011-plot-benchmark-results-matplotlib
CC-MAIN-2017-09
refinedweb
1,338
57.16
In Java, you can have just one public static void main(String[] args) per class. This means that if your program has multiple classes, each class can have its own public static void main(String[] args). See the JLS for details. A class can also define multiple methods with the name main. Yes. When starting the application we mention the class name to be run. The JVM will look for the main method only in the class whose name you have mentioned. Hence there is no conflict amongst the multiple classes having a main method. Yes, I can do this. A Java program can have more than one main() method, as Java also supports overloading of the main method.

public class Test2 {
    public static void main (String[] args){
        System.out.println("I rule!");
        main(5);
    }
    public static void main(int i){
        System.out.print("Hi " + i);
    }
}
https://www.queryhome.com/tech/145777/can-we-use-multiple-main-methods-in-java
CC-MAIN-2021-04
refinedweb
143
77.23
What’s new in Windows Forms in .NET 6.0 Igor We continue to support and innovate in Windows Forms runtime. Let’s recap what we’ve done in .NET 6.0. Accessibility improvements and fixes Making Windows Forms applications more accessible to more users is one of the big goals for the team. Building on the momentum we gained in .NET 5.0 timeframe in this release we delivered further improvements, including but not limited to the following: - Improved support for assistive technology when using Windows Forms apps. UIA providers enable tools like Narrator and others to interact with the elements of an application. UIA is also often used to create test automation to drive apps. We have now added UIA providers support for the following controls: CheckedListBox LinkLabel Panel ScrollBar TabControl TrackBar - Improved Narrator announcements in DataGridView, ErrorProviderand ListViewcolumn header controls. - Keyboard tooltips for the TabControl’s TabPageand the TreeView’s TreeNodecontrols. - ScrollItem Control Pattern support for ComboBoxItemAccessibleObject. - Corrected control types for better support of Text Control Patterns. - ExpandCollapse Control Pattern support for the DateTimePickercontrol. - Invoke Control Pattern support for the UpDownButtons component in DomainUpDownand NumericUpDowncontrols. - Improved color contrast in the following controls: CheckedListBox DataGridView Label PropertyGridView ToolStripButton Application bootstrap In .NET Core 3.0 we started to modernize and rejuvenate Windows Forms. As part of that initiative we changed the default font to Segoe UI, 9f (dotnet/winforms#656), and quickly learned that a great number of things depended on this default font metrics. For example, the designer was no longer a true WYSIWYG, as Visual Studio process is run under .NET Framework 4.7.2 and uses the old default font (Microsoft Sans Serif, 8.25f), and .NET application at runtime uses the new font. This change also made it harder for some customers to migrate their large applications with pixel-perfect layouts. Whilst we had provided migration strategies, applying those across hundreds of forms and controls could be a significant undertaking. To make it easier to migrate those pixel-perfect apps we introduced a new API (for more details refer to the Application-wide default font post): void Application.SetDefaultFont(Font font) However, this API wasn’t sufficient to address the designer’s ability to render forms and controls with the same new font. At the same time, with our sister teams heavily pushing for little code/low ceremony application templates, our Program.cs and its Main() method started looking very dated, and we decided to follow the general .NET trend and trim the boilerplate. Please welcome the new Windows Forms application bootstrap: class Program { [STAThread] static void Main() { ApplicationConfiguration.Initialize(); Application.Run(new Form1()); } } (C#, .NET 6.0 and above) as it would look at runtime: (We know, the form in the designer still has that Windows 7 look, We’re working on it…) Please note that Visual Basic handles these application-wide default values differently. In .NET 6.0 Visual Basic introduces a new application event ApplyApplicationDefaults which allows you to define application-wide settings (e.g., HighDpiMode or the default font) in the typical Visual Basic way. The designer support for the default font configured via MSBuild properties is also coming in the near future. 
For more details head over to the dedicated Visual Basic blog post discussing what’s new in Visual Basic. Template updates As mentioned above we have updated our C# templates in line with related changes in .NET workloads, Windows Forms templates for C# have been updated to support global using directives, file-scoped namespaces, and nullable reference types. Because a typical Windows Forms app requires a STAThread attribute and consist of multiple types split across multiple files (e.g., Form1.cs and Form1.Designer.cs) the top-level statements are notably absent from the Windows Forms templates. However, the updated templates do include the application bootstrap code. More runtime designers We have completed porting missing designers and designer-related infrastructure that enable building a general-purpose designer (e.g., a report designer). For more details refer to our earlier announcement. If you think we missed a designer that your application depends on, please let us know at our GitHub repository. High DPI and scaling fixes We’ve been working through the high DPI space with the aim to get Windows Forms applications to correctly support PerMonitorV2 mode out of the box. It is a challenging undertaking, and sadly we couldn’t achieve as much as we’d hoped. Still in this release we made some progress, and we now can: - Create controls in the same DPI awarenes as the application - Correctly scale ContainerControls and MDI child windows in PerMonitorV2 mode in most scenarios. There are still few specific scenarios (e.g., anchoring) and controls (e.g., MonthCalendar) where the experience remains subpar. Other notable changes - New overloads for Control.Invoke()and Control.BeginInvoke()methods that take Actionand Func<T>and allow writing more modern and concise code. - New Control.IsAncestorSiteInDesignModeAPI is complimentary to Component.DesignMode, and indicates if one of the ancestors of this control is sited, and that site in design mode. A dedicated blog post exploring this API is coming later, so stay tuned. - Windows 11 style default tooltip behavior makes the tooltip remain open when mouse hovers over it, and not disappear automatically. The tooltip can be dismissed by CONTROL or ESCAPE keys. Community contributions We’d like to call out a few community contributions: - @paul1956 updated NotifyIcon.Textlimits text to 127 (dotnet/winforms#4363). - @weltkante enhanced FolderBrowserDialogwith InitialDirectoryand ClientGuidproperties in dotnet/winforms#4645. - @weltkante added link span to LinkClickedEventArgs(dotnet/winforms#4708) making it easier to migrate RichTextBoxfunctionality targeting RichEdit v3.0 or below that relied on hidden text to render hyperlinks. - @AraHaan updated the good old MessageBoxwith two new buttons Try Againand Continue, and made it possible to show four buttons at the same time (dotnet/winforms#4746): - @kant2002 was helping us making Windows Forms runtime more ILLink/NativeAOT-friendlier by adding ComWrappers and removing redundant RCWs. (dotnet/winforms#5174 and dotnet/winforms#4971). - @kirsan31 provided the ability to anchor minimized MDI children to TopLeft to match Windows MFC behavior in dotnet/winforms#5221.! Bravo! “We know, the form in the designer still has that Windows XP look, We’re working on it…” – did you mean Windows 7? cause Windows XP never had that look… Also, please add native dark mode to WinForms applications! Thank you, fixed. It’s been awhile since I used either. 
The team is acutely aware of the desire for the theming of Win32 apps, unfortunately it has dependencies on Windows API, and there are challenges resolving this. We’re actively engaged with our partners in Windows but don’t have an ETA at this time. To elaborate, I believe the main challenge is that Windows still does not have a documented way to check whether the system is in light or dark mode. Please correct me if I’m wrong about that. Yes, this is one of the issues. Congrats to everyone on the WinForms team for a great release! The per monitor high DPI work sounds especially challenging, and it’s totally understandable that it’ll take time to cover all the nuances. Is there a roadmap ahead for .NET 7? The top feature for me would be to re-enable support for the Data Sources window. Being able to drag and drop classes from that window, and have the controls automatically created and data bound is a huge time saver. Thank you. Yes, there is a roadmap, and we plan to make it public in the near future. In the nutshell making high DPI work out of the box is very high on our priority list. You can keep an eye on the .NET 7.0 milestone (which is a “catch all” at the moment, but we keep reviewing it). I totally echo what Igor says here, and I’d also encourage you to keep an eye out for a similar post talking about WinForms Designer in VS2022. We’ve added some features around data that I think you might like 😁. Our first VS related blogpost should be published sometime next month. I don’t know, it would have been nice to have the high DPI fixes in the long term release. It was clear at the start of the .NET6 development cycle that HDPI support in winforms is a desaster. Once you have monitors with different DPIs each, the default framework windows/forms/controls start to look and behave like total junk. it WPF dead ? there no news from .net 5/6 side The lack of .NET 6 specific news for WPF does not mean it is dead; alot of the hard work that went into porting WPF to .NET Core happened back during the .NET Core 3.* release. When Microsoft ported WPF+WinForms to .NET Core, they probably had to leave a lot out for WinForms initially, and they’re now finally getting around to it this release. Microsoft did recently open source the unit testing suite for WPF: alot of the hard work that went into porting WPF to .NET Core Done on .Net core 3. There was no work on this on .Net core 5 or .Net 6. You can look the commit on github. There is no commit on this. Microsoft did recently open source the unit testing suite for WPF Officially that on what they work on for the last 3 years (Again you can read all the commit done the last three years. It may seems like a big job but there is less thant one real commit a month so it will be quick). No work on anythinks else. Except in visual studio with the new designer and some improvement on intellisense for xaml. Yes it is. Look at the new roadmap for 2022. There is no new stuff coming, no improvements. And meanwhile ALL community PRs which adds new code are left completely unreviewed. On some of them the authors have just closed the PR because after more than half a year there was no activity from the “WPF team”. As I inferred from some talks, there is a shared team for xaml UI. There are plenty of work done for WinUI this year, and many things are still on backlog. I’m afraid there’s no enough people to work on WPF. It would be nice if they would fix bugs first, before adding new stuff. 
Visual Inheritance is still a nightmare in the designer, please finally fix that. good news Special thanks to Paul M Cohen @ paul 1956 Is it possible that WinForm supports Linux? There are no official plans to support other OS than Windows. Windows 11 supports Use AvalonaUI, it’s very good. Form Designer in release VS 2022 works much better, it was way too slow I checked it last time some months ago. Thank you. Is there any special reason why Toolbox shows Framework custom controls disabled for Net 6.0 project? My Net 6.0 app uses some my controls from Net Framework 4.5 dll. When added to the form by the code, the controls seem work just fine, both in design- and run-time. The only problem is that Toolbox shows them grayed out. I see no clear reason for that: once I can manage the controls on the form (copy/paste them, change properties, delete) why can’t they be dragged from Toolbox? Thank you for the update/blog, but still no mention of DirectX 11 or 12 or (13 for 2022) native support in .NET 6. I still don’t understand why Microsoft are avoiding DirectX in .NET? From a competitive standpoint, we have Vulkan for Java via LWJGL – Khronos. MDX, XNA, SharpDX, SlimDX are all dead in the water DirectX still seems to be exclusive to C++ (not C++.NET) … I don’t see how keeping DirectX restricted to a very a narrow development base as being good for Microsoft? Cheers, Rob. You can take a look at TerraFX. , That library also has Vulkan if you want it.
https://devblogs.microsoft.com/dotnet/whats-new-in-windows-forms-in-net-6-0/
CC-MAIN-2021-49
refinedweb
2,014
65.83
Type: Posts; User: ritika_sharma82 @loves oi, Thanks for your reply. I have a list of time stamps from 1.000000 second to 10.999999 second. The gap between two time stamp is random. Now I want to add 0.08 second in every 0.02... I have a list of time stamp. like 1.00001, 1.10004, 1.100345 .............. 10.99999 I want to add 0.08 second in every 0.02 second interval. How to write code for this? Pls. help. Thank you very much for the replies. I am learning array in C++. This code compiles but when I run the program, it gives some error. Is there anything wrong in this code? #include <iostream> using namespace std; int main() {... I am learning C++. I have MS Visual C++ 6.0 installed in my MS Windows XP desktop computer. When I try to compile, it says "Compiling......" but, never ends. Similarly, when I click to build after... Hello JohnW, These are the local variables. Could you please give me an example? How to use heap? My array is exactly A[100000][3]. So, it comes (100000 * 3 * 8)/(1024*1024) = 2.28MB. Hi GCDEF, How can I solve it. Any quick and easy solution will be appreciated. I am using MS Visual c++ 6.0 Sorry to ask simple question, but, even after googling I couldn't solve my problem. Hope you guys can help me. const int i=100000; const int j=32; __int64 A[i][j]; ... It compiles but... Hi alanjhd08, Thank you for your help. I did the same as you and hypheni said. I declared "const int someData = 6" in header1.h I declared "extern const int someData in main.cpp The error is... Hi hypheni, Thanks for your quick reply. Do I need to declare "external const int" instead of const int in header1.h? Do "external const int" and "const int" same for header1.h? I have following four files: 1. A hash - header1.h 2. A main header file - header2.h 3. A functions file - functions.cpp and 4. A main file - main.cpp. header1.h and header2.h are included... Hi Lindley and jnmacd, My apology for the mistake in my post (and reply as well). Thank you for the reply. I tried to get the result like this: (w, i) 0, 0 1, 0 2, 0 .. .. 16,0... Hi Lindley, Thank you for reply. I used pointer to reset the value to zero. I want to print like : 0, 0 1, 0 2, 0 .. .. 16,0 I used the pointer to reset the w's value to zero. Hi all, I am VERY NEW in C++ and programming. I just wanna print w and i, such that when w is equal to 16, i increases by 1 and w restarts from 0 until w reaches 1024. But the result doesn't come as...
http://forums.codeguru.com/search.php?s=3693c7c6f62bde52b2ff370c06dbe290&searchid=5796825
CC-MAIN-2014-52
refinedweb
484
89.24
When we look at a simple XML document like <department> <employee> <firstname>Henrik</firstname> <lastname>Loeser</lastname> </employee> </department> and an XPath expression like "/department/employee/lastname", then - first of all - we see a lot of strings. The strings are of different length and the tags (the markup) make up most of the document. And we haven't even introduced namespaces here. How do you efficiently store such documents? XML can be very verbose compared to relational data. How do you quickly as possible navigate within such documents, i.e., compare the different steps of your XPath expression to the different levels of the XML document? The key to compactness and speed is the use of stringIDs. For DB2 pureXML every element name, attribute name, namespace URI, and namespace prefix is substituted by a 32 bit integer value when an XML document gets parsed. Each string is mapped to a unique number, a so-called stringID. In the above example, all "department" could be replaced by 1, all "employee" by 2, etc. When the DB2 engine compiles a query and generates an executable package it also uses the stringIDs. This way, when at runtime the XPath expression is evaluated on the data, only integer values need to be compared. First we need to match the root element, i.e., look for the element name with value 1 ("department"). If we found one, the child needs to be a 2 ("employee"). Comparing integer values is of course much faster than comparing strings of variable length. How fast the XQuery execution is in DB2 pureXML can be seen when you look at the TPoX benchmark results or by reading some of the customer success stories collected at the pureXML wiki.
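The substitution idea is easy to model outside the database engine. Below is a toy illustration in Python of interning tag names to integers and matching a path with integer comparisons only — it says nothing about DB2's actual storage format, and all names and values are invented for the example.

string_ids = {}

def intern_tag(name):
    # Assign each distinct element name a small integer, exactly once.
    return string_ids.setdefault(name, len(string_ids) + 1)

# "Parsed" document: every element stored as (stringID, children).
doc = (intern_tag("department"), [
    (intern_tag("employee"), [
        (intern_tag("firstname"), []),
        (intern_tag("lastname"), []),
    ]),
])

def matches(node, steps):
    """Walk an XPath-like list of steps using integer comparisons only."""
    node_id, children = node
    if node_id != steps[0]:
        return False
    if len(steps) == 1:
        return True
    return any(matches(child, steps[1:]) for child in children)

path = [intern_tag(s) for s in ("department", "employee", "lastname")]
print(matches(doc, path))  # True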
http://blog.4loeser.net/2009/04/stringids-in-db2-purexml-what-and-why.html
CC-MAIN-2018-13
refinedweb
288
64.3
VeSPA- General Project FAQ Page Questions and answers specific to the VeSPA project and sometimes a bit more ... Index Where do I find Application specific FAQs? Setup says I haven't got the dependencies installed (scipy, numpy, etc.), but I do! Install Problem 1 - Issue with widgets or wxPython How to Run Applications from Command Line in Linux Questions and Answers How do I download Vespa? You can download Vespa from the Downloads page. How do I install Vespa? Complete instructions for installing Vespa are on the Installation? page. Where do I find Application specific FAQs? Each application has its own wiki that contains a extensive information and links. Some are still under construction but will be included when they are released in beta. FAQs for each can be found here: Simulation FAQs, RFPulse FAQs, Analysis FAQs Setup says I haven't got the dependencies installed (scipy, numpy, etc.), but I do! [This was in response to a Windows user question] - One reason for this is that you may have more than one Python version installed, and when you install a Python package for one of our dependencies, it is installed to only one Python version at a time. In other words, if you have Python 2.5, 2.6 and 2.7 on your computer and you install matplotlib, it is installed for just one of those Pythons. It isn't installed "globally". If you want matplotlib to be available to all 3 versions of Python, you have to install it 3 times, once for each Python. Under Windows, when you install a Python package from an installer (an EXE that you double click), it can see that you have multiple Pythons and it guesses which Python under which it should install itself. It gives you the option to change this guess during the installation, but it's easy to overlook. So, for example, you may installed matplotlib and scipy to a different Python than the one you're trying to use with Vespa. Vespa uses the first Python it finds in your path. If you open a command prompt and type 'set path', it will show you the current path, and that will tell you which Python comes first. Another way to find out which Python is first in your path is with this command: python -c "import sys; print sys.executable" That should print something like "c:\python26\python.exe". You can test whether or not Python can import a package with a command like this: python -c "import matplotlib" If matplotlib is installed, that will return silently. If it's not installed, you'll get a loud & clear import error like this: python -c "import this_does_not_exist" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named this_does_not_exist Those commands should give you the tools you need to figure out what is installed where. However, if the Vespa installer says that matplotlib isn't installed, it definitely is not for that version of Python. The Vespa installer just tries the same test suggested above: import something and see whether or not it fails. Install Problem 1 - Issue with widgets or wxPython At this moment (March 2013), we recommend that you use wxpython version 2.8.x (more specifically 2.8.11.x or later). We develop Vespa using 2.8.12.1 under Windows, for example. There are changes under wxpython 2.9.x for which we have not tested/modified Vespa to account for. Part of this is time on our part, and part of it is that wx 2.9 is still sort of a moving target with the changes they make. So we've decided to stick with wx 2.8.x versions for the time being. 
Try uninstalling wxpython 2.9 and installing wxpython 2.8.12.1 (or so) to see if that fixes these problems. More information (useless or otherwise) - We would prefer to require wxPython 2.8.11.0 because that's the first version that includes a version of floatspin.py that contains the patches we submitted. However, as of December 2012, the most current version of EPD (7.3) is still on wxPython 2.8.10.1 which is not sufficiently recent. Until EPD provides a more recent wxPython, we can't require 2.8.11.0. Here is a link to some more general comments on the versions of dependency modules required for Vespa. Vespa Dependencies version information How to Run Applications from Command Line in Linux Vespa is installed in site-packages, or dist-packages under some Linux distros. There's a Vespa utility that can help you find them if you don't mind running a little Python. Enter 'python' at a bash prompt and then enter the following, hitting enter after each of the two lines -- import vespa.common.util.misc as util_misc util_misc.get_vespa_install_directory() That will print out something like this -- >/usr/local/lib/python2.6/dist-packages/vespa That's the main Vespa path. The individual apps are under that directory under the names simulation, analysis, priorset and rfpulse. To invoke each app, start [app_name]/src/main.py. For example, to start Simulation -- >python /usr/local/lib/python2.6/dist-packages/vespa/simulation/src/main.py If you're interested in learning more about this, there's a Vespa install utility called create_desktop_shortcuts.py that creates the Desktop shortcuts for Linux as well as OS X and Windows. You can see that online here:
http://scion.duhs.duke.edu/vespa/project/wiki/FAQ
CC-MAIN-2018-51
refinedweb
917
66.64
Hi guys! I am a newbie. I have downloaded Oracle 11g Release 2 but I don't know where to start and how to connect a C# program to the database. I want to write a simple username and password program in which the credentials are stored in the database, and which also checks the entered username and password against the database to see whether they are correct or not.

using System;

namespace ConsoleApplication5
{
    class Program
    {
        static void Main(string[] args)
        {
            string username;
            string password;

            Console.WriteLine("Enter username : ");
            username = Console.ReadLine();
            Console.WriteLine("Enter password :");
            password = Console.ReadLine();

            if (username == "abc" && password == "123")
                Console.WriteLine("\nWelcome " + username);
            else
                Console.WriteLine("\nEither username or password was incorrect !");
        }
    }
}

Please can anyone help me to create this program in C#? I am really curious about it. Thanks!
https://www.daniweb.com/programming/databases/threads/445354/where-to-start-in-oracle-database
CC-MAIN-2022-33
refinedweb
125
61.02
Opened 5 years ago Closed 4 years ago #1599 closed enhancement (fixed) IPython Notebook -- pySAL narsc 2015 Description Prof Serge Rey has expressed support for publishing pySAL on the Live 9.5 via an email exchange. There is a github.com repository with workshop materials [0]. After reading and testing on the alpha build of Live 9.5 with ipy 4 + notebook installed.. we just publish the Exploratory Data Analysis (esda) directory, and confirm with Prof Rey his support for that.. The following pages have minor bugs, but otherwise run well .. - 04_pysal_mapping for typ in types 'fisher_jenks' type fails - 05_spatial_weights 'data/nat.shp' does not exist - 08_sp_corr ImportError?: No module named ipywidgets ( we should unload the cell contents when saved, because some of the pages can be slow to load, and cause swap on a small-spec machine.. ) [0] Change history (5) comment:1 Changed 5 years ago by comment:2 Changed 5 years ago by error in notebook 07_global_spatial_autocorrelation, needs to have %pylab removed import matplotlib.pyplot as plt and related corrections to fix comment:3 Changed 5 years ago by fixed and committed changes in all 12 notebooks in esda/ only 06_taz_example LineCollection?() problems not resolved.. comment:4 Changed 4 years ago by Ticket retargeted after milestone closed Live 95a_b64, verified the above.. small changes:
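The fix described in comment 2 for notebook 07 is the usual %pylab-to-explicit-imports substitution. A sketch of what that change looks like in a cell, assuming the notebook only relied on %pylab for numpy and plotting names (the cell body here is a guess, not the repository's actual content):

# Before (what the notebook relied on): the %pylab magic implicitly
# pulled numpy and matplotlib names into the global namespace.
# %pylab inline

# After: the inline backend magic plus explicit imports.
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

plt.plot(np.arange(10))
plt.show()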
https://trac.osgeo.org/osgeolive/ticket/1599
CC-MAIN-2020-50
refinedweb
217
66.03
ATOM is the markup language we use for the manifest.XML file, so I don't know what you meant in your reference to it. So, you don't put anything at the "end of the tree" per se. You put content in this directory, which is indexed and presented. Some namespaces will have arbitrary content, others will be well structured depending on the compliance framework; just like in a regular audit scenario. I'll jump on my Mac later and send you a link to a working CloudAudit namespace so you can see what I mean. Hoff -- Sent from my mobile so please forgive any fat-fingering... On Jun 9, 2011, at 7:30, niall <niall....@gmail.com> wrote:
https://groups.google.com/g/cloudaudit/c/DytAvw0w30A
CC-MAIN-2022-27
refinedweb
128
75
> -----Original Message-----
> From: York Sun
> Sent: Wednesday, August 09, 2017 10:19 PM
> To: Priyanka Jain <priyanka.j...@nxp.com>; u-boot@lists.denx.de
> Cc: Ashish Kumar <ashish.ku...@nxp.com>
> Subject: Re: [PATCH] drivers:net:fsl-mc: Update MC address calculation
>
> On 06/23/2017 03:30 AM, Priyanka Jain wrote:
> > Update MC address calculation as per MC design requirement of address
> > as least significant 512MB address of MC private allocated memory.
> >
> > Signed-off-by: Priyanka Jain <priyanka.j...@nxp.com>
> > Signed-off-by: Ashish Kumar <ashish.ku...@nxp.com>
> > ---
> >  drivers/net/fsl-mc/mc.c | 7 ++++++-
> >  1 files changed, 6 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/net/fsl-mc/mc.c b/drivers/net/fsl-mc/mc.c
> > index eeecb2d..623586c 100644
> > --- a/drivers/net/fsl-mc/mc.c
> > +++ b/drivers/net/fsl-mc/mc.c
> > @@ -704,10 +704,15 @@ int get_dpl_apply_status(void)
> >
> >  /**
> >   * Return the MC address of private DRAM block.
> > + * MC address should be least significant 512MB address
> > + * of MC private memory
> >   */
> >  u64 mc_get_dram_addr(void)
> >  {
> > -	return gd->arch.resv_ram;
> > +	size_t mc_ram_size = mc_get_dram_block_size();
> > +
> > +	return (gd->arch.resv_ram + mc_ram_size - 1) &
> > +		MC_RAM_BASE_ADDR_ALIGNMENT_MASK;
> >  }
> >
> >  /**
> >
>
> Priyanka,
>
> This looks odd. You already have the address aligned by
> CONFIG_SYS_MC_RSV_MEM_ALIGN (512MB by default), tracked by
> gd->arch.resv_ram. Did you find the address is wrong sometimes?
>
> York

York,

As per MC design requirement, MC memory should be 512MB aligned, for which the start address is gd->arch.resv_ram. But the MC core's initial boot address should not be the start address. It must be located in the least significant 512MB of its address range. So this change basically shifts the address from the start of the memory towards the end of the memory (which is the least significant 512MB address).

Priyanka
_______________________________________________
U-Boot mailing list
U-Boot@lists.denx.de
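The rounding in the patch is easier to see with concrete numbers. A quick illustration of the arithmetic in Python — the base address, block size and mask value here are invented for the example and are not taken from the platform headers.

# Take the highest 512 MiB-aligned address that still lies inside the
# reserved block, i.e. the start of its last 512 MiB.
ALIGN = 512 * 1024 * 1024                 # 512 MiB
MASK = ~(ALIGN - 1)                       # analogue of MC_RAM_BASE_ADDR_ALIGNMENT_MASK

resv_ram = 0x8000000000                   # hypothetical start of the reserved block
mc_ram_size = 3 * ALIGN                   # hypothetical 1.5 GB MC block

mc_boot_addr = (resv_ram + mc_ram_size - 1) & MASK
print(hex(mc_boot_addr))                  # 0x8040000000, last 512 MiB of the block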
https://www.mail-archive.com/u-boot@lists.denx.de/msg260005.html
CC-MAIN-2018-05
refinedweb
292
51.44
What’s in these release notes? We are well into the Mark 1 production run now and have been putting out releases largely focused on that experience. We’ve also expanded the core capabilities in the process. Below are the notes for the last several releases. mycroft-core 0.8.18 Fixes - ISSUE #874: Handling $WORKON_HOME in start.sh. Thanks nealmcb! (PR #875) - Mycroft wasn’t recognizing the “wake up” phrase to come out of deep sleep mode entered by saying “Hey Mycroft, go to sleep”. (PR #888) - ISSUE #871: Automatic skill update was not occurring. Files automatically generated by Python were making the system think the skill was being worked on by a developer. Now the Mycroft Skill Manager (MSM) behavior properly updates skills that have not been modified, but stops updates if a developer intentionally makes changes to the code for a skill. (PR #873, #890, #894) Developer and API enhancements - Blacklisted skills are now read from a list in the config. The value is under skills.blacklisted_skills. Thanks JarbasAI! (PR #857) - Added support for “mycroft.sh restart” to mycroft.sh utility. (PR #868) - Documented the Message class. (PR #869 and #748) - Intents now can be defined via a “decorator”. This replaces the need to create an initialize() method purely for connecting the skill handlers. Here is an example:Decorator-style intent definition @intent_handler(IntentBuilder('HelloWorldIntent').require('hello').build()) def handle_hello_world(self, message): self.speak('hello!') Old-style intent definition def initialize(self): hello_intent = IntentBuilder("HelloWorldIntent").require("hello").build() self.register_intent(hello_world_intent, self.handle_hello_world) def handle_hello_world(self, message): self.speak('hello!') The old style is still supported and will remain available. But both methods are functionally identical and we believe the new method is simpler and clearer. (PR #865) - Modularization of Wake Word (similar to TTS and STT). This will allow alternatives to PocketSphinx for wake word spotting. (PR #876) - Text to Speech is now unified in an independent thread. This groups multiple-sentence replies and improves handling of for the “Stop” action and button on a Mark 1. This architectural change will also allow things like applying the audio caching mechanism for TTS engines besides the default Mimic, and viseme (mouth shape) support for other TTS providers. (PR #804, #892) - Removed unused / unneeded libraries from the requirements.txt (PR #806) - Added dev_setup.sh support for using packaged mimic (eliminating the lengthy compile time). Thanks el-tocino and Blackbaud-RyanSnedegar! (PR #882) - Errors encountered during skill update were failing with no indication of why in the logs. (PR #879) - Deprecated version mechanism was still being used in setup tools. (PR #829) mycroft-core 0.8.17 Fixes Mark 1 - Added support for menu item SSH > DISABLE and switch to using ‘systemctl’. (PR #744, #745) - Tweaks to wifi-setup disconnect detection (PR #861) General - Added a docker setup script (PR #775) - Improved multi-core support in dev_setup.sh script (PR #823) - Refinement of mycroft.sh operation (PR #835) - CLI client refinements to support backspace on some terminals and suppress TTS output that occasionally corrupted the screen. (PR #833) - Some JSON loads didn’t use load_commented_json(), causing issues if devs commented some config files. 
(PR #845) - Added local caching for remote Mycroft config (PR #828) - SkillSettings could miss updates to lists or dicts (PR #844) - Added remove_all_listeners() to EventEmitter on the websocket (PR #854) - MSM enhancements. Added support for Picroft installs, installing list of skills, etc. Thanks el-tocino! (PR #813) mycroft-core 0.8.16 june 13 Fixes - Issue #807: MSM could report successful install even after a failure. (PR #813) - Minor onboarding refinements: no timeout for wifi setup, preventing from talking over itself in certain situations. (PR #817) - Rework of has_been_paired() to deal with a changing identity file. (PR #825) mycroft-core 0.8.15 Mark 1 - Created an ‘out of box’ experience. This walks a new Mark 1 user through the process of setting up seamlessly and provides a simple introduction to using the device. (PR #808, #811) Enhancements - Added mycroft.util.parse.extract_number(). This will support parsing a number out of phrases like “a cup and a half of milk” or “one and three fourths cup”, extracting 1.5 and 1.75 respectively. Thanks ProsperousHeart! (PR #793) - Added mycroft.util.format.nice_number(). This converts numbers into natural spoken phrases, e.g. 3.5 into “three and a half.” Thanks ashwinjv! (PR #786) Fixes - Catching of KeyboardInterrupt by a signal handler was causing issues when shutting down services as a developer. Ctrl+C didn’t work properly anymore. (PR #798) - Scaled the duration of wake-up-word listening based on number of phonemes. This allows longer wake-up phrases to work properly. (PR #794) - Fixed broken link in CONTRIBUTING.md (PR #784) - Added check of exit status after calling the Mycroft Skills Manager (MSM) to verify skill install success. (PR #782) mycroft-core 0.8.14 - Changed the wifi-setup client password to ‘12345678’ to make easier to type, based on user testing feedback. (PR #780) mycroft-core 0.8.13 Fixes - Issue #746 : Google STT was broken by a typo (PR #750) - Issue #705: Unable to talk to the API over SSL on some Linux distros. Switching to pyOpenSSL version 16.2.0, resolved it. Thanks BoBeR182! (PR #751) Startup sequence - Prevented double-announcements when connecting to (PR #766) - Corrected announcement when SSH is enabled/disabled — reboot is not required anymore. (PR #765) - Reworked startup sequence of events and documented behavior. This corrected occasionally not prompting user to connect to the internet and unified the startup prompt and prompts needed if internet is lost later. (PR #764, 760) - Simplified the package manifest files by adding wildcards to handle resource files under ‘mycroft/res/’ (PR #763) Steve has been building cutting edge yet still highly usable technology for over 25 years, previously leading teams at Autodesk and the Rhythm Engineering. He now leads the development team at Mycroft as a partner and the CTO.
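The two parsing helpers added in 0.8.15 are easy to try on their own, assuming a mycroft-core checkout (0.8.15 or newer) is importable; the phrases below are mine, and the expected results are the ones quoted in the notes above.

from mycroft.util.parse import extract_number
from mycroft.util.format import nice_number

# extract_number pulls a numeric value out of a natural-language phrase ...
print(extract_number("one and three fourths cups of flour"))  # 1.75

# ... and nice_number goes the other way, turning a value into a spoken form.
print(nice_number(3.5))  # "three and a half"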
https://mycroft.ai/blog/core-release-notes-782017/
CC-MAIN-2021-04
refinedweb
987
58.48
I am supposed to write a function, based off of the code I have so far, that can generate the 5-letter sequences that are 1 letter different from a given input, and also identify the sequences that actually work. I'm not sure how to begin this, but here is my code so far (wordTest is my function; I just don't have anything in it):

#include <iostream>
#include <fstream>
#include <string>
#include <set>
using namespace std;

void wordTest(string);

int main()
{
    set<string> myset;
    set<string>::iterator iter;
    string word, search;
    ifstream infile;

    infile.open("words.txt");
    while (infile >> word)
    {
        myset.insert(word);
    }

    for (iter = myset.begin(); iter != myset.end(); iter++)
        cout << *iter << endl;

    cout << "Enter word to search for: ";
    cin >> search;

    iter = myset.find(search);
    if (iter == myset.end())
        cout << "That word is not in the set" << endl;
    else
        cout << *iter << " is in the set" << endl;

    wordTest(search);

    return 0;
}

void wordTest (string search)
{
}
http://cboard.cprogramming.com/cplusplus-programming/119315-word-ladders.html
CC-MAIN-2014-52
refinedweb
154
68.4
# C# Programmer, It's Time to Test Yourself and Find Error The PVS-Studio analyzer is regularly updated with new diagnostic rules. Curiously enough, diagnostics often detect suspicious code fragments before the end of the work. For example, such a situation may happen while testing on open-source projects. So, let's take a look at one of these interesting finding. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/7bc/79d/3fc/7bc79d3fcfbcb80537d039880f3009d9.png)As mentioned earlier, one of the stages of diagnostic rule testing is to check its operation on a real codebase. To that end, we have a set of selected open-source projects that we use for the analysis. The obvious advantage of this approach is the ability to see the diagnostic rule behavior in real conditions. There's also a less obvious advantage. Sometimes you may find such an interesting case, so it would be a sin not to write an article about it. :) Now, let's take a look at the code from the Bouncy Castle C# project and find the error in it: ``` public static string ToString(object[] a) { StringBuilder sb = new StringBuilder('['); if (a.Length > 0) { sb.Append(a[0]); for (int index = 1; index < a.Length; ++index) { sb.Append(", ").Append(a[index]); } } sb.Append(']'); return sb.ToString(); } ``` For those who like to cheat and peek, I added a picture to keep you guessing. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/76e/f87/2ab/76ef872ab21db182e53549db3bcfd96d.png)I'm sure some of you couldn't see the error without using the IDE or the *StringBuilder* class documentation. The error happened when calling the constructor: ``` StringBuilder sb = new StringBuilder('['); ``` Actually, this is exactly what the PVS-Studio static analyzer warns us about: [V3165](https://www.viva64.com/en/w/v3165/) Character literal '[' is passed as an argument of the 'Int32' type whereas similar overload with the string parameter exists. Perhaps, a string literal should be used instead. Arrays.cs 193. The programmer wanted to create an instance of the *StringBuilder* type, where the string begins with the '[' character. However, due to a typo, we'll have an object without any characters with a capacity of 91 elements. This happened because the programmer used single quotes instead of double-quotes. That's why the wrong constructor overload was called: ``` .... public StringBuilder(int capacity); public StringBuilder(string? value); .... ``` When the constructor is called, the '[' character literal will be implicitly cast to the corresponding value of the *int* type (91 in Unicode). Because of this, the constructor with the *int* type parameter setting the initial capacity will be called. Although, the programmer wanted to call the constructor which sets the string's beginning. To fix the error, the developer has to replace the character literal with a string literal (i.e., use "[" instead of '['). It will cause the correct constructor overload. We decided to go the extra mile and expanded the cases reviewed by diagnostic. As a result, in addition to character literals, some other expressions of the *char* type are considered now. We also check methods in the same way. The diagnostic described above was added in the PVS-Studio 7.11 release. You can [download the analyzer latest version](https://www.viva64.com/en/pvs-studio-download/) yourself. You'll see what the V3165 diagnostic can do, as well as other diagnostics for C, C++, C#, and Java. By the way, the users themselves often suggest some ideas of diagnostics to us. 
This time it has happened thanks to the user of [Krypt](https://habr.com/en/users/krypt/) from Habr. If you also have some ideas for diagnostic rules - [don't hesitate to reach out to us](https://www.viva64.com/en/about-feedback/)! **P.S.** This error has already been fixed in the current project code base. Though, this doesn't change the fact that it has existed in the code for some time, and that static analysis allows you to identify such problems and fix them at the earliest stages.
https://habr.com/ru/post/540348/
null
null
668
57.47
As a best practice, asserts should be enabled in dev/test environments. Asserts can be enabled by adding the following flags to MAVEN_OPTS. The ellipsis (…) at the end of each namespace (NS) means that anything under that namespace is included. If there is any other NS used in CloudStack, please add it to the list as well.

"-ea:org.apache.cloudstack... -ea:com.cloud..."

Use asserts wherever appropriate (e.g. private methods which assume certain things about their inputs). Also, if there are obsolete ones, please review and remove them.
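For example, on a developer machine this could look like the following before running the build (a sketch; the package prefixes are the ones listed above, and the Maven goal is just an example):

export MAVEN_OPTS="$MAVEN_OPTS -ea:org.apache.cloudstack... -ea:com.cloud..."
mvn clean test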
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Enabling+asserts+in+Cloudstack
CC-MAIN-2019-18
refinedweb
116
67.76
The LocusTraversal now supports passing walkers reads that have deletions spanning the current locus. This is useful in many situations where you want to calculate coverage, call variants and need to avoid calling variants where there are a lot of deletions, etc. Currently, the system by default will not pass you deletion-spanning reads. In order to see them, you need to overload the function:

/**
 * (conceptual static) method that states whether you want to see reads piling up at a locus
 * that contain a deletion at the locus.
 *
 * ref:   ATCTGA
 * read1: ATCTGA
 * read2: AT--GA
 *
 * Normally, the locus iterator only returns a list of read1 at this locus at position 3, but
 * if this function returns true, then the system will return (read1, read2) with offsets
 * of (3, -1). The -1 offset indicates a deletion in the read.
 *
 * @return false if you don't want to see deletions, or true if you do
 */
public boolean includeReadsWithDeletionAtLoci() { return true; }

in your walker. Now you will start seeing deletion-spanning reads in your walker. These reads are flagged with offsets of -1, so that you can:

for ( int i = 0; i < context.getReads().size(); i++ ) {
    SAMRecord read = context.getReads().get(i);
    int offset = context.getOffsets().get(i);
    if ( offset == -1 )
        nDeletionReads++;
    else
        nCleanReads++;
}

There are also two convenience functions in AlignmentContext to extract subsets of the reads with and without spanning deletions:

/**
 * Returns only the reads in ac that do not contain spanning deletions of this locus
 *
 * @param ac
 * @return
 */
public static AlignmentContext withoutSpanningDeletions( AlignmentContext ac );

/**
 * Returns only the reads in ac that do contain spanning deletions of this locus
 *
 * @param ac
 * @return
 */
public static AlignmentContext withSpanningDeletions( AlignmentContext ac );

It seems like the Dels value is simply not reported for Indels. True SNPs ideally will have dels = 0.0, so if I throw out the SNPs with any dels value > 0 after doing indel realignment I should be okay. Unfortunately I need to use UnifiedGenotyper as I am working with a polyploid genome.

@siantorno The HaplotypeCaller is now able to work with polyploid genomes just as well as diploids; you just need to specify the -ploidy argument.

Geraldine Van der Auwera, PhD
http://gatkforums.broadinstitute.org/gatk/discussion/1348/seeing-deletion-spanning-reads-in-locuswalkers
CC-MAIN-2016-22
refinedweb
377
60.55
Odoo Help

How to dump a database using the erppeek API and a Python script

I have created a script to connect to the Odoo database using erppeek. I want to dump that database to my local machine. The script is below:

import os
import time
import subprocess
import erppeek

SERVER = ''
client = erppeek.Client(server=SERVER)
print "connection success"

dump_dir = '/var/lib/postgresql/9.3/main'
db_username = ['admin']
db_names = ['ErpPeekDemoDatabase']

for db_name in db_names:
    try:
        dumper = " -U %s --password -Z 9 -f %s -F c %s "
        bkp_file = '%s_%s.sql' % (db_name, time.strftime('%Y%m%d_%H_%M_%S'))
        file_path = os.path.join(dump_dir, bkp_file)
        command = 'pg_dump' + dumper % (db_username, file_path, db_name)
        subprocess.call(command, shell=True)
        subprocess.call('gzip ' + file_path, shell=True)
        print "backup success"
    except:
        print "Couldn't backup database" % (db_name)

I got this error:

pg_dump: [archiver (db)] connection to database "ErpPeekDemoDatabase" failed: FATAL: Peer authentication failed for user "[admin]"
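The error shows pg_dump trying to authenticate as the literal user "[admin]": db_username is a Python list, so its repr ends up in the command string. A possible fix is sketched below — pass the username as a plain string, and supply the password via PGPASSWORD (or a .pgpass file) so pg_dump does not prompt; passing -h localhost makes it use TCP/password authentication instead of peer authentication, assuming pg_hba.conf allows that. Host and password values here are placeholders:

import os
import subprocess
import time

dump_dir = '/var/lib/postgresql/9.3/main'
db_username = 'admin'                        # a string, not a list
db_names = ['ErpPeekDemoDatabase']

env = dict(os.environ, PGPASSWORD='secret')  # placeholder; a .pgpass file is safer

for db_name in db_names:
    bkp_file = '%s_%s.sql' % (db_name, time.strftime('%Y%m%d_%H_%M_%S'))
    file_path = os.path.join(dump_dir, bkp_file)
    command = ['pg_dump', '-h', 'localhost', '-U', db_username,
               '-Z', '9', '-F', 'c', '-f', file_path, db_name]
    try:
        subprocess.check_call(command, env=env)
        subprocess.check_call(['gzip', file_path])
        print("backup success")
    except subprocess.CalledProcessError:
        print("Couldn't backup database %s" % db_name)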
https://www.odoo.com/forum/help-1/question/how-to-dump-database-using-erppeek-api-and-python-script-103509
CC-MAIN-2016-50
refinedweb
180
57.47
Customizing Console Fonts

Getting the Console Font you Want

"After that, read the vidcontrol manpage a little. I tried issuing the command that would change the font. It worked instantly! I issued this command:

# vidcontrol -f 8x16 t

Since t.fnt was in the directory it was supposed to be, it found it automatically. I tried it with a font in my home directory, specifying the full path, and that worked too, of course."

Tada!

Font Editing

One Response to “Customizing Console Fonts” (posted by Dan Langille)

I was very excited to read the Customizing Console Fonts item, because I thought it would help me with my own desire to mix characters from two different fonts. I knew how to change the font used, via vidcontrol, but not about the handy fontedit port. [font project: including CP437 graphic characters into an ISO-8859-1 font. I "enhanced" cdplay to use graphics characters instead of +,| etc., but don't like the CP437 letters.] Unfortunately, fontdump can't read FreeBSD's proprietary encoded format. So I looked into how vidcontrol deals with them, and found decode.c. I rewrote it as a standalone program - fontdecode.c - which can be used to convert a FreeBSD .fnt file into one that fontdump will accept, i.e. a standard VGA font file. The code is here:

========= fontdecode.c ===========

#include <stdio.h>
#include <string.h>

int main()
{
    int n, pos = 0;
    char *p;
    char temp[128];
    char buffer[4096];

#define DEC(c) (((c) - ' ') & 0x3f)

    do {
        if (!fgets(temp, sizeof(temp), stdin))
            return(0);
    } while (strncmp(temp, "begin ", 6));
    sscanf(temp, "begin %o %s", &n, temp);

    for (;;) {
        if (!fgets(p = temp, sizeof(temp), stdin))
            return(0);
        if ((n = DEC(*p)) <= 0)
            break;
        for (++p; n > 0; p += 4, n -= 3)
            if (n >= 3) {
                buffer[pos++] = DEC(p[0])<<2 | DEC(p[1])>>4;
                buffer[pos++] = DEC(p[1])<<4 | DEC(p[2])>>2;
                buffer[pos++] = DEC(p[2])<<6 | DEC(p[3]);
            }
            else {
                if (n >= 1) {
                    buffer[pos++] = DEC(p[0])<<2 | DEC(p[1])>>4;
                }
                if (n >= 2) {
                    buffer[pos++] = DEC(p[1])<<4 | DEC(p[2])>>2;
                }
                if (n >= 3) {
                    buffer[pos++] = DEC(p[2])<<6 | DEC(p[3]);
                }
            }
    }
    if (!fgets(temp, sizeof(temp), stdin) || strcmp(temp, "end\n"))
        return(0);

    for (n = 0; n < pos; n++)
        putchar(buffer[n]);
    return(pos);
}

=========================================

[The credit for this code goes to Soren. I just adapted it. I guess I should have left the copyright notice in there. See vidcontrol/decode.c]
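To try the program, something like the following should work (a sketch; the exact font path depends on your FreeBSD version, and the output file name is arbitrary). fontdecode reads the uuencoded .fnt data on stdin and writes the raw VGA font bytes to stdout:

cc -o fontdecode fontdecode.c
fontdecode < /usr/share/syscons/fonts/cp437-8x16.fnt > cp437-8x16.raw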
http://wp.freebsddiary.org/2001/03/27/customizing-console-fonts/
CC-MAIN-2018-09
refinedweb
417
75.5
Let's say I have one large file server that has a company-wide share. That file server constantly runs into space issues. Now let's say I have a dozen or so random servers (print servers, web servers, etc) that have a ton of free space. Is there a way to utilize that in the company-wide file share? Like spanning the storage across different servers but making that transparent to the user? I know the likely answer is going to be get more storage for the file server, but I'm just curious if this is conceptually possible. Thanks!

Wow - I got schooled. I never knew that you could do one without the other. Please ignore the answer below. Leaving it up as an example of what not to do. I believe that the answers referencing DFS won't help you. It will make a file share exist on multiple servers (and provide a single namespace for it as well), but it also replicates the entire contents to each server that hosts that share. I don't believe there's any way to partition it. It seems that you're asking (for example) if you can use 10 GB on your web server and 5 GB on your print server to make a 15 GB share. Using DFS, you would only have a 5 GB share on two servers. If I'm misunderstanding your scenario, please edit your question to address these details.

Yes, there are lots of file systems that will support this across a range of platforms; the term you're looking for is Distributed File System (DFS). Since you don't specify any system in particular I'll give you the link to the Wikipedia article - and let you decide which fits, or post another question later.

The answer to this may be Distributed File System.

Yes, you could use DFS for Windows or one of a variety of clustered/distributed file systems for Linux such as Lustre, but you don't mention what OS you're using. Come back with more detail and we'll be able to help more.

Yes, you can. Probably DFS will do it on Windows. Check .
http://serverfault.com/questions/235734/can-you-span-a-single-file-share-across-multiple-servers?answertab=active
CC-MAIN-2015-11
refinedweb
386
81.83
Data.Object.Yaml

Description

As a bit of background, this package is built on a few other packages I wrote. yaml is a low-level wrapper around the C libyaml library, with an enumerator interface. data-object is a package defining a data type:

data Object k v = Scalar v
                | Sequence [Object k v]
                | Mapping [(k, Object k v)]

In other words, it can represent JSON data fully, and YAML data almost fully. In particular, it doesn't handle cyclical aliases, which I hope doesn't really occur too much in real life.

Another package to deal with is failure: it basically replaces using an Either for error handling with a typeclass. It has instances for Maybe, IO and lists by default. The last package is convertible-text, which is a fork of John Goerzen's convertible package. The difference is it supports both conversions that are guaranteed to succeed (Int -> String) and ones which may fail (String -> Int), and also supports various textual datatypes (String, lazy/strict ByteString, lazy/strict Text).

YamlScalar and YamlObject

We have a type YamlObject = Object YamlScalar YamlScalar, where a YamlScalar is just a ByteString value with a tag and a style. A "style" is how the data was represented in the underlying YAML file: single quoted, double quoted, etc. Then there is an IsYamlScalar typeclass, which provides fromYamlScalar and toYamlScalar conversion functions. There are instances for all the "text-like" datatypes: String, ByteString and Text. The built-in instances all assume a UTF-8 data encoding. And around this we have toYamlObject and fromYamlObject functions, which do exactly what they sound like.

Encoding and decoding

There are two encoding functions: encode and encodeFile. You can guess the difference: the former produces a ByteString (strict) and the latter writes to a file. They both take an Object, whose keys and values must be an instance of IsYamlScalar. So, for example:

encodeFile "myfile.yaml" $ Mapping
    [ ("Michael", Mapping
        [ ("age", Scalar "26")
        , ("color", Scalar "blue")
        ])
    , ("Eliezer", Mapping
        [ ("age", Scalar "2")
        , ("color", Scalar "green")
        ])
    ]

Decoding is only slightly more complicated, since the decoding can fail. In particular, the return type is an IO wrapped around a Failure. For example, you could use:

maybeObject <- decodeFile "myfile.yaml"
case maybeObject of
    Nothing -> putStrLn "Error parsing YAML file."
    Just object -> putStrLn "Successfully parsed."

If you just want to throw any parse errors as an IO exception, you can use join:

import Control.Monad (join)
object <- join $ decodeFile "myfile.yaml"

This takes advantage of the IO instance of Failure.

Parsing an Object

In order to pull the data out of an Object, you can use the helper functions from Data.Object. For example:

import Data.Object
import Data.Object.Yaml
import Control.Monad

main = do
    object <- join $ decodeFile "myfile.yaml"
    people <- fromMapping object
    michael <- lookupMapping "Michael" people
    age <- lookupScalar "age" michael
    putStrLn $ "Michael is " ++ age ++ " years old."

lookupScalar and friends implement Maybe, so you can test for optional attributes by switching on Nothing/Just a:

name <- lookupScalar "middleName" michael :: Maybe String

And that's it

There's really not much more to know about this library. Enjoy!
Synopsis - data YamlScalar = YamlScalar { - type YamlObject = Object YamlScalar YamlScalar - class Eq a => IsYamlScalar a where - fromYamlScalar :: YamlScalar -> a - toYamlScalar :: a -> YamlScalar - toYamlObject :: IsYamlScalar k => IsYamlScalar v => Object k v -> YamlObject - fromYamlObject :: IsYamlScalar k => IsYamlScalar v => YamlObject -> Object k v - encode :: (IsYamlScalar k, IsYamlScalar v) => Object k v -> ByteString - encodeFile :: (IsYamlScalar k, IsYamlScalar v) => FilePath -> Object k v -> IO () - decode :: (Failure ParseException m, IsYamlScalar k, IsYamlScalar v) => ByteString -> m (Object k v) - decodeFile :: (Failure ParseException m, IsYamlScalar k, IsYamlScalar v) => FilePath -> IO (m (Object k v)) - data ParseException - = NonScalarKey - | UnknownAlias { - | UnexpectedEvent { - | InvalidYaml (Maybe String) Definition of YamlObject data YamlScalar Source Constructors Instances Automatic scalar conversions class Eq a => IsYamlScalar a whereSource Methods fromYamlScalar :: YamlScalar -> aSource toYamlScalar :: a -> YamlScalarSource Instances toYamlObject :: IsYamlScalar k => IsYamlScalar v => Object k v -> YamlObjectSource fromYamlObject :: IsYamlScalar k => IsYamlScalar v => YamlObject -> Object k vSource Encoding/decoding encode :: (IsYamlScalar k, IsYamlScalar v) => Object k v -> ByteStringSource encodeFile :: (IsYamlScalar k, IsYamlScalar v) => FilePath -> Object k v -> IO ()Source decode :: (Failure ParseException m, IsYamlScalar k, IsYamlScalar v) => ByteString -> m (Object k v)Source decodeFile :: (Failure ParseException m, IsYamlScalar k, IsYamlScalar v) => FilePath -> IO (m (Object k v))Source Exceptions data ParseException Source Constructors Instances
http://hackage.haskell.org/package/data-object-yaml-0.3.4/docs/Data-Object-Yaml.html
CC-MAIN-2015-14
refinedweb
705
59.74
I've run into a situation that I don't quite understand and was hoping somebody could help. I have created a simple class that I'm using in my main class. I tried declaring a global variable of this class and then instantiating it when main is called. Ex:

package myApp;

import myApp.animal;

public class JFrame extends javax.swing.JFrame {

    private animal myAnimal;

    public static void main(String args[]) {
        myAnimal = new animal();
    }

    private void jButtonMouseClicked() {
        myAnimal.setX(100); // generates an exception error
    }
}

In the code above, if you click the button, a null exception is generated. My guess is because the object is not being instantiated even though I do so in "main". If I change the declared variable to read:

private animal myAnimal = new animal();

and delete the instantiation in "main", the code works. This is confusing to me because I thought you could declare a global class object and instantiate it elsewhere. For example, I could set "int x;" and then set its value in a method. Any help explaining this one is greatly appreciated. Thanks!!!! Jaime
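The likely explanation: main is static, so assigning the instance field there (if it compiles at all) never touches the frame instance that Swing later creates and whose button handler fires. A minimal sketch of one common fix is to initialize the field in the frame's constructor; the class is renamed here to avoid clashing with javax.swing.JFrame, and the details are assumed rather than taken from the original project:

public class MyFrame extends javax.swing.JFrame {

    private animal myAnimal;

    public MyFrame() {
        // runs for the instance that will actually receive the button click
        myAnimal = new animal();
    }

    private void jButtonMouseClicked() {
        myAnimal.setX(100); // myAnimal is initialized, so no NullPointerException
    }

    public static void main(String[] args) {
        new MyFrame().setVisible(true);
    }
}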
http://forums.devx.com/printthread.php?t=29244&pp=15&page=1
CC-MAIN-2014-42
refinedweb
182
65.32
So, when we have more than a week of hiatus in releases we should expect ST4? Welcome back to daily releases! Great! I love ST. Cheers!

on_new_async still checks the wrong callback.

def on_new_async(view_id):
    v = sublime.View(view_id)
    for callback in all_callbacks['on_new']:
        callback.on_new_async(v)

Also it appears "subl" can no longer open files? Edit: Actually it does, it just focuses the wrong window.

I am having a problem with this build on Windows. I was using it fine and suddenly my computer started working slowly, so I opened Task Manager and saw thousands of sublime_text.exe processes even though I had only one window open.

On Ubuntu 12.04 x64, Goto Definition commands are not working (nor were they in 3008). Symbols show in Goto Symbol in Project, but selecting an item in the quick panel is a no-op... ? Is this a problem for anyone else?

@Quarnster

def on_new(view_id):
    v = sublime.View(view_id)
    for callback in all_callbacks['on_new']:
        callback.on_new(v)

def on_new_async(view_id):
    v = sublime.View(view_id)
    for callback in all_callbacks['on_new_async']:
        callback.on_new_async(v)

How did you do the upgrade? No problems here with Goto Definition and Goto Symbol in Project, on Ubuntu 12.10 x64 (tested only on Python code).

jps, any chance you can fix these two issues: ST3 Bug report: Panels not scrolling automatically, and ST3 Bug: delete file hangs. They are the major annoyances I have with ST3 at the moment. Thanks! -- Felipe.

Maybe I'm doing something stupid Can.
https://forum.sublimetext.com/t/build-3009/8676/1
CC-MAIN-2016-07
refinedweb
252
76.82
This article will introduce you to the basics of the Python programming language so you can quickly start writing code for your projects. If you need a short introduction to the basic concepts themselves, take a look at Getting Started With Python.

Variables in Python

Python is a loosely typed language, so you don't need to declare variables with a fixed type, and they can also change their type at any time. Furthermore, Python doesn't support the use of access modifiers. A variable declaration is simple and looks like this:

VariableName = value

Here are a few examples:

patientAge = 24
accepted = True
surname = "Lobowski"

Methods and Function Declarations

Method and function declarations are pretty simple too. You don't need to supply a return type or an access modifier for the function. An optional list of parameters can be supplied. Each entry in this list consists of the parameter name and an optional default value, separated by commas:

def functionName([param1[, param2[, (...)]]]):
    # Your code
    [return value]

As you can see, an optional return statement can be added and the return type is determined by the returned value. If you want to declare a static function, use the "@staticmethod" decorator. Here are some examples:

# If you don't supply a default value for each parameter,
# write the parameters with default values at the end
# of the list
def add(one, two=10):
    return one + two

@staticmethod
def initialize():
    # Do something
    pass

Obviously, static methods have to be declared inside of a class (the code snippet above is just an example of method declaration).

Collections in Python

Python doesn't implement arrays the way other languages like C and Java do. Instead, lists with a variable length and mixed data types are used. However, you can define array-like structures for numeric data types:

import array as python_array

a = python_array.array('d', [1, 24.0, 3.14])

The elements can be accessed by using their index:

print(a[0])
print(a[1])
a[0] = 2222

Just remember that indices start at zero. You can use len() to get the length of an array:

print(len(a))

As mentioned above, arrays in Python are different from other languages. Python arrays are not of a fixed length. You can always dynamically remove and add elements:

# Add three elements to the end of the array
a.append(10)
a.append(20)
a.append(30)

# Remove the first element
a.pop(0)

For a full list of features refer to Python's official documentation. The array, mentioned above, is an optimized special form of the list that only accepts numeric values of the same type. Therefore a list can be used exactly the same way as an array, but you are not limited to using one data type:

li = ["Apple", 2.0, 3.14, "Banana", "Pear", [1, 2, 3, 4, 5]]

As you can see, "li" can contain anything at any time. All the other functions mentioned above work exactly the same way they did for arrays. Another type of collection is the tuple. Unlike the elements in a list, the ones in a tuple can't be changed after initialization:

# Note the round brackets instead of the square ones
tu = (1, 2, 3, "cat", "dog", "parrot")

Insert, update and delete operations won't work, but elements can still be accessed with their index. The next important data structure is the set. The elements contained within one can't be indexed like before, but you can always add new elements and remove existing ones from the set.
You can also check if a value is an element of the set:

s = {"pie", "bread", "steak"}
print("bread" in s)   # membership check -> True
print(s.pop())
s.add(30)
print(len(s))
s.pop()
print(len(s))

The pop() function doesn't take any parameters here, and it returns the element that got removed. You can insert new elements with the add() method. Again, for a full list of functions refer to the official documentation. One last data structure is the dictionary. This collection links two values together in a key and value relationship, where each value can be referenced using a unique key:

telephone_book = {
    "Peter": 9238172,
    "Laura": 1119823,
    "Mark": 9952174,
    "Liz": 8009822
}

print("Laura's phone number is:")
print(telephone_book["Laura"])

Values can be changed in the same way, and len() will give you the length of the structure. You can add elements by using a new key as the index and assigning a value to it:

telephone_book["Ben"] = 5557281
print("Ben's phone number is:")
print(telephone_book["Ben"])

You can use pop() together with the key to remove an entry, just like with lists.

The if-condition and Loops

The if-condition is pretty straightforward and exactly what you'd expect:

a = 20
if a > 100:
    print("a greater than 100")
elif a >= 50:
    print("a is between 50 and 100")
else:
    print("a is less than 50")
print(a)

You can define as many elif branches as you like, but they are, like the else at the end, entirely optional. For-loops can be used to quickly iterate over a set of values that can be stored in any of the structures discussed above. You can use the following syntax to do so:

for x in any_collection:
    # Do something with x

But they can also be used the traditional way with a starting, an ending and an increment value:

for i in range(0, 100, 1):

The loop above starts at zero and counts to 99, incrementing "i" by one with each step. The start and increment values are optional. By default, the range will start with zero and increment by one. Python also allows you to use the else keyword in combination with a for-loop:

for i in range(0, 100, 1):
    print(i)
else:
    print("Loop done!")

The code inside of the else block will be executed after the loop has finished. The last important structure is the while-loop, which will run as long as a defined condition is met:

i = 0
while i < 100:
    print(i)
    i = i + 1
else:
    print("Loop done!")

This while-loop will do the exact same thing as the for-loop above. Two important keywords that can be used inside of loops are "break" and "continue". "Break" will exit out of the loop and "continue" will skip the code that follows it and start with the next iteration of the loop.
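A short example of both keywords in a loop, in the same style as the snippets above:

for i in range(0, 10):
    if i == 3:
        continue      # skip printing 3 and move on to the next iteration
    if i == 7:
        break         # leave the loop entirely once i reaches 7
    print(i)          # prints 0, 1, 2, 4, 5, 6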
https://maker.pro/custom/tutorial/basic-python-concepts-for-beginners
CC-MAIN-2021-10
refinedweb
1,071
59.03
This documentation is archived and is not being maintained. Utility Spotlight Group Policy Inventory Download the code for this article: GPInventory.exe (150KB) HOW DO YOU know what you have if you don’t keep track of it? Having inventory details at your fingertips can save you lots of headaches when budget allocation meetings and yearly audits roll around. And, of course, the powers-that-be always seem to want to know who has what and why, and what it will take to do that major upgrade. A tool that can help you with all this is Group Policy Inventory (GPInventory.exe), included with the Windows Server® 2003 Resource Kit. This program lets you query and retrieve system information with Resultant Set of User Policy (RSOP) and/or Windows® Management Instrumentation (WMI) queries from multiple remote hosts, as the figure shows. Not surprisingly, the program easily retrieves Group Policy information, but it’s also an effective means for pulling out detailed system information such as applications that have been installed, the processor name and speed, or the list of installed hotfixes. But wait, there’s more! Since it’s part of the Windows Server 2003 Resource Kit, it’s 100 percent free. Figure 1 Using GPInventory to Gather Information (Click the image for a larger view) There are many different ways to put GPInventory.exe to use. For example, if you need to verify warranty status, you can collect the serial or service numbers of the PCs on your network without having to truck out to every box. Or, if you have a lab environment and you want to see if a machine is currently in use, you could query the users currently logged onto the system without getting out of your chair. If you have a blade server set up where images are blown down to the blades on a routine basis (to clean out and restore your QA environment, for example), you could query the install date of the PC to verify when the last image was applied. GPInventory is also quite easy to use. You point it to a file that has a list of computers you want to inventory, select which queries to run, and then process the list. The tool then reaches out and connects to each computer on the list and queries it. If you are doing a quick query, you can also select the target machines directly from Active Directory® and the query actions to perform from the Query menu. To run an RSOP query on a machine, you’ll need to have the Generate Resultant Set of Policy (logging) permission (which domain admins and delegated admins have by default). WMI queries require Administrative rights on the remote machine unless you use the WMI Control MMC snap-in to grant Enable Account and Remote Enable rights on the root\cimv2 namespace to an alternate user. Once you’ve run your query, you can save the results as a tab-delimited text file or as XML. Both varieties can be imported into Microsoft® Excel® for analysis. You can sort and then apply an auto-filter to find all the details. GPInventory also supports command-line execution, so you could easily set up a scheduled task to query system information and save it to an XML file directly, without GUI intervention. This can be very useful for long-running queries that are better suited for off-hours execution. The tool comes with a default set of RSOP and WMI queries, but you can add, edit, and update the query file with your own customized list if what you need isn’t there. The included help document shows you how to define a new query in the query-list XML file. 
GPInventory.exe is a great way to gather information on the machines running Windows across your network, whether for inventory, group policy, or status information. You’ll find a link to the Group Policy Inventory tool in the downloads section of our Web site.
https://technet.microsoft.com/en-us/library/2007.02.utilityspotlight.aspx
CC-MAIN-2016-40
refinedweb
670
58.82
2009/8/2 Måns Rullgård <mans at mansr.com>:
> İsmail Dönmez <ismail at namtrac.org> writes:
>
>> Hi,
>>
>> On Sun, Aug 2, 2009 at 7:39 PM, Diego Biurrun<diego at biurrun.de> wrote:
>>> On Sun, Aug 02, 2009 at 07:35:49PM +0300, İsmail Dönmez wrote:
>>>>
>>>> Patch attached.
>>>>
>>>> --- libavformat/os_support.h  (revision 19560)
>>>> +++ libavformat/os_support.h  (working copy)
>>>> @@ -34,6 +34,11 @@
>>>>  #  define lseek(f,p,w) _lseeki64((f), (p), (w))
>>>>  #endif
>>>>
>>>> +#if defined(__MINGW32CE__)
>>>> +#  define getenv(x) ""
>>>> +#  define strerror(x) ""
>>>> +#endif
>>>
>>> I think you could merge this with the section above and simplify both.
>>> Also, IIRC the "" are unnecessary.
>>
>> New patch attached. It doesn't work without the "" part.
>
> getenv() should be NULL, not an empty string.

Attached version returns NULL for getenv() and "N/A" for strerror() so it should be better.

Regards.
--
İsmail DÖNMEZ

-------------- next part --------------
A non-text attachment was scrubbed...
Name: wince-nop.patch
Type: application/octet-stream
Size: 476 bytes
Desc: not available
URL: <>
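Based on the exchange above, the block in libavformat/os_support.h presumably ended up looking roughly like this (a reconstruction from the emails, not the actual committed patch):

#if defined(__MINGW32CE__)
#  define getenv(x) NULL
#  define strerror(x) "N/A"
#endif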
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-August/060609.html
CC-MAIN-2018-26
refinedweb
165
72.12
In this example, you'll learn to find the product of two numbers in C++ programming. In this program, the user is asked to enter two numbers (floating point numbers). Then, the product of those two numbers is stored in a variable and displayed on the screen. The first number entered by the user is stored in "a", the second number in "b", and the product is stored in productOfNumbers.

#include <iostream>
using namespace std;

int main()
{
    double a, b, productOfNumbers;
    cout << "Enter two numbers: ";

    // Stores both floating point numbers in variable a and b respectively
    cin >> a >> b;

    // Performs multiplication and stores the result in variable productOfNumbers
    productOfNumbers = a * b;

    cout << "Product = " << productOfNumbers;

    return 0;
}

Output

Enter two numbers: 3.4 5.5
Product = 18.7

In this program, the user is asked to enter two numbers. These two numbers entered by the user are stored in variables a and b, respectively. Then, the product of a and b is evaluated and the result is stored in the variable productOfNumbers. Finally, productOfNumbers is displayed on the screen.

Here are a few related C++ programs:
- C++ "Hello, World!" Program
- C++ Program to Print Number Entered by User
- C++ Program to Add Two Numbers
- C++ Program to Find Quotient and Remainder
- C++ Program to Find Size of int, float, double and char in Your System
- C++ Program to Swap Two Numbers
- C++ Program to Find ASCII Value of a Character

Ask your questions and clarify your or others' doubts about the product of numbers by commenting.
https://coderforevers.com/cpp/cpp-program/product-of-numbers/
CC-MAIN-2019-35
refinedweb
278
56.79
>>>>> "Gary" == Gary D Thomas <gary.thomas@mind.be> writes: Gary> Jonathan, Gary> I've run into a problem [near and dear to your heart] - a Gary> place where the eCos namespace (via include files) intrudes Gary> on an application. The problem is in <pthread.h>. This Gary> particular application happens to have a typedef for the Gary> symbol "destructor", which causes this prototype to fail: Gary> // Create a key to identify a location in the thread specific data area. Gary> // Each thread has its own distinct thread-specific data area but all are Gary> // addressed by the same keys. The destructor function is called whenever a Gary> // thread exits and the value associated with the key is non-NULL. Gary> externC int pthread_key_create (pthread_key_t *key, Gary> void (*destructor) (void *)); Gary> So, the question(s) are: Gary> * Who's at fault here? (the application or the kernel) Gary> * How best to solve it? Gary> As a work-around, I've modified that particular include file Gary> to avoid this problem (renaming destructor as _destructor). Gary> n.b. This is a portable application from outside the eCos world, Gary> so I don't want to have to change it, if possible. If, on the Gary> other hand, this is truly an application issue, I'll gladly take Gary> it up with the [application] maintainers. I believe the "correct" solution is to just remove the argument names from the header files, i.e. change <pthread.h> to : externC int pthread_key_create (pthread_key_t*, void (*)(void*)); Such argument names are not needed in function prototypes, only in the implementation. They only serve as a form of documentation - and on occasion that can be very useful. Another way of tackling the problem involve careful use of underscores as you have done, e.g.: externC int pthread_key_create (pthread_key_t* _key, void (*_destructor)(void*)); That reduces but does not eliminate the problem. Yet another approach is to put the argument name in comments: externC int pthread_key_create (pthread_key_t* /* key */, void (* /* destructor */)(void*)); but that is not actually very readable, so much of the documentation value is lost. My preferred approach is to leave out the argument name most of the time, but where has it special documentation value put it in comments. However I don't guarantee that I have been consistent about this in all my code. Bart -- Before posting, please read the FAQ: and search the list archive:
https://sourceware.org/pipermail/ecos-discuss/2003-March/019660.html
CC-MAIN-2022-21
refinedweb
399
60.65
The QtopiaSql class provides a collection of methods for setting up and managing database connections. More... #include <QtopiaSql> The QtopiaSql class provides a collection of methods for setting up and managing database connections. Qtopia has a database architecture which uses a number of different databases, including a seperate database for each removable storage media. Additionally the database schema required for a particular version of Qtopia may differ from that currently on the device due to Qtopia being upgraded. Use these routines to manage connections to these databases. The QtopiaDatabaseId type identifies a particular database of the collectin. The systemDatabase() method is used to fetch a QSqlDatabase object pointing to the Qtopia main system database. Call attachDB() to connect a database for a particular path, typically associated with a new storage media mount-point. These will be operated on with the system database as a unit. Lists of the current databases can be obtained with databaseIds() and databases(). Given a database id obtain the path for the associated media mount point with databasePathForId() and obtain the QSqlDatabase itself with database(). The ensureTableExists() methods call through to the Database migration utility to ensure the table schema referred to is loaded. The attachDB() methods use the underlying database implementation specific method for creating queries across seperate databases. In SQLITE this method is the ATTACH statement, but an equivalent method is used for other implementations. This documentation should be read and understood in conjunction with the Database Policy and the Database Specification Return a QSqlDatabase object for the database specific to the application given by appname Note: You are responsible for calling QSqlDatabase::removeDatabase once you are finished with the connection. Attach the database for the media located at the mount-point path. The database to use is located on that media. After this call database entries for the attached database will be used in future queries executed. See also exec(). This is an overloaded member function, provided for convenience. Attach the database for the media located at the mount-point path. The path given by dbPath is the database to use in the attach. See also exec(). Returns a reference to the qtopia database object (used for settings storage, doc manager, etc) for the given QtopiaDatabaseId id. There is a separate set of database objects created per thread. Return the QtopiaDatabaseId identifier for the database located at the given dbPath. Internally causes the database system to be opened, if not already open. Return the QtopiaDatabaseId identifier for the database associated with the given path. Internally causes the database system to be opened, if not already open. Return a list of the QtopiaDatabaseId entries for all the attached databases. Internally causes the database system to be opened, if not already open. Return the full path to the mount-point related to the database for the given QtopiaDatabaseId id Returns a list QSqlDatabase objects comprising all attached databases. Undo the effect of the attachDB() method, for the database associated with mount-point path. After this call database queries executed will not attempt to reference this database. See also attachDB() and exec(). 
Ensure that the given table tableName exists in the database db Return true if the operation was successful; otherwise returns false This is an overloaded member function, provided for convenience. Ensure that all tables in the list of tableNames exists in the database db. Internally this invokes the "DBMigrationEngine" service to run any migrations or schema updates required. Returns a version of the QString input with all single quote characters replaced with two consecutive quote characters. Execute the given QString query on the database db. If inTransaction is true, then perform the query inside an SQL transaction, for atomicity, if the database backend supports this. This is an overloaded member function, provided for convenience. Execute the given QSqlQuery query on the database db. If inTransaction is true, then perform the query inside an SQL transaction, for atomicity, if the database backend supports this. Returns a pointer to the singleton object of QtopiaSql. This is the preferred method of accessing the QtopiaSql class. Return true if the file located at the given path is a Qtopia database file, for example the datastore used by the SQLITE engine; return false otherwise. This may be used to test if the file should be shown to the end-user. if ( !QtopiaSql::isDatabase( fileFound )) listForDisplay.append( fileFound ); Returns true if the database id is a valid open database in the system. Initializes the state of the QtopiaSql system with the type of database, eg "sqlite". The meaning of the name and user parameters are as defined in QSqlDatabase. Send a string for the given QSqlQuery q to the qtopia logging system. Opens the main system database, if it is not already open. Perform a locale aware string compare of the two argument QStrings l and r using the database implementations native collating functions. If the database does not provide one, or if one is provided but not integrated, then returns the same result as QString::localeAwareCompare() Return the QSqlDatabase instance of the main system database.
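Putting a few of the documented calls together, a usage sketch could look like the following. Treat it as illustrative only: whether these calls are static or go through instance() depends on the Qtopia release, and the table name in the query is made up for the example.

#include <QtopiaSql>
#include <QSqlDatabase>

void indexNewMedia(const QString &mountPoint)
{
    // Make the database on the newly mounted media part of subsequent queries
    QtopiaSql::attachDB(mountPoint);

    // Run a query against the system database (and any attached databases)
    QSqlDatabase sys = QtopiaSql::systemDatabase();
    QtopiaSql::exec(QString("SELECT cid FROM content"), sys, true);
}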
https://doc.qt.io/archives/qtopia4.3/qtopiasql.html
CC-MAIN-2019-18
refinedweb
846
56.35
Cabin — React & Redux Example App — Mapbox This is the 8th post in: Introduction Foursquare, Evernote, Instacart, Pinterest, GitHub, and Etsy have something in common — their maps are powered by Mapbox. Mapbox provides a number of incredible services that are easy to integrate with your application, from Maps and Directions to Geocoding, and even Satellite Imagery. For Cabin we will be using two of their services: displaying custom maps and geocoding user provided locations. We really like using Mapbox because their dedication to making developer tools intuitive. In addition, Mapbox has Open Source Components, which makes them a great fit for our integration. After this tutorial you’ll have the following map up and running: Getting Started Step 1: Head over to the Mapbox Studio and sign up for a free account. Next up, we’re going to be incorporating a map into our app. A style is a document that defines the visual appearance of a map. More here. Step 2: In Mapbox Studio, click on Styles. Step 3: Select your template option, and click “Create”. Style URL One more step: We need to get the Style URL, which we will use in our client-side code and API. Step 1: Click on your published style as shown below — and then click “Share & Use”. Step 2: Get your Style URL, which you will use in your JavaScript: Step 3: Go back to the Home page in Studio — and get your access token from the sidebar. That should be enough work in the Mapbox Studio to get started! Now it’s time to implement Mapbox in our code… Client Side So, in our Cabin app, we have functionality to click on a post and visualize its location. This is where our the Mapbox JavaScript library comes into play: Mapbox GL JS is a JavaScript library that uses WebGL to render interactive maps from vector tiles and Mapbox GL styles. It is part of the Mapbox GL ecosystem, which includes Mapbox Mobile, a compatible renderer written in C++ with bindings for desktop and mobile platforms. Step 1: Let’s add the library in our project in /app/views/index.ejs: //api.tiles.mapbox.com/mapbox-gl-js/v0.17.0/mapbox-gl.js Step 2: Next, we can add our access token to our env.sh file: From the Mapbox dashboard, head over to Account (bottom left), and then click on “Access Token” on the top right corner of your screen. Here you will be able to generate a new token for use. Now that we have our token, replace the default value in our env.sh file with the actual token: export MAPBOX_ACCESS_TOKEN=VALUE Restart the app and make sure to run Webpack: cd /app source ../env.sh; webpack --watch --progress Note: The watch and progress flags are great for development. As you might guess, watch keeps an eye on the filesystem and automatically rebuilds your application when you make a change. Progress is a nice feature that keeps you up to date on the status of your build, which can be nice for apps that become very large (like Cabin). Step 3: Open up app/routes/Location/Location.js. The loadMapboxMapmethod component is used to render our map. 
As can be seen below: loadMapboxMap = (lng, lat, name) => { mapboxgl.accessToken = config.mapbox.accessToken let map = new mapboxgl.Map({ container: 'map', style: 'mapbox://styles/stream/cinj7vvpfbbbib7mbzpyhkw39', center: [lng, lat], zoom: 9, }) map.on('load', function () { map.addSource('markers', { type: 'geojson', data: { type: 'FeatureCollection', features: [{ type: 'Feature', geometry: { type: 'Point', coordinates: [lng, lat] }, properties: { title: name, 'marker-symbol': 'default_marker' } }] } }) map.addLayer({ id: 'markers', type: 'symbol', source: 'markers', layout: { 'icon-image': 'default_marker', 'text-field': name, 'text-font': ['Open Sans Semibold', 'Arial Unicode MS Bold'], 'text-offset': [0, 0.6], 'text-anchor': 'top' } }) }) } For the most part, our map functionality is handled solely by the following JavaScript — with our map container being the “map” id in our JSX inside of render(): render() { return ( <div className="page"> <div className="location"> <div className="map" id="map"></div> <div className="images"> <h1>{this.state.location}</h1> <div className="grid"> {this.props.loc.map(item => <div className="grid-cell" key={`location-${item.id}`}> <Link to={`/photos/${item.id}`}> <img src={`${config.imgix.baseUrl}/${item.filename}?auto=enhance&w=200&h=200&fit=crop&fm=png&dpr=2`} /> </Link> </div> )} </div> </div> </div> </div> ) } Server Side When we upload a glorious photo of a cabin to our app, we need a way to geocode the location the user has entered in the freeform text box. No worries, Mapbox has us covered. Setting up the geocoding functionality is easy. Let’s install the library via npm: npm install --save mapbox-geocoding Note: You’ll need to source and restart the API since we added an updated token to our env.sh file by hitting control + c (to stop the api), followed by npm start (to start the api). Step 4: Next, open up /api/routes/uploads.js and head down to around line ~269. Here, we are creating a POST method to /uploads that we will use to upload images to our server. Sounds like a lot, right? It’s actually simple. Here’s the code: // use mapbox to get latitude and longitude function(cb) { // initialize mapbox client geo.setAccessToken(config.mapbox.accessToken); // get location data geo.geocode('mapbox.places', data.location, function (err, location) { if (err) { cb(err); } // if the location was found if (location.features.length) { // extract coordinates var coords = location.features[0].geometry.coordinates; if (coords.length) { // assign to latitude and longitude in data object data.longitude = coords[0]; data.latitude = coords[1]; } } cb(null) }); } Note: We’re using the popular NPM module, Async, to handle our waterfall approach of actions. You’re welcome to use any other promise style library; however, we felt that Async was the best for our scenario. As you can see, we are passing a plain text string (data.location) to the .geocode method — and, in return, we are getting a longitude and latitude value. More documentation on the Geocoding API can be found here. And that’s it! (Really, it’s that easy to get Mapbox implemented in your application.) Additional Features Mapbox is the superior product when it comes to mapping and geocoding. They bolster some great features and services — all with open source libraries and SDKs that make their products easy to implement. While the Mapbox functions that we use for Cabin are all free, Mapbox also has an impressive offering of paid services. 
It would be well worth your time to check out a few of their other services, like: Conclusion We admire the way Mapbox has created a product that is both powerful AND intuitive. It is the perfect solution to handle custom themed maps and geo locations. In the next post, we’ll cover how we’re using Digital Ocean to power the hosting.
https://medium.com/@nparsons08/cabin-react-redux-example-app-mapbox-d7ccda83318e
CC-MAIN-2017-17
refinedweb
1,147
65.12
In Python, how can I parse a numeric string like "545.2222" to its corresponding float value, 545.2222? Or parse the string "31" to an integer, 31? I just want to know how to parse a float str to a float, and (separately) an int str to an int.

Hi, here is a Python method to check if a string is a float:

def is_float(value):
    try:
        float(value)
        return True
    except:
        return False

A longer and more accurate name for this function could be: is_convertible_to_float(value). Thank you!
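The parsing itself is just the built-in constructors; a small sketch of both directions with basic error handling:

def parse_number(s):
    # Try the stricter type first, then fall back to float
    try:
        return int(s)        # "31"       -> 31
    except ValueError:
        return float(s)      # "545.2222" -> 545.2222

print(parse_number("31"))        # 31 (int)
print(parse_number("545.2222"))  # 545.2222 (float)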
https://www.edureka.co/community/66622/how-do-i-parse-a-string-to-a-float-or-int
CC-MAIN-2021-21
refinedweb
208
86.3
Hi, Never done it before, but just yesterday I found out about SparkContext.union method that could help in your case. def union[T](rdds: Seq[RDD[T]])(implicit arg0: ClassTag[T]): RDD[T] Pozdrawiam, Jacek -- Jacek Laskowski | | Mastering Spark Follow me at Upvote at On Tue, Dec 1, 2015 at 10:47 AM, Shams ul Haque <shamsul@cashcare.in> wrote: > Hi All, > > I have made 3 RDDs of 3 different dataset, all RDDs are grouped by > CustomerID in which 2 RDDs have value of Iterable type and one has signle > bean. All RDDs have id of Long type as CustomerId. Below are the model for 3 > RDDs: > JavaPairRDD<Long, Iterable<TransactionInfo>> > JavaPairRDD<Long, Iterable<TransactionRaw>> > JavaPairRDD<Long, TransactionAggr> > > Now, i have to merge all these 3 RDDs as signle one so that i can generate > excel report as per each customer by using data in 3 RDDs. > As i tried to using Join Transformation but it needs RDDs of same type and > it works for two RDDs. > So my questions is, > 1. is there any way to done my merging task efficiently, so that i can get > all 3 dataset by CustomerId? > 2. If i merge 1st two using Join Transformation, then do i need to run > groupByKey() before Join so that all data related to single customer will be > on one node? > > > Thanks > Shams --------------------------------------------------------------------- To unsubscribe, e-mail: user-unsubscribe@spark.apache.org For additional commands, e-mail: user-help@spark.apache.org
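For the original question above, another option besides union is cogroup, which joins differently-typed pair RDDs by key in a single pass, so no extra groupByKey is needed beforehand. A rough Java sketch with the types from the question (the Transaction* class names are the ones given there):

import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple3;

// transactions: JavaPairRDD<Long, Iterable<TransactionInfo>>
// raws:         JavaPairRDD<Long, Iterable<TransactionRaw>>
// aggregates:   JavaPairRDD<Long, TransactionAggr>
JavaPairRDD<Long, Tuple3<Iterable<Iterable<TransactionInfo>>,
                         Iterable<Iterable<TransactionRaw>>,
                         Iterable<TransactionAggr>>> merged =
        transactions.cogroup(raws, aggregates);

// Each customer id now maps to all three datasets at once,
// ready for per-customer report generation.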
http://mail-archives.us.apache.org/mod_mbox/spark-user/201512.mbox/%3CCACo38_RPOP_nFbTn3YCHZGe8A=kn-DXhtm4bQR1jiRTMChkAPg@mail.gmail.com%3E
CC-MAIN-2020-45
refinedweb
245
61.36
On Sun, Sep 19, 2004 at 05:20:59PM -0400, Stas Bekman wrote: > Greg Stein wrote: >... > >And the #ifndef wrappers are not needed. Just define the old names in > >terms of the new. > > yes, yes, inside apr they won't be needed. They will be needed inside > projects supporting more than one apr generation. IMO, projects should just use the API of the oldest major revision that they can, and forget a bunch of compat stuff. If a project codes against version 1.0, then it will work for everything after that. >... > >4) httpd 2.2 (and 2.1) will be able to work just fine with 1.1 without any > > recompilation being needed, but you may be fighting a battle against > > people distribution a 2.2/1.0 combo. > > But this can be solved inside 2.2 by adding the ifdef-define-endif future > compatibility blocks as I've posted in my original proposal. So 2.2 will > transparently work with apr 1.0 and apr 1.1. It will transparently work with both if we use APR_REG and friends. I don't want to add a bunch of #ifdef code to httpd if it isn't needed. >... > >Choose your path for mod_perl, but point #4 seems like it will pose > >trouble for your users. > > it doesn't have much to do with mod_perl, besides us trying to keep the > Perl API consistent with C API, so it's easy to move between the two, > needing to know only one API. Well, what you're really doing is providing a 1.1 API to all users. I understand the motivation, but I'm not personally interested in that for httpd. > So if that's fine with everybody and now Bill has pulled down his veto, > the only thing to wait for is the APR_1_1 branch to appear? Any plans for > that? I'd like to see us move the APR modules over to SVN. We can then move the trunk to branches/1.0.x/, and leave the trunk for 1.1 development. I believe the last vote go-round had support for moving to SVN, so it was just a matter of "make it so". (I'd have to go look, and if anybody *doesn't* want to move to SVN, then please reiterate your concerns) Cheers, -g -- Greg Stein,
http://mail-archives.apache.org/mod_mbox/apr-dev/200409.mbox/%3C20040919213341.GB19990@lyra.org%3E
CC-MAIN-2014-15
refinedweb
392
84.68
Emotion detection is an extremely promising area, and its development is closely connected to achievements in computer vision and artificial intelligence. There are various applications and commercial solutions of emotion detection: from impacting advertising by measuring attention and engagement of marketing campaigns to the security sector, where emotional recognition can allow computer systems to respond automatically to people with a suspicious facial expression. As the use cases of this new discipline are continuously growing and affecting more and more aspects in the corporate sector, many startups, as well as leading companies in this sphere, are making a contribution to emotion detection technology. How emotion detection works There are several strategies for emotion detection, and the main are through facial expressions from images and videos, through text, and through speech. In the focus of our attention will be facial emotion recognition from images. The algorithms of facial emotion detection are pretty hard in practice. The groundwork for it is face recognition. In simple words, it starts with collecting information and extracting the features that can be important for further emotion recognition. These are the things like eyebrows, eyes, nose, lips, muscles of the face, etc. Then comes training the model so that it can recognize and classify specific patterns. After that, the model will finally be able to classify new data, which basically means to detect different emotions. There are numerous complicated aspects, and luckily, there are technologies that have already implemented those algorithms. So, you simply upload an image, and the program analyzes the relationship between critical points, therefore sensing microexpressions of the person’s face. By examining the positions of the points, the program detects the emotions, choosing from the list of the basic ones. Let’s see how it works in practice. Example Now we get to the most exciting part - real working example. We will work with Microsoft Azure API and Python SDK and examine several images with different emotions. Let’s go step by step. 1. Microsoft Azure account registration First off, we have to register a new Microsoft Azure account. Go to the Microsoft Azure page and click “Start Free” button. In the next window, choose your Microsoft account or press “Create one!” if you don’t have it already, and end the registration. After that, you will be redirected to creating Azure account, which requires the phone number and the credit card verification. Now, you can open the Azure portal, and you should see the next page. 2. Prices and limitations of Emotion APIs As we already mentioned, numerous companies have stepped into the process of developing emotion recognition technologies. At this point, we present a short review of the leaders in the industry, the most popular emotion detection APIs, namely Azure, Google, and Amazon. We will focus on prices and limitations of each API. Microsoft Azure Face API The API is a part of Microsoft Cognitive Services Platform. It detects, verifies, and identifies faces on images and can also recognize the emotions on the faces. The response is in JSON with specific percentages for each face within the 7 core emotions: anger, contempt, disgust, fear, happiness, sadness, surprise and a neutral state. There are some things that you should consider while using Azure. The higher the quality of the image, the more precise the recognition. 
The photos you upload can be up to 6 MB in size, and faces are detectable when they are between 36x36 and 4096x4096 pixels. It can return up to 64 faces per image. More details here. There is a free tier, which supports up to 20 transactions per minute and up to 30,000 per month. For more transactions, you should use the standard Face API tier. It allows 10 transactions per second, and the price varies with the number of transactions, starting at $1.00 per 1,000 transactions and decreasing to $0.40 per 1,000 transactions. More detailed information is available here.

Google Vision API

This API is part of Google's Cloud Platform. For image sentiment analysis from faces, it detects joy, sorrow, anger, and surprise likelihood. The limitations of this API are an image file size of up to 20 MB, a JSON request size of up to 16 MB, and up to 16 images per request. There are also quotas of 600 requests per minute and 20,000,000 images per feature per month. For additional information on quotas and limitations, visit the following page. The Google API gives you the first 1,000 units per month for free. The next tier is 1,001 - 5,000,000 units per month, which cost $1.50 per 1,000 units. Finally, if you use from 5,000,001 to 20,000,000 units per month, facial detection along with detecting emotional state will cost you $0.60 per 1,000 units.

Amazon Rekognition

In the context of face sentiment analysis, Amazon Rekognition, which is part of Amazon's AWS ecosystem, is close to the Microsoft Azure API. It offers deep-learning-based image recognition. The AWS API limits the maximum size of the image to 15 MB. A face can be detected if it is at least 40x40 pixels in an image of 1920x1080 size; for bigger images, the minimum face size should be proportionally higher. The minimum pixel resolution for the image is 80 pixels in height and width. All images should be in JPEG or PNG format. You can store up to 20 million faces in one face collection, and the API returns up to 4096 matching faces. More about AWS limits here. There is a free tier which allows you to analyze 5,000 images per month and store up to 1,000 face metadata records each month, for the first year. The prices are similar to Azure and vary from $1.00 for the first 1,000,000 images processed per month to $0.40 if you have over 100,000,000 images processed per month, for the US-EAST region. You can find your region and prices here.

3. Sending images with different emotions through the Python SDK

Azure configuration

Click the "Create a resource" button and find "Face" in the search. Press on "Face" and in the following window click "Create" and configure the service. Wait some time while the service is being created and you will see it on your dashboard. Click on the service and it will open the next window. Then go to "Quick start" and choose "Keys". You should copy "Key 1" or "Key 2"; you will need them later. However, you can also open "Keys" at any time and copy them.

Installing the SDK

System requirements:
- Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
- Python 3.5.2 (default, Nov 23 2017, 16:37:01)

Install the Microsoft Face API SDK by using pip:

pip3 install cognitive_face==1.4.1

Verifying that the SDK works correctly

Create the emotion_detect.py file on your PC.

# Import the Face API SDK
import cognitive_face as Face_API

# Set the key which you generated in the Azure service in the previous steps
Face_API.Key.set("INSERT_YOUR_KEY_HERE")

# Set the service URL which is defined in the Azure service.
# Because we created the service in the "West Central US" region, the link
# will be the following: ""
# You can find your correct URL in section two
Face_API.BaseUrl.set("")

# Sending the image to the Face API.
# The image can be a URL, a file path or a file-like object representing an image.
img_url = ""
response = Face_API.face.detect(image=img_url)

# Printing the response.
print(response)

And run python3 emotion_detect.py. You should get the following response:

[{'faceRectangle': {'top': 96, 'left': 212, 'width': 145, 'height': 145}, 'faceId': '1e8728cf-3dae-40b9-a18b-4db9973a7506'}]

Note that in your case the "faceId" will be different.

Emotion Detection

As we already mentioned, the Face API can recognize the following emotions: neutral, anger, contempt, disgust, fear, happiness, sadness, and surprise. Let's modify our detect request. The attributes must be passed as a comma-separated string, like "age,emotion". Supported attributes include age, gender, headPose, smile, facialHair, glasses, emotion, makeup, accessories, occlusion, blur, exposure, noise. You can read more about attributes here. Let's begin with the following image...

Request:

img_url = ""
response = Face_API.face.detect(image=img_url, attributes="emotion")

The response should be the following:

[{'faceRectangle': {'left': 212, 'top': 96, 'height': 145, 'width': 145}, 'faceAttributes': {'emotion': {'happiness': 1.0, 'contempt': 0.0, 'surprise': 0.0, 'disgust': 0.0, 'fear': 0.0, 'anger': 0.0, 'neutral': 0.0, 'sadness': 0.0}}, 'faceId': '8a7bf136-6d74-4ea7-b84e-78b6b6c75131'}]

As we can see, the emotion is recognized as happiness, which is correct for this image. Now we can try different emotions.

Surprise emotion

Request:

img_url = ""
response = Face_API.face.detect(image=img_url, attributes="emotion")

Response:

[{'faceId': '6c13d54c-6daf-4ac9-add5-3f3f253e72c4', 'faceAttributes': {'emotion': {'neutral': 0.0, 'happiness': 0.0, 'anger': 0.0, 'fear': 0.017, 'surprise': 0.983, 'disgust': 0.0, 'sadness': 0.0, 'contempt': 0.0}}, 'faceRectangle': {'height': 254, 'top': 135, 'width': 254, 'left': 159}}]

Neutral emotion

This time we will examine an image with several faces.

Request:

img_url = ""
response = Face_API.face.detect(image=img_url, attributes="emotion")

Response:

[{'faceId': '139170f0-83c5-4dc1-b1b0-37a69c102f4b', 'faceAttributes': {'emotion': {'fear': 0.0, 'anger': 0.0, 'disgust': 0.0, 'contempt': 0.0, 'happiness': 0.001, 'surprise': 0.0, 'neutral': 0.989, 'sadness': 0.01}}, 'faceRectangle': {'top': 139, 'left': 273, 'height': 110, 'width': 110}}, {'faceId': '99c22f4f-dde9-4979-bf31-2ad093b2828d', 'faceAttributes': {'emotion': {'fear': 0.0, 'anger': 0.0, 'disgust': 0.0, 'contempt': 0.001, 'happiness': 0.0, 'surprise': 0.0, 'neutral': 0.955, 'sadness': 0.044}}, 'faceRectangle': {'top': 180, 'left': 170, 'height': 103, 'width': 103}}]

Anger emotion

Request:

img_url = ""
response = Face_API.face.detect(image=img_url, attributes="emotion")

Response:

[{'faceId': '01ee5315-4dbb-4e13-97c7-2f80cb971c09', 'faceAttributes': {'emotion': {'contempt': 0.0, 'happiness': 0.0, 'neutral': 0.0, 'anger': 0.762, 'surprise': 0.125, 'sadness': 0.0, 'fear': 0.113, 'disgust': 0.0}}, 'faceRectangle': {'left': 72, 'width': 192, 'top': 119, 'height': 192}}]

Now you can experiment yourself with different emotions, attributes, and images.

4. Drawing Boxes

As you can see, the Face API also returns the "faceRectangle", which corresponds to the area of the image where the face is located. Let's try to draw this area.
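Before installing the drawing packages, it is worth being explicit about the small coordinate conversion involved: the API reports a face as left/top plus width/height, while Pillow's rectangle drawing expects the two opposite corners. A minimal sketch of that conversion, using the rectangle from the first response above purely as an illustration:

rect = {"left": 212, "top": 96, "width": 145, "height": 145}   # values taken from the first response
top_left = (rect["left"], rect["top"])
bottom_right = (rect["left"] + rect["width"], rect["top"] + rect["height"])
box = (top_left, bottom_right)   # the pair of corners that Pillow's ImageDraw.rectangle expects

The full drawing code below wraps exactly this calculation in a small helper function.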
Install the necessary packages:

pip3 install Pillow==5.2.0 requests==2.18.4

Replace the img_url defined in emotion_detect.py and add the following code after the previous part of the code.

import requests
from io import BytesIO
from PIL import Image, ImageDraw, ImageFont

# Get the best-matching emotion and convert it
# to a string like "emotion_name: score".
def get_best_emotion(resp):
    emotions = resp["faceAttributes"]["emotion"]
    b_key, b_value = "", 0
    for key, value in emotions.items():
        if value > b_value:
            b_key, b_value = key, value
    return b_key + ": " + str(b_value)

# Get the face area from the response "faceRectangle".
def get_face_box(resp):
    values = resp["faceRectangle"]
    left = values["left"]
    top = values["top"]
    right = left + values["width"]
    bottom = top + values["height"]
    return ((left, top), (right, bottom))

# Download the image from the URL
image_data = requests.get(img_url)

# Opens and identifies the given image file.
image = Image.open(BytesIO(image_data.content))

# Creates an object that can be used to draw on the given image.
draw = ImageDraw.Draw(image)

# Load a TrueType or OpenType font from a file or file-like object,
# and create a font object.
fnt = ImageFont.truetype("Ubuntu-B.ttf", size=13)

# Iterate through all responses and draw the box and text.
for resp in response:
    # Get the best emotion.
    emotion = get_best_emotion(resp)
    # Get the face box.
    face_box = get_face_box(resp)
    # Draw the text on the image.
    draw.text((face_box[0][0], face_box[0][1] - 12), emotion, fill="purple", font=fnt)
    # Draw the box on the image
    draw.rectangle(face_box, outline="purple")

# Now you can display the final image.
image.show()
# Or save it.
image.save("output.png", "PNG")

Run the full code and you should get the following output:

The full code can be found here.

Conclusion

Emotion detection is a thriving and actively evolving area. In recent years, many different companies have been working on the development of emotion recognition technologies, and most importantly, many of them have achieved significant results. However, there are still many aspects left to improve. In this tutorial, we saw in practice how to use Microsoft Azure for emotion detection. We experimented with images with various emotions and different numbers of people in the photos, and the results sure look promising. Now, it's your turn!
https://imaginghub.com/blog/105-emotion-detection-walkthrough-with-microsoft
in reply to Re: RFC: XML::Pastor v0.52 is released - A revolutionary way to deal with XML
in thread RFC: XML::Pastor v0.52 is released - A revolutionary way to deal with XML

I must admit you have a point there. I probably took a shortcut in order to get a point across in the least number of words. For those who do know about CASTOR, just a mere sentence would ring enough bells to get the meaning across. For those who don't, there is now enough information in this thread I think. 'Revolutionary' doesn't necessarily mean excellent or good. That's not for me to judge anyway. Revolutionary just means that there is an abrupt change in the way things work. In this respect, I tend to stick to my idea. The reason is that XML::Pastor introduces a whole new way of dealing with XML by generating native Perl classes starting from a W3C XSD schema. The resulting objects are, on the one hand, even easier to manipulate than what results from XML::Simple. Furthermore, writing back to XML conformant to the original schema is taken care of. If you require more information, I would suggest that you check out the documentation of XML::Pastor or even download it and play with it. By the way, without being too critical, I would like to say that I try to keep myself up to date on what's going on out there even when it's not related to Perl. It doesn't mean I like Java per se, but it means I would like to be open-minded to new ideas.

A curious parenthetical comment in that last paragraph... I doubt that anyone who has been in this business for any length of time whatever "knows only about Perl." It seems rather odd even to suggest it. Bloop! Off my (duck's) back it went. Splash. One thing that, I think, keeps bumping against my head on this one is that you compare to XML::Simple ... which is, even on a good day, "just that." If you are doing heavy-lifting XML work in Perl, you probably are not using Simple. How does that affect the self-assessment you present? Does it? The approach that you describe is what is sometimes called "pragmatic programming." It's "machine generated software," and it's not that new. It may well be the fact that you describe it in such glowing terms, and yet against one of the weakest XML-support libraries in CPAN (good though it is...), that makes me squint a little bit over my new bifocals "variable focus lenses."

So, if what you say is correct, that is, people are interested in or at least know about other technology as well, then I was not that wrong in assuming that they might also know about Castor. In that case one wonders why you yourself know nothing about it, although it has been around for some time now and is quite popular for dealing with XML in Java. Why the comparison with XML::Simple? The answer is simple. Despite its many limitations and shortcomings, I have always liked the philosophy of XML::Simple, that is, representing XML data as native Perl data. Most of the other XML modules (XML::Writer, XML::Twig, ...) don't do that. Although they may have other merits, you need to know that you are working with XML when you access XML. So, I have compared to the module which I believe got the idea right in the first place. It's true that XML::Simple has got quite a few limitations. But many of those limitations are actually overcome with XML::Pastor. So, I see no reason for not using it for some heavy duty XML work (apart from the fact that the code is not that mature yet).
By the way, the only area where XML::Pastor is weak compared to XML::Twig is when working with huge XML documents and mixed content. You need to revert to Twig or SAX when you are working on huge documents. Otherwise, you can do pretty much everything. There is even some basic namespace support. Coming back to the discussion about 'revolution': it's true, code generation is not a new concept. Furthermore, XML is not new, XSD is not new, Perl is not new. However, "dealing with XML in Perl via code generation from XSD" is a completely new way of doing things.

XML::Pastor: "Dealing with XML in Perl via code generation from XSD"
XML::Compile: "..."

Looks to me like there are some similarities. While XML::Pastor seems to generate a set of classes that you can then use to build up or modify the XML, XML::Compile just generates some code that transforms between XML and a Perl data structure. I think both make a lot of sense. And both are good tools to keep in the toolbox. I don't think it's wise to restrict oneself to just one XML processing module.

Jenda
http://www.perlmonks.org/index.pl/?node_id=694796
Automating the world one-liner at a time.

One of the nicest new features in the latest drop of Windows PowerShell is enhanced tab-completion. We now tab-complete properties on variables and parameters on cmdlets in addition to the old filename completion. But that's not the interesting part. The cool bit is that it's done through a user-definable function. In the same way that you can redefine your prompt, you can also define custom tab completion. Here's how it works: When you hit tab after typing some text, the function TabExpansion is called to generate the list of possible completion matches. You can find this function by doing:

PS (16) > ls function:*tab*
CommandType Name Definition
----------- ---- ----------
Function TabExpansion ...

And see the current definition:

PS (17) > $function:tabexpansion
# This is the default function to use for tab expansion.
:

Or save it to a text file so you can edit it:

PS (18) > $function:tabexpansion > c:\temp\tabexpansion.ps1
PS (19) > notepad $$

(The $$ above refers to the last token in the previous command – in this case, it's the name of the file we saved the function to.)

The parameter declaration for this function is param($line, $lastWord) where $line is the full command line as typed, and $lastWord is the last word or token in the line. So – if the line is:

cd c:\windows

then the last word would be "c:\windows". If the line is

cd “c:\program files”

the last word would be “c:\program files”. These two pieces are provided because most expansions work on the last token (e.g. property name expansion) but some require the entire line (e.g. parameter expansion.)

The TabExpansion function should return an array of strings representing the possible matches. The host will store these strings in a circular buffer the first time you hit tab after typing some text. Subsequent tabs will step you through the list of possible matches. Typing anything other than a tab will reset the sequence. If this function returns null, then the expansion process will fall through to the old built-in file-name completion code to try and generate a match. (The file name completion stuff is still implemented in the host as compiled code. It was too much work to migrate it to the script for v1.)

Since TabExpansion is a function, it can do anything – pop up menus, create GUIs, talk to the user, or whatever. The one thing to keep in mind is performance – you want to get the results back quickly. If you have a complex completion scenario that is too slow in script then you might need to write a helper cmdlet.

I'm really looking forward to seeing what the community can do with this feature!

Bruce Payette
Windows PowerShell Technical Lead

PSMDTAG:INTERNAL: Tab Expansion, Tab Completion
PSMDTAG:SHELL: Tab Expansion, Tab Completion
PSMDTAG:FAQ: Why doesn't tab completion do xxxx?
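To make the description above concrete, here is a minimal illustrative sketch of a replacement TabExpansion function. The candidate strings and the matching logic are invented for the example; the only real contract is the param($line, $lastWord) signature and returning an array of strings (or null to fall back to filename completion):

function TabExpansion {
    param($line, $lastWord)
    # Offer a few command names that start with whatever was typed so far.
    # Returning $null lets the built-in filename completion take over.
    $candidates = "Get-Widget", "Get-Window", "Get-WmiObject"
    $found = $candidates | Where-Object { $_ -like "$lastWord*" }
    if ($found) { $found } else { $null }
}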
http://blogs.msdn.com/powershell/archive/2006/04/26/584551.aspx
A question that pops up for many DSP-ers working with IIR and FIR filters, I think, is how to look at a filter's frequency and phase response. For many, maybe they've calculated filter coefficients with something like the biquad calculator on this site, or maybe they've used MATLAB, Octave, or Python (with the scipy library) and functions like freqz to compute and plot responses. But what if you want to code your own, perhaps to plot within a plugin written in C++? You can find methods of calculating biquads, for instance, but here we'll discuss a general solution. Fortunately, the general solution is easier to understand than starting with an equation that may have been optimized for a specific task, such as plotting biquad response.

Plotting an impulse response

One way we could approach it is to plot the impulse response of the filter. That works for any linear, time-invariant process, and a fixed filter qualifies. One problem is that we don't know how long the impulse response might be, for an arbitrary filter. IIR (Infinite Impulse Response) filters can have a very long impulse response, as the name implies. We can feed a 1.0 sample followed by 0.0 samples to obtain the impulse response of the filter. While we don't know how long it will be, we could take a long impulse response, perhaps windowing it, use an FFT to convert it to the frequency domain, and get a pretty good picture. But it's not perfect. For an FIR (Finite Impulse Response) filter, though, the results are precise. And the impulse response is equal to the coefficients themselves. So: For the FIR, we simply run the coefficients through an FFT, and take the absolute value of the complex result to get the magnitude response. (The FFT requires a power-of-2 length, so we'd need to append zeros to fill, or use a DFT. But we probably want to append zeros anyway, to get more frequency points out for our graph.)

Plotting the filter precisely

Let's look for a more precise way to plot an arbitrary filter's response, which might be IIR. Fortunately, if we have the filter coefficients, we have everything we need, because we have the filter's transfer function, from which we can calculate a response for any frequency. The transfer function of an IIR filter is given by

\(H(z)=\frac{a_{0}z^{0}+a_{1}z^{-1}+a_{2}z^{-2}…}{b_{0}z^{0}+b_{1}z^{-1}+b_{2}z^{-2}…}\)

z^0 is 1, of course, as is any value raised to the power of 0. And for normalized biquads, b0 is always 1, but I'll leave it here for generality—you'll see why soon. To translate that to an analog response, we substitute e^(jω) for z, where ω is 2π*freq, with freq being the normalized frequency, or frequency/samplerate:

\(H(e^{j\omega})=\frac{a_{0}e^{0j\omega}+a_{1}e^{-1j\omega}+a_{2}e^{-2j\omega}…}{b_{0}e^{0j\omega}+b_{1}e^{-1j\omega}+b_{2}e^{-2j\omega}…}\)

Again, e^(0jω) is simply 1.0, but left so you can see the pattern. Here it is restated using summations of an arbitrary number of poles and zeros:

\(H(e^{j\omega})=\frac{\sum_{n=0}^{N}a_{n}e^{-nj\omega}}{\sum_{m=0}^{M}b_{m}e^{-mj\omega}}\)

For any angular frequency, ω, we can solve H(e^(jω)). A normalized frequency of 0.5 is half the sample rate, so we probably want to step it from 0 to 0.5—ω from 0 to π—for however many points we want to evaluate and plot.

Coding it

From that last equation, we can see that a single FOR loop will handle the top or the bottom coefficient sets. Here, we'll code that into a function that can evaluate either zeros (a terms) or poles (b terms).
We'll refer to this as our direct evaluation function, since it evaluates the coefficients directly (as opposed to evaluating an impulse response). You've probably noticed the j, meaning an imaginary part of a complex number—the output will be complex. That's OK, the output of an FFT is complex too, and we know how to get magnitude and phase from it already. Some languages support complex arithmetic, and have no problem evaluating "e**(-2*j*0.5)"—either directly, or with an "exp" (exponential) function. It's pretty easy in Python, for instance. (Something like, coef[idx] * math.e**(-idx * w * 1j), as the variable idx steps through the coefficients array.) For languages that don't, we can use Euler's formula, e^(jx) = cos(x) + j * sin(x); that is, the real part is the cosine of the argument, and the imaginary part is the sine of it. (Remember, j is the same as i—electrical engineers already used i to symbolize current, so they diverged from physicists and used j. Computer programmers often use j, maybe because i is a commonly used index variable.)

So, we create our function, run it on the numerator coefficients for a given frequency, run it again on the denominator coefficients, and divide the two. The result will be complex—taking the absolute value gives us the magnitude response at that frequency.

Revisiting the FIR

Since we already had a precise method of looking at FIR response via the FFT/DFT, let's compare the two methods to see how similar they are. To use our new method for the case of an FIR, we note that the denominator is simply 1, so there is no denominator to evaluate, no need for division. So: For the FIR, we simply run the coefficients through our evaluation function, and take the absolute value of the complex result to get the magnitude response. Does that sound familiar? It's the same process we outlined using the FFT.

And back to IIR

OK, we just showed that our new evaluation function and the FFT are equivalent. (There is a difference—our evaluation function can check the response at an arbitrary frequency, whereas the FFT frequency spacing is defined by the FFT size, but we'll set that aside for the moment. For a given frequency, the two produce identical results.) Now, if the direct evaluation function and the FFT give the same results, for the same frequency point, and the numerator and denominator are evaluated by the same function, by extension we could also get a precise evaluation by substituting an FFT process for both the numerator and denominator, and dividing the two as before. Note that we're no longer talking about the FFT of the impulse response, but the coefficients themselves. That means we no longer have the problem of getting the response of an impulse that can ring out for an unknown time—we have a known number of coefficients to run through the FFT.

Which is better?

In general, the answer is our direct evaluation method. Why? We can decide exactly where we want to evaluate each point. That means that we can just as easily plot with log frequency as we can linear. But, there may be times that the FFT is more suitable—it is extremely efficient for power-of-2 lengths. (And don't forget that we can use a real FFT—the upper half of the general FFT results would mirror the lower half and not be needed.)

An implementation

We probably want to evaluate ω from 0 to π, corresponding to a range of half the sample rate.
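As an aside before continuing with the implementation (an illustration, not code from the article): for a language without complex-number support, the same evaluation can be carried out with two real accumulators by applying Euler's formula directly, since e**(-n*j*w) = cos(n*w) - j*sin(n*w). A minimal sketch:

import math

def coefs_eval_real(coefs, w):
    # Accumulate the real (cosine) and imaginary (sine) parts separately.
    re, im = 0.0, 0.0
    for n, c in enumerate(coefs):
        re += c * math.cos(n * w)
        im -= c * math.sin(n * w)
    return re, im   # magnitude = sqrt(re*re + im*im), phase = atan2(im, re)

Dividing the numerator result by the denominator result then requires a complex division, which can likewise be expanded into real arithmetic.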
So, we'd call the evaluation function with the numerator coefficients and with the denominator coefficients, for every ω that we want to know (spacing can be linear or log), and divide the two. For frequency response, we'd take the absolute value (equivalently, the square root of the sum of the squared real and imaginary parts) of each complex result to obtain magnitude, and the arc tangent of the imaginary part divided by the real part for phase (specifically, we use the atan2 function, which takes quadrants into account). Note that this is the same conversion we use for FFT results, as you can see in my article, A gentle introduction to the FFT.

\(magnitude:=\left |H \right |=abs(H)=\sqrt{H.real^2+H.imag^2}\)

\(phase := atan2(H.imag,H.real)\)

For now, I'll leave you with some Python code, as it's cleaner and leaner than a C or C++ implementation. It will make it easier to transfer to any language you might want (Python can be quite compact and elegant—I'm going for easy to understand and translate with this code). Here's the direct evaluation routine corresponding to the summation part of the equation (you'll also need to "import numpy" to have e available—also available in the math library, but we'll use numpy later, so we'll stick with numpy alone):

import numpy as np

# direct evaluation of coefficients at a given angular frequency
def coefsEval(coefs, w):
    res = 0
    idx = 0
    for x in coefs:
        res += x * np.e**(-idx * 1j * w)
        idx += 1
    return res

Again, we call this with the coefficients for each frequency of interest. Once for the numerator coefficients (the a coefficients on this website, corresponding to zeros), once for the denominator coefficients (b, for the poles—and don't forget that if there is no b0, the case for a normalized filter, insert a 1.0 in its place). Divide the first result by the second. Use abs (or equivalent) for magnitude and atan2 for phase on the result. Repeat for every frequency of interest.

Here's a Python function that evaluates numerator and denominator coefficients at an arbitrary number of points from 0 to π, inclusive, with linear spacing, returning arrays of magnitude (in dB) and phase (in radians, between +/- π). Note that it also needs the standard math module for pi and atan2:

import math

# filter response, evaluated at numPoints from 0-pi, inclusive
def filterEval(zeros, poles, numPoints):
    magdB = np.empty(0)
    phase = np.empty(0)
    for jdx in range(0, numPoints):
        w = jdx * math.pi / (numPoints - 1)
        resZeros = coefsEval(zeros, w)
        resPoles = coefsEval(poles, w)

        # output magnitude in dB, phase in radians
        Hw = resZeros / resPoles
        mag = abs(Hw)
        if mag == 0:
            mag = 0.0000000001    # limit to -200 dB for log
        magdB = np.append(magdB, 20 * np.log10(mag))
        phase = np.append(phase, math.atan2(Hw.imag, Hw.real))
    return (magdB, phase)

Here's an example of evaluating biquad coefficients at 64 evenly spaced frequencies from 0 Hz to half the sample rate (these coefficients are right out of the biquad calculator on this website—don't forget to include b0 = 1.0):

zeros = [ 0.2513643668578741, 0.5027287337157482, 0.2513643668578741 ]
poles = [ 1.0, -0.17123074520885395, 0.1766882126403502 ]

(magdB, phase) = filterEval(zeros, poles, 64)

print("\nMagnitude:\n")
for x in magdB:
    print(x)
print("\nPhase:\n")
for x in phase:
    print(x)

Next up, a JavaScript widget to plot magnitude and phase of arbitrary filter coefficients.

Extra credit

The direct evaluation function performs a Fourier analysis at a frequency of interest. For better understanding, reconcile it with the discrete Fourier transform described in A gentle introduction to the FFT.
In that article, I describe probing the signal with cosine and sine waves to obtain the response at a given frequency. Look again at Euler's formula, which shows that e^(jω) is cosine (real part) and sine (imaginary part), which the article alludes to under the section "Getting complex". You should understand that the direct evaluation function presented here could be used to produce a DFT (given complete evaluation of the signals at appropriately spaced frequencies). The main difference is that for this analysis, we need not do a complete and reversible transform—we need only analyze the frequency response values that we want to graph.

Nice sum up! This is actually what scipy.signal.freqz does with the polynomial. I would definitely change the Python code though as you could benefit greatly from vector instructions. The first would be np.exp() instead of np.e** in the coefs loop, and then in your second script, you can use array masking to vectorize the range call.

Thanks for the input, Matthieu, good points. Yes, if I really needed a Python routine, I'd definitely code it differently, but I'm going for easy translation. In fact, I wrote a JavaScript widget to display frequency and phase response (will post tonight, probably, after I get a chance to add some text explanation and instructions) that was nearly a direct translation. I love Python, but it's not so good for translatable examples unless you code "C-like". 😉

Hi Nigel, I am always amazed at your ability to write easy to understand descriptions of things that are not so easy to understand, if that makes sense. 😉 Really love when you post new articles; they are always useful to me. Thank you very much!

Hi Nigel, as usual, amazing job 🙂 Why did you choose nPoints = 64? What does it mean? Something near "10 Hz"? Or is it just random? I mean: which w must I supply, for example, if I want to catch the magnitude at 100 and 2300 Hz?

The filterEval function shown specifically divides the frequency range into evenly spaced points and returns an array of readings. That's what you'd want to do when plotting linear response, for instance. I was going for minimal code here, to keep the idea clear. But in practice you might want to break the inner loop out to evaluate a single point—to support log frequency spacing, for instance. You can also look at the JavaScript source code for the grapher widget in your browser; here's the coefficient evaluation function for a single point (normalized frequency—frequency in Hz divided by sample rate); the function implements just the single summation operation in the equations in the article—the coefsEval Python function—so you'd use it on the numerator (zeros) coefficients, the denominator (poles) coefficients, and divide; also note I'm using math.js—it keeps the code a lot cleaner since it handles the complex math:
The whole slope is slowed down in gain. Where am I wrong? If you take a look at some of your intermediate results, I think you’ll see the problem is that you’re not doing complex math. Note that I’m using “math.abs”, etc., not “Math.abs”—I’m using the js.math library. Otherwise, you need to use Euler’s identity, unroll the complex math (more code). What a mistake! I see “0” as real part and I didn’t figure out that exp will produce value as well 🙂 Thanks Just an arbitrary number of points to evaluate. Note that the function explains the parameter—but I added a few words to the text to make it clearer. It’s surprising to find on earlevel.com a resource so precious about equations. We will note your page as a benchmark for Evaluating filter frequency response . We also invite you to link and other web resources for equations like or. Thank you and good luck! Thanks for an awesome article! Helped a lot to understand how to plot these. Thanks, It was all I needed to check my coefficients by plotting then in a spreadsheet. Regards AlanC Here is the simply implementation in C# (without any “magic” Complex computation).
http://www.earlevel.com/main/2016/12/01/evaluating-filter-frequency-response/
The pace of my blogging has dropped somewhat now I'm on a full-time engagement, and so far there haven't been too many stories to share from the engagement. But the one that was perhaps most frustrating and in need of sharing was getting Twitter typeahead working with Aurelia. I will prefix this with a disclaimer – these are the steps as I best recollect them and I haven't re-done them all as I'm not in the mood for breaking something that is (finally) working.

No longer maintained?

The first obstacle was finding the source code for typeahead. The source linked from the official-looking page (linked earlier) points to this GitHub repository which, at the time of writing, had no commits for almost two years. However various Google searches intimated more recent changes, and eventually it became clear that most people were referring to this which, in its own words, "is a maintained fork of twitter.com's autocomplete search library, typeahead.js."

Getting the library

The working typeahead repository can be obtained through npm:

npm install corejs-typeahead --save

This does not come with an index.d.ts so if you're like me and using TypeScript then you'll need a typings file. Despite the strong possibility it is for the wrong typeahead repository, the typings file returned by typings search typeahead seemed to work fine once installed by typings install dt~typeahead -GS

Typeahead is dependent on jQuery v2, so after fetching via npm, in my package.json I fixed the version as follows: "jquery": "^2". Finding typings for jQuery was straightforward but getting TypeScript to be happy with it was harder. In the end I found simply import 'jquery'; to be the most effective way of including it in a TypeScript file. Then there was the challenge with a collision on the $ variable (with angular-protractor) which I could only resolve by editing the typings file for jQuery:

declare module 'jquery' {
    export = jQuery;
}
declare var jQuery: JQueryStatic;
//declare var $: JQueryStatic;

This does mean jQuery must be referenced as jQuery(...) rather than $(...) in source code, but personally I don't mind – as good as jQuery is, my sensitivities find there to be something a tiny bit wrong with a library deciding it can own a single-letter global variable.

Aurelia Bundles

Typeahead isn't really a module and thus the TypeScript declaration file doesn't export one, but it does for Bloodhound, which is hard to avoid if you want to do anything useful with typeahead. To import these they have to be included in the aurelia bundle, and as best as I can tell, they have to be separate. So my aurelia.json looks like this:

"jquery",
{
    "name": "Bloodhound",
    "path": "../node_modules/corejs-typeahead",
    "main": "dist/bloodhound",
    "deps": ["jquery"]
},
{
    "name": "typeahead",
    "path": "../node_modules/corejs-typeahead",
    "main": "dist/typeahead.jquery",
    "deps": ["jquery", "Bloodhound"]
},

To actually use them, they can be fully imported as follows:

import 'jquery';
import 'Bloodhound';
import 'typeahead';

Bootstrap Integration

At this stage typeahead was working, but didn't fit in with the UI, which was otherwise styled via Bootstrap 3. The advice from the internet was to use Bootstrap-3-Typeahead, which seemed fine initially, but I simply couldn't get it working with remote data sources. After a lot of stepping through library source code and periodic googling I discovered this bridge simply didn't support remote (according to that ticket it may now have been fixed).
So instead of using the bridge, I got the styles from here and applied a couple of modifications:

.tt-hint { border: 0 } /* Support inline use */
.form-inline span.twitter-typeahead { width: auto; vertical-align: middle; }

The Code

The end result is an Aurelia element that does typeahead. The code for that is in a gist here. The element requires a prepare-query function which sets the remote URL and applies any necessary headers. This is an example of one:

prepareQueryFn(query, settings: JQueryAjaxSettings) {
    settings.url = this.apiUrl + 'search?query=' + encodeURIComponent(query);
    settings.headers = {
        'Accept': 'application/json',
        'Authorization': this.getAuthToken()
    }
    return settings;
}
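For context, here is roughly how a prepare function like this gets wired into Bloodhound and the typeahead plugin. This is a sketch based on the standard corejs-typeahead/typeahead.js remote API, not code from the gist; the input element, dataset name and display key are placeholders:

const suggestions = new Bloodhound({
    datumTokenizer: Bloodhound.tokenizers.whitespace,
    queryTokenizer: Bloodhound.tokenizers.whitespace,
    remote: {
        url: 'placeholder',   // overwritten by prepare on every request
        prepare: (query, settings) => this.prepareQueryFn(query, settings)
    }
});

jQuery(this.inputElement).typeahead(
    { hint: true, highlight: true, minLength: 2 },
    { name: 'results', display: 'name', source: suggestions }
);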
https://winterlimelight.com/2017/02/21/typeahead-and-aurelia/
scala-swing (mostly-unsupported)

This is now community maintained by @Sciss and @benhutchison. If you are interested in helping then contact them or @adriaanm.

Adding an sbt dependency

To use scala-swing from sbt, add this to your build.sbt:

libraryDependencies += "org.scala-lang.modules" %% "scala-swing" % "2.1.1"

About scala-swing

This is a UI library that wraps most of Java Swing for Scala in a straightforward manner. The widget class hierarchy loosely resembles that of Java Swing. The main differences are:

- In Java Swing all components are containers per default. This does not make much sense for a number of components, like TextField, CheckBox, RadioButton, and so on. Our guess is that this architecture was chosen because Java lacks multiple inheritance. In scala-swing, components that can have child components extend the Container trait.
- Layout managers and panels are coupled. There is no way to exchange the layout manager of a panel. As a result, the layout constraints for widgets can be typed. (Note that you gain more type-safety and do not lose much flexibility here. Besides being not a common operation, exchanging the layout manager of a panel in Java Swing almost always leads to exchanging the layout constraints for every one of the panel's child components. In the end, it is not more work to move all children to a newly created panel.)
- Widget hierarchies are built by adding children to their parent container's contents collection. The typical usage style is to create anonymous subclasses of the widgets to customize their properties, and nest children and event reactions.
- The scala-swing event system follows a different approach than the underlying Java system. Instead of adding event listeners with a particular interface (such as java.awt.ActionListener), a Reactor instance announces the interest in receiving events by calling listenTo for a Publisher. Publishers are also reactors and listen to themselves per default as a convenience. A reactor contains an object reactions which serves as a convenient place to register observers by adding partial functions that pattern match for any event that the observer is interested in. This is shown in the examples section below.
- For more details see SIP-8

The library comprises two main packages:

- scala.swing: All widget classes and traits.
- scala.swing.event: The event hierarchy.

Examples

A number of examples can be found in the examples project. A good place to start is [16] scala.swing.examples.UIDemo. This pulls all the other examples into a tabbed window.
$ sbt examples/run

Multiple main classes detected, select one to run:

 [1] scala.swing.examples.ButtonApp
 [2] scala.swing.examples.CelsiusConverter
 [3] scala.swing.examples.CelsiusConverter2
 [4] scala.swing.examples.ColorChooserDemo
 [5] scala.swing.examples.ComboBoxes
 [6] scala.swing.examples.CountButton
 [7] scala.swing.examples.Dialogs
 [8] scala.swing.examples.GridBagDemo
 [9] scala.swing.examples.HelloWorld
 [10] scala.swing.examples.LabelTest
 [11] scala.swing.examples.LinePainting
 [12] scala.swing.examples.ListViewDemo
 [13] scala.swing.examples.PopupDemo
 [14] scala.swing.examples.SwingApp
 [15] scala.swing.examples.TableSelection
 [16] scala.swing.examples.UIDemo

Enter number:

Frame with a Button

The following example shows how to plug components and containers together and react to a mouse click on a button:

import scala.swing._

new Frame {
  title = "Hello world"

  contents = new FlowPanel {
    contents += new Label("Launch rainbows:")
    contents += new Button("Click me") {
      reactions += {
        case event.ButtonClicked(_) =>
          println("All the colours!")
      }
    }
  }

  pack()
  centerOnScreen()
  open()
}

Versions

- The 1.0.x branch is compiled with JDK 6 and released for Scala 2.10 and 2.11. The 1.0.x releases can be used with both Scala versions on JDK 6 or newer.
- The 2.0.x branch is compiled with JDK 8 and released for Scala 2.11 and 2.12.
  - When using Scala 2.11, you can use the scala-swing 2.0.x releases on JDK 6 or newer.
  - Scala 2.12 requires you to use JDK 8 (that has nothing to do with scala-swing).
- The 2.1.x series adds support for Scala 2.13, while dropping Scala 2.10.

The reason to have different major versions is to allow for binary incompatible changes. Also, some java-swing classes were generified in JDK 7 (see SI-3634) and require the scala-swing sources to be adjusted.

API documentation (Scaladoc)

The API documentation for scala-swing can be found at.

Current Work

Current changes are being made on the work branch. Last published version is found on the main branch.
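As an addendum to the examples above (this snippet is not part of the repository itself): the listenTo/reactions mechanism described in the "About scala-swing" section can also be used to observe one component from another, rather than nesting the reaction inside the button. A small illustrative sketch:

import scala.swing._
import scala.swing.event._

object ListenToExample extends SimpleSwingApplication {
  val button = new Button("Click me")
  val label = new Label("No clicks yet")

  def top = new MainFrame {
    title = "listenTo example"
    contents = new BoxPanel(Orientation.Vertical) {
      contents += button
      contents += label
    }
    listenTo(button)                 // register interest in the button's events
    reactions += {
      case ButtonClicked(`button`) => label.text = "Button was clicked"
    }
  }
}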
https://index.scala-lang.org/scala/scala-swing/scala-swing/2.0.0?target=_2.12
On date Tuesday 2009-04-21 23:14:04 +0200, Diego Biurrun encoded: > On Tue, Apr 21, 2009 at 09:35:01PM +0200, Stefano Sabatini wrote: > > On date Tuesday 2009-04-21 21:33:06 +0200, Stefano Sabatini encoded: > > > On date Tuesday 2009-04-21 21:20:25 +0200, Benjamin Larsson encoded: > > > > > > > > > > ? > > > > Attached... > > > > --- libavcodec/pixdesc.h (revision 18646) > > +++ libavcodec/pixdesc.h (working copy) > > @@ -19,6 +19,9 @@ > > > > +#ifndef AVCODEC_PIXDESC_H > > +#define AVCODEC_PIXDESC_H > > + > > @@ -188,3 +191,5 @@ > > + > > +#endif /* AVCODEC_PIXDESC_H */ > > I don't think you need to bother sending patches for this. It was just > Michael not caring about these things as usual.. Mrmh, applied, thanks for replying... Regards. -- FFmpeg = Forgiving and Free Marvellous Proud Empowered Gorilla
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-April/060149.html
14 November 2012 07:56 [Source: ICIS news]

SINGAPORE (ICIS)--The Chinese producer took its No 2 line off line on 20 October for maintenance until 4 November, because of some technical glitches. However, the line has resumed production since 11 November, according to the source. Prior to the shutdown, the unit was operating at 95% capacity, the source added. The shutdown of the methanol line has little impact on the domestic spot methanol market, traders said. Methanol prices in

Meanwhile, Shanxi Kingboard Wanxinda's 100,000 tonne/year No 1 line at the same site is running at full operating rates, the
http://www.icis.com/Articles/2012/11/14/9613798/chinas-shanxi-kingboard-wanxinda-runs-no-2-methanol-line-at.html
Inspired by Brad Abrams' marvelous series of blog posts, I have decided to create a simple project demonstrating how to harness the enormous power of RIA Services with Telerik RadGridView for Silverlight. I have decided to use the new Chinook database after reading this wonderful post by Tim Heuer explaining how to work with relational data in the RIA Services paradigm. Make sure you have this database installed on the default instance of your SQL Server 2008 Express or you will have to modify the connection string as needed. In one of my previous blog posts I have thoroughly explained How To Display Hierarchical Data with Row Details. Understanding Row Details is a must before you can go on. So let's get going.

1. Create a new Silverlight Application called MasterDetailsWithRIAServices. Host the Silverlight application in a new ASP.NET Web Application and enable .NET RIA Services.

2. In the ASP.NET Web Application create a new folder named Models.

3. Add a new "ADO.NET Entity Data Model" called "ChinookModel" to the Models folder. You can use all kinds of other models, but I decided to go with an Entity Framework model.

4. Select "Generate from database" and connect to the instance where you have installed the Chinook database. Save the entity connection settings as "ChinookEntities".

5. Select the Album and Artist tables. The model namespace should read "ChinookModel". The designer will then open and you should see something like this:

6. Rebuild the solution.

7. In the ASP.NET Web Application create a new folder named Services.

8. Add a new "Domain Service Class" and call it "ChinookService".

9. Choose the "ChinookEntities" DataContext. For this demo we won't need editing and metadata classes:

10. Clicking OK will generate ChinookService.cs in the Services folder.

11. Entity Framework has some weird ways of naming the generated classes, so I have renamed the query methods to be in plural form, i.e. GetArtists instead of GetArtist.

12. Since we are doing a master-details demo, let's add another query method that will return all albums given the id of the artist:
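A query method along these lines does the job. This is a sketch only; adjust it to your generated entity model. In a LinqToEntitiesDomainService the generated entity sets are exposed through ObjectContext:

public IQueryable<Album> GetAlbumsForArtistId(int artistId)
{
    // Return only the albums belonging to the requested artist.
    return this.ObjectContext.Albums.Where(a => a.ArtistId == artistId);
}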
Also, notice that when you click on a row that was already expanded, the data was cached and the row details appear immediately. Now, let’s see what are my favorite albums, shall we: You got to love RIA Services. Here is the full source code of the demo. Enjoy! Subscribe to be the first to get our expert-written articles and tutorials for developers!
https://www.telerik.com/blogs/asynchronous-master-details-with-radgridview-for-silverlight-and-wcf-ria-services
A few days ago, Ned Batchelder's post about deleting code caught my attention!

What Is Dead May Never Die

This heading isn't just an oh-so-clever and timely pop culture reference. Dead code, that is, code that can't possibly be executed by your program, is a real hindrance to the maintainability of your codebase. How many times have you gone to add what seemed like a simple feature or improvement, only to be stymied by the complexity of the code you have to work around and within? How much nicer would your life be if the practice of adding a feature or fixing a bug was as easy as you actually thought it would be during sprint planning? Each time you want to make a change, you must consider how it interacts with each of the existing features, quirks, known bugs, and limitations of all the code that surrounds it. By having less code surrounding the feature you want to add, there's less to consider and less that can go wrong. Dead code is especially pernicious, because it looks like you need to consider interactions with it, but, since it's dead, it's merely a distraction. It can't possibly benefit you since it's never called. The fact that dead code might never actually die is an existential threat to your ability to work with a given codebase. In the limit, if code that isn't called is never culled, the size of your application will grow forever. Before you know it, what might only be a few thousand lines of actual functionality is surrounded by orders of magnitude more code that, by definition, does nothing of value.

It's Got to Go

Ned (Batchelder, not Stark) was a little more nuanced and diplomatic than I'm going to be here. I say: scorch the earth and leave no code alive. The best code is code that you don't even have. For those less audacious than I, remember that version control has your back in case you ever need that code again. That said, I've never experienced a need to put something back that I have previously deleted, at least not in the literal sense of adding back in, line for line, verbatim, a section of code I'd previously deleted. Of course I'm not talking about things like reverting wrong-headed commits here — we're all human, and I make as many mistakes as the next person. What I mean is, I've never deleted a feature, shipped to production, then come back weeks, or months later and thought to myself, "boy howdy, that code I wrote a year or more ago was probably pretty good, so let's put it back now." Codebases live and evolve with time, so the old code probably doesn't fit with the new ideas, techniques, frameworks, and styles in use today. I might refer back to the old version for a refresher, especially if it's particularly subtle, but I've never brought code back in, wholesale. So, do yourself — and your team — a favor, and delete dead code as soon as you notice it.

How Did We Get Here?

Ned's post goes into great detail on how and why dead code happens — perhaps the person making a change didn't think the code would be gone forever, and so commented it out or conditionally compiled it. Perhaps the person making a change didn't know enough to know that the code was actually dead (about which more later). I'll add another hypothesis to the list: we might all just be lazy. It's definitely easier not to do something (i.e. to leave the code as-is) than to do something (delete it). Laziness is, after all, one of the three great virtues of a programmer. But the Laziness that Larry Wall was talking about isn't this kind, but another kind: "The quality that makes you go to great effort to reduce overall energy expenditure." Viewed this way, deleting dead code is an act of capital-L Laziness — doing something that's easy now to prevent yourself from having to do something hard later. We could all stand to develop more of this kind of Laziness, what I like to think of as "disciplined laziness," in our day-to-day habits.

How Do We Get Out Of Here?

I spend most of my time programming in Python, where, unfortunately, IDEs can't usually correctly analyze a complete codebase and identify never-called code automatically. But, with a combination of discipline and some run-time tooling, we can attack this problem from two sides. For simple cases, a better sense of situational awareness can help identify and remove dead code while you're making changes. Imagine you're working on a particular function, and you notice that a branch of an if/else clause can't be executed based on the valid values of the surrounding code. I call this "dead code in the small," and this is quite easy to reason about and remove, but it does require a bit more effort than one might ordinarily expend. Until you develop the habit of noticing this during the course of your ordinary programming routine, you can add a step to your pre-commit checklist: review the code around your changes for any now-dead code. This could happen just before you submit the code to your co-workers for review (you do do code review, right?) so that they don't have to repeat that process while reading through your changes.
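A trivial illustration of "dead code in the small" (invented for this example, not taken from either post): once the surrounding code guarantees the input, one branch can never run and should simply be deleted.

def describe(count):
    # count is validated upstream and is always >= 0 by the time we get here
    if count >= 0:
        return "%d items" % count
    else:
        # dead: this branch can never execute given the guarantee above
        return "invalid count"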
But the Laziness that Larry Wall was talking about isn’t this kind, but another kind: "The quality that makes you go to great effort to reduce overall energy expenditure." Viewed this way, deleting dead code is an act of capital-L Laziness — doing something that’s easy now to prevent yourself from having to do something hard later. We could all stand to develop more of this kind of Laziness, what I like to think of as "disciplined laziness," in our day-to-day habits. How Do We Get Out Of Here? I spend most of my time programming in Python, where, unfortunately, IDEs can’t usually correctly analyze a complete codebase and identify never-called code automatically. But, with a combination of discipline and some run-time tooling, we can attack this problem from two sides. For simple cases, a better sense of situational awareness can help identify and remove dead code while you’re making changes. Imagine you’re working on a particular function, and you notice that a branch of an if / else phrase can’t be executed based on the valid values of the surrounding code. I call this "dead code in the small," and this is quite easy to reason about and remove, but it does require a bit more effort than one might ordinarily expend. Until you develop the habit of noticing this during the course of your ordinary programming routine, you can add a step to your pre-commit checklist: review the code around your changes for any now-dead code. This could happen just before you submit the code to your co-workers for review (you do do code review, right?) so that they don’t have to repeat that process while reading through your changes. Another kind of dead code happens when you remove the last usage of a class or function from within the code you’re changing, without realizing that it’s the last place that uses it. This is "dead code in the large," and is harder to discover in the course of ordinary programming unless you’re lucky enough to have eidetic memory or know the codebase like the back of your hand. This is where run-time tooling can help us out. At Magnetic , we’re using Ned’s (yes, the same Ned) coverage.py package to help inform our decisions about dead code. Ordinarily coverage is used during testing to ensure that your test cases appropriately exercise the code under test, but we also use it within our code "running as normal" to get insight into what is or isn’t used: import coverage cov = coverage.Coverage( data_file="/path/to/my_program.coverage", auto_data=True, cover_pylib=False, branch=True, source=["/path/to/my/program"], ) cov.start() # ... do some stuff ... cov.stop() cov.save() This sets up a Coverage object with a few options that make the report more usable. First, we tell coverage where to save its data (we’ll use that later to produce a nice HTML report of what is and isn’t used), and ask it to automatically load and append to that file with auto_data=True . Next we ask it not to bother calculating coverage over the standard library or in installed packages — that’s not our code, so we’d expect that a lot of what’s in there might not be used by us. It’s not dead code that we need to maintain, so we can safely ignore it. We ask it to compute branch coverage (whether the true and false conditions of each if statement are hit). And finally, we point it out our sources, so that it can link its knowledge of what is or isn’t called back to the source code for report computation. 
After our program runs, we can compute the HTML coverage report like:

$ COVERAGE_FILE=/path/to/my_program.coverage coverage html -d /path/to/output

Which generates a report like:

(A complete example HTML coverage report is available as part of the coverage.py docs.)

The lines highlighted in red are lines that were never hit during the recorded execution of the program — these are candidate lines (and, by extension, methods) for dead code elimination. I'll leave you with three warnings about using this approach to finding and removing dead code:

- Be careful when reviewing the results of a coverage run — the fact that a line or function wasn't executed during a single run of the program doesn't mean they're necessarily dead or unreachable in general. You must still check the codebase to determine whether they're completely dead in your application.
- Computing coverage means your program needs to do more work, so it will become slower when run in this mode. I wouldn't recommend running this all the time in production, but in a staging environment or in targeted scenarios you'll probably be OK. As always, if performance is an important concern, you should definitely measure what impact coverage has before you run it.
- Finally, don't trust code coverage reports from testing runs to find dead code. Some code might be dead, save for the tests that exercise it; and some code might be alive, but untested!

Parting Thoughts

To you, dear reader, I must apologize. I left out an important part of Ned's blog post when I quoted him earlier. He says:

There's no single answer to the question, because it depends on the class and the method. […] A coarse answer could be: if the class is part of the framework, then leave it, if it is part of the application, then remove it.

If you're an author of a library or framework, rather than an application, then the question of dead code becomes in some ways harder and in other ways easier. In essence, you can't ever remove a part of your public API (except during major version bumps). Essentially, all of your public API is live code, even if you, yourself, don't use it. But behind the public interface, dead code can still happen, and it should still be removed.

Delete your dead code!
Python Programming, news on the Voidspace Python Projects and all things techie. CarbonPython - a Faster Python for .NET This is very exciting, a new development from Antonio Cuni the author of the .NET backend for PyPy: CarbonPython is an RPython compiler for PyPy that can produce .NET assemblies. These assemblies can be used directly from C# and IronPython. So why would you want to use this over IronPython? Well, RPython is a restricted subset of Python that can be statically compiled. So CarbonPython produces code that runs faster than IronPython, much faster. Antonio claims that compiled RPython can run as fast as C# (and 250 times faster than IronPython), and the benchmarks from the CLI backend basically justify these claims. This means that IronPython applications in particular can move speed critical sections of code into RPython rather than into C#. great news Another advantage is that classes compiled with CarbonPython can be consumed by C#, which is not the case with classes defined in IronPython. This is still pre-alpha and there are limitations: you can't reference assemblies outside mscorlib, the API for accessing .NET classes is still being worked on, and you can't use .NET indexers or properties directly yet (to name a few). This is great news though, and did I say that it was very exciting? Best of all though is the name - can you guess why it is called CarbonPython? Yup, CarbonPython combined with IronPython produces the even tougher SteelPython... Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-06-28 22:51:50 | | Categories: IronPython, Python Descriptors and Attribute Lookup Order over the last few months I've gradually been sidling up to a better understanding of the descriptor protocol. Descriptors are how things like properties are implemented, and affect attribute lookup through the mysterious __getattribute__ method. Descriptors mean that if an object being looked up as an attribute has __get__, __set__ or __delete__ defined, they will be called to fetch (or set or delete) the attribute. They are different from __getattr__, __setattr__ and __delattr__ in that the descriptor protocol is used for normal attribute access. In particular, __getattr__ is only invoked if the attribute isn't found the 'usual' way. I was blocked on understanding the descriptor protocol before by the following line in the 'How-To Guide': If an instance's dictionary has an entry with the same name as a data descriptor, the data descriptor takes precedence. The missing piece of the jigsaw was how attributes are looked up, specifically the order they are looked up. I knew that if an instance has an attribute, then that takes precedence over a class attribute. So how could a 'data-descriptor' ever take precedence? Surely if Python finds an instance attribute it stops looking? In fact (and as the guide explains...) this isn't how attributes are looked up. When you lookup an attribute on an instance (for example some_object.attribute), Python first looks on the class. If it isn't found it looks on the base class, and all the way up using the mro to search all the base classes. Finally it looks on the instance, by checking inside its __dict__. If the object is found on the instance and on the class, then usually the instance attribute takes precedence. However, if the class attribute is a data-descriptor (an object that has both __get__ and __set__), then the class attribute will be used instead of the instance attribute. 
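A small standalone sketch (not from the original post) shows the precedence rule in isolation: a data descriptor defines both __get__ and __set__, a non-data descriptor defines only __get__, and only the former beats an entry planted in the instance __dict__.

class DataDescriptor(object):
    def __get__(self, obj, objtype=None):
        return 'data descriptor'
    def __set__(self, obj, value):
        raise AttributeError('read-only')

class NonDataDescriptor(object):
    def __get__(self, obj, objtype=None):
        return 'non-data descriptor'

class Thing(object):
    a = DataDescriptor()
    b = NonDataDescriptor()

thing = Thing()
thing.__dict__['a'] = 'instance value'
thing.__dict__['b'] = 'instance value'

print(thing.a)   # 'data descriptor'  -- the class attribute still wins
print(thing.b)   # 'instance value'   -- the instance dict shadows the non-data descriptor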
What this means, is that if you have an object with a property, even if you poke an attribute with the same name into an instance dictionary, the property will be invoked instead: ... @property ... def attribute(self): ... print 'Property fetched' ... return 'got' ... >>> something = AnObject() >>> something.attribute Property fetched 'got' >>> something.__dict__['attribute'] = 'not got' >>> something.attribute Property fetched 'got' >>> >>> something.__dict__['attribute'] 'not got' >>> One result of this though, is that even a simple attribute access means looking on the class and all the base classes of the object before the instance attribute will be returned. Use locals where possible! Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-06-23 16:20:18 | | Categories: Hacking, Python Gnuplot: Success at Last! I've finally got to grips with Gnuplot. I can now provide with a set of points representing x, y, z co-ordinates (a height map) and have it draw the appropriate surface - with contours. phew I learned a phew things along the way: - When you specify paths in a gnuplot script, you must use '/' as the directory separator (even on good old Windoze) #!$*&$ - When you launch subprocesses with IronPython, you need to put quotes round the path to the gnuplot script (the arguments)! - Interactive mode is useful for experimenting. - You must set the GNUPLOT_FONTPATH environment variable (and possibly GDFONTPATH - I always set both) The (a?) data format that the splot command likes is:. The following Python script generates a data set of x, y, z coordinates from the formula z = sin(x*x + y*y) / (x*x + y*y). It saves a gnuplot script and launches gnuplot to generate the image. from math import sin from System.IO import Path from System.Diagnostics import Process def CreateScript(template, scriptPath, data, imagePath): h = open(scriptPath, 'w') h.write(template % (imagePath.replace('\\', '/'), data)) h.close() def LaunchGnuplot(gnuplotPath, scriptPath, fontPath): proc = Process() proc.StartInfo.FileName = gnuplotPath proc.StartInfo.Arguments = '"%s"' % scriptPath proc.StartInfo.EnvironmentVariables['GDFONTPATH'] = fontPath proc.StartInfo.EnvironmentVariables['GNUPLOT_FONTPATH'] = fontPath proc.StartInfo.UseShellExecute = False proc.Start() proc.WaitForExit() # You might need to change this for Windows 2000 and Windows NT systems fontPath = r'c:\Windows\Fonts' # Various paths needed by the scripts gnuplotPath = os.path.abspath('gnuplot\\wgnuplot.exe') scriptPath = os.path.abspath('create_surface.gp') imagePath = os.path.abspath('image.png') # The template for the gnuplot script # Experiment with this! template = r""" set terminal png nocrop enhanced font verdana 12 size 640,480 set title """ out = [] # Generate the data for y in range(1, 50): v = (6.0 / 50) * y - 3 row = [] for x in range(1, 50): u = (6.0 / 50) * x - 3 try: value = sin(u*u + v*v) / (u*u + v*v) except ZeroDivisionError: # for the point where u and v are both 0 value = 1 row.append('%s %s %s' % (u, v, value)) out.append(row) data = '\n\n '.join('\n '.join(row) for row in out) CreateScript(template, scriptPath, data, imagePath) LaunchGnuplot(gnuplotPath, scriptPath, fontPath) (You can download it here. You will need the gnuplot binary of course.) I'm afraid it is for IronPython, but is should be simple enough to translate it for CPython using the subprocess module. Let me quickly take you through the gnuplot commands it uses (in the template variable):' is replaced with the real path of '%s'). 
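As mentioned above, the IronPython launch code translates fairly directly to CPython's subprocess module. A rough, untested sketch (paths are placeholders, reusing the same environment variables):

import os
import subprocess

def launch_gnuplot(gnuplot_path, script_path, font_path):
    # copy the current environment and add the font variables gnuplot needs
    env = dict(os.environ)
    env['GDFONTPATH'] = font_path
    env['GNUPLOT_FONTPATH'] = font_path
    # passing a list means we don't have to quote the script path ourselves
    subprocess.check_call([gnuplot_path, script_path], env=env)

# launch_gnuplot('gnuplot/wgnuplot.exe', 'create_surface.gp', r'c:\Windows\Fonts')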
The 'title' is actually for the data line rather than for the whole image. Hopefully this will be useful for someone trying to draw charts or graphs on Windows with gnuplot. I think this really is the last entry on charting for a while... Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-06-23 15:40:07 | Categories: Python, IronPython, General Programming
Learning Generative Art; day 7 "Pandora's Box" 🗃
Masato Ohba
May 6

Continuation of My first step in learning Generative Art. I'm finally finishing my seven days challenge to post artworks every day!

"Pandora's Box"

I've tried to express Pandora's box with circles and triangles only. In publishing this article, I should confess that I was truly inspired by evanyou.me's design and code. It reminded me of lightning first; then I came up with the idea to draw Pandora's box with the lightning somehow.

By the way, are you wondering why the drawn "box" is not literally a box, but a kind of a circle? Then let's check the myth again.

The container mentioned in the original story was actually a large storage jar but the word was later mistranslated as "box".

Yes, that's not originally a "box". So I drew it as a form of a jar. Well, I admit that it's possibly still far from a jar though... 😇

// Sorry for the quite ugly code...
var f = 60, r = 0, u = Math.PI * 2, v = Math.cos, q;

function setup() {
  createCanvas(1000, 400);
  frameRate(10)

  // To capture static screenshot
  // noLoop();
  // for(var i=0; i < 10; i++) { draw() }
}

function draw() {
  // background(225, 200); // Try this for white background pattern
  background(25, 200);
  drawLightnings();
  drawCircles();
}

function drawLightnings() {
  // stroke(0, 100); // Try this to emphasize lightnings
  noStroke()
  for(var i=0; i < 10; i++) {
    q = [
      {x: f, y: height * 0.7 + f},
      {x: random(f-10, f+10), y: height * 0.7 - f}
    ]
    while(q[1].x < width + f) drawTriangle(q[0], q[1])
  }
}

function drawTriangle(i, j, direction){
  r -= u / -50;
  c = (v(r)*127+128<<16 | v(r+u/3)*127+128<<8 | v(r+u/3*2)*127+128).toString(16);
  fill(color(
    parseInt(c.substring(0, 2), 16),
    parseInt(c.substring(2, 4), 16),
    parseInt(c.substring(4, 6), 16),
    200));
  beginShape();
  vertex(i.x, i.y);
  vertex(j.x, j.y);
  var k = j.x + (Math.random()*2-0.25)*f;
  var n = y(j.y);
  vertex(k, n);
  endShape(CLOSE);
  q[0] = q[1];
  q[1] = { x: k, y: n };
}

function y(p){
  var t = p + (Math.random() * 2 - 1.1) * f;
  return (t > height || t < 0) ? y(p) : t;
}

function drawCircles() {
  stroke(255, 200);
  var radius = 10;
  for(var i=0; i < 100; i++) {
    fill(color(random(100, 255), random(100, 255), random(255), 100));
    ellipse(random(f-radius, f+radius), random(height - f - radius, height - f + radius), random(50));
  }
}
std::wmemset
From cppreference.com

Defined in header <cwchar>:

wchar_t* wmemset( wchar_t* dest, wchar_t ch, std::size_t count );

Copies the wide character ch into each of the first count wide characters of the wide character array pointed to by dest. If overflow occurs, the behavior is undefined. If count is zero, the function does nothing.

Parameters

dest - pointer to the wide character array to fill
ch - fill wide character
count - number of wide characters to fill

Return value

Returns a copy of dest.

Notes

This function is not locale-sensitive and pays no attention to the values of the wchar_t objects it writes: nulls as well as invalid wide characters are written too.

Example

#include <iostream>
#include <cwchar>
#include <clocale>
#include <locale>

int main()
{
    wchar_t ar[4] = {L'1', L'2', L'3', L'4'};
    std::wmemset(ar, L'\U0001f34c', 2); // replaces [12] with the 🍌 bananas
    std::wmemset(ar+2, L'蕉', 2); // replaces [34] with the 蕉 bananas

    std::setlocale(LC_ALL, "en_US.utf8");
    std::wcout.imbue(std::locale("en_US.utf8"));
    std::wcout << std::wstring(ar, 4) << '\n';
}

Possible output:

🍌🍌蕉蕉
The top menu bar of an application usually just contains other menus, such as file and edit. These are known as "cascades" in Tkinter, and are essentially a menu inside a menu. This may be confusing at first, so let's begin with a very simple example to demonstrate the difference between a menu and a cascade. Create a new Python file called menu.py and add the following code:

import tkinter as tk

win = tk.Tk()
win.geometry('400x300')
lab = tk.Label(win, text="Demo application")
menu = tk.Menu(win)

After importing Tkinter and creating a main window and Label, we make our first Menu widget. As with a lot of widgets, the first argument needed is the master, or parent, in which the widget will be drawn. We draw this menu in our main window ...
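The excerpt breaks off here. As a rough sketch of where such an example usually goes next (this is not the book's own code), a cascade is itself a Menu that gets added to the top menu bar with add_cascade, and the bar is attached to the window with config:

import tkinter as tk

win = tk.Tk()
win.geometry('400x300')

menu = tk.Menu(win)                    # the top menu bar
file_menu = tk.Menu(menu, tearoff=0)   # a cascade: a menu inside the menu bar
file_menu.add_command(label="Quit", command=win.destroy)
menu.add_cascade(label="File", menu=file_menu)

win.config(menu=menu)                  # attach the menu bar to the window
win.mainloop()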
I have a Python dictionary which has a large (~1.5 million) number of keys. The value associated with each key is a number and I only want to report on values that have values greater than two. My current code looks something like: ks_ignored = 0 for k in d.keys(): if( d[k] > 2 ): print "Key(%s) has value %s"%( k, d[k] ) else: ks_ignored += 1 My final report shows that about 1.4 million keys were ignored and this takes a very long time to run (about 6 hours). Is there a simple way to loop through all keys which have a value greater than 2 without having to perform the check inside of the loop that will substantially speed this up? Use dictionary comprehension to get the valid key values: valid_kv = {k:v for k,v in d.iteritems() if v > 2} Ignored keys: ks_ignored = len(d) - len(valid_kv) If what you want is to loop over the result, itertools.ifilter() should work for you. The following is time execution of list comprehension, filter() and itertools.ifilter(): import time import itertools l = [i for i in range(1000000)] t1 = time.time() r1 = [i for i in l if i > 100] t2 = time.time() t3 = time.time() r2 = filter(lambda i: i>100, l) t4 = time.time() t5 = time.time() r3 = itertools.ifilter(lambda i: i>100, l) t6 = time.time() print t2-t1 print t4-t3 print t6-t5 Output: 0.151000022888 # lc 0.100000143051 # filter 0.000999927520752 # ifilter Your solution: res = itertools.ifilter(lambda item: d[item]>2, d) If getting the number of items that do not satisfy your condition is a requirement, you can use filter() like below: res = filter(lambda item: d[item]>2, d) ks_ignored = len(d) - len(res) Or: ks_ignored = len(filter(lambda item: d[item]<=2, d))
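If you are on Python 3 (where iteritems no longer exists and filter returns a lazy iterator), the same ideas translate roughly like this, reusing the dictionary d from the question:

# build only the entries whose value is greater than 2
valid_kv = {k: v for k, v in d.items() if v > 2}
ks_ignored = len(d) - len(valid_kv)

# or stream over them without building a new dict
for k, v in d.items():
    if v > 2:
        print("Key(%s) has value %s" % (k, v))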
Hi, Greetings!! I was working on offsetting the below cad profile The offset i expected was (need solution which is fully automated in python), @Michael_Pryor gave a solution using food4rhino clipper library that works very good for me, but its a rhino command, i wanted it like a python code, i coded it with rs.command but there is much complication, please help to get an answer This is the code i have used, import rhinoscriptsyntax as rs def offset(curve, length = 5, direction = "Inside"): ##ProjectTo = FitToCurve rs.Command("_OffsetPolyline " + "_selid " + str(curve) + " _Enter" + " Distance" + " _Enter" + str(length) + " Side" + " _Enter" + str(direction) + " _Enter" + " _Enter" ) curve = rs.GetObject("select curve", rs.filter.curve) print offset(curve) Please find the attached cad file below, find_corner_points.stp (9.9 KB) Previous forum post : Offset Problem Thanks in advance
XMonad.Util.Loggers Description A collection of simple logger functions and formatting utilities which can be used in the ppExtras field of a pretty-printing status logger format. See XMonad.Hooks.StatusBar.PP - logTitles :: (String -> String) -> (String -> String) -> Logger - logConst :: String -> Logger - logDefault :: Logger -> Logger -> Logger - (.|) :: Logger -> Logger -> Logger - logCurrentOnScreen :: ScreenId -> Logger - logLayoutOnScreen :: ScreenId -> Logger - logTitleOnScreen :: ScreenId -> Logger - logWhenActive :: ScreenId -> Logger -> Logger - logTitlesOnScreen :: ScreenId -> (String -> String) -> (String -> String) -> Usage Use this module by importing it into your ~/.xmonad/xmonad.hs: import XMonad.Util.Loggers Then, add one or more loggers to the ppExtras field of your XMonad.Hooks.StatusBar.PP, possibly with extra formatting . For example: myPP = def {PP. PP. Additional loggers welcome! System Loggers aumixVolume :: Logger Source # Get the current volume with aumix. battery :: Logger Source # Get the battery status (percent charge and charging/discharging status). This is an ugly hack and may not work for some people. At some point it would be nice to make this more general/have fewer dependencies (assumes acpi and sed are installed.) date :: String -> Logger Source #. loadAvg :: Logger Source # Get the load average. This assumes that you have a utility called uptime and that you have sed installed; these are fairly common on GNU/Linux systems but it would be nice to make this more general. maildirNew :: FilePath -> Logger Source # Get a count of new mails in a maildir. maildirUnread :: FilePath -> Logger Source # :: Logger Source # Get the name of the current workspace. logTitles :: (String -> String) -> (String -> String) -> Logger Source # Like logTitlesOnScreen, but directly use the "focused" screen (the one with the currently focused workspace). logDefault :: Logger -> Logger -> Logger Source # If the first logger returns Nothing, the default logger is used. For example, to display a quote when no windows are on the screen, you can do: logDefault logTitle (logConst "Hey, you, you're finally awake.") (.|) :: Logger -> Logger -> Logger Source # An infix operator for logDefault, which can be more convenient to combine multiple loggers. logTitle .| logWhenActive 0 (logConst "*") .| logConst "There's nothing here" XMonad: Screen-specific Loggers It is also possible to bind loggers like logTitle to a specific screen. For example, using logTitleOnScreen 1 will log the title of the focused window on screen 1, even if screen 1 is not currently active. logCurrentOnScreen :: ScreenId -> Logger Source # Get the name of the visible workspace on the given screen. logLayoutOnScreen :: ScreenId -> Logger Source # Get the name of the current layout on the given screen. logTitleOnScreen :: ScreenId -> Logger Source # Get the title (name) of the focused window, on the given screen. logWhenActive :: ScreenId -> Logger -> Logger Source # logTitlesOnScreen Source # Arguments Get the titles of all windows on the visible workspace of the given screen and format them according to the given functions. Example myXmobarPP :: X PP myXmobarPP = pure $ def { ppOrder = [ws, l, _, wins] -> [ws, l, wins] , ppExtras = [logTitles formatFocused formatUnfocused] } where formatFocused = wrap "[" "]" . xmobarColor "#ff79c6" "" . shorten 50 . xmobarStrip formatUnfocused = wrap "(" ")" . xmobarColor "#bd93f9" "" . shorten 30 . 
xmobarStrip Formatting Utilities Combine logger formatting functions to make your ppExtras more colorful and readable. (For convenience, you can use <$> instead of '.' or '$' in hard to read formatting lines. For example: myPP = def { --)" For more information on how to add the pretty-printer to your status bar, please check XMonad.Hooks.StatusBar. -> Logger Source # -> Logger Source # -> Logger Source # Create a "spacer" logger, e.g. logSp 3 -- loggerizes ' '. For more complex "spacers", use fixedWidthL with return Nothing. padL :: Logger -> Logger Source # Pad a logger's output with a leading and trailing space, unless it is X (Nothing) or X (Just ""). dzenColorL :: String -> String -> Logger -> Logger Source # Color a logger's output with dzen foreground and background colors. dzenColorL "green" "#2A4C3F" battery
#include <hallo.h>

Dmagaud@aol.com wrote on Sat Sep 08, 2001 um 02:02:39PM:
> I am attempting to install Debian on a fairly old machine (Intel 486) which

Which kind of CD-ROM interface does this machine have? I guess there is a non-IDE interface, probably one of those old panasonic/sony/matsushita interfaces on a soundcard. In this case you have to dump the driver*.bin disks to floppies, then install as usual, and feed the installer with these floppies when installing modules. Then configure modules and load the driver for your CD-ROM. The name of the driver can probably be found in the Hardware HOWTO.

Gruss/Regards,
Eduard.

--
begin LOVE-LETTER-FOR-YOU.txt.vbs
I am a signature virus. Distribute me until the bitter end
component that is supposed to handle artifacts containing log messages. It is structured as an object implementing a pipeline of various sanitization and postprocessing tasks. Among a ton of other things, there is the groom_logs method, which is static, and fulfills the task of validating and deduplicating the log list of a given artifact.

class MyLogHandler:
    ...
    @staticmethod
    def groom_logs(my_artifact):
        log_list = my_artifact.get("logs")
        if log_list is None:
            return

        def is_valid(log_entry):
            if log_entry.get("level") not in ("info", "warning", "error"):
                return False
            if "message" not in log_entry.keys():
                return False
            return True

        def remove_duplicates(log_list: List[Dict[Any, Any]]):
            # taken from
            seen = set()
            new_l = []
            for d in log_list:
                t = tuple(d.items())
                if t not in seen:
                    seen.add(t)
                    new_l.append(d)
            return new_l

        log_list = remove_duplicates(log_list)
        log_list = [entry for entry in log_list if is_valid(entry)]
        return log_list

A few good things first

The encapsulation for log message sanitization was nice in the overall program's context. The methods were separated nicely, code from external resources was marked and things worked all in all. Let's not forget to tailor some nice things into the review text. That keeps up morale and maintains a constructive atmosphere where good engineering is not taken for granted and possibly overlooked!

Nested function definitions

The functions is_valid and remove_duplicates exist inside the groom_logs method. It is clear that the developer noticed that these definitions are only needed inside the method, so it was a natural decision to include them in exactly that scope. The advantage is that this avoids cluttering the overall namespace. This decision sacrifices two other properties however: readability and testability.

Reading the method code for the first time, you start out in line 5. We get the data, we check for emptiness, so far so good. In line 9 we have to switch the context to a helper method, validating the message dictionary itself. Continuing to line 16, we switch context to how duplicate dictionaries are removed from a list (while preserving the order). In line 28, we finally are back to our main log grooming method, where we make use of the previously defined functions.

Testing the method code can be easy in integration tests. However, how would we unit test is_valid or remove_duplicates? There is no easy way to reach these functions, so testing them as smallest possible units is not feasible. We're bound to testing the overall log grooming method, which is rather complex. This increases the chance of us missing an edge case in our test scenarios and not finding a bug.

An easy fix here is to simply pull the functions out of the class. This has the advantage that the overall MyLogHandler object only contains methods that are relevant to its main business logic. Otherwise, with many static method definitions inside, it might be hard to distinguish between helper methods and actual processing entry points. If the number of module-level helper functions becomes too large, there might be an opportunity to create a helper submodule.
This was out of scope here, however, so here is version two:

from typing import Any, Dict, List


def is_valid_log_entry(log_entry):
    if log_entry.get("level") not in ("info", "warning", "error"):
        return False
    if "message" not in log_entry.keys():
        return False
    return True


def remove_duplicate_logs(log_list: List[Dict[Any, Any]]):
    # taken from
    seen = set()
    new_l = []
    for d in log_list:
        t = tuple(d.items())
        if t not in seen:
            seen.add(t)
            new_l.append(d)
    return new_l


class MyLogHandler:
    @staticmethod
    def groom_logs(my_artifact):
        log_list = my_artifact.get("logs")
        if log_list is None:
            return

        log_list = remove_duplicate_logs(log_list)
        log_list = [entry for entry in log_list if is_valid_log_entry(entry)]
        return log_list

The minor things

Now for the nitpicking. It's the timeless classics that never get old in code reviews (and regularly are committed by myself as well):

- Inconsistent type hints
- Missing docstrings
- The validation function can be made more concise

These don't require a lot of explanation, so I'll leave you just with the slightly nicer validation function:

def is_valid_log_entry(entry):
    level_valid = entry.get("level") in ("info", "warning", "error")
    message_valid = type(entry.get("message")) == str
    return all((level_valid, message_valid))

This has the advantage that it's easier to add new conditions. And we actually fixed a bug, because the type of message was not checked before. Yay!
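One payoff of pulling the helpers to module level is that they can now be unit tested directly. A minimal pytest-style sketch (the module name my_log_handler is made up here; adjust the import to wherever the helpers actually live):

# test_log_helpers.py
from my_log_handler import is_valid_log_entry, remove_duplicate_logs  # hypothetical module name

def test_rejects_unknown_level():
    assert not is_valid_log_entry({"level": "debug", "message": "boom"})

def test_accepts_known_level_with_message():
    assert is_valid_log_entry({"level": "error", "message": "boom"})

def test_remove_duplicate_logs_keeps_first_occurrence():
    logs = [
        {"level": "info", "message": "a"},
        {"level": "info", "message": "a"},
        {"level": "error", "message": "b"},
    ]
    assert remove_duplicate_logs(logs) == [
        {"level": "info", "message": "a"},
        {"level": "error", "message": "b"},
    ]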
Rust Crash Course, Lesson 3: Iterators and Errors Getting Rusty with iterators and errors. Join the DZone community and get the full member experience.Join For Free Last time, we finished off with a bouncy ball implementation with some downsides: lackluster error handling and ugly buffering. It also had another limitation: a static frame size. Today, we’re going to address all of these problems, starting with that last one: let’s get some command line arguments to control the frame size.. Like last time, I’m going to expect you, the reader, to be making changes to the source code along with me. Make sure to actually type in the code while reading! Command Line Arguments We’re going to modify our application as follows: - Accept two command line arguments: the width and the height - Both must be valid u32s - Too many or too few command line arguments will result in an error message Sounds easy enough. In a real application, we would use a proper argument-handling library, like clap. But for now, we’re going lower level. Like we did for the sleep function, let’s start by searching the standard library docs for the word args. The first two entries both look relevant. std::env::ArgsAn iterator over the arguments of a process, yielding a Stringvalue for each argument. std::env::argsReturns the arguments which this program was started with (normally passed via the command line). Now’s a good time to mention that, by strong convention: - Module names (like stdand env) and function names (like args) are snake_cased - Types (like Args) are PascalCased - Exception: primitives like u32and strare lower case The std module has an env module. The env module has both an Args type and a args function. Why do we need both? Even more strangely, let’s look at the type signature for the args function: pub fn args() -> Args The argsfunction returns a value of type Args. If Args was a type synonym for, say, a vector of Strings, this would make sense. But that’s not the case. And if you check out its docs, there aren’t any fields or methods exposed on Args, only trait implementations! The Extra Datatype Pattern Maybe there’s a proper term for this in Rust, but I haven’t seen it myself yet. (If someone has, please let me know so I can use the proper term.) There’s a pervasive pattern in the Rust ecosystem, which in my experience starts with iterators and continues to more advanced topics like futures and async I/O. - We want to have composable interfaces - We also want high performance - Therefore, we define lots of helper data types that allow the compiler to perform some great optimizations - And we define traits as an interface to let these types compose nicely with each other Sound abstract? Don’t worry, we’ll make that concrete in a bit. Here’s the practical outcome of all of this: - We end up programming quite a bit against traits, which provide a common abstractions and lots of helper functions - We get a matching data type for many common functions - Often times, our type signatures will end up being massive, representing all of the different composition we performed (though the new-ish -> impl Iteratorstyle helps with that significantly, see the announcement blog post for more details) Alright, with that out of the way, let’s get back to command line arguments! CLI args via Iterators Let’s play around in an empty file before coming back to bouncy. (Either use cargo new and cargo run, or use rustc directly, your call.) 
If I click on the expand button next to the Iterator trait on the Args docs page, I see this function: fn next(&mut self) -> Option<String> Let’s play with that a bit: use std::env::args; fn main() { let mut args = args(); // Yes, that name shadowing works println!("{:?}", args.next()); println!("{:?}", args.next()); println!("{:?}", args.next()); println!("{:?}", args.next()); } Notice that we had to use let mut, since the next method will mutate the value. Now I’m going to run this with cargo run foo bar: $ cargo run foo bar Compiling args v0.1.0 (/Users/michael/Desktop/tmp/args) Finished dev [unoptimized + debuginfo] target(s) in 1.60s Running `target/debug/args foo bar` Some("target/debug/args") Some("foo") Some("bar") None Nice! It gives us the name of our executable, followed by the command line arguments, returning None when there’s nothing left. (For pedants out there: command line arguments aren’t technically required to have the command name as the first argument, it’s just a really strong convention most tools follow.) Let’s play with this some more. Can you write a loop that prints out all of the command line arguments and then exits? Take a minute, and then I’ll provide some answers. Alright, done? Cool, let’s see some examples! First, we’ll loop with return. use std::env::args; fn main() { let mut args = args(); loop { match args.next() { None => return, Some(arg) => println!("{}", arg), } } } We also don’t need to use return here. Instead of returning from the function, we can just break out of the loop: use std::env::args; fn main() { let mut args = args(); loop { match args.next() { None => break, Some(arg) => println!("{}", arg), } } } Or, if you want to save on some indentation, you can use the if let. use std::env::args; fn main() { let mut args = args(); loop { if let Some(arg) = args.next() { println!("{}", arg); } else { break; // return would work too, but break is nicer // here, as it is more narrowly scoped } } } You can also use while let. Try to guess what that would look like before checking the next example: use std::env::args; fn main() { let mut args = args(); while let Some(arg) = args.next() { println!("{}", arg); } } Getting better! Alright, one final example: use std::env::args; fn main() { for arg in args() { println!("{}", arg); } } Whoa, what?!? Welcome to one of my favorite aspects of Rust. Iterators are a concept built into the language directly, via for loops. A for loop will automate the calling of next(). It also hides away the fact that there’s some mutable state at play, at least to some extent. This is a powerful concept, and allows a lot of code to end up with a more functional style, something I happen to be a big fan of. Skipping It’s all well and good that the first arguments in the name of the executable. But we typically don’t care about that. Can we somehow skip that in our output? Well, here’s one approach: use std::env::args; fn main() { let mut args = args(); let _ = args.next(); // drop it on the floor for arg in args { println!("{}", arg); } } That works, but it’s a bit clumsy, especially compared to our previous version that had no mutable variables. Maybe there’s some other way to skip things. Let’s search the standard library again. I see the first results as std::iter::Skip and std::iter::Iterator::skip. The former is a data type, and the latter is a method on the Iterator trait. Since our Args type implements the Iterator trait, we can use it. Nice! 
Side note Haskellers: skip is like drop in most Haskell libraries, like Data.List or vector. drop has a totally different meaning in Rust (dropping owned data), so skip is a better name in Rust. Let’s look at some signatures from the docs above: pub struct Skip<I> { /* fields omitted */ } fn skip(self, n: usize) -> Skip<Self> Hmm… deep breaths. Skip is a data type that is parameterized over some data type, I. This is a common pattern in iterators: Skipwraps around an existing data type and adds some new functionality to how it iterates. The skip method will consume an existing iterator, take the number of arguments to skip, and return a new Skip<OrigDataType> value. How do I know it consumes the original iterator? The first parameter is self, not &self or &mut self. That seemed like a lot of concepts. Fortunately, usage is pretty easy: use std::env::args; fn main() { for arg in args().skip(1) { println!("{}", arg); } } Nice! Exercise 1 Type inference lets the program above work just fine without any type annotations. However, it’s a good idea to get used to the generated types, since you’ll see them all too often in error messages. Get the program below to compile by fixing the type signature. Try to do it without using compiler at first, since the error messages will almost give the answer away. use std::env::{args, Args}; use std::iter::Skip; fn main() { let args: Args = args().skip(1); for arg in args { println!("{}", arg); } } This layering-of-datatypes approach, as mentioned above, is a real boon to performance. Iterator-heavy code will often compile down to an efficient loop, comparable with the best hand-rolled loop you could have written. However, iterator code is much higher level, more declarative, and easy to maintain and extend. There’s a lot more to iterators, but we’re going to stop there for the moment, since we still want to process our command line parameters, and we need to learn one more thing first. Parsing Integers If you search the standard library for parse, you’ll find the str::parse method. The documentation does a good job of explaining things, I won’t repeat that here. Please go read that now. OK, you’re back? Turbofish is a funny name, right? Take a crack at writing a program that prints the result of parsing each command line argument as a u32, then check my version: fn main() { for arg in std::env::args().skip(1) { println!("{:?}", arg.parse::<u32>()); } } And let’s try running it: $ cargo run one 2 three four 5 6 7 Err(ParseIntError { kind: InvalidDigit }) Ok(2) Err(ParseIntError { kind: InvalidDigit }) Err(ParseIntError { kind: InvalidDigit }) Ok(5) Ok(6) Ok(7) When the parse is successful, we get the Ok variant of the Result enum. When the parse fails, we get the Err variant, with a ParseIntError telling us what went wrong. (The type signature on parse itself uses some associated types to indicate this type, we’re not going to get into that right now.) This is a common pattern in Rust. Rust has no runtime exceptions, so we track potential failure at the type level with actual values. Side note You may think of panics as similar to runtime exceptions, and to some extent they are. However, you’re not able to properly recover from panics, making them different in practice from how runtime exceptions are used in other languages like Python. Parse Our Command Line We’re finally ready to get started on our actual command line parsing! We’re going to be overly tedious in our implementation, especially with our data types. 
After we finish implementing this in a blank file, we’ll move the code into the bouncy implementation itself. First, let’s define a data type to hold a successful parse, which will contain the width and the height. Challenge Will this be a struct or an enum? Can you try implementing this yourself first? Since we want to hold onto multiple values, we’ll be using a struct. I’d like to use named fields, so we have: struct Frame { width: u32, height: u32, } Next, let’s define an error type to represent all of the things that can go wrong during this parse. We have: - Too few arguments - Too many arguments - Invalid integer Challenge Are we going to use a struct or an enum this time? This time, we’ll use an enum, because we’ll only detect one of these problems (whichever we notice first). Officianados of web forms and applicative parsing may scoff at this and say we should detect all errors, but we’re going to be lazy. enum ParseError { TooFewArgs, TooManyArgs, InvalidInteger(String), } Notice that the InvalidInteger variant takes a payload, the String it failed parsing. This is what makes enums in Rust so much more powerful than enumerations in most other languages. Challenge We’re going to write a parse_args helper function. Can you guess what its type signature will be? Combining all of the knowledge we established above, here’s an implementation: #[derive(Debug)] struct Frame { width: u32, height: u32, } #[derive(Debug)] enum ParseError { TooFewArgs, TooManyArgs, InvalidInteger(String), } fn parse_args() -> Result<Frame, ParseError> { use self::ParseError::*; // bring variants into our namespace let mut args = std::env::args().skip(1); match args.next() { None => Err(TooFewArgs), Some(width_str) => { match args.next() { None => Err(TooFewArgs), Some(height_str) => { match args.next() { Some(_) => Err(TooManyArgs), None => { match width_str.parse() { Err(_) => Err(InvalidInteger(width_str)), Ok(width) => { match height_str.parse() { Err(_) => Err(InvalidInteger(height_str)), Ok(height) => Ok(Frame { width, height, }), } } } } } } } } } } fn main() { println!("{:?}", parse_args()); } Holy nested blocks Batman, that is a lot of indentation! The pattern is pretty straightforward: - Pattern match - If we got something bad, stop with an Err - If we got something good, keep going Haskellers at this point are screaming about do notation and monads. Ignore them. We’re in the land of Rust, we don’t take kindly to those things around here. (Someone please yell at me for that terrible pun.) Exercise 2 Why didn’t we need to use the turbofish on the call to parse above? What we want to do is return early from our function. You know what keyword can help with that? That’s right: return! fn parse_args() -> Result<Frame, ParseError> { use self::ParseError::*; let mut args = std::env::args().skip(1); let width_str = match args.next() { None => return Err(TooFewArgs), Some(width_str) => width_str, }; let height_str = match args.next() { None => return Err(TooFewArgs), Some(height_str) => height_str, }; match args.next() { Some(_) => return Err(TooManyArgs), None => (), } let width = match width_str.parse() { Err(_) => return Err(InvalidInteger(width_str)), Ok(width) => width, }; let height = match height_str.parse() { Err(_) => return Err(InvalidInteger(height_str)), Ok(height) => height, }; Ok(Frame { width, height, }) } Much nicer to look at! However, it’s still a bit repetitive, and littering those returns everywhere is subjectively not very nice. 
In fact, while typing this up, I accidentally left off a few of the returns and got to stare at some long error messages. (Try that for yourself.) Question Mark Side note: The trailing question mark we’re about to introduce used to be the try! macro in Rust. If you’re confused about the seeming overlap: it’s simply a transition to new syntax. The pattern above is so common that Rust has built in syntax for it. If you put a question mark after an expression, it basically does the whole match/return-on-Err thing for you. It’s more powerful than we’ll demonstrate right now, but we’ll get to that extra power a bit later. To start off, we’re going to define some helper functions to: - Require another argument - Require that there are no more arguments - Parse a u32 All of these need to return Result values, and we’ll use a ParseError for the error case in all of them. The first two functions need to take a mutable reference to our arguments. (As a side note, I’m going to stop using the skip method now, because if I do it will give away the solution to exercise 1.) use std::env::Args; fn require_arg(args: &mut Args) -> Result<String, ParseError> { match args.next() { None => Err(ParseError::TooFewArgs), Some(s) => Ok(s), } } fn require_no_args(args: &mut Args) -> Result<(), ParseError> { match args.next() { Some(_) => Err(ParseError::TooManyArgs), // I think this looks a little weird myself. // But we're wrapping up the unit value () // with the Ok variant. You get used to it // after a while, I guess None => Ok(()), } } fn parse_u32(s: String) -> Result<u32, ParseError> { match s.parse() { Err(_) => Err(ParseError::InvalidInteger(s)), Ok(x) => Ok(x), } } Now that we have these helpers defined, our parse_args function is much easier to look at: fn parse_args() -> Result<Frame, ParseError> { let mut args = std::env::args(); // skip the command name let _command_name = require_arg(&mut args)?; let width_str = require_arg(&mut args)?; let height_str = require_arg(&mut args)?; require_no_args(&mut args)?; let width = parse_u32(width_str)?; let height = parse_u32(height_str)?; Ok(Frame { width, height }) } Beautiful! Forgotten Question Marks What do you think happens if you forget the question mark on the let width_str line? If you do so: width_strwill contain a Result<String, ParseError>instead of a String - The call to parse_u32will not type check error[E0308]: mismatched types --> src/main.rs:50:27 | 50 | let width = parse_u32(width_str)?; | ^^^^^^^^^ expected struct `std::string::String`, found enum `std::result::Result` | = note: expected type `std::string::String` found type `std::result::Result<std::string::String, ParseError>` That’s nice. But what will happen if we forget the question mark on the require_no_args call? We never use the output value there, so it will type check just fine. Now we have the age old problem of C: we’re accidentally ignoring error codes! Well, not so fast. Check out this wonderful warning from the compiler: warning: unused `std::result::Result` which must be used --> src/main.rs:49:5 | 49 | require_no_args(&mut args); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: #[warn(unused_must_use)] on by default = note: this `Result` may be an `Err` variant, which should be handled That’s right: Rust will detect if you’ve ignored a potential failure. 
There is a hole in this in the current code sample: let _command_name = require_arg(&mut args); That doesn’t trigger the warning, since in let _name = blah;, the leading underscore says “I know what I’m doing, I don’t care about this value.” Instead, it’s better to write the code without the let: require_arg(&mut args); Now we get a warning, which can be solved with the added trailing question mark. Exercise 3 It would be more convenient to use method call syntax. Let’s define a helper data type to make this possible. Fill in the implementation of the code below. #[derive(Debug)] struct Frame { width: u32, height: u32, } #[derive(Debug)] enum ParseError { TooFewArgs, TooManyArgs, InvalidInteger(String), } struct ParseArgs(std::env::Args); impl ParseArgs { fn new() -> ParseArgs { unimplemented!() } fn require_arg(&mut self) -> Result<String, ParseError> { match self.0.next() { } } } fn parse_args() -> Result<Frame, ParseError> { let mut args = ParseArgs::new(); // skip the command name args.require_arg()?; let width_str = args.require_arg()?; let height_str = args.require_arg()?; args.require_no_args()?; let width = parse_u32(width_str)?; let height = parse_u32(height_str)?; Ok(Frame { width, height }) } fn main() { println!("{:?}", parse_args()); } Updating Bouncy This next bit should be done as a Cargo project, not with rustc. Let’s start a new empty project: $ cargo new bouncy-args --bin $ cd bouncy-args Next, let’s get the old code and place it in src/main.rs. You can copy-paste manually, or run: $ curl > src/main.rs Run cargo run and make sure it works. You can use Ctrl-C to kill the program. We already wrote fully usable argument parsing code above. Instead of putting it in the same source file, let’s put it in its own file. In order to do so, we’re going to have to play with modules in Rust. For convenience, you can view the full source code as a Gist. We need to put this in src/parse_args.rs: $ curl > src/parse_args.rs If you run cargo build now, it won’t even look at parse_args.rs. Don’t believe me? Add some invalid content to the top of that file and run cargo build again. Nothing happens, right? We need to tell the compiler that we’ve got another module in our project. We do that by modifying src/main.rs. Add the following line to the top of your file: mod parse_args; If you put in that invalid line before, running cargo build should now result in an error message. Perfect! Go ahead and get rid of that invalid line and make sure everything compiles and runs. We won’t be accepting command line arguments yet, but we’re getting closer. Use It! We’re currently getting some dead code warnings, since we aren’t using anything from the new module: warning: struct is never constructed: `Frame` --> src/parse_args.rs:2:1 | 2 | struct Frame { | ^^^^^^^^^^^^ | = note: #[warn(dead_code)] on by default warning: enum is never used: `ParseError` --> src/parse_args.rs:8:1 | 8 | enum ParseError { | ^^^^^^^^^^^^^^^ warning: function is never used: `parse_args` --> src/parse_args.rs:14:1 | 14 | fn parse_args() -> Result<Frame, ParseError> { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Let’s fix that. 
To start off, add the following to the top of your main function, just to prove that we can, in fact, use our new module: println!("{:?}", parse_args::parse_args()); return; // don't start the game, our output will disappear Also, add a pub in front of the items we want to access from the main.rs file, namely: struct Frame enum ParseError fn parse_args Running this gets us: $ cargo run Compiling bouncy-args v0.1.0 (/Users/michael/Desktop/tmp/bouncy-args) warning: unreachable statement --> src/main.rs:115:5 | 115 | let mut game = Game::new(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: #[warn(unreachable_code)] on by default warning: variable does not need to be mutable --> src/main.rs:115:9 | 115 | let mut game = Game::new(); | ----^^^^ | | | help: remove this `mut` | = note: #[warn(unused_mut)] on by default Finished dev [unoptimized + debuginfo] target(s) in 0.67s Running `target/debug/bouncy-args` Err(TooFewArgs) It’s nice that we get an unreachable statement warning. It’s also a bit weird that game is no longer required to be mutable. Strange. But most importantly: our argument parsing is working! Let’s try to use this. We’ll modify the Game::new() method to accept a Frame as input: impl Game { fn new(frame: Frame) -> Game { let ball = Ball { x: 2, y: 4, vert_dir: VertDir::Up, horiz_dir: HorizDir::Left, }; Game {frame, ball} } ... } And now we can rewrite our main function as: fn main () { match parse_args::parse_args() { Err(e) => { // prints to stderr instead of stdout eprintln!("Error parsing args: {:?}", e); }, Ok(frame) => { let mut game = Game::new(frame); let sleep_duration = std::time::Duration::from_millis(33); loop { println!("{}", game); game.step(); std::thread::sleep(sleep_duration); } } } } Mismatched Types We’re good, right? Not quite: error[E0308]: mismatched types --> src/main.rs:114:38 | 114 | let mut game = Game::new(frame); | ^^^^^ expected struct `Frame`, found struct `parse_args::Frame` | = note: expected type `Frame` found type `parse_args::Frame` We now have two different definitions of Frame: in our parse_args module, and in main.rs. Let’s fix that. First, delete the Framedeclaration in main.rs. Then add the following after our mod parse_args; statement: use self::parse_args::Frame; self says we’re finding a module that’s a child of the current module. Public and Private Now everything will work, right? Wrong again! cargo build will vomit a bunch of these errors: error[E0616]: field `height` of struct `parse_args::Frame` is private --> src/main.rs:85:23 | 85 | for row in 0..self.frame.height { | By default, identifiers are private in Rust. In order to expose them from one module to another, you need to add the pub keyword. For example: pub width: u32, Go ahead and add pub as needed. Finally, if you run cargo run, you should see Error parsing args: TooFewArgs. And if you run cargo run 5 5, you should see a much smaller frame than before. Hurrah! Exercise 4 What happens if you run cargo run 0 0? How about cargo run 1 1? Put in some better error handling in parse_args. Exit Code Alright, one final irritation. Let’s provide some invalid arguments and inspect the exit code of the process: $ cargo run 5 Error parsing args: TooFewArgs $ echo $? 0 For those not familiar: a 0 exit code means everything went OK. That’s clearly not the case here! If we search the standard library, it seems the std::process::exit can be used to address this. Go ahead and try using that to solve the problem here. However, we’ve got one more option: we can return a Result straight from main! 
fn main () -> Result<(), self::parse_args::ParseError> { match parse_args::parse_args() { Err(e) => { return Err(e); }, Ok(frame) => { let mut game = Game::new(frame); let sleep_duration = std::time::Duration::from_millis(33); loop { println!("{}", game); game.step(); std::thread::sleep(sleep_duration); } } } } Exercise 5 Can you do something to clean up the nesting a bit here? Better Error Handling The error handling problem we had in the last lesson involved the call to top_bottom. I’ve already included a solution to that in the download of the code provided. Guess what I changed since last time and then check the code to confirm that you’re right. If you’re following very closely, you may be surprised that there aren’t more warnings about unused Result values coming from other calls to write!. As far as I can tell, this is in fact a bug in the Rust compiler. Still, it would be good practice to fix up those calls to write!. Take a stab at doing so. Next Time We still didn’t fix our double buffering problem, we’ll get to that next time. We’re also going to introduce some more error handling from the standard library. And maybe we’ll get to play a bit more with iterators as well. Published at DZone with permission of Michael Snoyman, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
SYNOPSIS #include <unistd.h> int link(const char *pathname1, const char *pathname2); DESCRIPTION The If the existing file specifies a directory, Upon successful completion, PARAMETERS - pathname1 Points to a path name that names an existing file. - pathname2 Points to a path name that names the new directory entry to be created. RETURN VALUES If successful, - EACCES A component of either path name prefix denies search permission. The calling process does not have permission to access the existing file. The requested link requires writing in a directory with a mode that denies write permission. - EEXIST The link named by pathname2 exists. - EFAULT The pathname1 or pathname2 parameter is an invalid pointer. - EINTR A signal interrupted the call. - EMLINK The number of links to the file exceeds LINK_MAX. - ENAMETOOLONG The length of pathname1 or pathname2 string exceeds PATH_MAX or a path name component is longer than NAME_MAX. - ENOENT A component of either path name prefix does not exist. Either pathname1 or pathname2 points to an empty string. The file named by pathname1 does not exist. - ENOSPC The directory that would contain the link cannot be extended. - ENOTDIR A component of either path name prefix is not a directory. - EPERM The file that pathname1 named is a directory,. - EROFS The requested link requires writing in a directory on a read-only file system. - EXDEV The link named by pathname2 and the file named by pathname1 are on different file systems (logical drives), and the implementation does not support links between file systems. CONFORMANCE POSIX.1 (1996), with exceptions. MULTITHREAD SAFETY LEVEL Async-signal-safe. PORTING ISSUES Windows only supports hard links between files on local NTFS partitions. Calls to AVAILABILITY MKS Toolkit for Professional Developers MKS Toolkit for Enterprise Developers MKS Toolkit for Enterprise Developers 64-Bit Edition SEE ALSO MKS Toolkit 9.2 Documentation Build 16.
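For scripting the same operation, Python's os.link() is a thin wrapper over this call on platforms that support it, and the error conditions above surface as OSError values with the corresponding errno. A small illustrative sketch, not part of the MKS documentation:

import errno
import os

def make_hard_link(existing_path, new_path):
    try:
        os.link(existing_path, new_path)   # wraps link()
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            print("link target already exists: %s" % new_path)
        elif exc.errno == errno.EXDEV:
            print("cannot create a hard link across file systems")
        else:
            raise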
I finally had a moment of spare time and did some housekeeping and upgrades. Presenting:
- Planet Unity3D
- Planet PIGMI
New and refurbished. Oh, and a new awesome web site for DM.
Monday, April 26, 2010
Planet Unity3D + Planet PIGMI
Posted by Simon Wittber at Monday, April 26, 2010 No comments:

Monday, April 19, 2010
A simple pyevent example. Because I couldn't find one myself.

import event

event.init()
event.read(socket.fileno(), call_me_when_readable)
event.write(socket.fileno(), call_me_when_writable)
event.timeout(30, call_me_soon) #after 30 seconds!
event.loop(True)  #loop once, return 0 if events are waiting.
event.loop(False) #loop until no more events are waiting.

Posted by Simon Wittber at Monday, April 19, 2010 1 comment:

Sunday, April 18, 2010
iPad delayed in .au
It's there on the front page. Coming, Late May. Bah. Bah.
Posted by Simon Wittber at Sunday, April 18, 2010 1 comment:

Monday, April 12, 2010
Rotten Apples.
More on the Apple SDK TOS saga. This just makes me mad. Apple has indeed turned rotten. I guess I'm so mad as I've built much of my business around Apple technology, in particular, Unity3D (which compiles C# into iPhone code). Maybe Stallman was right. Maybe Stallman was right.
Posted by Simon Wittber at Monday, April 12, 2010 1 comment:

Saturday, April 10, 2010
Apple hates its developers.
iPhone developers... weep. The new terms of service for iPhone OS 4.0 has an awful restriction. So... this means no more Haxe, MonoTouch, Shiva or Unity3D for iPhone OS 4.0. I sure hope Apple change their viewpoint on this. It's just plain dumb.
Posted by Simon Wittber at Saturday, April 10, 2010 6 comments:

Thursday, April 08, 2010
PyCon Australia
PyCon Australia is happening this June. I'm going. I'm going.
Posted by Simon Wittber at Thursday, April 08, 2010 2 comments:

Sunday, April 04, 2010
Unity3D iPhone 1.7
Unity3D iPhone 1.7 is released. iPad support included. w00t.
Posted by Simon Wittber at Sunday, April 04, 2010 No comments:
2017-01-25 Meeting Notes Allen Wirfs-Brock (AWB), Waldemar Horwat (WH), Jordan Harband (JHD), Brian Terlson (BT), Michael Ficarra (MF), Adam Klein (AK), Chip Morningstar (CM), Dave Herman (DH), Kent C. Dodds (KCD), Kevin Gibbons (KG), Tim Disney (TD), Daniel Ehrenberg (DE), Shu-yu Guo (SYG), Michael Saboff (MS), James Kyle (JK), Franziska Hinkelmann (FHN), Anna Henningsen (AH), John Lenz (JLZ), Sebastian Markbåge (SM), Bradley Farias (BFS), Jeff Morrison (JM), Tyler Kellen (TKN), Gabriel Isenberg (GI), James Snell (JSL), Maggie Pint (MPT), Chris Hyle (CH), Bert Belder (BBR), Zibi Braniecki (ZB), Jamund Ferguson (JXF), Brendan Eich (BE), István Sebestyén (IS), Keith Miller (KM), Brendan Eich (BE), Myles Borins (MBS) Approval of the minutes from the last meeting AWB: We are having trouble getting the minutes to approve IS: I prepared the TC39 November meeting minutes (16/052) before the 2016 December GA meeting, but we had technical issues with the hosting. The old Ecma private website is down and we have to replace it. That will take at least two months. In the meantime we have put all the data on a NAS storage on the office network and that has been released to all Ecma members (this is the Pydio system on a Netgear box). We have also a parallel implementation on a Synology NAS that will be released soon. For approval, I could upload the minutes to GitHub and send them by email. AWB: If you email me the document, I can make sure everyone here can see it. IS: This has been done immediately after the completion of the conference call. IS: 16/052 has two parts. The general part of about 6 pages that includes list of participants, list of companies participating in the meeting, various ISO/IEC JTC1 matters, like status of TC39 Fast-Track projects, dates of next meetings etc. For the TC39 participants , these 6 pages should be relatively uninteresting, but generally, for other Ecma members and the GA this is a summary information they are most interested in and less into the technical details of the TC39 work. This is included in the Annex, which containes all the Technical Notes (like this one...) of the entire meeting. Generally, we need to work on passing around documents in a more reliable way. We also need to improve feedback mechanisms--I only learned today that Waldemar encountered technical issues in accessing the minutes. WH: A couple different problems: There is an invalid (self-signed) SSL certificate. Even if you accept this, the scripts to surface the links time out. IS: From a content point of view, these minutes is the same as what is mirrored elsewhere. We just have to make sure the details are approved. AWB: We'll take care of the review tomorrow Report from the ECMA secretariat IS: Explains the current status of the TC39 Fast-track projects to JTC1. We have various components in ECMA to standardize in ISO. For ECMA-262 (the main ECMAWScript standard), we now have an agreement with ISO that we will not fast track it anymore; instead, the ECMAScript Suite ECMA-414 will supercede it (with normative references also to the needed Ecma standards), and the other redundant standard ISO IS 16262 (which is out of date) will be withdrawn. The only standards in common will be ECMA-414 (on the way to fast track), which happened after the Dec 7 ECMA GA. As you remember ECMA-414 has undergone some minor revisions for ISO requests. We have also provided an explanatory report to ISO (what has been published also as an TC39 document). We have published the document TC39/2016/050. 
The ISO number of the suite will be ISO/IEC IS 22275. Currently it is registered as a DIS, but the voting has not started yet. When we change the suite standard, then it will get a new Edition number. ECMA-402 will also remain an Ecma-only standard and its latest version is just referenced by the ECMAScript Suite. BT: Question: The document ISO/IEC 16262:2011, we want ISO to withdraw that document, not "stabilize"? IS: De facto, it will be withdrawn. The usage of the word "stabilize" is very funny in ISO. It's possible that they'll call the withdrawn standard "stabilized". I have to check that. But it was ISO that started calling this "withdrawal". Originally we, Ecma, wanted the ECMAScript Suite to get the same ISO number, IS 16262, but that is not permitted. So the Suite will get a new number. The DIS voting has not started yet; we don't know for sure how that will go. BT: On February 7th, the PL22 working group is voting on what to do on the 16262 document. Until now, they have been voting for reaffirmation; instead, we want them to vote for withdrawal, and in its place, we are fast tracking the ECMAScript Suite document IS: Some people, like the SC22 Chair, Rex Jaeschke, are in the loop already BT: I think there's a different group of people in this other group; I'll loop you in in an email IS: On JSON, there is some work to do: It got the DIS number 21778. The DIS voting finished in December and it is positive--it is approved by ISO. But the Japanese national body had some comments. Mostly editorial, but some possible technical changes; CM and AWB can follow up. The Japanese national body has voted 'No', and would then change it to 'Yes' to reflect incorporating fixes. Therefore now we have to do a second ballot (FDIS) because there was one no-vote. We need to prepare a properly fixed version of the document with the disposition of comments to start the FDIS voting and to finally get this "Yes" vote. IS: In the ECMA GA, Waldemar was there, so he can fill in any details and corrections of my verbal report. People in the GA are happy about the work of TC39. Participation is growing to 40-50 people each meeting, which is good, but can cause organizational problems. Who should be the chair? IS: We discussed the TC39 leadership--we have to put into place a leadership body to ensure that it is functioning long-term; in the short term, Allen has been very helpful, but this is a short-term solution and we need to find a new leadership structure. The best possibility is to have the chairmanship position held by an ordinary ECMA member, but ECMA bylaws also now permit other categories of ECMA members to be the chairman. We have had the strange situation in ECMA TC39 where we had a chairman who was financed by three member companies; this has finished. We also had the strange situation where the chairman did not understand much of what was going on on the technical part (this had entirely historic reasons). The chairman cannot be an impartial adjudicator of disputes if they are not following discussions, but anyway, we have a cooperative spirit on TC39. Strangely enough we did not have a Vice Chair in TC39, which is the case in most Ecma TCs. A Vice Chair really can help in carrying out the Chair's function. E.g. if there is a vice-chairman, and the Chair has a company proposal which he has to present, then the two chairs can switch off, as one of them is acting as chairman and the other is presenting, so member companies in a chair position can still present their own ideas. 
So, we can have a Vice Chair, but in addition we may also put a management layer below the chairman. TC39 has the freedom to organize its management structure. AWB: Would it be possible to have two vice chairs? IS: Absolutely. ECMA is very flexible on this. If you suggest three, I am sure I can even that get it through. AWB: The thought I am having here is perhaps we had a chair and two vice chairs that collectively formed a management team but shared the burden of doing that work and, running the meetings, and managing the agenda and other items that came up maybe it would be easier to get three people than it would be to get one. IS: Absolutely. You are free to set up any sort of management structure. We are very flexible on that. For instance I noticed that people are extremely bored about this subject in the TC39 meeting. So It is possible that e.g. you install a separate small group just for "procedures," on I don't know, "strategy planning". Such groups then could report back to the full TC39 committee who could listen to the recommendations and make decisions. You have a lot of freedom in shaping how you want to do the work in TC39. It is really not our goal from the ECMA point of view to bind you in any way. So this is really just think about--we don't have too long time to do this. Our temporary solution with Allen (which I think is good) unfortunately cannot last forever, because he has other priorities . We want to get rid of this issue as quickly as possible. AWB: I'll do the next meeting in Portland because I do not need to travel. From my perspective it would be ideal if we could have stuff resolved so maybe by the May meeting we have a different structure in place. At the latest the July meeting. I wondering, going to throw out something here. The question is how are we going to make progress on this decision? DE: My employer would support me being a chair or vice chair. AWB: Could we maybe form an ad-hoc committee here to do work between this meeting and next meeting so candidates like Dan could talk about the job? But also to think about doing some recruiting. DE: I've been asking around for many months. AWB: I can work with you guys, but it is you guys have to collectively step up and figure out what you're doing to do. A group of 30 isn't going to do it. A group of 3 or 4 try to pull something together and bring it as a proposal to the next meeting or the meeting AK: Leo was going to ask the JS Foundation AWB: I'm not asking for volunteers for these roles, but people to essentially be the recruiting committee. Okay, we want have a chair and tow chairs and why. Somebody to pull it together. DE: Presumably I shouldn't be in that group? AWB: I wouldn't want to exclude anyone, so. ??: I could look into the JSFoundation (MM: This was attributed to me, but I don't remember saying it. Hence the "??". If someone remembers who did, please edit it in.) AWB: It would be helpful if someone AK: If there is a lack of pushing for it I can help. We've had trouble finding the right people for it. AWB: Okay. DH: We can have some discussion about next steps offline. AWB: Okay, not here, but some discussions... however you do it is fine. Is that commitment from Dave? DH: Yes AWB: Okay, MSFT is the only other ordinary member. IS: Today there is MSFT, Google and PayPal. AWB: Okay, PayPal? BT: I can help. 
IS: Usually anyone who is a member is free to join in adhoc groups and normally you ask around in the meeting when you install the group who wants to participate, some people tell you right away, then usually we set a one week additional deadline to decide who else wants to participate. And then the work of the small group can start. AWB: I think we need to put together an effort and make some progress BT: What do you want exactly? Nominations by next meeting? AWB: Here is what I can imagine at the next meeting, here is what we are recommending as leadership structure, we'll have this or that structure, do we go forward or not? BT: Okay, a management strawman if you will. IL: Also, we want roles defined for what people are doing AWB: Think of yourselves as a school board hiring a superintendent AK: A self appointed school board AWB: Okay, that's probably enough for the chair, do you have more IS? IS: In the last meeting this WH has reported to the GA about what discussion took place in the TC39 meeting on "diversity" of people and group behaviour etc, WH said that maybe there would be further discussions in TC39, maybe there would even be a proposal on a document about the Code of Conduct. TC39 may pick it up or not pick it up, it was about oh, I don't know equality or different communities, about the discussion culture of TC39. I remember I thought a couple of meetings ago that TC39 had a little bit of, rather aggressive discussion culture but I didn't mean it in a negative way. I wanted to say, actually each of the standardization groups, and this is not only true for ECMA but any standards group. Surprisingly each of them have little bit different cultures. You cannot say really which is better or worse but certainly to have an agressive fighting culture which in my opinion TC39 has, someone who comes completely from a different culture might have difficulties with that. I understand those people need some kind of advice or protection or whatever. So those types of things. I don't know WH do you want to say anything re this point? WH: At the GA I presented what we are doing with our diversity statement and discussions. I also mentioned what happened at the Munich meeting with the bullying which resulted in a member leaving. BT: We have a concrete proposal, at least the initial parts of the proposal for a document called a Code of Conduct. Are you familiar with these kinds of documents? IS: A little, I have seen two versions from Allen. One was more complicated than the other. The goals of those documents, yeah, there is nothing to say against it. certainly I think TC39 could accept that and adopt that as a goal that you try to achieve, then install it. I also told Allen that at this point in time this desire only came up in TC39 so there is no such kind of desire at the moment in other TCs but this is nothing bad or good, just a statement of fact. If you feel that something like this is useful within TC39 then I would like to fully encourge you that you should have such kind of document and you should tell newcomers that we are trying to adhere to this document and maybe also to appointment some kind of manager for (me or someone else) if there are difficulties, please contact me and I will try to help or whatever. 
BT: That was the question that was coming next, which is: usually parts of the Code of Conduct speak to how we effectively enforce the sorts of things that the code of conduct says we have to do. It's unclear to me how exactly we're to enforce the document; it sounds like one suggestion is sending your complaints to IS and they'll figure it out. IS: The discussion style can come across as aggressive to newcomers. If we just tell people how TC39 works, in my opinion, yes, the way it works and the style is acceptable. Sometimes we have aggressive and difficult discussions; if we have members from certain cultures, you know, then they would really need some kind of help. They are more sensitive WH: The policy that will be proposed includes enforcement mechanisms with the ability to eject members. How does that jibe with Ecma rules? Do we have such power? IS: This has happened once, maybe, I don't know, 15 years ago, before my time. This happened in TC31. The former Ecma SG just threw him out from the meeting. No, we don't have any provisions for this type of thing. We try to talk to people, but I have to tell you that also in my practice, at least in ECMA, this did not happen so far. We don't have anything on this in the bylaws or ECMA rules which is formally defined. We can expel some member companies under certain circumstances, if they do something terrible, I don't know. But nothing for individuals in a meeting. BT: If we adopt a code of conduct... it sounds like you're saying we can't just decide among ourselves that we can eject a dues-paying member. If you got an email from someone on the committee saying hey, this person has violated the code of conduct, in practice what will you do? IS: This would be a very sensitive situation, which I would try to avoid. I would just call the guy and I will just speak to him to change his behavior BT: And if he doesn't? IS: Then, well, at least for a specific meeting I can send him out. There are no bylaws for this. I could imagine it could happen but it has never happened before. AWB: Presumably the next step would be to talk to the member org and say you're sending here a member that is violating the code of conduct, can you deal with this WH: That works for large member organizations, but we also have some small organizations that are members. TK: Typically, with a Code of Conduct, the committee sets up a group of people, elected by the committee, to investigate complaints and recommend actions. IS: That's what I'd recommend--better if we start with someone in the meeting. Please go ahead if you think that's a better solution. BT: Do we need to make changes to the bylaws to be able to enforce the code of conduct? IS: Whatever you decide within the group, that should be fine; I will just report this to the general assembly. The GA may come back and recommend some changes, but I don't expect that. BT: To be clear, this means TC39 could disinvite one of its members on its own volition, without more action by the GA MF: How would you get consensus on ejecting a member with that member included in the consensus decision? BT: Enforcement would be delegated to a subcommittee AWB: Should the chairs/vice-chairs be the enforcement subcommittee? MPT: It's typically a group of people who include minorities, because what looks offensive to one person doesn't look offensive to others. It's completely necessary. You can't enforce a code of conduct with a group of white men, sorry to tell you that but it's true. 
JM: Our experiment within the node community absolutely backs it up AWB: The best solution to that would be to ensure the management committee is diverse MPT: One problem you have is that if you have the chair and vice chair holding this, that puts too much power in a same place. it can create weird situations of conflict. if anything your code of conduct committee could be outside the tc39 group AWB: i don't think we'd want to do that WH: People outside the room don't see the events that happen and cause problems, like what happened in Munich. MM: That's true, I won't argue that JSL: So, within, node, as an anecdote. We had an effort where we tried to do some kind of moderation. We had a working group that was answering to our TSC. It didn't work out because there was a degree of mistrust if the policy would be applied equally to leadership. We're moving to a situation right now where the authority of moderation and enforcement is actually being moved out of the TSC and into the foundation. In this way they can take a more objective view. That doesn't exclude people who are a part of the conversation but there will be people who can take a more objective viewpoint to apply the policy equally. If your review committee is made only of the leadership, does that policy apply to leadership? MM: I very much like this idea of separating the committee from leaders for separation of power. On the thing about outside vs inside, I also want to say that because many of us are, at all meetings or almost all. Having a group such that for any dispute for it is probably the case that many of the people on the committee saw the incident would be great. A lot of these disputes really rely on the types of subjective judgments you could have if you were there. It doesn't mean we can't have outsiders but we want to be confident that there were witnesses. MPT: A mix seems reasonable. There are three women in the room, this is the first time there are three women in the room. I don't think all three of us would be here. AWB: one way to address that would be to say the ECMA management team could be an outsider. there certainly are women on the ECMA GA. WH: Yes, the ECMA GA has women in leadership positions. JSL: You need to be sure that the people who are on that have the mandate to be able to enforce. So, having someone who is already on the management team at ecma is good to find diversity. Actively going out to find individuals in the community is a good idea. AWB: I'm thinking of this as someone who comes in when there is actually a need for enforcement, distinct from promoting diversity or what-have-you CM: My experience with these things in other organizations, first of all, having a code of conduct in place drastically reduces the chance that you'll need it and second that it also serves to functions. One as an enforcement function but also a signaling function. it offers people some reassurance that they can participate without problems. Once again that doesn't need to involve exercising the enforcement mechanism AWB: Yeah, that's my position. You don't ever want to get to enforcement, hopefully you can handle it informally CM: Yes, hopefully you can set ground rules to set behavior **MS:**I agree with chip that some policy should have disciplinary measures. Just like we have the stages document, if we see someone stepping outside the policy and say okay, stop, what's the criteria here? The policy for participation, we would bring it up and say you're out of line. 
I think it makes sense to have more formal sub-committee that would be responsible for that. Like Chip says I hope we never have to use it. BT: I would be very surprised if we use it in the committee. I would not be surprised if we use it on Github, that's the primary venue I think it'll effect. AWB: You mean non-contributors? BT: Yes, even non-contributors would be bound. AWB: We can kick them out any time we want, they have no rights. BT: We don't want to start ejecting people for no reason KG: Just because we can doesn't mean it's a good idea to do it. Having the CoC in place is valuable. It's better to do it via a set of rules. JHD: constraining ourselves to rules grants folks with no rights some rights, MS: We warn people once if they've done something egregious MPT: For what it's worth I work on MSFT in this area and we had one CoC violation reported in 6 months and we have a lot of OSS projects. It's not something that is happening 500 times a day. I wouldn't get too much worked up about how much effort it takes to enforce. If we have a CoC violation we will create a committee that will address this including some people in the room and some minorities as interested. You wouldn't have to have it pre-set, it hardly ever comes up. WH: It came up this year. MPT: But when was the last time before that/ it's not like you have to be ready and prepared or people to spend hours and hours of their live on it. WH: I don't want code of conduct to be used to influence technical discussions. My fear is that it will be used to shut down technical contributions. JHD: that's exactly what it should be doing, if you can't follow this while making a technical argument you can't participate CM: What WH is saying is concerned about a CoC argument being used to win a technical argument. It being weaponized. That's something people involved in the process just have to not put up with. JHD I agree with that sentiment, and I think if and when that happens people see it for what it is WH: It has come up here before. AWB: trying to combine threads here, we don't need a standing committee because we don't need a daily or per meeting situation here, this question of how do you raise an issue? anyone can raise an issue. I might suggest one way to approach that is, when that happens, if we have a code of conduct in place. So when a violation happens, the chair group could identify a committee to investigate the issue. TK: I have a proposal for a code of conduct talking about these details, at tkellen/tc39-code-of-conduct-proposal/blob/master/Conduct.md . Can we identify reviewers? AWB: I think everyone should be reviewers; this will require a consensus decision of the committee. TK: How should that work? WH: I have been reviewing this proposal and making comments. For the most part, I'm OK with it; there are some wording issues. I have concerns with conflating friendliness with respect. Conflating coworkers with friends seems to me to be a bit creepy. TK: How about we proceed by opening issues on things like this and working together going up to the next meeting? WH: One litmus test I use is how it would work against a previous instance of poor conduct resulting in losing a committee member. DH: That instance had many factors and poor behavior from many participants; I don't think we should be litigating the details of that instance here. MPT: Not having been there, right now you have a member who seems to be on the verge of tears. 
Maybe right now this conversation is a violation MS: What started this discussion is that he isn't sure this would have been effective in Munich. BT: I think it would have been MS: To raise that a little higher, any CoC we draft should be useful in situations we actually encounter. JM: What will make it most effective is our enforcement policy AWB: I wasn't in Munich and didn't observe what happened. It concerns me if we conflate too much the sorts of things CoC polices are about vs poor meeting management. I don't know exactly what the situation is, but I wonder if that wasn't a failure of the running of the meeting. In many cases well run meetings prevent these things happening. BT: Specifically I think our failure was that we didn't realize when discussions lost any amount of decorum. We didn't have strong leadership or anyone watching out for these kinds of issues. Had Maggie been there to point this out maybe we wouldn't have gone down that road. This document is encouraging because it actually does say in many words effectively how communication is expected to be handled and I think we didn't follow that in Munich. AWB: We should all play well together and if things start to break down the first line of defense should be how our meetings work and hopefully some chairpersons could be in place to prevent these things from occurring and escalating to the point where someone is screaming CoC CoC BT: Talking about any specific actions by individuals is missing the point. We got way too far down the road of emotional discourse and it burned us JSL: This is my first meeting, i have to imagine this isn't the-first- time it's ever happened. WH: That's correct. JSL: Litigating any particular situation that happened at a past meeting is not going to get us anywhere. Let's focus on what we'll do moving forward. The role of the chair is to enforce the code of conduct. Let's make sure that's the way things are run. But we have to focus on what is the policy moving forward. We can't focus on previous instances or we'll get nowhere. JXF: Would it make sense to have a policy vs in meeting vs online conduct? JSL: In my experience there isn't enough of a difference. AWB: We have a proposal here. Normally the process in the ideal world, the proposal is available far enough ahead of a meeting to file comments. I think we're in that situation now where we have something and we should be providing feedback on that proposal for the next few weeks. It sounds like it's far enough ahead that it's up to the people who are putting the proposal on the table, perhaps at the next meeting we. I would encourage feedback early rather than late. TK: I would agree MS: So everyone is charged with looking at and reviewing this document for march. AWB: Yes, for consideration in March. Don't wait until March. KG: Before we move on, we were talking about the possibility of having no standing committee. This proposal states we'll have standing committee. Should we discuss that now? I'm not committed to having that discussion. MF: I think we should also outline the specific goals we're trying to achieve with this so we can evaluate if we are accomplishing those goals. AWB: I would hope the goals would be stated in the introduction. TK: The document has six headings which describe the basic thrust. It came from the JSFoundation, and that is based on the Django CoC and other documents. It comes from pretty diverse perspectives, and so I think it's a good starting point. 
BT: One often controversial aspect is that we are bound by this Code of Conduct outside meetings in public spaces where we are wearing our TC39 hat; I know this is controversial. AWB: What does that mean? BT: For example, say you are invited to give a talk about TC39--you are bound by the CoC. JSL: This has come up in the Node community, where people have used their Twitter accounts in unproductive ways while acting in a Node-related capacity. WH: What about the case where you have a personal Twitter account on which you do nasty things unrelated to TC39, and an unrelated third party then publicizes the link between that account and you? Does that turn into a TC39 CoC violation for you? ?: No, because you didn't represent your personal Twitter account as related to TC39. WH: Are we all agreed on that interpretation? KG: It would be nice to be more specific about when it applies and when it doesn't; I'll file an issue for this. WH: Here's a real-world instance of this related to TC39: A long time ago Brendan had made some purely personal political contributions. Pretty much no one knew about them until someone outed them. Would such personal political activity be regulated by a CoC? AWB: István, anything else? IS: I'd like to encourage participation on GitHub between this meeting and next one. I'd be interested in seeing this document as well. AWB: If you think there's anyone on the executive committee or in the leadership who would be interested in giving feedback, that would be great. IS: We will discuss this at the next executive committee meeting in April. Generally, this is rather remote to them which is basically isolated to TC39. AWB: I think this is likely to come up in other groups too. IS: At the beginning, do not expect too much. There are a number of cases where we are flexible to account for TC39's further advances, e.g., the HTML specification as the authoritative copy. Conclusion/Resolution - Everyone in the committee is charged with reviewing the code of conduct proposal at tkellen/tc39-code-of-conduct-proposal/blob/master/Conduct.md by the March meeting 11. Test262 Status Updates DE: You should share what Bocoup is doing with Google TK: Several of my coworkers at Bocoup are responsible for a large portion of the test suite. One thing we're trying to do is to provide long-term governance and support for tests. To get proposals advanced, we need tests, but we are seeing some things not come through with tests. We have an arrangement with Google to write more tests and some sponsorship; if other implementers find it useful, then you can join in on this. DE: Actually you've also been continuing to identify and fix coverage holes as well. BT: For example, the double super call test was missing BF: The module tests that Bocoup wrote is awesome SYG: Are you importing V8 tests? AK: In the past, Bocoup has imported V8 tests, but later wrote new tests SYG: Would you be interested in fuzzer-generated tests that might be hard to read? DE: Fuzzer tests often can be reduced to something readable and intelligible, at least the ones that have come up for V8. In my opinion, we should be relatively liberal for what we put into Test262. AWB: I could imagine a failure mode for the tests where they become difficult to maintain because of too many tests for obscure optimizations KM: For JSC, we try to wire in all the tests from all the other browsers to find any bugs. It can be really useful. 
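(For readers unfamiliar with the suite, a Test262 test is just a JavaScript file with YAML frontmatter plus assertions from the shared harness. The sketch below is illustrative only; the esid, description, and feature flag are hypothetical, not an actual test under discussion here.)

```js
// Illustrative sketch of the Test262 test shape, not an actual suite file.
/*---
esid: sec-array.prototype.includes
description: includes() observes an element appended after array creation
features: [Array.prototype.includes]
---*/

var sample = [1, 2];
sample.push(3);

// assert.sameValue comes from the default harness files (assert.js / sta.js)
// that every Test262 test includes implicitly.
assert.sameValue(sample.includes(3), true, "pushed value is found");
assert.sameValue(sample.includes(4), false, "absent value is not found");
```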
DE: web-platform-tests is very liberal in what it accepts, and it's more like an upstreamed version of what KM describes JSC has; this is way on the other end of the spectrum from Test262's review, and it provides some value, but I understand if we don't go that far. SYG: Could we use wpt more for testing JavaScript implementations? TK: I'm not aware of any plans for this, but if implementors want to support work for this we would love to do it! KG: Embedders come up for things like detaching an arraybuffer SYG: Event loops are more complicated, and so it's even more relevant there, whereas the harness is not as taxing DH: Maybe this is a heuristic signal that this is something missing in the ECMAScript standard. KG: Announcement: when you pull in the latest Test262 tests, you'll find that $ changed to $262 for harness hooks BT: There are new tests for SharedArrayBuffer, requiring extensive API surface to run the tests; you have to be able to create an agent, sleep an agent, etc. Expect to do some implementation work in your console hosts. 13.ii.a Proposed Grammar change to ES Modules (Bradley Farias) Unambiguous JavaScript Grammar proposal BF: So, at the last meeting, Dave was talking about "use module", and this problem that Node has with detecting if a given source text is in the "script" grammar or the "module" grammar. There are certain situations where it can be ambiguous. These are somewhat rare but they do exist in the wild. We are seeking either to disambiguate source text, or to use a new file extension for loading ES modules, currently in Node. The vote to accept a file extension as a possibility was last March or April. The vote to seek a grammar change was last August. The grammar change is simple, but it's not syntactic. (explanation of readme of repo). Node would like to have an ECMA-262-blessed way of detecting the difference. The web is not supporting interop--it uses an attribute on a script tag. Node must support interop. I'm asking if it's feasible to ask for a parsing change. AWB: One primary concern about this is that it precludes the eventual replacement or migration to a module-only world. BF: Correct. There is always the possibility of someone accidentally deleting an import/export statement and then discovering this behavior change. AWB: It adds friction to the use of modules in place of scripts. DH: I don't think module adoption would fail because of this. It's a big loss of ergonomics: there are realistic use cases that use neither imports nor exports, and they would require this boilerplate. Removing this boilerplate causes the other parse goal to be used. I'm not saying that Node has to get to the place where there is no script target, but you could operate as a Node user without knowing what non-module content is. JHD: You mentioned some actual use cases for a module with no imports or exports, what are they? DH: Some of them are just like, I have a module that serves only as a polyfill on standard globals. When you import it, you're just looking for side effects. Maybe it's not a polyfill, maybe it sets up the state of the DOM for your app. Basically, you're importing it for side effects, not bindings. There is also the usual workflow-of-programming story; it's not about "here is the final artifact of what I'm delivering". The practice of programming is you open up an empty buffer and start typing. In this world, starting with an empty buffer means you're writing in a different programming context until you type the boilerplate. 
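(To make the ambiguity concrete: the sketch below is a hypothetical example, not taken from the proposal. The same source text is valid under both parse goals but behaves differently, and today only the "export curly" boilerplate forces the Module goal; the "use module" pragma mentioned in this discussion is only a proposal.)

```js
// ambiguous.js -- hypothetical example. This source is valid as both a
// Script and a Module, but the two parse goals give it different semantics:
//   - as a Script, top-level `this` is the global object and the code is sloppy;
//   - as a Module, top-level `this` is undefined and the code is strict.
console.log(this === undefined ? "parsed as a module" : "parsed as a script");

// Today, an empty export is the boilerplate that forces the Module goal:
// export {};

// The proposed "use module" directive would instead make it an early error
// to parse this file as a Script (not part of ECMAScript; sketch only):
// "use module";
```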
Also, you're editing your program and you decide you don't need that last dependency. You remove an export or import and suddenly the program operates drastically differently. There are various ways in which you could unwittingly change the type of document you're working with. It's so far from what you're trying to do that it won't be top of mind. JHD: I have a clarifying question, if that's okay. In this world, without this change, if you type import, what happens? It would still switch into a module. It's dependent on the host environment, it could ask you what DH: Changing the contents of the source file doesn't change which file type/name you're writing to; this is completely managed out of band. JHD: Out of band you still say this is a module. In the use case you're talking about I agree there are potentially two sources of truth. In all those cases though, in order for usability the environment will make a choice. DH: One difficulty here is that it's not my call to make a design for Node. We wrote a proposal, which didn't go very well, about what it might look like to manage this with out-of-band information. It seemed like we got most of the way there, but there were a couple places where it wasn't possible to signal; you'd need command-line flags or something. My goal would be to find a design that's as close to that design as possible. DE: To clarify, what are you proposing? DH: I would offer several proposals. The ideal state in my mind is that Node will design something where you can completely manage it out of band. Failing that, actually, orthogonally to that, it would be nice to signal in source that I really do intend this to be a module. In those cases, I offer the proposal of "use module" because it more clearly states intent. Export-curly looks like boilerplate; "use module" communicates intentions more clearly. It's up to Node, not me, to decide if they want to mandate "use module". Node could decide that you need "use module" or one import or export, or a combination. Or they could have what I am proposing, which is zero changes: manage it out of band. File extension is "out of band" but it's not my call. BF: To clarify, you think file extension is not out of band? DH: It's out of band, but I don't like that solution. M: I struggle from the standpoint of saying you have a module that is side-effect only. I would never allow that through a code review. I would never allow this. If you're scaffolding an Express app, and you find that adding an 'import' statement makes these non-local changes to whether it's a module... WH: Modules can be used for small top-level things such as a hello world program. JHD: A polyfill would never use module code to do it; no browsers would support it. For the next decade no polyfill will be written in module code. BF: Would any of your polyfills break if they were swapped into the module grammar? JHD: The ones that were written in ES5 would break; they are intentionally in sloppy mode. The ones that are ES6 and later tend to run in strict mode and would not break. JLZ: Anything that is declaring a global to communicate between scripts will break if they are modules. AK: Jordan, your polyfills are extensive. If you have a targeted one, that is ... for instance Polymer is targeting very specific APIs. They know they are focused on some baseline of browsers. I don't think it's crazy to say modules in the near term would be used for polyfills WH: Two things: 1. When we adopted modules in the first place we had a big conversation about having an introductory sigil. 
We decided that no, we shouldn't. We are revisiting past decisions. 2. If we have a 10,000-line program, having an import or export at line 3042 shouldn't change what it is; it should be obvious at the top. JLZ: I just want to say, from my experience with our internal migrations from different module systems, there is a LOT of confusion when you have to start loading module code in a different way. If you accidentally load a module as a script and it just works, it's going to be a big user issue, more than "oh, I'm editing this module". BF: Are you against changing the grammar? JLZ: No AWB: From an ECMA-262 perspective, it's a false dichotomy to say we have scripts and modules. It's higher than that. We have ESScripts, ESModules and CJS modules. Those are all source files that have distinct syntax and semantics BF: I would agree AWB: Distinguishing all three is useful to a tool. It would be wrong for ECMA-262 to only address one of those vectors, the difference between scripts and modules. I haven't heard any discussion of how to tackle the broader issue other than the fact that external metadata doesn't. Any solution that is strictly about distinguishing modules from scripts for Node's purposes is not a complete solution, and we shouldn't adopt it because of that. I agree with Dave that it would be perfectly fine for Node, from a standards perspective, to say that the way we distinguish modules is import/export boilerplate. I don't think that will cause issues cross-platform. I don't think we should be addressing this here. It's an implementation- or platform-level concern. BM: I'm guessing you're still against AWB: Absolutely JHD: To clarify, are you against ANY change that would make scripts and modules not overlap? AWB: Yes, unless it also addressed CJS modules and made them not overlap JHD: If a solution were presented that was not this one, which allowed CJS/scripts/modules to be disambiguated with parsing only, does anyone reject that out of hand? WH: I don't know what that means BF: I'm skeptical DH: The idea of the pragma would be that in the core definition of the language it is an early error to try to parse a script with a "use module" pragma. It becomes a way that programmers can use this defensively, or a specific platform like Node can decide to mandate it in the cases they think it is good to mandate. Both of those are outside of the spec. The spec just says if there is a pragma it's an early error to parse as a script. I want to give an early argument for a complete distinction between CJS/modules/scripts. I don't think it's a universally bad practice to have a module with only side effects. You shouldn't do it willy-nilly, but in particular there are certain kinds of things that you would do at the top level of your app, and there are standard use cases for that (polyfills for example). Something setting up your ambient state, you don't want it happening everywhere, but if you keep it all in one place for your app that's legit, and those use cases happen. If we just reject it because we don't think it's valid.... MM: It's easy enough to export the function and just invoke it BF: There are other use cases for a module that sets up state, there is nothing to export; we're out of time and it looks like there are people vehemently opposed to this currently. DH: The point I want to make to JHD is that we also have set up a world where your top-level scripts are modules. Like, the top of your app is a script. 
In those cases, you definitely, it doesn't make sense to have exports JHD: You'd like have imports though DH: right, but one of the things we've been moving through the stages is dynamic imports, it makes sense to have those, and no other type of import, for some code. We also allow that in scripts. That makes it even more plausible that it does lots of importing but not top level importing unless we force those things to say export nothing. That to me says you suggest that using modules as the new script is a legit new thing and therefore wants to have the same "lightness" as script modules have today. If we have to add anything, we're adding a boilerplate. BF: Just to be clear, i'm going to mark this as being rejected and take it to node CTC and fall back to a file extension under the presumed .mjs and this will be dsicussed at the next ctc meeting next wednesday. if you wish to join feel free. MF: Just to clarify, you weren't looking to move this through as a staged proposal? BF: people are already shipping technical previews, it would need to be fast-forwarded MF: you said something was rejected, I don't think that is the case BF: No, but node CTC's request of inquiry about this, we're not going to make a grammar change. People of TC39 are opposed to a grammar change CP: We are opposed to THIS one, we are open to see other proposals. JHD: you mentioned a phrase as a tax on modules that don't have exports or top level imports. It sounds like if someone came with a proposal that didn't impose an ergo cost nobody would be opposed but we have no concept of that. DH: The fact that I presented this last month might have screwed us over. What really i as trying to present last month was to get started a conversation we'd start here. This is a strictly better option for you than a grammar change. you in node can do...it's effectively the same thing as export {}, just use the "use module" pragma. BF: This becomes even more questionable and this has been discussed before at node ctc, we want a uniform way to manage this that moves from two or three ways of doing it. CP: You could mandate "use module" BF: But that is secondary citizenship to the web. JSL: We have users who want to use that on the same code on Node and web; if the web doesn't require it, it seems strange to require it in Node. JHD: In the same way one browsers doesn't want to break a page in one browser, node doesn't want to break compat. with code that works in browsers JSL: We don't want people to have to include this AK: This is the opposite of this, it's saying use module in node but it will still work in the web DH: he's saying there will be the whole problem, there is a lot of web code that won't be able to work in node without it JSL: all that content would have to have use module added to it otherwise, and we don't want to require that to the rest of the world AK: we agree there is an ergonomic problem AWB: We are not rejecting this, we just don't have consensus. JSL: What I would reiterate like Bradley said: If we cannot have an unambiguous way of determining this, file extension option is the only one that we have identified as being, the least bad option. DH: I really think you've defined the problem, uh, in a way that's unnecessarily limited your options. These aren't the only two options. It's not my job or right to tell you what to do, but I just don't buy that dichotomy. I think your making a mistake to chose the file extension route. 
I would certainly be happy to work with you guys on this JSL: My intuition is that there is a path forward that is reasonable, we just haven't identified it yet. We don't know. AWB: Just to be clear, the idea that there is a file extension being problematic, that's not a TC39 consensus either. DH: It's just not our call AWB: It's not even... we don't deal with files. BF: I will say if time goes on eventually there will be a true cutoff and we ship something and that will be it. Whatever we do, we do need it to be the same on the web. WE can't have the web not require something that is required in node. I can have myscript.php AK: The web's not going to require file extensions BT: Wouldn't it be incompatible, you'll have modules AWB: Filenames are out of band KM: What's the difference in effort between adding "use module" and renaming to .mjs? MPT: As a library author, I don't care either way DH: I felt like the proposal I had last meeting was promising; I think it deserves another conversation BF: Let's do that offline. CP: And let's also explore the pragma BF: There's a practical time limit if browsers start shipping something that we can't, and shipping seems imminent JSL: We are having increasing pressure from users for delivering something with modules. ZB: Given that we have browsers here, we could get agreement to hold off in shipping to make this change first. TK: Is Node's only objection to "use module" that the web doesn't require it? If the web required it, then would it work for Node? ??: I would not be in favor of that. JSL: Let's take some more time later to work through this issue. TK: If we can't work out a solution, after good faith effort, could we agree to live with the pain of "use module", requiring it on the web and add it to the specification? DE: Seems like we've already heard that the answer is "no" from this discussion. JSL: If we can't come to an answer for in-band, we will need to put something out-of-band. MPT: Multiple extensions Conclusion/Resolution - Some members of the committee are strongly opposed to this particular syntax change - There is also some interest in various strategies for disambiguating - We do not have consensus to move forward rapidly on this proposal 15.iv Progress report and request for comments on 64-bit int support (Brendan Eich) Int64 BE: Last meeting I presented Int64 Uint64, in a heck of a draft, trying to show how to do them in a future proof way, as new values types in the language. They got to stage 1. Since then, I think Benedict and LittleDan have talked to me on Twitter and privately about an alternative which I think should be considered. Which is, not to do Int64 or Uint64 now or even try to get them on the agenda in anyway. Instead, we should do BigInt/BigNum. The argument they make is that is what crypto libraries want and it's more ideal as a machine type. If they could be optimized enough they could resolve the use cases for Int64 Uint64. I want Dan to confirm this? DE: V8 already does similar optimizations, sometimes we tell that something is a smi (small integer) and doesn't need to made into a double. I think determining whether or not BigInt vs Int64 is the first thing we should add is an empirical question. if there is a built-in bigint it can be optimized much better than something implemented as a library. You have a machine-dependent scalar types that should be used in the implementation. You can do compiler analysis on lowering the BigInt which would be not possible for one that is as a library. 
There are a few questions that could determine which is higher priority: one is, does WASM subsume some of these optimized use cases? Leaving that aside. Starting closer, if you put an explicit mask after every operation, when you want it to be in Int64 mode, then the compiler should be able to figure it out for optimized code. Do we need a lot of ergonomics for the Int64 cases or do we need more for BigInt? BE: Let me restate this again because I think there is a lot there. Dan is talking about the use cases for new value types competing for optimization effort in the edges. If you optimize BigInt enough, you can indeed sometimes use machine registers to hold some of the BigInts that are floating around. If you do that and have only bigints you still have a good match with the crypto use cases that want it. If you do what I proposed, add Int64 and Uint64 now and optimize them as well and then have a BigNum library on top of them, BigNum won't get optimized as well. There are needs for Int64 and Uint64; OSes have those offsets and sizes; it's not all about the crypto libraries, but there is no cry from the Node.js community for BigInt like there is for Int64. The standing widespread request from people doing server programming is for Int64 to match system calls. BT: Why BigInt vs BigDecimal if we're looking for one type? ALL: laughing BE: Long history here; Mike from IBM and Sam Ruby spent some time on this in the early Harmony era. WH: What do you mean by big decimal? BigInts are base-invariant. BT: I mean arbitrary-precision large numbers with arbitrary numbers of decimals. I claim nothing about a specific spec; I'm talking about a spec that can represent something other than numbers. BE: I want to limit the scope of the discussion to BigInt going forward. There has been a fair argument that if you could have BigInts you can optimize well enough without Int64 and Uint64. AK: I'm hearing something that is slightly pejorative, that you have to do the same optimization for Int64 as for BigInt. BE: I'm trying to say that either approach can be optimized to avoid boxing. The question is, do you have to do two anyway? If there is enough ground-up demand for Int64 in TypedArrays, WebGL, and because of Node, which is the #1 driver, you might need to do both anyway. AK: Question about the Node case... if you have a 64-bit file descriptor, what's the problem with storing that as a BigInt? What goes wrong? WH: The thing that goes wrong is when you introduce types of things and you have a function which expects an Int64 and you pass in a BigNum; the type restricts to only certain values. As an example, you can expose a function that contains a check: is this a value within a certain range? The check can pass. AK: This seems like a weird place to be the one place where we inject types? BE: We have them already: TypedArrays for one AK: You can't ask for those in a function signature. People today expect to get an int32 in their function and they might be surprised if they don't get it BE: When people want float32, they might want this because of TypedArrays. We might have people deoptimizing these paths if they typically take one but are passed another. Plain JavaScript always has to treat numbers as plain doubles, without more specialization BF: We have plenty of random hacks where we have arrays of two numbers for things that need to pass around a 64-bit integer BB: The issue, we kind of get around with, sort of, using doubles for the most part; file offsets, very rarely are they out of 2^50. 
We have to check if it's an int unless it is out of that range, etc... BE: There are legit use cases for 64-bit ints. It cannot be ruled out. You have to say the tradeoffs look like BigInts could absorb those use cases closely enough. AK: I also want to point out that just as an implementor that's not my only concern. V8 engineers have been pointing out that BigInt should be better for users as well. As a user, if we add all these different types then you have multiple 0s that don't equal each other. BE: I thought we talked about value types a ton; we're going to have a number of them and they'll have 0s. WH: IEEE Decimal has a lot of different zeros. BE: I think we all agree they should be ==, ===, etc. We already have IEEE double, so we're talking about going to the next island of safety, which is bigint, or should we do both? MS: If we add these other types we're talking about, I could see 8, 16, 32, 64, 128, signed, unsigned and none of them compare to each other. BE: I'm glad they don't. That is something Dan and I worked out. DE: Yes, we've gotten positive feedback from other members of the team about no interoperation AK: Yes, but at least BigInts of the same size would be comparable, right? BE: No implicit conversion. AK: The hope would be that there is this island of BigInts that are compatible with each other at least DE: There are two parts to no implicit conversion; + would throw but === would return false BE: There are fine points here, whether we have explicit or implicit conversions; the BigInt vs Int64 step is still a question. AK: Bigints would allow you to have some semblance of understandability between these different types BE: You mean you'd be absorbing more use cases with a single type? AK: Yes, while absorbing those you'd be in a better state than having 8 types BE: After going through SIMD, which is stalled at stage 3.... nobody is here to impose all the ISO C++ types, today. But we have a problem. The twitpocalypse three, there are latent bugs MS: Besides file offsets, getting values back from stat... why can't a BigInt handle that? You know you're going to get back a value you can do logical operations on to see which bits are set. ??: I don't think anyone is saying BigInts won't work for node. I think BE is saying we have a use case for smaller integers. BE: Let's argue apples to apples; if the optimizations for BigInt don't require boxes, there will be slow paths for network i/o DE: stat is a system call, it's going to be more expensive than allocating AK: It's not significantly harder to optimize a BigInt vs an Int64 that takes 64 bits of space. BE: I think there are always optimizations available. We're talking about doing one or the other or both (which I don't propose personally). I'd like to get a sense of the committee on BigInts as the better evolutionary hop. Who thinks we should do Int64/Uint64 in preference to BigInt? BF: We just want something bigger; these alternatives don't matter to us BE: Who wants BigInt? (the straw poll was stalled in the middle due to objection to the idea of a straw poll, but most implementers seemed to prefer BigInt) - both 2 votes - bigint 9 votes - int64 - either BB: The cases where we need 64-bit ints are not in the hot path. stat is expensive enough that boxing it is not in the hot path. BE: I'm not going to champion BigInt. I sunk the cost in Int64, Uint64. My time is limited. Someone else needs to do this. This is why we're five years down the road. AK: V8 cares very much about the use cases for 64 bit. 
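(Background for the file-offset and stat discussion above: Number is an IEEE-754 double, so integers are exact only up to 2^53 - 1, which is why 64-bit values end up carried via hacks like a pair of 32-bit halves. The snippet below is plain present-day JavaScript, not part of either proposal; the offset object is hypothetical.)

```js
// Integers stored in a Number are exact only up to 2^53 - 1.
const max = Number.MAX_SAFE_INTEGER;   // 9007199254740991
console.log(max + 1 === max + 2);      // true -- precision is already gone

// A common workaround for 64-bit values (file offsets, handles, ...) is to
// carry two 32-bit halves, the kind of hack BF mentions:
const offset = { hi: 0x00000012, lo: 0x34567890 };   // hypothetical 64-bit offset
const approx = offset.hi * 2 ** 32 + offset.lo;      // exact only below 2^53
```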
??: If the choice is between like, okay, we have nearing completion for 64bit. If we have to start from scratch we might start over again. BE: That's exactly the issue. I'm talking about who is championing it. Are you gonna do it? AK: Depending on what the outcome of this discussion is. BE: I don't have time for it, it's actually more work in my opinion. WH: This is like 4th or 5th time we've tried to do this in TC39. I first proposed Int64/Uint64 in about the year 2000. We generalized it to BigInt. We generalized it to value types. At that point it became complicated enough that nobody wanted to deal with it. So we dropped it. Next we proposed Int64/Uint64. We generalized it to BigInt. We generalized it to value types. That had too many dependencies and we dropped it.... It's now 17 years later. We proposed int64/uint64.... AK: I haven't been at this as long as you. JS as a language that is moving forward is doing pretty well these days. BE: I hear it's about to be replaced by WASM. I heard Dart was going to do that too, through. We need a champion. BE: 64bit is easier. That's why I'm able to champion it, but not able to champion bignum. BT: We could break the cycle right now by forming a consensus around the specifying some big number type before any other BE: I tried that last time. That's a given, but that doesn't help We're facing a crucial issue, 64ints as the next step or BigInts? BE: The big breakthrough was finding no implicit coercions. BigInts are just hard, they have arbitrary precision. SYG: Do you need to change your value representation from one to the other? AK: This seems to the V8 folks SYG: outside of optimization work, it seems like it is just as hard or easy AK: it's not an implementation concern, it's about users BE: I heard about optimization early on on twitter, that's good to hear today. that is' really about language AK: it's big, i can't speak to the twitter conversations, the bigger issue is one of feeling that this fits better with then language, not doing scope creep helps somewhat. BE: Everyone wants the ideal but who is gonna do the work? That is the concrete consideration that will guide our decision. Plato vs Aristotle, in the real world obviously bigints. AK: Do you think it's impossible to specify BigInts ?? If nobody is going to do it we'll be here 5 years form now. I feel this very much too. We should move froward with a good proposal, I think int64 was, or, you know, very soon find someone who is going to do the actual work instead of saying oh yeah that would be much better. If your team is willing to put up a concrete proposal I would be okay with that. CM: To make this a little more concrete. Is there any reason not to proceed with the 64bit proposal until we have a solid bigint proposal? BE: No, I think the committee today should decide to go forward on 64bits with bigints with a champion for it DE: Give potential champions some time... MM: No matter what we decide it's certainly the case that BE can keep working on it BE: I'm not going to proceed if we have a split committee MM: What's the MVP? ALL: 64int MM: The use case here is obscure. It's valid, it's very valid. No matter how you slice the 64bit integer, it's an obscure use case for most of the community. Nobody is making a rallying cry for it. DE: Neither JSL: Node would use whatever works. Our natural preference would be whatever requires the least overhead. 
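(A rough sketch of the "no implicit conversion" semantics being discussed, for orientation only: BigInt did not exist at this point, so the literal syntax and the exact error behavior below are illustrative guesses consistent with the discussion -- + throws across types, === is simply false -- not settled design.)

```js
// Sketch only -- syntax and errors are illustrative, not a settled design.
const big = 9007199254740993n;   // exact beyond Number's 2^53 - 1 limit

big + 1n;     // 9007199254740994n -- BigInt stays closed under arithmetic
// big + 1;   // would throw a TypeError under "no implicit conversion"
1n === 1;     // false -- strict equality across types is false, not an error

// Division would stay in the integers (truncating), and dividing by zero
// would be an error rather than Infinity -- one plausible choice:
7n / 2n;      // 3n
// 1n / 0n;   // RangeError
```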
BE: I think it's quite true, they don't know what trouble they are getting into until they hit a binary rounding problem and then they blame me. MM: If people are clamoring for this, just ship an MVP DE: One thing you want is an MVP that evolves into what you want; I'm not sure 64-bit ints have that evolution path. I think if we can give until the next meeting to find a new champion, BE: I do want to work on it but I don't have time for it. ??: If we aren't adding 64-bit integers, we're adding infinitely big integers. WH: That's just bigints. BE: The spec has to describe how it works WH: A lot of lisps have that, it works fine. BE: You cannot parameterize division by the exponent, it doesn't work ??: Ok well BE: Someone has to write down how operators work and it depends on the type WH: For 64bits or bignum BE: I'm saying there is work, it's more work for bigint. If you were writing the spec would you say it would be the same effort? WH: Yes; in fact, the effort to write a spec is a bit smaller for bigints than for int64/uint64. BB: That's how I think too, you're still talking about how division works with integers MM: There is still a mathematical concept that bignums reflect clearly. 64 bit signed integers do not. I hope we don't do 64 bit signed integers, but I hope if we do, despite the runtime costs, I would have it error rather than silently wrap when it overflows. BE: I still have this concern having done some of the work, bigints are not just a generalization of signed 64 bit ints. WH: In the spec language they are, it's a very simple spec. If you look at all the basic arithmetic operations, bitwise logical operations, shifts, they all have obvious definitions for bignums. ??: Maybe bitwise binary operations are WH: Bitwise operations are merely operations on half-infinite strings of bits in infinite 2's complement notation. They all just work. (A positive number conceptually has infinitely many leading 0's. A negative number conceptually has infinitely many leading 1's. Only finitely many least significant bits need be represented for any particular number.) BE: It's not bitwise I'm worried about. What about divide by zero? WH: That doesn't change between 64 bit and bignums. What happens when you divide by zero with uint64's? BE: It does because 64-bit hardware exists on many computers that run JavaScript MM: What does the hardware do? ??: It wraps, usually MM: Presumably you're going to throw an exception. BE: Maybe I should reconsider if it would be more work. Let's not worry about the champion right now. I just want to confirm we don't have a split committee. AWB: There are two aspects to this, one is the spec time, the other is time to implement. Particularly, time to implement across all the major implementations. Do people actually think they can manage to implement BigInts? What are the differences? FHN: Shouldn't we value users over implementability? AWB: Where there is a use case for this, and there are use cases, it's becoming a relatively critical need, this is not something we should be looking for 5 years from now. If we said it was bignums, when can we reasonably expect that implementations would have produced viable performant implementations in browsers? AK: From the V8 side, we got momentum internally on BigInt because the effort required seems similar to what people would expect for Int64. AWB: What do other engines think? MS: Same for JSC BE: Just to be clear, do you mean expectations for users on int32... AK: I don't know what the details of the staging plan... 
BE: Does that bring up signed vs unsigned? DE: bigints are always signed ??: It's only interesting when shifting BT: Do you think a general value types is in the cards for JS? BE: When we got IBM to back off on decimal as a hardcode extension. WE used the same argument that WH cited, value types. Decimal is still out there. I've talked to some people lately that have a need for decimals. BE: Value types are still on our agenda and we've done some work that everyone agrees with that is constructive. There are all sorts of possible uses, like for the CSSOM; also custom literal suffixes, etc. BE: Let's talk about this constant rumor that WASM will replace JS. It may happen but not for a long time. We'll have pressure for new value types. Everyone will be happy in Number-land until they aren't, and then maybe they'll use BigInts when they are available? To me, it almost doesn't matter. We've done enough work on Value types, pressure from the edges is big enough. I'd like use to get there so we don't have to keep hard coding. When I extended the spec, I future-proofed a meta protocol for value types. Realm specific auto wrapping of primitives in objects, that's important. I'm not going to summarize all the learnings we've had. Its useful and could be fruitful. Having been around this current wheel so long it's not worth worrying about. If we need it we can do it. MM: I've been doing this now for 8 years, which I know is nothing compared to you. Leaving the door open for something to be done cleanly at such a time in the future that someone has the energy to do it. To me, that is a big deal. The difference between decimal without value types in the old days vs your proposal now: You've done the work to understand how value types could grow in such a way that what you are proposing now will be retroactively rationalized into the value type proposal. We don't know if it'll turn out that way but you've paved the way. The idea of holding the door open for something often works. If you're going to stay on the standards committee you have to play the long game. BE: Good point. There is some pressure with game developers with mutable vectors and matrices for operators. That's a separate proposal. The committee could do operators, literals, and value types. DE: Literals could be done totally separately. CSSOM could actually use suffixes to construct objects BE: Tab wants implicit conversions. DE: There are all sorts of intermediate things. BE: Maybe bigints are no more work. I'll go weigh that. I think we still have people raising our hands on both sides of this. WH: I'd be happy with either one but want to see it happen now. AWB: What does it take to get this in ES2018. BE: If we're being real, I think we should talk about WASM. MM: I want bigints a LOT more than I want 64. I would really like to see... if we had them, we would never bother with int64 and that would be fine. If we had int64 we'd never do bigint and i think that would be terrible. BE: We have agreed here that JS isn't a language where we're trying to limit types. We have typed arrays, even with WASM coming up... it might accelerate. We might need 64 bits soon. WH: Are there implementation concerns with bigints? If there are any, please state them sooner rather than later. We can throw the switch and go the bigint route but I'm concerned about it getting shut down nine months from now due to engines' efficiency concerns, and then we'd be back to square zero. 
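(Note: the bignum semantics argued for above — arithmetic that never overflows and bitwise/shift operators defined on a conceptually infinite two's-complement bit string — can be illustrated with the BigInt syntax that was later standardized. A minimal sketch; none of this existed at the time of the meeting:)

// Arithmetic never overflows; results simply grow.
const big = 2n ** 64n;                      // 18446744073709551616n, not 0n
// Bitwise operators act on an infinite two's-complement view:
// -1n is conceptually ...11111111, so masking with 0xFFn yields 255n.
console.log(-1n & 0xFFn);                   // 255n
console.log(-8n >> 1n);                     // -4n (arithmetic shift, sign preserved)
// Division truncates toward zero and throws on division by zero,
// the "what about divide by zero" question raised above.
console.log(7n / 2n);                       // 3n
// 1n / 0n                                  // would throw a RangeError
// Wrapping to 64 bits is explicit and opt-in rather than implicit:
console.log(BigInt.asIntN(64, 2n ** 63n));  // -9223372036854775808n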
AK: I wanted to find out what the temperature of the committee was before I did any work on it. It's clearly warm and that is enough fuel to do more work. Can we say people who are interested in bigint can look at this for the next two months and we'll talk about it then? AWB: If you add two months, you've now taken out a major chunk... AK: I object that if you don't say bigint now.... BE: You need to be decisive... BF: I will be on the hook for spec work if we agree that if I come next time we're not going to complain about implementation problems. I don't have that issue. I don't think in two months I can get up to full speed on the entire history of this thing. I do not expect it to go to stage 2 in that meeting. AK: Michael and I were having some backchatter, and might be interested... BE: I actually want someone on an engine team to take it. Someone needs to actually commit though and say they'll do it. BE: Dan would be ideal. DE: I'd be happy to do it, unless AK and MS want to. BE: I think we need to be decisive. DE: I can sign up to be the champion for this, if that seems... BE: Until someone says "I'm doing it", it won't happen. WH: I'd be happy to review it. AWB: And the stage one item we have transforms into a bigint approach. BE: You did an earlier version that I wasn't aware of. DE: I did some earlier work to figure out the semantics for operations for Int64 in the past. Stuff like division is not totally simple. BE: I don't think we have marks glitch with overflows. DE: That's a legit design point we can consider. I'm not going to rule that out. MM: I would be amazed if what happened was we got int64 first, with overflow throw. BE: Fair enough. MM: I think of all the options, that's the least likely. WH: int64 with overflows throwing is a really bad idea. BE: Right; so we don't even need to talk about it. DE: That's a very awkward path to bigints. BE: Okay so, the other thing that needs to be done, and this is where TC39 needs to take the hit collectively (it used to be Mozilla). BE: A bunch of people are going to say "you bozos, why are you not doing int64". DE: Okay, the job for me will be to write an explainer covering that. They'll probably say the same about division. (Explainer: littledan/proposal-bigint) BE: You can write an explainer but there will be a reaction. AK: We can't be making decisions about what people say on Twitter, can we? BE: PR is a part of the job, we do need to do it well. DE: It's hard to communicate through Twitter; explainer docs and threads on GitHub are a better medium for the people who are really engaged. ??: One question about this proposal: whether it is bigint or int64... (etherpad went down for approx. 5 minutes) Conclusion/Resolution - DE will champion BigInt 13.i.d Seeking stage -0.0 for IEEE-754 sign bit (JF Bastien) (Presenting link) JFB: Math.sign doesn't let you differentiate 0 and -0; it has five possible outputs. I want to be able to tell -0 and +0 apart in an intuitive way. MM: Just to clarify, you're not proposing to change Math.sign, and the operation is named specifically? JFB: Correct, I'm borrowing from IEEE 754, which has a concept called the sign bit. They don't affect the number in any way. If you look at C++ or Go, there are also functions that do exactly this. It's shift and mask, just give me the bit that indicates sign. ??: What does signbit do with respect to NaN's in IEEE 754? WH: It's complicated. MM: It matters in the sense that we have to specify something. JFB: As a user there is no concept of signedness.
Although C++ allows you to observe the sign of NaN, sorta, it's not consistently exposed; Go is not mentioned in the standard. We have to make sure that we continue to work well in both NaN-canonicalizing and non-canonicalizing implementations. This proposal just chooses something. WH: Why are we doing this? What is the impetus? Why do we care about the difference between -0 and +0? JFB: It's very unintuitive? WH: Why would you care to distinguish ±0? JFB: It's not an open question, I want to distinguish it, done. WH: Okay, so do 1/x. Done. If you like, write a function that does it. JFB: That's hostile to a user. AWB: Which user? Who are we protecting? MM: Just to flesh out the argument these guys are getting at: in order for a user to be in a position to care, they are probably enough of a specialist, enough of an expert about what floating point numbers mean, that the reciprocal thing is not hostile to them. The people to whom it's hostile have no need for this. JFB: They have Math.sign for that. The bar was high enough that Math.sign was a good thing for TC39. MM: Math.sign is understandable in terms of the mathematical numbers that are represented. Both zero and minus zero are neither positive nor negative. They are mathematically zero. JFB: Yes, but how you got to the zero matters. In some cases it matters to differentiate how you got there. MM: I understand that it EXISTS; the point I'm trying to elaborate is that there are users that care about floating point numbers enough that the difference between 0 and -0 is significant to them. That's a very small population. There is a very large population where Math.sign mapping 0 to 0 is fine. JFB: That's fine. MM: The point is that the user hostility argument fails because the users that would care are ones that would find the reciprocal not hostile. ZB: A real, recent case comes from my field of localization. One of the formatters can have a zero for a unit, year/month. In this case we may want to say "zero seconds ago" or "zero seconds". The way we'd distinguish that is by using 0 vs -0: 0 means "in 0 seconds" and -0 means "zero seconds ago". WH: That is user hostile. ZB: I still think it is esoteric, I just wanted to provide an example. AWB: If there is a distinguishable feature in the language people will find a way to misuse it. MM: I do not think this is a positive argument in favor of the proposal. ZB: I'm not saying that, I'm just providing an example. JHD: So you're talking about the usability of dividing by infinity. At this point you could say (describes a way to do this with existing APIs). The usability of Math.sign(x) is vastly superior to that form. That's the burden of proof I would see. JFB: Let me make another argument: when you're doing math-based things, why are you talking about objects? Sign bit is a thing for a person with a math background. JHD: In that problem domain it makes sense, sure. JFB: It's totally a niche thing, but it's trivial. AWB: We have a complexity budget. Yes it would be slightly better for a small number of users but it is an addition to the spec, our workload, an implementation, etc. Anyone can trivially define a one-line function to do this. What is the overriding... we don't have an infinite amount of complexity we can add. Someday we need to stop. JFB: Sure, but you already have this complexity. If you are going to quack like a duck, quack like a duck. JLZ: Are you proposing to add a series of operators? JFB: I'm feeling out the waters... can we add this or should we each roll our own?
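(Note on "roll our own": a user-land signbit is indeed a short function. A hedged sketch, not part of the proposal text, showing both the comparison-based approach discussed above and the literal "shift and mask" reading of the IEEE-754 sign bit; what the NaN sign bit looks like still depends on whether the engine canonicalizes NaNs:)

// Comparison-based: sufficient for every value except NaN.
function signbitSimple(x) {
  x = Number(x);
  return x < 0 || Object.is(x, -0);
}

// Bit-based: reads the actual IEEE-754 sign bit, including for NaN
// (subject to NaN canonicalization in the engine).
function signbitRaw(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);                   // big-endian by default
  return (view.getUint8(0) & 0x80) !== 0;  // top bit of the most significant byte
}

signbitRaw(-0);        // true
signbitRaw(0);         // false
signbitRaw(-Infinity); // true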
JLZ: I would be inclined to see a series of additions rather than single function JFB: 23 mathematical operations were proposed for C++ 20 years ago, and they were only just now added to C++. It's hard to get these right, and it takes a long time. JHD: So you're seeking stage.... JFB: Stage -0! All; ha JHD: Realistically this is either stage one or the committee decided that this will never happen. Are people saying they will never allow it? WH: I'm willing to be convinced. But signbit is not what IEEE 754 specs. Instead, they have a copysign function. JHD: It's in the proposal. WH: (looks again at proposal) It isn't. The proposal isn't proposing what IEEE 754 specs. JHD: The timebox is about to run out. If nobody opposes it being stage one, let's do that. AWD: Is Rick's proposal is in stage one? JHD: Let's investigate if they should be separate AWD: I suggest merging this into that proposal DE: The issue with Rick's proposal the committee was unsure about the motivation. Putting them together just makes something less convincing. JHD: the question of it being merged or not is a stage one question AWD: How many proposals we have floating around in different areas? Things in the same general area should be considered together. I'd much sooner be considering a set of related math functions or just math functions in general than a bunch of individual ones. DE: What if we say the champions should work together? Look at Rick's proposal. AWD: Happy to say, sure, you should talk to Rick JHD: Is appropriate to have them both be stage one? DE: For this particular case, we want to have a consistent math library but there is no cross-cutting thing that associates the two proposals. AWD: Is this actually higher priority than anything in Rick's proposal? DE: The thing about Rick's proposal is that we introduced these operations for degrees. JHD: I'm suggesting that putting these both in stage one is the right place to determine if they should be considered. Conclusion/Resolution - Stage one, but discuss with Rick to consider merging / relationship between math proposals - Concern about the motivation for users (is this too trivial, and any interested users can implement it themselves?) 13.ii.f Promise.prototype.finally to stage 3? (Jordan Harband) tc39/proposal-promise-finally JHD: The overall state. There is still information being gathered by chrome in particular about whether or not .catch can be modified to not observably call .then. If that's true, prototype.finally would do the same. This feedback wil affect this spec (potentially). DE: Even if we find that nobody or very people do this. I'm not sure we have agreed to change .catch's semantics. JHD: we need to determine first if it is even possible. The straw poll we took last time said it was okay because people generally want less observable things. MM: How are they checking whether the change to catch is web compatible? JHD: It would only be incompatible if someone subclassed promise and overrode then. MM: No, there are two other ways it could be incompatible. They could replace promise.prototype.then or they could add a then property to the promise instance DE: Or you could .call the method JHD: So i'm even less confident now on this change being web compatible. MM: I want to know how you -determine- that JHD: I am not sure how that works. I'm not seeking stage three this week. I just want to establish that if .catch will be changed .finally will change. 
It seems likely that this won't happen, but if it did, I would have a PR that makes this compatible. ??: Whats the motivation for not calling it? JHD: Just so the .finally call can have less observable specs. I've been told that the V8 team looked into it and decided it's OK to use these semantics. DE: If we look at the value of the .then method and it's the original builtin value, you can avoid allocating the closures. JHD: That was my primary concern to make the change. It seems in the majority case that seems like the right way to go (to be consistent with .catch). I'm going to merge this PR, get spec review, await the outcome of the .catch investigation and renew my request for stage three. I don't see future changes. JLZ: I'm pretty sure that angular and some others depend on .catch calling .then or that they can override .then JDH: And then the native .catch calls into that? If that's the case, it's not compatible to make that change AK: WHat's the status of V8 looking into this? JHD: There is an open bug on your bug tracker. AK: And we're looking a this? DE: We checked in a use counter and we'll have data in a few months. Even if the use counter isn't hit, that doesn't address the design constraints this addresses. We don't generally hold nonzero to be the threshold for compatibility. AK: I'm concerned about holding this feature up... the longer it isn't there, the longer it isn't there. I'm objecting this being stage three because this spec might change depending on the outcome of this work. AWB: Basically we're splitting hairs about observable vs non observable changes. JHD: okay, I'm going to merge the pull request where it observably calls .then, look into angular, then confirm it's not compatible. MM: Given the topic let me say that I approve that when this issue is settled you can go to stage 3. JLZ: Do we do conditional Stage 3 approval? MM: Given the topic it felt poetically appropriate to share this ;) JDH: Last question, can I have some volunteers to be reviewers? Mark, Dan Resolution / Conclusion - MM and DE will review, based on proposal in a PR that observably passes callback functions to the .then method - Still at Stage 2 15.iii.a Error stacks (seeking stage 1) (Jordan Harband and Mark Miller) ljharb/proposal-error-stacks JHD: I'm looking for stage 1. Spec details don't need to be bikeshedded here. The goal is to lay a foundation for error stacks, such that future proposals can resolve outstanding wishlists. The general gist is that error.prototype.stack is going to be an Annex B accessor on Error.prototype. The goal is to have a union of behaviors across browsers in a way that allows for extension which I'll get to in a moment. The reason the accessor is annex b is to allow BT: Why is that a value? MM: It's of value because the stack leaks non-local information that can be used for attack (I provides non-local information that is normally encapsulated). Also, everyone sharing the same realm necessarily shares the same error.prototype.stack, so they have to share the same view about where the stack maps to. So, SES for all of those reasons is an example of an environment which will remove this accessor. Having removed it, you're still a compliant environment. JHD: If it is error.prototype.stack then the stack information is directly attached to the error object. That can be sent across realms. If you do not have the stack accessor and you have something like System.get.stack MM: No, you understood it except the use of Realm. 
If you are in different realms those could have different "Error.prototype.stack"s. For residents of one realm running with different global scopes, because of the different get stack string functions they see, they could have different views of how they map to source positions. If it is on error.prototype.stack, you cannot deny access to it diferentially across realms. You also can virtualize differently for residents of the same realm. It's also a security concern, an adversary can look at how the stack was populated. MM: In all four browsers, if you create an error object and you say error.stack you'll get a string that gives you information about the call chain. WH: Am I the only one bothered by this exposing hidden data from closures? MM describes a workaround to hide this data in sandboxes, with a lot of effort, but the basic information leak problem remains for non-virtualized scripts. MM: No, the reason you make it removable, and Annex B, and deletable is because we're worried about that. WH: Fine you address this for virtualized environments, what about regular ones? Modules trying to protect themselves from each other? JHD: Those libraries can't hide that information at this moment. Not specifying it isn't increasing security, it is pretending the issue doesn't exist. WH: We had the same issue with browsers showing the arguments to functions as a property of functions. AWB: I know, for strict mode functions in particular we tried to prevent stack walking to observe the callers. MM: And we were able to get rid of it due to the (strict-mode) opt-in. If I thought we could simply remove the stack trace information, not codify it, but get all browsers to remove it. If I thought that was possible this proposal would be very different. Since all browsers currently implement it, I'm not going to try to argue for that. I'm looking for the least dangerous thing that is compatible with what the web currently does. JHD: There is huge ecosystems relying on stack traces, we can't ban these. MM: WH's issue is not with banning this way of accessing. In fact we're specifying that all of them must put the accessor on Error.prototype. It's removable, the remaining privileged operations are on the System object. (WH: that's the opposite of what I said) BT: What are your actual plans for standardizing Error.prototype.stack? It seems at this point we're all pretty entrenched as implementers on our stack format. JHD: Yeah, let me get into that. I'll explain the API. The normalization that this is doing is compatible with what all implementors are doing. BT: More related to the previous discussion. I understand now... I guess the answer I was looking for is that you want this in Annex B because you explicitly want to require browsers to have this property be configurable and you don't want to see error.prototype.stack show up in say, node, or other non browser environments. JHD: It's already there MM: We want to allow other implementations to NOT put it there BT: Annex B is only required for browsers MM: Yes but it's normative optional elsewhere, it allows non-browsers delete it JHD: Some implementations do it as an accessor, some do it as an own property. BT: This is beside the point. I'm trying to understand what you're trying to require of whom so I can understand. I'm extremely skeptical. I don't like adding things to Annex B. JHD: The first thing in general for adding/codifying it, is not allowing it to be an own property. 
The reason it should be in annex b is so that it can be deletable without the environment no longer being ecmascript compliant. DE: What if we just put the whole feature in Annex B? Do we really need the feature dually accessible through this error getter and these properties on the new System object. Non Annex B environments could sort of either not use the future or pull it off to store somewhere before destroying it. JHD: that's separate, I hadn't considered that MM: I hadn't either BT: I would object to that completely. I don't see why you couldn't put this in the main text of the spec along with the presumably the spec for System.getStack JHD: I guess that's true. Either way you could... either way the accessor has to return a string if you want to deny access to the stack trace information, you'd have to return a string or object representing it. BT: That's going to be true regardless. JHD: If it doesn't exist, the only way to get to the stack is through that function CM: If i understand correctly the reason to put it in annex b is to make it allowable to get rid of it. why not just say that's allow? MM: No no, it's just saying that once it is disabled it's an environment that's spec compliant. CM: Just say that, then. Annex B has other implications. MM: No, we're saying it's optional to implement the feature, but if you do implement it, it must be like this. AWB: I don't believe Annex B says feature-by-feature thing can be omitted BT: You can't pick and choose pieces of Annex B JHD: it sounds like if there is a mechanism for it to be deletable and still be compliant BT: Making it deletable just makes it writable. We're saying don't put it there at all MM: If Annex B is take it or leave it as a package. It needs to be normative optional on a per feature basis. If it is only normative-optional as an indivisible bundle, I've been putting things in annex b for years with the wrong assumption. MK: I had the same intuition you had mark. I'm reading the text and it's not clear. AWB: o, these are the things that you need to include as a browser MM: I thought it meant that each thing was individually optional. Michael just read it and said he couldn't tell AWB: If it doesn't say it is individually applicable then it applies to the second MM: No, it applies in the section WH: No, it's a logical fallacy. If you're a browser you must do this; if you're not, you may do this or not do this. There is nothing that states that you can't do something else, like just a part of this, if you're not a browser. MM: You may do part of it. They are individually normative optional. AK: I think if you're building a browser you want this feature. MM: I'm fine with it being what I Thought annex b as, which was mandatory in a browser, normative optional otherwise. AK: I don't think it should be an optional thing for building a browser. BT: That's fair BT: Mark would you want a web browser to not ship this? Are you comfortable requiring it? MM: I'm comfortable requiring it. AK: Parts of the regular spec will say parts are allowed? BT: Before I thought i was optional in browsers as well, was that a misunderstanding? MM: Yes it was. I understood Annex B to be mandatory in browsers, otherwise normative optional on an individual feature basis SYB: are we just discovering that English is ambiguous? :) AK: I don't think there is normative text in the spec to describe what a browser is AWB: When I wrote this language I was not thinking about piecemeal application. 
It's plausible to interpret it, but that isn't what I was thinking about. DE: I think the discussion would probably be helped by more concrete current nonannex-b implementations. We know that SES is one non-Annex B environment; other possibilities could help us refine our ideas for what we should have in spec text. The other thing I want to say (shameless plug) I have a PR for ecma 402 and I have inline annex b stuff and I have different formatting for it. I would be interested in anyone's review because apparently there are a lot of feelings about what annex b is... tc39/ecma402/pull/84 MM: I would like to review it DE: Thank you WH: I see this as a problem that creates non-sandboxed information leaks. Just saying you can delete this if you're building a sandbox doesn't ameliorate the leaks. MM: the issue WH is bringing up is that it can leak information JHD: It's already doing that in every implementation. MM: That's the reason why we are pushing to codify it because we don't think we can kill it WH: I think we should kill it or require special permissions JHD: ordinary scripts already require this sort of thing. NewRelic for example, there are billions of dollars in around this. i can't see how it would be possible. I think all member companies here rely on that information. WH: It's unfortunate that we're required to codify a security hole just because of existing practices. MM: Do you believe there is something we could propose that could be accepted and be implemented that would plug the hole? WH: Hmm. Approaches would be either to not build this or to proceed along the way you (MM) suggested of finding some way to obscure or hide the information it provides. MM: As far as the first one is concerned, if we don't propose anything, browsers will continue with existing practices and not solve the issue JHD: I would claim this lays the groundwork for opening the possibility for allowing some sort of global config to say anyone using this System.getStack information to remove all the security leaking information. Currently fixing this is not possible at all WH: Sure but that would break those billions of dollars of scripts.... JHD: maybe it could be opt in? MM: That's precisely the difference between what WH is proposing and we have here. Things which have not opted are able to now JHD: That is the reality of the web, unfortunately. That ship has long since sailed. AK: it might be cross origin iframes issue MM: The issue about cross-origins stacks and what information we reveal there is important, fortunately because ecmascript doesn't understand the concept of origin, because of the freedom this proposal allows, this proposal specifically is able to sidestep that. We as responsible creators of the web infrastructure can't sidestep that. it's a hard problem. i don't know what to do about cross-origin stacks. JHD: You can postMessage an error object AK: We've had security bugs with stacks MM: I want to signal with everyone that is an issue we need to wrestle with JHD: Error.prototype.stack is an an accessor that is somehow normative optional. There are two functions that are stored on a privileged granted namespace, currently called System. Mark has a number of opinions on why it must be a separate namespace. WE can also address concerns of compat about the name system. I'm sure that anyone using that is just using it for a namespace to hold .import. AWB: Why not use an import to access it? JHD: Because there is no current feature for builtin modules. 
As we've discussed every time this comes up, it shouldn't be a blocker. AWB: Maybe we should change that. JHD: There is no consensus on that. MM: In order to change it, you need to do two things, one is you need to solve the problems we've already discussed. Because this is a privilege-granting operations, you basically have to alreayd have a multiple loader environment so you can say that some priv code says you can import the object. normal code cannot. If you had a built-in module system that satisfied both of those requirements I would have no objection JHD: Okay, if anyone lands built-in modules, a number of existing proposals would be. BT: WE need some proposal to be a motivating use case to drive built-in modules BM: We have initial bikeshedding from browser side that we could expose the current url of your module via an import of some kind. The assumption is we'd have our own URL scheme, either the about scheme which seems kind of weird, or maybe js: BT: What? BM: Clearly we can't use JavaScript:, we could try to present that in two months if this matters to you at all BT: It matters to me! JHD: I don't want to get into this bikeshed. I don't think things should only be provided by builtin modules right now. That's not something I want to get into until now. BM: So you're not locked in. ??: What if we ignored the container of you proposal. AK: What is the motivation for providing additional things? JDH: WE want to create a scope in which System.getStack does something different in one scope or another AK: I'm curious about what it does. Are there users that want to provide getsack but not Error.prototype.stack MM: Yes JDH: I have a use case. I want to filter out any part of the stack trace that is in react. AK: Has anyone written code doing this? MM: Causeway, which is a postmortem debugger for distributed JavaScript. It does a post-filtering of the stacks it gathers. It doesn't do the post filtering by bundling the result of the post-filtering as a virtual getStack. It does the filtering basically in order to present, in order to filter out things that are known in that context, likely not to be significant. AK: I understand the use of working with stack traces and strings and even objects and filtering them. I'm looking for a use case online. JHD: Okay, so, the API. (Describes the spec as linked above) JSL: V8 has Error.captureStackTrace. MM: this proposal doesn't support that JSL: We are making use of this in Node BB: That's because in V8 you have to do that JSL: It basically lets us truncate the stack at a particular site. It basically takes the error and a particular constructor. JHD: Two responses: On es-discuss, some modifications due to feedback MM: What we shouldn't do is expose prepareStackTrace, which exposes all sorts of function objects JSL: We use some additional information to get stack information JHD: We might want to add an API to configure that in a future proposal. BB: What would the performance impact be of having this available? {unclear} AWB: How is [[ErrorData]] represented? JHD: This needs to be added in a future revision {What is the motivation?} MM: There is currently a lot of cross-browser divergence. The only way to get error stacks cross-browser currently is to convert to a string and scrape the string. The SES shim has to do this. The proposal is purposefully vague, to permit omitting frames, etc so you at least get the basics in a cross-browser way. AWB: But the hard part is actually looking through these cross-browser issues in a comprehensive way. 
WH: How useful would this proposal be without those things nailed down? The claim was made that there are lots of scripts in existence that depend on this proposal's functionality so we must include it in the spec, but how do said scripts deal with frames omitted or inlined at implementations' whim? JHD: They break once in a while. JLZ: You should remember the problems with stack frame elision from the tail call discussions. JHD: again, during these discussions, stacks came up. Mark or I, one of us said we are not intending to normatively require or prevent omissions MS: I think I asked specifically that not all frames would have to appear. Thank you for doing that in this proposal. JHD: Thank you, I intend to do that. MF: I like this proposal, I'm very much in favor of it. Early in the discussion we discussed a convenience API where a function is given and all appearances of that function in the stack trace are censored. JHD: I went back and forth if i would include that in this proposal. This is hard enough that I think it should be a follow-on as options to the System.getStack call. I'd like to be able to black box react or jquery, etc. MM: It seems to me that's a perfect example of something you'd want to do with getStack virtualization. JHD: That's true, you can do that yourself by virtualizing this. MF: Do the stack frames actually contain the function object? We need to compare against the function object and don't have the source position. JHD: I agree it's useful to just take the function object, that's a possible extension in the future to this functionality. Nothing bout this makes that impossible MM: We must not provide access to the actual functions. If we want to provide for this filtering, we must do it another way. MF: I think it is required that this proposal must provide this. The source position isn't enough, we could just create functions in a loop. this is not sufficient. JHD: I'd love to discuss this with you offline. MF: That is a very important API to me, I hope to see it here. JHD: I wasn't sure how important it was BB: I am still a little uncomfortable that we're not specifying what is captured when and we know that elision isn't going to happen in the future. I'm concerned we'll find other things, and we'll progress on this spec and find issue where this is not very practical, too much information, hard to obtain or whatever. A glimpse of what it might look like would be helpful. The structured stack format, can you show it in a JSON-like format? MM: Yes! It's in the original README JLZ: I would really like to see this as the maximally minimal. I would object to adding the additional capabilities. JHD: Would you be content if we could add this later? MF: I would be fine if you can do it at the user level or provide an API. MM: I will not expose actual function objects at the user level. JHD: we'll discuss this after MF: Presence in a WeakMap, perhaps? BB: It would be great if the data were JSON-like Schema https//github.com/google/caja/blob/master/src/com/google/caja/ses/debug.js#L47 (MM: Since it might move, I provide a copy here with the one correction we talked about) stacktrace ::= {frames: [frame*]}; frame ::= { name: functionName, source: source, span: [[startLine, startCol?], [endLine, endCol?]?] }; functionName ::= STRING; startLine, startCol, endLine, endCol ::= INTEGER; source ::= STRING | frame; MM: It's all expressible in JSON MM: I'd like to imply that we'd continue to leave this to the implementation and pin it down in further specs. 
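(A hypothetical instance of the schema above, purely for illustration — the function name, URL and source positions are invented, and whether the returned objects are frozen is one of the open questions listed later:)

// What a structured stack returned by System.getStack might look like:
const exampleStack = {
  frames: [
    { name: "handleClick",                   // functionName
      source: "https://example.com/app.js",
      span: [[120, 8], [134, 3]] },
    { name: "?",                             // an anonymous caller
      source: "https://example.com/app.js",
      span: [[210]] }                        // end position omitted
  ]
};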
MS: With this specification of what the object contains, what do you do for anonymous functions, etc? JHD: Whatever you do now MM: For anonymous functions, what the shim does is use a question mark. I'm not suggesting we do that, I think we should decide. MS: <anonymous>? MM: That would be fine with me MS: What do you do when someone just wrote in the code using document.write? MM: Once again, open to suggestions. Browsers do very bizarre things that differ from each other here. JK: This is a bit of a jump. Right now this doesn't affect the stack in JS. Chrome does have an async mode of viewing stacks, to see stacks that happened across Promises, async/await and Web API callbacks. That async operation needs to be represented somehow. Is there room for that? JLZ: That isn't visible from the code MM: This format came from Causeway, and Causeway is all about stitching together each of those separate turns and async operations into an overall causality graph. ( cocoonfx/causeway, ) So, that's one of the reasons there is an outer object here. The identifying of what turn you're in, and then for an async operation, identifying what turn it is causing, with separately logged information, not in this schema, those are all follow-on things that i'm hoping to propose. When I took a look at overall, all the information that causeway logs that allows us to stitch this back together into a causality trace, this was definitely the place to start. As a place to start this is also the closest to all things browsers agree on. That's exactly why I've reserved some space there. FF by the way, already does deep stacks, though in a different way. JHD: Any information can be exposed in debugging tools. JK: It's not observable today but I wanted to know if it was BB: Chrome has inferred names for method calls; they would not show up in the stack. JHD: We could add additional properties to these objects. They are frozen but they don't have finite list of properties. BB: It calls into question... what's the point of having a way to format the textual stacktrace if in fact you have much more information JHD: Currently that information is not observable MM: I was not thinking it would be up to the implementation to populate with more data. I was thinking more properties would come from later specs, not from implementation dependencies JHD: My assumption was that like any other object, implementations can extend them. It seems useful to me to have that information available. If we can't currently get that information, it's fine to not be there. AWB: Why is it important that these are frozen? They are a snapshot of data you have now, let people hack on it. MM: if you generate two error objects which have a common stack at some state, basically a common caller, and we have a call tree and we have joins with a common caller. You don't want to obligate them to be made separate. But you don't want the sharing to be a communication channel. It's observable that they are sharing but two entities that otherwise can't communicate, with common access, can't use them to communicate with each other. When we haven't specified that something is frozen, like getOwnPropertyDescriptor, that's a great example. I desperately wish we had specced those as frozen. We now obligate the implementation to generate a fresh one every time. AK: Why not? MM: Because, for example, once a property is non-confgurable, non-writable data property. It's descriptor is never going to change. 
You'd like an implementation to be free to have a little cache and reuse them. AWB: You'd like it, but this is JS AK: Why do you assume it would be better to have them always cached? That has its own problems. JHD: You're not required to cache them AWB: They are either the same or they aren't. JHD: Yeah, if you want it to be consistent. MM: If I had to choose, I would not REQUIRE them to be the same object, not require caching. AWB: Different from template objects, which are statically generated. MM: yeah, I suppose, I can't find a killer reason off the top of my head. Maybe I don't have one. This one we can revisit. MS: Is your concern that someone would use property names you'd want in future versions? JHD: If it created a fresh object every time, it would just be shadowed. If the concern is implementations... they can currently add a property like they can anywhere else. We might say you can't add new properties that aren't in the spec but leave it unfrozen. MM: We will need to leave it as an open question whether the produced object is frozen or not. MS: I can imagine implementation extensions causing compatibility issues. Say an implementation adds an arguments property, but later TC39 standardizes it differently. But this is all about implementations, not about whether the object is frozen at runtime. AWB: Well, if you're asking for enhanced information, why not use an implementation-specific API for everything? JHD: Maybe they would be implemented in terms of the other, and this could be affected by our decision about freezing. AWB: I'd assume they'd be implemented primitively. WH: How does an implementation censor the result of GetStackString? It can call GetStack and possibly remove or alter some frames, but how does it render those back into something that GetStackString would have produced? MM: Good point. We need a way of rendering for virtualization. This suggests including another part of the SES shim api we omitted from this proposal: stackString, which is an unprivileged operation that takes the object structure returned by getStack and produces the rendered string returned by getStackString. Then, when virtualizing getStack you'd just reimplement getStackString as stackString(getStack(error)). Conclusion/Resolution - Stage 1 acceptance - Open Questions: - Determine if this belongs in annex b or if other normative-optional text is allowed - Add a separate method to convert stack frames to a stack string? - should getStack return frozen things? - should implementations be prohibited to extend the framesarray/objects? - should we include in this proposal an API that takes a function object and allows for eliding stack frames from within this function? - what does the Error constructor text for populating [[ErrorData]] look like? Discuss non-technical long-term vision for ECMAScript (Brian Terlson) JFB: Kill it with fire? Straw poll, should we kill it with fire? BT: No, the consensus is to not kill it with fire. BT: Are there people who would be interested in listening and participating in such content? I was thinking we could debate format, 5 minutes to 30 minutes level. No 2 hour disserations or anything. We could do it at the next meeting if that is sufficient time to create decks. MF: Will we produce a goal statement or some kind of document? BT: I think it would be awesome for this committee to have a consensus roadmap. MM: I would prefer not to use the upcoming meeting. 
The kind of thing you're asking for, I would really want people to think about what they want long term and let that gestate for a while. With all of us having many things in our schedule for the next two months, having a bunch of unanticipated time over the next two months is going to be too tight for contemplating in this way. BB: Are we saying everyone who has an idea would say this during the same day? We can do it staged MM: I would be perfectly fine AWB: I think it would if we could focus and have like, a vision day, and not have a haphazard here is a piece, there is a piece. We should block off a day and we aren't going to worry about, you know, the current proposals and the process and the bugs and just think about the next 5 or 10 years or whatever seems appropriate and start to form a common vision about where it is we are going. BT: To answer MF's question, in an ideal world we would have a shared vision of where we are heading. I think some kind of vision day... something that would come from upper management. It felt horrible to say that. Someone give me some other name. MS: Long term direction day? BE: Everyone read the harmony email from 08? Long term direction happened, ES6, etc. It's time for a new one. link: esdiscuss/2008-August/006837 BT: I hope that by everyone sharing what their vision is, that we can appreciate the different aspects of experience and interest that people bring to the table and have another rev of that document. We should keep the top 5 long term goals of the committee available. I'm not sure what it looks like, but a reasonable starting point is us sharing what we think? MS: Brendan can you share that? BE: You can search "ecmascript harmony mail" it's the 2nd hit MPT: i started a conversation about vision last night, where I was coming from, the community et al has very little idea of what goes on in this room. For the most part it doesn't occur to them that they would have any influence on the proceedings here. If they don't have a say, I would say they ought to. I don't think they should be here commenting on every little thing. But it does make sense to release a long term vision and accept their feedback and yes you're going to get flamewar and 500 miles of bikeshedding. Inside that you'll find valuable feedback of what people are looking for. As the JSF rep, I'm happy to try to get that some press. Make a doc that says where you're going. AWB: I assume whatever we discuss would be public and call people's attention to it. BT: We don't wan tot just dump a bunch of text on people. AWB: Ultimately we want to get something unified, but the first step is probably "Decks" MS: Part of our diversity issue, we don't actually have a lot of users here. We talked about it making sense for those in the committee to go to user group meetings small or large and represent the workings of TC39 including what's currently going on. We didn't create any actions but we did talk about doing that. BB: I think what is confusing for a lot of people is that TC39 itself does not have or state anything that we produce the spec. The spec is pretty unreadable and uninteresting. BT: hey now BB: To most people. It doesn't tell you what can I expect from TC39, what do they think is important, how do they approach problems? In a way we are just a group of random people arguing about things. We may not have a shared opinion but we could have a shared message. ??: What is the forum for this discussion? We have esdiscuss, we have github, twitter isn't very effective.... 
BT: I would hope we could put something on the TC39 wiki. DE: We produce the minutes, a lot of people read them, but they are kind of hard ot read. When I was working at Google, I would produce a summary of what happened at the meetings. Now, I could do that externally instead BT: I would love for you to do that; I do that at Microsoft MPT: It's a stated goal of JSF to produce exactly that for the community. We don't do it , but we could! I was slack chatting with Chris today about this. DE: Great, let's all work on this together then. WH: We have had, quite often, meetings with community user groups as an adjunct to TC39 meetings. We had one last year in Munich, in Boston, etc. They were for the user community to meet us and for us to meet them, chat, and listen to their stories. We should continue doing that. That's part of the reason I was pushing for geographical diversity in our meetings. BT: I promise you have a JS meetup in your area. BT: Back on communicating of vision.... AWB: Before we communicate it we need to create it BT: Communicating amongst ourselves, we all agree that would be good. Does it seem unreasonable to spend an entire day on this? DE: Sounds great. AWB/TK: not enough MM: if we spent an entire meeting on it MPT: you'll want to spend a day on it, give to the community, get feedback, and then do it in the next meeting. You'll never get it in one meeting. CP: I don't think we're talking about getting a draft of anything. The first step is getting a sense of what the highly varied opinions are DE: We could write a summary of the visioning session on the newly legible minutes that might happen and get feedback on that, even if we don't get a consensus vision among the whole committee. AWB: we need goals, he meanings fo the goals, the specific actions we'll accomplish BT: The question is, what .... i guess without doing an actual call for presenters. I imagine we all want QA periods too, we have at least an hour per. WE could just cancel the agenda for the May meeting. We should decide that now. People should get stuff ready for that. JSL: In may we're just ratifying 2016, right? That's where we have the final vote? MF: Can we just do proposals ready to advance? Let's do the stuff we have to do JSL: We travel in NY we end up not using three days BT: What if we have three presenters, and the schedule doesn't fill up? CP: Who will present? BT: Not many hands were raised, but I hope to make the compelling case for more people to present. TK: Good time to bring in the community BT: I think the two-day plan is reasonable; I'm sure we'll find other content to fill the day with. This is all for the May meeting in New York. DE: Will the CFP be directed at just members, or also at the community? BT: Let's have a meeting the Friday after TC39 to bring in the community and talk about the vision. WH: Friday would be an issue for those who travel to the meeting. An evening during the meeting would be better. MPT: Lots of people have travel restrictions, so maybe Wednesday evening is best. WH: In the past, we've already done these things on Tuesday/Wednesday night TK: The visioning should be done before the community meeting, rather than afterwards, so maybe Tuesday night MPT: How about splitting it up further, e.g., I go to Seattle JS. Anyway, most user groups would hold a special session for this sort of thing. JHD: It will be good for us to be on the same page BE: In the past, various people wanted to control the narrative. 
DE: Chapters.io has some resources
BT: I'll put a reflector post for figuring out community outreach
Conclusion/Resolution
- BT to make a post on the Reflector saying that we'll spend the May meeting on long-term vision presentations
- BT to write and send out a CFP for internal vision presentations
- Another thread to discuss attempts at community outreach
Archives [link] Connection String generation Visual Studio macroSub ConnectionStringWizard() < /SPAN > [wish] Microsoft P2P download system). [tool] MozBackup - copy your Firefox profiles. ASP.NET DropDownLists with day / month / year values I recently had to add date selection dropdowns on a webform and was surprised that a few minutes of Googling didn't write my code for me. I wrote a quick console app to do it. Here's the code, along with dropdowns for month, day, and year. GMail: "oops...the system was unable to perform your operation"). The hidden feature in Media Center 2005 UR2. //TODONT: Use a Windows Service just to run a scheduled process? Yet another command line parsing system.; } Transparency on DRM and Windows Mobile upgrades) [link] Riya - Photo search with face recognition! Installing Windows Vista October CTP (Build 5231) on VPC with VM Additions I installed the Windows Vista October CTP (Build 5231) on Virtual PC yesterday. There were a few gotchas that I thought I'd share in case it saves anyone else some time. -. - I kind of lied about the VM Additions. The video drivers are installed, and they make a big difference, but the rest of the VM Additions aren't installed. That means no folder sharing, etc. They might release VM Additions for Vista Beta 1, but I'm not holding my breath. How exactly would you like me to "Quote values differently inside a '<% ... "value" ... %>' block"? Visual Studio freaks out when your HTML contains nested quotes. Roy's solution (using single quotes for the attribute and double quotes for the databinder or function arguments) works unless you need to nest quotes, which occurs if you're including a Javascript call in a databinding statement:<asp:TemplateColumn> <ItemTemplate> <a href="javascript:showPopup('\Popups\Edit.aspx?ID=<%# DataBinder.Eval(Container.DataItem,"itemID")%>')">edit</a> </ItemTemplate> </asp:TemplateColumn> The problem here is that you need quotes around your href attribute, quotes around your javascript function argument, and quotes around your DataBinder.Eval string parameter. Tim sums it up pretty will here, but you're pretty much stuck with either "Quote values differently inside a '<% ..."value"... %>' block." or "Place quotes around a '<% %>' block used as an atribute value or within a SELECT element." I haven't seen a good discussion on how to handle this well, so here's what I've come up with - please recommend something else if you've got a better solution. We need a third quote, right? Well, one common solution is to use \u0022 - the VS UI doesn't see it as a quote and doesn't get confused, but the ASP.NET rendering engine writes it out as a quote:<asp:TemplateColumn> <ItemTemplate> <a href ="javascript:showPopup(\u0022\Popups\Edit.aspx?ID=<%# DataBinder.Eval(Container.DataItem,"itemID")%>\u0022)"> edit< /a> </ItemTemplate> </asp:TemplateColumn> A better solution is to just escape the quote with a backslash (\"). The VS IDE handles that just fine:<asp:TemplateColumn> <ItemTemplate> <a href ="javascript:showPopup(\"\Popups\Edit.aspx?ID=<%# DataBinder.Eval(Container.DataItem,"itemID")%>\")"> edit< /a> </ItemTemplate> </asp:TemplateColumn> Another solution is to use code behind to actually construct the links. I normally wouldn't use that unless there was a good amount of formatting, and even then you can get away from that with a format string in the DataBinder.Eval call. I usually see people recommend one-off functions for that kind of thing, but I prefer using something more generic. 
The following function will work in a page code behind, but could be made a static function and placed in a code library class:protected string BuildJavascriptLink(string format, params string[] input) { format = format.Replace("{quote}","\""); string retval = string.Format(format,input); retval = retval.Replace("'","\""); return retval; } Now we could reference it in the code front with something like this:<a href='<%# BuildJavascriptLink("javascript:showPopUp({quote}/PopUps/{0}{1}{quote},{2},{3})","StreetLevelMap.aspx?ID=",Convert.ToString(DataBinder.Eval(Container.DataItem,"Entity_ID")),"600","450") %>'>Street Map</a> [link] Connecting to Terminal Services When All Active Sessions are Used Here's a little trick for getting into a box via Terminal Server when all sessions are in use. Phil and I came up with it yesterday and I was going to write it up, but he beat me to it with a pretty nice writeup. Here it is: a server. Since we generally launch Remote Desktop from the icon, we almost always leave this console session free. Nice! ... [via haacked - Connecting to Terminal Services When All Active Sessions are Used] [OT] Friday afternoon silliness New video from Jason Forrest - War Photographer BizTalk is to Windows Workflow Foundation as... Nice to see this spelled out. [links] Recent links of interest - 10/8/2005 Here are some interesting links from the past month or so. Some of these will be old news if you read all the .NET blogs; some of my readers don't. [OT] Web 1.0 Summit - Anyone else going to the <blink> tag session? Web 1.0: <BR> Away! Uninstall a Previous Application When Upgrading an Application with Setups Created in VS.NET Source: Uninstall a Previous Application When Upgrading an Application with Setups Created in VS.NET Office 12 will have Native PDF support (or, rather, no...) Steven Sinofsky (SVP, Office) announced Saturday that Office 12 will have native PDF support. [OT] Why not nuke the hurricanes? During the recent hurricanes, there were several blog posts about the StormFury project which investigated "hurricane modification" by seeding clouds to lessen their impact. It didn't pan out. [asp.net] Simple utility function to return all selected values from a CheckBoxList.public string[] CheckboxListSelections(System.Web.UI.WebControls.CheckBoxList list) { ArrayList values = new ArrayList(); for(int counter = 0; counter < list.Items.Count; counter++) { if(list.Items[counter].Selected) { values.Add(list.Items[counter].Value); } } return (String[]) values.ToArray( typeof( string ) ); } Posted so: - I can find it later - I can maybe save the next guy some time - The piranha haterscommunity can point out how this could be done better. JavaScript only pretends to support function overloading I frequently use method overloading a lot in my C# code to allow optional parameters, so when I wanted to implement a simple popup function with optional support for virtual root and website based on the url, here's what I came up with:function showPopUp(virtual, url, height, width) { if(location.pathname.indexOf(virtual)>=0) { //we're running in the virtual root, so prepend it to the url url=virtual+url; } showPopUp(url, height, width); } function showPopUp(url, height, width) { window.open(url, 'PopUpWindow', 'toolbar=no,directories=no,menubar=no,resizable=no,status=no,height=' + height + ',width=' + width); } Javascript sneakily pretends to support this - no script error - every time I called the function, the virtual root wasn't being added on. 
That's because JavaScript doesn't support method overloading; it just uses the function which was defined last (thanks, Bertrand, for verifying my hunch). I had to give the second function a different name, and it all worked. Here's an easy way to verify - this will show the second message:

<html>
<head>
<script>
function test(one, two) { alert('expected: first function with two parameters'); }
function test(one) { alert('surprise! second function with one parameter'); }
</script>
</head>
<body onload="javascript:test('first','second')">
</body>
</html>
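For completeness, here is a sketch of the renamed version described above (the new function name is my own; the original post doesn't say which name was used):

function showPopUpInVirtual(virtual, url, height, width) {
  if (location.pathname.indexOf(virtual) >= 0) {
    // we're running in the virtual root, so prepend it to the url
    url = virtual + url;
  }
  showPopUp(url, height, width);
}

function showPopUp(url, height, width) {
  window.open(url, 'PopUpWindow',
    'toolbar=no,directories=no,menubar=no,resizable=no,status=no,height=' + height + ',width=' + width);
}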
http://weblogs.asp.net/jongalloway/archive/2005/10?PageIndex=2
CC-MAIN-2014-41
refinedweb
1,210
55.84
On Mon, Oct 17, 2011 at 10:04:16AM -0400, Mikulas Patocka wrote: > > If you're going to use cond_resched() at least do so a little more > > intelligently than putting it in _every_ loop. For instance you call it on > > every iteration of a sweep through the hash table. The call to > > cond_resched will take more time than the loop body. At least make a > > change so it's only done every n'th iteration. > > I think it would be better to use > #ifndef CONFIG_PREEMPT > if (need_resched()) cond_resched(); > #endif > > need_resched() is inlined and translates to a single condition. Yep, that would be fine. > I don't know why Linux doesn't provide a macro for it, this would be > useful far beyond dm code. Agreed, I was very surprised that cond_resched() expands out to a function call rather than a test + fn call. Get a patch together and I'll merge. - Joe
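A rough sketch of such a helper, following the exchange above (the macro name and the bucket-walk usage are illustrative only, not existing kernel code):

/*
 * Sketch of the helper discussed above; the macro name is made up, it is
 * not an existing kernel API.  need_resched() is an inline flag test, so
 * the common case costs a single branch, and the (comparatively heavy)
 * cond_resched() call only happens when a reschedule is actually pending.
 * Under CONFIG_PREEMPT the kernel can already preempt us, so it compiles away.
 */
#ifndef CONFIG_PREEMPT
#define cond_resched_if_needed()        \
	do {                            \
		if (need_resched())     \
			cond_resched(); \
	} while (0)
#else
#define cond_resched_if_needed() do { } while (0)
#endif

/* Called every n'th iteration of a long sweep, e.g.: */
/*
 *	for (i = 0; i < nr_buckets; i++) {
 *		process_bucket(i);
 *		if (!(i & 1023))
 *			cond_resched_if_needed();
 *	}
 */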
http://www.redhat.com/archives/dm-devel/2011-October/msg00073.html
CC-MAIN-2014-42
refinedweb
152
80.62
I have an assignment to write a simple program that names each member in my family and how they are related to me. The program compiles and all, but as soon as I type in any name, this is what shows up:

Run Command: line 1: 3805 Segmentation fault: 11 ./"$2"

My code is as follows:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    char genderType;
    char* name;
    cout<<"Enter M if you are a male. Enter F if you are a female: ";
    cin>>genderType;
    if (genderType == 'm' || genderType=='M')
    {
        cout<<"You are a male."<<endl;
        cout<<"Please type in your first name in all lowercase letters: ";
        cin>>name;
        if (name=="hussein")
            cout<<"You are my eldest brother! ";
        else if (name=="ahmed")
            cout<<"You are my older brother! ";
        else (name =="kamal");
            cout<<"You are my father! ";
    }
    else if (genderType=='f' || genderType=='F')
    {
        cout<<"You are a female."<<endl;
        cout<<"Please type in your first name in all lowercase letters: ";
        cin>>name;
        if (name=="hinda")
            cout<<"You are my eldest sister! ";
        else if (name=="iman")
            cout<<"You are my older sister! ";
        else (name=="ikhlas");
            cout<<"You are my mother! ";
    }
    else
    {
        cout<<"Sorry, you can only be a male or female! ";
    }
}
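For reference, a minimal corrected sketch of what the program seems to be aiming for: the crash comes from reading into an uninitialized char*, and the name checks compare pointers rather than text; switching to std::string avoids both problems. The names and messages below simply mirror the question:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    char genderType;
    string name;                      // std::string owns its storage, unlike an uninitialized char*

    cout << "Enter M if you are a male. Enter F if you are a female: ";
    cin >> genderType;

    if (genderType == 'm' || genderType == 'M')
    {
        cout << "You are a male." << endl;
        cout << "Please type in your first name in all lowercase letters: ";
        cin >> name;                  // safe: the string grows as needed

        if (name == "hussein")        // std::string compares contents, not pointer values
            cout << "You are my eldest brother! ";
        else if (name == "ahmed")
            cout << "You are my older brother! ";
        else if (name == "kamal")     // was: else (name=="kamal"); -- a no-op expression statement
            cout << "You are my father! ";
    }
    else if (genderType == 'f' || genderType == 'F')
    {
        cout << "You are a female." << endl;
        cout << "Please type in your first name in all lowercase letters: ";
        cin >> name;

        if (name == "hinda")
            cout << "You are my eldest sister! ";
        else if (name == "iman")
            cout << "You are my older sister! ";
        else if (name == "ikhlas")
            cout << "You are my mother! ";
    }
    else
    {
        cout << "Sorry, you can only be a male or female! ";
    }
    return 0;
}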
https://www.daniweb.com/programming/software-development/threads/409817/c-segmentation-fault
CC-MAIN-2017-17
refinedweb
200
73.58
, most existing enterprise back-end systems provide a SOAP-based web service application programming interface (API) or proprietary file-based interfaces. In this article series we will discuss how Oracle Service Bus (OSB) 12c can be used to transform these enterprise system interfaces into a mobile-optimized REST-JSON API. This architecture layer is sometimes referred to as Mobile Oriented Architecture (MOA) or Mobile Service Oriented Architecture (MOSOA). A-Team has been working on a number of projects with OSB 12c to build this architecture layer. We will explain step-by-step how to build this layer, and we will share tips, lessons learned and best practices we discovered along the way. Main Article In part 1 we discussed the design of the REST API, in part 2 and part 3 we discussed the implementation of the RESTful services in service bus by transforming ADF BC SDO SOAP service methods. In this fourth part, we will take a look at techniques for logging, debugging, troubleshooting and exception handling. The easiest way to get more insight in what actually happens inside your pipelines is to add Log actions. You can simply drag and drop a Log action from the component palette and drop it anywhere you want. For example, if a call to a business service fails, you can add log statements to print out the request body before and after the transformation that takes place in the Replace action to inspect the payload. By default, the Severity of the log message is set to Debug. In order to see debug log messages, you need to change the OSB log level which is set to Warning by default. You can do this using the Actions dropdown menu in the JDeveloper log window, and choosing the Configure Oracle Diagnostic Logging option. You can also use enterprise manager, by opening the service bus dropdown menu and choose Logs -> Log Configuration. If you set the log level to Trace (FINE) or lower. your debug log messages will in appear in the JDeveloper window log level. However, with this log level you also get a lot of standard diagnostic OSB log messages in your log which makes it harder to find your own log messages. So, it is easiest to set the Log action Severity to Info, and the OSB log level to Notification (INFO). Note that if you change the log level, you do not need to restart Weblogic or redeploy your app, the changes are applied immediately. With info-level logging you have a clean log window that only contains your own log messages, and when you move your OSB application to production, you will not clutter the log files as long as the production log level is set to Warning or Error. Here is an example of the log window when we execute the /departments/{departmentId} resource (which maps to the getDepartmentDetails operation binding): More information about service bus logging can be found here. Another way to troubleshoot issues is to run your OSB application in debug mode. You can do this by choosing the Debug option from the proxy service popup menu: You can set breakpoints on the actions in your pipeline diagram using the popup menu: When you then execute a resource, the debugger will stop at your breakpoint and you can use the "data" debug window to inspect the flow of data through your pipeline. You can expand the various XML elements to see the contents of the header and body of your request and your response. In the above screen shot we have expanded the body element which shows the same data as we logged in the previous section. 
Any custom variables that you use to store temporary data, like the expandDetails variable we introduced in part 3, are also visible. When the debugger hits a breakpoint you have the normal debugging options like Step Over to go to the next action in the pipeline, or Resume to go to the next breakpoint. In other words, running in debug mode allows you determine the execution path through your pipeline in addition to viewing the data like you can with log messages, Invoking business services might cause various (unexpected) exceptions. The business service call might fail because the server is down, or the call succeeds but leads to an error because some business rule is violated while performing some update action.. When sending a JSON payload that contains invalid data, for example a non-existent manager ID, it is common practice to return HTTP error code 400 "Bad Request". When the business service does not respond at all, we should return HTTP error code 404 "Not Found". Using these error codes makes clear to the consumer whether he is dealing with an application error (400), or a server error (404). To return appropriate HTTP error codes, we first need to define an XSD that contains the structure of the error message for each type of error.Here is the error.xsd that we will use in our example: <?xml version = '1.0' encoding = 'UTF-8'?> <xsd:schema xmlns: <xsd:element <xsd:complexType> <xsd:sequence> <xsd:element <xsd:element <xsd:element </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element <xsd:complexType> <xsd:sequence> <xsd:element </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> With this XSD in place we can add two fault bindings to our REST operation bindings: Each fault binding needs to have its own unique XSD element type. If we would reuse the ApplicationError element type with the 404 fault binding, the 404 error code would never be returned. OSB determines which fault binding to use based on the element type used in the fault response returned by the pipeline. To be able to return a different payload and associated HTTP error code in case of an exception, we need to add a so-called error handler to our route nodes. We right-mouse-click on the RouteNode of the createDepartment operation branch, and choose Add Error Handler. To figure out the kind of response we get when violating a business rule, we first drag and drop a Log action inside the error handler and set the expression to $body. We now execute the /departments POST resource with an invalid managerId in the payload: In the log window we can inspect the payload body returned in case of an ADF BC exception being thrown: The body contains a generic part with the <env:Fault> element, and inside the <detail> element we can find the ADF-specific error. We need an XQuery file to transform the ADF error message to the ApplicationError element. As source element type we choose the ServiceErrorMessage element: As target element we choose the ApplicationError element from the error.xsd and then we can drag the mapping lines as shown below As we have seen in the JDeveloper log window, the message element contains both the error code and the error message. Since we have a separate element for the code, we want to strip the error code from the message. 
We can do this by using the XQuery String function substring-after: in the component palette, we change the value of the dropdown list to XQuery Functions, and expand the String Functions accordion.We drag and drop the substring-after function onto the message mapping line, inside the Mappings area in the middle. We click on the yellow function expression icon that appears and then we can complete the expression in the Expression - Properties window. We should replace the second argument of the function with ': 'because after these two characters the actual error message appears. Click on the XQuery Source tab to make sure that the expression has been saved correctly, sometimes the change you make in the properties window is not picked up. If this is the case, just re-enter the function argument in the source. In the component palette there are many XQuery functions available. If you want to get more information on how to use them, you can use the xqueryfunctions.com website. To complete the XQuery transformation, we need to surround the ApplicationError with the standard SOAP fault element, If we forget this step, the payload will not be recognized as a valid fault payload and the fault bindings we defined for the REST operation will not be used. We cannot do this in a visual way, so we click on the XQuery Source tab and add the surrounding fault element as shown below: Note that the value of the <faultcode> element must be set to env:Server, otherwise it will not work. The value of the <faultstring> element doesn't matter. With the XQuery transformation in place we can add a Replace action inside the error handler which uses this transformation: The expression for the ServiceErrorMessage input variable is shown below. Note the double slashes which means it will search the whole tree inside the body element, not just the direct children. The err namespace can be found in the JDeveloper log and should be set to. The last step is to drag and drop a Reply action after the Replace action and set the option With Failure to inform the proxy service that a fault response is returned. That's it, if we now use Postman to create a new department with an invalid managerId we get a nice error response with HTTP code 400: To handle the situation where the ADF BC SOAP server is down, we need to return a response which contains the ServerError element so we can return the HTTP error code 404 together with a user-friendly error message. To distinguish between the 400 and 404 error response, we drag and drop an If-Then action inside the error handler, and enter the following expression in the Condition field: $body//err:ServiceErrorMessage!='' When this expression is true we are dealing with an ADF BC Exception so we should move the Replace action we already defined inside the If branch. In the Else branch we should return a generic error that the service is not available. 
Since there is nothing to transform, we can enter the required response payload directly in the expression field:

<env:Fault xmlns:
    <faultcode>env:Server</faultcode>
    <faultstring>Generic Error</faultstring>
    <detail>
        <ns2:ServerError>
            <ns2:message>The HR service is currently not available, please contact the helpdesk</ns2:message>
        </ns2:ServerError>
    </detail>
</env:Fault>

The complete error handler (with log actions removed) now looks like this: When we bring the ADF BC server down and use Postman again to submit a new department, we will get the 404 error code together with the generic error we just defined in the body replace expression: To finish the exception handling, we need to add the same error handler to the updateDepartment operation. A quick way to do this is to right-mouse-click on the createDepartment error handler and choose Copy from the popup menu. Then right-mouse-click on the updateDepartment RouteNode and choose Paste from the popup menu. However, a better and more reusable way to do this is to create a pipeline template and define the error handler in the template. This prevents duplication of identical error handlers and it allows us to change the error handler over time in the template, with the changes being picked up automatically by all pipelines based on this template. We will look into pipeline templates in more depth later on in this article series.
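Putting the error-handling pieces together, the body of such an XQuery transformation ends up looking roughly like the sketch below. Treat it purely as an illustration: the namespace URIs and the exact element names have to come from your own environment (the JDeveloper log and your error.xsd).

(: Sketch only.  $serviceErrorMessage is the ADF ServiceErrorMessage element passed
   in by the Replace action.  The err URI below is the usual ADF service-errors
   namespace, but take the real value from your JDeveloper log, and make sure the
   ApplicationError element names match your own error.xsd. :)
declare namespace env = "http://schemas.xmlsoap.org/soap/envelope/";
declare namespace err = "http://xmlns.oracle.com/adf/svc/errors/";

declare variable $serviceErrorMessage as element() external;
declare variable $rawMessage := string($serviceErrorMessage//err:message[1]);

<env:Fault>
    <faultcode>env:Server</faultcode>
    <faultstring>Application Error</faultstring>
    <detail>
        <ApplicationError>
            <code>{ substring-before($rawMessage, ':') }</code>
            <message>{ substring-after($rawMessage, ': ') }</message>
        </ApplicationError>
    </detail>
</env:Fault>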
https://www.ateam-oracle.com/creating-a-mobile-optimized-rest-api-using-oracle-service-bus-part-4
CC-MAIN-2020-45
refinedweb
1,915
54.26
Re: AxShDocVw.AxWebBrowser.Navigate and object headers
- From: "Marc Gravell" <marc.gravell@xxxxxxxxx>
- Date: Mon, 19 Nov 2007 10:32:29 -0000

Can you define "pass an object"? HTTP headers are just string name/value pairs. If you can serialize your object into a string, then I guess you could pass it in a header, although the body (postData) would be more common (again, it would need to be serialized). At the server you'd then just read the named header from the Request, or the body, etc. You might also note that in 2.0 there is a managed WebBrowser wrapper in the System.Windows.Forms namespace. What are you trying to do? There is a more object-based interaction between a WebBrowser and a hosting application, but it is client-side only - i.e. it allows your C# code and JavaScript code to talk to each other in both directions. It doesn't allow you to talk to the server. Either regular HTTP requests or web services (SOAP etc.) are commonly used for this type of activity.

Marc
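To make that concrete, here is a small sketch against the managed 2.0 wrapper; the Customer type, the X-Customer header name and the target URL are all made up for the example:

using System;
using System.IO;
using System.Text;
using System.Windows.Forms;
using System.Xml.Serialization;

// Illustrative payload type - not from the original thread.
public class Customer
{
    public int Id;
    public string Name;
}

public static class BrowserPost
{
    static string ToXml(Customer customer)
    {
        // Serialize the object into a plain string first.
        XmlSerializer serializer = new XmlSerializer(typeof(Customer));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, customer);
            return writer.ToString();
        }
    }

    // Option 1: a custom header (keep it small; headers are just name/value strings).
    public static void SendViaHeader(WebBrowser browser, Customer customer)
    {
        string xml = ToXml(customer).Replace("\r", "").Replace("\n", "");
        browser.Navigate("http://example.com/receive.aspx", "", null, "X-Customer: " + xml + "\r\n");
    }

    // Option 2: the request body (postData), which suits larger objects.
    public static void SendViaPost(WebBrowser browser, Customer customer)
    {
        byte[] postData = Encoding.UTF8.GetBytes("payload=" + Uri.EscapeDataString(ToXml(customer)));
        browser.Navigate("http://example.com/receive.aspx", "",
                         postData, "Content-Type: application/x-www-form-urlencoded\r\n");
    }
}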
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2007-11/msg02366.html
crawl-002
refinedweb
214
66.74
One of the largest goals of any 3d application is speed. You should always limit the amount of polygons actually rendered, whether by sorting, culling, or level-of-detail algorithms. However, when all else fails and you simply need raw polygon-pushing power, you can always utilize the optimizations provided by OpenGL. Vertex Arrays are one good way to do that, plus a recent extension to graphics cards named Vertex Buffer Objects adds the FPS boost everybody dreams of. The extension, ARB_vertex_buffer_object, works just like vertex arrays, except that it loads the data into the graphics card's high-performance memory, significantly lowering rendering time. Of course, the extension being relatively new, not all cards will support it, so we will have to write in some technology scaling. In this tutorial, we will So let's get started! First we are going to define a few application parameters. #define MESH_RESOLUTION 4.0f // Pixels Per Vertex #define MESH_HEIGHTSCALE 1.0f // Mesh Height Scale //#define NO_VBOS // If Defined, VBOs Will Be Forced Off The first two constants are standard heightmap fare - the former sets the resolution at which the heightmap will be generated per pixel, and the latter sets the vertical scaling of the data retrieved from the heightmap. The third constant, when defined, will force VBOs off - a provision I added so that those with bleeding-edge cards can easily see the difference. Next we have the VBO extension constant, data type, and function pointer definitions. // VBO Extension Definitions, From glext); // VBO Extension Function Pointers PFNGLGENBUFFERSARBPROC glGenBuffersARB = NULL; // VBO Name Generation Procedure PFNGLBINDBUFFERARBPROC glBindBufferARB = NULL; // VBO Bind Procedure PFNGLBUFFERDATAARBPROC glBufferDataARB = NULL; // VBO Data Loading Procedure PFNGLDELETEBUFFERSARBPROC glDeleteBuffersARB = NULL; // VBO Deletion Procedure I have only included what will be necessary for the demo. If you need any more of the functionality, I recommend downloading the latest glext.h from /data/lessons/ and using the definitions there (it will be much cleaner for your code, anyway). We will get into the specifics of those functions as we use them. Now we find the standard mathematical definitions, plus our mesh class. All of them are very bare-bones, designed specifically for the demo. As always, I recommend developing your own math library. class CVert // Vertex Class { public: float x; // X Component float y; // Y Component float z; // Z Component }; typedef CVert CVec; // The Definitions Are Synonymous class CTexCoord // Texture Coordinate Class { public: float u; // U Component float v; // V Component }; class CMesh { public: // Mesh Data int m_nVertexCount; // Vertex Count CVert* m_pVertices; // Vertex Data CTexCoord* m_pTexCoords; // Texture Coordinates unsigned int m_nTextureId; // Texture ID // Vertex Buffer Object Names unsigned int m_nVBOVertices; // Vertex VBO Name unsigned int m_nVBOTexCoords; // Texture Coordinate VBO Name // Temporary Data AUX_RGBImageRec* m_pTextureImage; // Heightmap Data public: CMesh(); // Mesh Constructor ~CMesh(); // Mesh Deconstructor // Heightmap Loader bool LoadHeightmap( char* szPath, float flHeightScale, float flResolution ); // Single Point Height float PtHeight( int nX, int nY ); // VBO Build Function void BuildVBOs(); }; Most of that code is relatively self-explanatory. Note that while I do keep the Vertex and Texture Coordinate data seperate, that is not wholly necessary, as will be indicated later. 
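Before moving on: the VBO constants and function-pointer types above are the standard ones from the ARB_vertex_buffer_object extension. If your headers are too old to provide them, declarations along these lines - trimmed to just what this lesson uses, with token values taken from the extension spec - can stand in:

// Trimmed VBO declarations in the style of glext.h -- only what this lesson needs.
#ifndef GL_ARB_vertex_buffer_object
#define GL_ARRAY_BUFFER_ARB            0x8892   // Target For Vertex Attribute Buffers
#define GL_STATIC_DRAW_ARB             0x88E4   // Usage Hint: Specified Once, Drawn Many Times

typedef ptrdiff_t GLsizeiptrARB;                 // Large Enough To Hold A Buffer Size

typedef void (APIENTRY * PFNGLGENBUFFERSARBPROC)   (GLsizei n, GLuint *buffers);
typedef void (APIENTRY * PFNGLBINDBUFFERARBPROC)   (GLenum target, GLuint buffer);
typedef void (APIENTRY * PFNGLBUFFERDATAARBPROC)   (GLenum target, GLsizeiptrARB size,
                                                    const GLvoid *data, GLenum usage);
typedef void (APIENTRY * PFNGLDELETEBUFFERSARBPROC)(GLsizei n, const GLuint *buffers);
#endif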
Here we have our global variables. First we have a VBO extension validity flag, which will be set in the initialization code. Then we have our mesh, followed by our Y rotation counter. Leading up the rear are the FPS monitoring variables. I decided to write in a FPS gauge to help display the optimization provided by this code. bool g_fVBOSupported = false; // ARB_vertex_buffer_object supported? CMesh* g_pMesh = NULL; // Mesh Data float g_flYRot = 0.0f; // Rotation int g_nFPS = 0, g_nFrames = 0; // FPS and FPS Counter DWORD g_dwLastFPS = 0; // Last FPS Check Time Let's skip over to the CMesh function definitions, starting with LoadHeightmap. For those of you who live under a rock, a heightmap is a two-dimensional dataset, commonly an image, which specifies the terrain mesh's vertical data. There are many ways to implement a heightmap, and certainly no one right way. My implementation reads a three channel bitmap and uses the luminosity algorithm to determine the height from the data. The resulting data would be exactly the same if the image was in color or in grayscale, which allows the heightmap to be in color. Personally, I recommend using a four channel image, such as a targa, and using the alpha channel for the heights. However, for the purpose of this tutorial, I decided that a simple bitmap would be best. First, we make sure that the heightmap exists, and if so, we load it using GLaux's bitmap loader. Yes yes, it probably is better to write your own image loading routines, but that is not in the scope of this tutorial. bool CMesh :: LoadHeightmap( char* szPath, float flHeightScale, float flResolution ) { // Error-Checking FILE* fTest = fopen( szPath, "r" ); // Open The Image if( !fTest ) // Make Sure It Was Found return false; // If Not, The File Is Missing fclose( fTest ); // Done With The Handle // Load Texture Data m_pTextureImage = auxDIBImageLoad( szPath ); // Utilize GLaux's Bitmap Load Routine Now things start getting a little more interesting. First of all, I would like to point out that my heightmap generates three vertices for every triangle - vertices are not shared. I will explain why I chose to do that later, but I figured you should know before looking at this code. I start by calculating the amount of vertices in the mesh. The algorithm is essentially ( ( Terrain Width / Resolution ) * ( Terrain Length / Resolution ) * 3 Vertices in a Triangle * 2 Triangles in a Square ). Then I allocate my data, and start working my way through the vertex field, setting data. // Generate Vertex Field m_nVertexCount = (int) ( m_pTextureImage->sizeX * m_pTextureImage->sizeY * 6 / ( flResolution * flResolution ) ); m_pVertices = new CVec[m_nVertexCount]; // Allocate Vertex Data m_pTexCoords = new CTexCoord[m_nVertexCount]; // Allocate Tex Coord Data int nX, nZ, nTri, nIndex=0; // Create Variables float flX, flZ; for( nZ = 0; nZ < m_pTextureImage->sizeY; nZ += (int) flResolution ) { for( nX = 0; nX < m_pTextureImage->sizeX; nX += (int) flResolution ) { for( nTri = 0; nTri < 6; nTri++ ) { // Using This Quick Hack, Figure The X,Z Position Of The Point flX = (float) nX + ( ( nTri == 1 || nTri == 2 || nTri == 5 ) ? flResolution : 0.0f ); flZ = (float) nZ + ( ( nTri == 2 || nTri == 4 || nTri == 5 ) ? 
flResolution : 0.0f ); // Set The Data, Using PtHeight To Obtain The Y Value m_pVertices[nIndex].x = flX - ( m_pTextureImage->sizeX / 2 ); m_pVertices[nIndex].y = PtHeight( (int) flX, (int) flZ ) * flHeightScale; m_pVertices[nIndex].z = flZ - ( m_pTextureImage->sizeY / 2 ); // Stretch The Texture Across The Entire Mesh m_pTexCoords[nIndex].u = flX / m_pTextureImage->sizeX; m_pTexCoords[nIndex].v = flZ / m_pTextureImage->sizeY; // Increment Our Index nIndex++; } } } I finish off the function by loading the heightmap texture into OpenGL, and freeing our copy of the data. This should be fairly familiar from past tutorials. // Load The Texture Into OpenGL glGenTextures( 1, &m_nTextureId ); // Get An Open ID glBindTexture( GL_TEXTURE_2D, m_nTextureId ); // Bind The Texture glTexImage2D( GL_TEXTURE_2D, 0, 3, m_pTextureImage->sizeX, m_pTextureImage->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, m_pTextureImage->data ); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // Free The Texture Data if( m_pTextureImage ) { if( m_pTextureImage->data ) free( m_pTextureImage->data ); free( m_pTextureImage ); } return true; } PtHeight is relatively simple. It calculates the index of the data in question, wrapping any overflows to avoid error, and calculates the height. The luminance formula is very simple, as you can see, so don't sweat it too much. float CMesh :: PtHeight( int nX, int nY ) { // Calculate The Position In The Texture, Careful Not To Overflow int nPos = ( ( nX % m_pTextureImage->sizeX ) + ( ( nY % m_pTextureImage->sizeY ) * m_pTextureImage->sizeX ) ) * 3; float flR = (float) m_pTextureImage->data[ nPos ]; // Get The Red Component float flG = (float) m_pTextureImage->data[ nPos + 1 ]; // Get The Green Component float flB = (float) m_pTextureImage->data[ nPos + 2 ]; // Get The Blue Component return ( 0.299f * flR + 0.587f * flG + 0.114f * flB ); // Calculate The Height Using The Luminance Algorithm } Hurray, time to get dirty with Vertex Arrays and VBOs. So what are Vertex Arrays? Essentially, it is a system by which you can point OpenGL to your geometric data, and then subsequently render data in relatively few calls. The resulting cut down on function calls (glVertex, etc) adds a significant boost in speed. What are VBOs? Well, Vertex Buffer Objects use high-performance graphics card memory instead of your standard, ram-allocated memory. Not only does that lower the memory operations every frame, but it shortens the bus distance for your data to travel. On my specs, VBOs actually triple my framerate, which is something not to be taken lightly. So now we are going to build the Vertex Buffer Objects. There are really a couple of ways to go about this, one of which is called "mapping" the memory. I think the simplist way is best here. The process is as follows: first, use glGenBuffersARB to get a valid VBO "name". Essentially, a name is an ID number which OpenGL will associate with your data. We want to generate a name because the same ones won't always be available. Next, we make that VBO the active one by binding it with glBindBufferARB. Finally, we load the data into our gfx card's data with a call to glBufferDataARB, passing the size and the pointer to the data. glBufferDataARB will copy that data into your gfx card memory, which means that we will not have any reason to maintain it anymore, so we can delete it. 
void CMesh :: BuildVBOs() { // Generate And Bind The Vertex Buffer glGenBuffersARB( 1, &m_nVBOVertices ); // Get A Valid Name glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices ); // Bind The Buffer // Load The Data glBufferDataARB( GL_ARRAY_BUFFER_ARB, m_nVertexCount*3*sizeof(float), m_pVertices, GL_STATIC_DRAW_ARB ); // Generate And Bind The Texture Coordinate Buffer glGenBuffersARB( 1, &m_nVBOTexCoords ); // Get A Valid Name glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords ); // Bind The Buffer // Load The Data glBufferDataARB( GL_ARRAY_BUFFER_ARB, m_nVertexCount*2*sizeof(float), m_pTexCoords, GL_STATIC_DRAW_ARB ); // Our Copy Of The Data Is No Longer Necessary, It Is Safe In The Graphics Card delete [] m_pVertices; m_pVertices = NULL; delete [] m_pTexCoords; m_pTexCoords = NULL; } Ok, time to initialize. First we will allocate and load our mesh data. Then we will check for GL_ARB_vertex_buffer_object support. If we have it, we will grab the function pointers with wglGetProcAddress, and build our VBOs. Note that if VBOs aren't supported, we will retain the data as usual. Also note the provision for forced no VBOs. // Load The Mesh Data g_pMesh = new CMesh(); // Instantiate Our Mesh if( !g_pMesh->LoadHeightmap( "terrain.bmp", // Load Our Heightmap MESH_HEIGHTSCALE, MESH_RESOLUTION ) ) { MessageBox( NULL, "Error Loading Heightmap", "Error", MB_OK ); return false; } // Check For VBOs Supported #ifndef NO_VBOS g_fVBOSupported = IsExtensionSupported( "GL_ARB_vertex_buffer_object" ); if( g_fVBOSupported ) { // Get Pointers To The GL Functions glGenBuffersARB = (PFNGLGENBUFFERSARBPROC) wglGetProcAddress("glGenBuffersARB"); glBindBufferARB = (PFNGLBINDBUFFERARBPROC) wglGetProcAddress("glBindBufferARB"); glBufferDataARB = (PFNGLBUFFERDATAARBPROC) wglGetProcAddress("glBufferDataARB"); glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC) wglGetProcAddress("glDeleteBuffersARB"); // Load Vertex Data Into The Graphics Card Memory g_pMesh->BuildVBOs(); // Build The VBOs } #else /* NO_VBOS */ g_fVBOSupported = false; #endif IsExtensionSupported is a function you can get from OpenGL.org. My variation is, in my humble opinion, a little cleaner. bool IsExtensionSupported( char* szTargetExtension ) { const unsigned char *pszExtensions = NULL; const unsigned char *pszStart; unsigned char *pszWhere, *pszTerminator; // Extension names should not have spaces pszWhere = (unsigned char *) strchr( szTargetExtension, ' ' ); if( pszWhere || *szTargetExtension == '\0' ) return false; // Get Extensions String pszExtensions = glGetString( GL_EXTENSIONS ); // Search The Extensions String For An Exact Copy pszStart = pszExtensions; for(;;) { pszWhere = (unsigned char *) strstr( (const char *) pszStart, szTargetExtension ); if( !pszWhere ) break; pszTerminator = pszWhere + strlen( szTargetExtension ); if( pszWhere == pszStart || *( pszWhere - 1 ) == ' ' ) if( *pszTerminator == ' ' || *pszTerminator == '\0' ) return true; pszStart = pszTerminator; } return false; } It is relatively simple. Some people simply use a sub-string search with strstr, but apparently OpenGL.org doesn't trust the consistancy of the extension string enough to accept that as proof. And hey, I am not about to argue with those guys. Almost finished now! All we gotta do is render the data. void Draw (void) { glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer glLoadIdentity (); // Reset The Modelview Matrix // Get FPS if( GetTickCount() - g_dwLastFPS >= 1000 ) // When A Second Has Passed... 
{ g_dwLastFPS = GetTickCount(); // Update Our Time Variable g_nFPS = g_nFrames; // Save The FPS g_nFrames = 0; // Reset The FPS Counter char szTitle[256]={0}; // Build The Title String sprintf( szTitle, "Lesson 45: NeHe & Paul Frazee's VBO Tut - %d Triangles, %d FPS", g_pMesh->m_nVertexCount / 3, g_nFPS ); if( g_fVBOSupported ) // Include A Notice About VBOs strcat( szTitle, ", Using VBOs" ); else strcat( szTitle, ", Not Using VBOs" ); SetWindowText( g_window->hWnd, szTitle ); // Set The Title } g_nFrames++; // Increment Our FPS Counter // Move The Camera glTranslatef( 0.0f, -220.0f, 0.0f ); // Move Above The Terrain glRotatef( 10.0f, 1.0f, 0.0f, 0.0f ); // Look Down Slightly glRotatef( g_flYRot, 0.0f, 1.0f, 0.0f ); // Rotate The Camera Pretty simple - every second, save the frame counter as the FPS and reset the frame counter. I decided to throw in poly count for impact. Then we move the camera above the terrain (you may need to adjust that if you change the heightmap), and do a few rotations. g_flYRot is incremented in the Update function. To use Vertex Arrays (and VBOs), you need to tell OpenGL what data you are going to be specifying with your memory. So the first step is to enable the client states GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY. Then we are going to want to set our pointers. I doubt you have to do this every frame unless you have multiple meshes, but it doesn't hurt us cycle-wise, so I don't see a problem. To set a pointer for a certain data type, you have to use the appropriate function - glVertexPointer and glTexCoordPointer, in our case. The usage is pretty easy - pass the amount of variables in a point (three for a vertex, two for a texcoord), the data cast (float), the stride between the desired data (in the event that the vertices are not stored alone in their structure), and the pointer to the data. You can actually use glInterleavedArrays and store all of your data in one big memory buffer, but I chose to keep it seperate to show you how to use multiple VBOs. Speaking of VBOs, implementing them isn't much different. The only real change is that instead of providing a pointer to the data, we bind the VBO we want and set the pointer to zero. Take a look. // Set Pointers To Our Data if( g_fVBOSupported ) { glBindBufferARB( GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOVertices ); glVertexPointer( 3, GL_FLOAT, 0, (char *) NULL ); // Set The Vertex Pointer To The Vertex Buffer glBindBufferARB( GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOTexCoords ); glTexCoordPointer( 2, GL_FLOAT, 0, (char *) NULL ); // Set The TexCoord Pointer To The TexCoord Buffer } else { glVertexPointer( 3, GL_FLOAT, 0, g_pMesh->m_pVertices ); // Set The Vertex Pointer To Our Vertex Data glTexCoordPointer( 2, GL_FLOAT, 0, g_pMesh->m_pTexCoords ); // Set The Vertex Pointer To Our TexCoord Data } Guess what? Rendering is even easier. // Render glDrawArrays( GL_TRIANGLES, 0, g_pMesh->m_nVertexCount ); // Draw All Of The Triangles At Once Here we use glDrawArrays to send our data to OpenGL. glDrawArrays checks which client states are enabled, and then uses their pointers to render. We tell it the geometric type, the index we want to start from, and how many vertices to render. There are many other ways we can send the data for rendering, such as glArrayElement, but this is the fastest way to do it. You will notice that glDrawArrays is not within glBegin / glEnd statements. That isn't necessary here. glDrawArrays is why I chose not to share my vertex data between triangles - it isn't possible. 
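If you do want shared vertices, the usual alternative is an index buffer drawn with glDrawElements rather than glDrawArrays. A quick sketch - the m_nVBOIndices, m_nIndexCount and m_pIndices members are hypothetical, not part of this lesson's mesh class:

// Sketch: Sharing Vertices Via An Index Buffer
#define GL_ELEMENT_ARRAY_BUFFER_ARB 0x8893

// At Load Time: Upload One Copy Of Each Vertex, Plus An Index Per Triangle Corner
glGenBuffersARB( 1, &m_nVBOIndices );                                   // Get A Valid Name
glBindBufferARB( GL_ELEMENT_ARRAY_BUFFER_ARB, m_nVBOIndices );          // Bind The Index Buffer
glBufferDataARB( GL_ELEMENT_ARRAY_BUFFER_ARB, m_nIndexCount*sizeof(unsigned int),
                 m_pIndices, GL_STATIC_DRAW_ARB );                      // Load The Index Data

// At Draw Time: The Indices Pick Out The Shared Vertices
glBindBufferARB( GL_ELEMENT_ARRAY_BUFFER_ARB, m_nVBOIndices );
glDrawElements( GL_TRIANGLES, m_nIndexCount, GL_UNSIGNED_INT, (char *) NULL );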
As far as I know, the best way to optimize memory usage is to use triangle strips, which is, again, out of this tutorial's scope. Also you should be aware that normals operate "one for one" with vertices, meaning that if you are using normals, each vertex should have an accompanying normal. Consider that an opportunity to calculate your normals per-vertex, which will greatly increase visual accuracy. Now all we have left is to disable vertex arrays, and we are finished.

    // Disable Pointers
    glDisableClientState( GL_VERTEX_ARRAY );            // Disable Vertex Arrays
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );     // Disable Texture Coord Arrays
}

If you want more information on Vertex Buffer Objects, I recommend reading the documentation in SGI's extension registry -. It is a little more tedious to read through than a tutorial, but it will give you much more detailed information. Well that does it for the tutorial. If you find any mistakes or misinformation, or simply have questions, you can contact me at paulfrazee@cox.net.
http://nehe.gamedev.net/tutorial/vertex_buffer_objects/22002/
CC-MAIN-2015-14
refinedweb
2,738
51.68
- Author: SmileyChris
- Posted: September 2, 2007
- Language: Python
- Version: .96
- template-filter
- Score: 6 (after 8 ratings)

A couple of useful template filters for splitting a list (or QuerySet) up into rows or columns.

from django import template

register = template.Library()

@register.filter
def partition_len(thelist, n):
    """
    Break a list into pieces with a maximum specified length. The last list
    may be shorter than the rest if the list doesn't break cleanly. That is::

        >>> l = range(10)
        >>> partition_len(l, 2)
        [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
        >>> partition_len(l, 3)
        [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
    """
    try:
        n = int(n)
        thelist = list(thelist)
    except (ValueError, TypeError):
        return [thelist]
    return (thelist[i:i+n] for i in xrange(0, len(thelist), n))
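For example, assuming the snippet lives in a registered template-tag module called partition_tags (the module name is just for illustration), a template could lay a list out in rows of three like this:

{% load partition_tags %}
<table>
  {% for row in object_list|partition_len:"3" %}
  <tr>
    {% for item in row %}<td>{{ item }}</td>{% endfor %}
  </tr>
  {% endfor %}
</table>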
https://djangosnippets.org/snippets/401/
CC-MAIN-2016-44
refinedweb
145
57.91
Interview question: string a = "ab"; String b = "a" + "b"; Is a = = B equal Interview investigation point Purpose: To investigate the understanding of the basic knowledge of JVM, including constant pool, JVM runtime data area, etc. Scope of investigation: working for 2 to 5 years. background knowledge To answer this question, we need to understand two basic questions - String a = "ab", what happens in the JVM? - String b = "a" + "b", how is the underlying layer implemented? Runtime data for the JVM First, let's review the runtime data area of the JVM. In order to give you a global perspective, I draw the overall structure from class loading to JVM runtime data area, as shown in the following figure. The role of each area is described in detail in my previous interview series articles, so I won't repeat it here. In the figure above, we need to focus on several categories: - String constant pool - Encapsulation class constant pool - Runtime Constant Pool - JIT compiler These contents are closely related to this interview question. Here, for the content of constant pool, first leave a question. First follow me to learn about constant pool in JVM. What constant pools are there in the JVM You often hear about various constant pools, but you don't know where these constant pools are stored. Therefore, you will have many questions: what constant pools are there in the JVM? Constant pools in the JVM can be divided into the following categories: - Class file constant pool - Global string constant pool - Runtime Constant Pool Class file constant pool There is a constant pool in the bytecode of each Class file, which mainly stores various literal and symbol references generated by the compiler. For a more intuitive understanding, we write the following program. public class StringExample { private int value = 1; public final static int fs=101; public static void main(String[] args) { String a="ab"; String b="a"+"b"; String c=a+b; } } After compiling the above program, view the bytecode file of this class through javap -v StringExample.class, and the intercepted part is as follows. Constant pool: #9 = Class #39 // java/lang/Object #10 = Utf8 value #11 = Utf8 I #12 = Utf8 fs #13 = Utf8 ConstantValue #14 = Integer 101 #15 = Utf8 <init> #16 = Utf8 ()V #17 = Utf8 Code #18 = Utf8 LineNumberTable #19 = Utf8 LocalVariableTable #20 = Utf8 this #21 = Utf8 Lorg/example/cl07/StringExample; #22 = Utf8 main #23 = Utf8 ([Ljava/lang/String;)V #24 = Utf8 args #25 = Utf8 [Ljava/lang/String; #26 = Utf8 a #27 = Utf8 Ljava/lang/String; #28 = Utf8 b #29 = Utf8 c #30 = Utf8 SourceFile #31 = Utf8 StringExample.java #32 = NameAndType #15:#16 // "<init>":()V #33 = NameAndType #10:#11 // value:I #34 = Utf8 ab #35 = Utf8 java/lang/StringBuilder #36 = NameAndType #40:#41 // append:(Ljava/lang/String;)Ljava/lang/StringBuilder; #37 = NameAndType #42:#43 // toString:()Ljava/lang/String; #38 = Utf8 org/example/cl07/StringExample #39 = Utf8 java/lang/Object #40 = Utf8 append #41 = Utf8 (Ljava/lang/String;)Ljava/lang/StringBuilder; #42 = Utf8 toString #43 = Utf8 ()Ljava/lang/String; Let's focus on the Constant pool description, which represents the Constant pool of the Class file. Two types of constants are stored in the Constant pool. - Literal. - Symbol reference. Literal Literal, the way to assign values to basic type variables is called literal or literal. For example, String a = "b", where "b" is the literal value of string, and so on, including integer value, floating point type literal value and character literal value. 
In the above code, the byte code of literal constant is: #3 = String #34 // ab #26 = Utf8 a #34 = Utf8 ab Member variables, static variables, instance variables and local variables modified with final, such as: #11 = Utf8 I #12 = Utf8 fs #13 = Utf8 ConstantValue #14 = Integer 101 From the bytecode above, the literals and final modified attributes are stored in the constant pool. These literals in the constant pool refer to the data values, such as ab, 101. For basic data types, such as private int value=1, only its field descriptor (I) and field name (value) are reserved in the constant pool, and its literal quantity will not exist in the constant pool. #10 = Utf8 value #11 = Utf8 I In addition, for String c=a+b;, The value of C is not saved to the constant pool, because the values of a and b are uncertain during compilation. #29 = Utf8 c #35 = Utf8 java/lang/StringBuilder #36 = NameAndType #40:#41 // append:(Ljava/lang/String;)Ljava/lang/StringBuilder; #37 = NameAndType #42:#43 // toString:()Ljava/lang/String; #39 = Utf8 java/lang/Object #40 = Utf8 append #41 = Utf8 (Ljava/lang/String;)Ljava/lang/StringBuilder; If so, we modify the code to the following form public static void main(String[] args) { final String a="ab"; final String b="a"+"b"; String c=a+b; } After the bytecode is regenerated, you can see that the bytecode has changed, and the value abab of c is also saved to the constant pool. #26 = Utf8 c #27 = Utf8 SourceFile #28 = Utf8 StringExample.java #29 = NameAndType #12:#13 // "<init>":()V #30 = NameAndType #7:#8 // value:I #31 = Utf8 ab #32 = Utf8 abab Symbol reference Symbolic reference mainly involves the concepts of compilation principle, including the following three types of constants: The fully qualified name of the class and interface, that is, Ljava/lang/String;, It is mainly used to resolve the direct reference of the class at run time. #23 = Utf8 ([Ljava/lang/String;)V #25 = Utf8 [Ljava/lang/String; #27 = Utf8 Ljava/lang/String; The name and descriptor of the field, that is, the variables declared in the class or interface, including class level variables (static) and instance level variables. #24 = Utf8 args #26 = Utf8 a #28 = Utf8 b #29 = Utf8 c The name and descriptor of the method. The description of the method is similar to the "method signature" during JNI dynamic registration, that is, parameter type + return value type. For example, the following bytecode represents the main method and String return type. #19 = Utf8 main #20 = Utf8 ([Ljava/lang/String;)V Summary: in the class file, there are some things that will not change, such as a class name, class field name / data type, method name / return type / parameter name, constant, literal, etc. These are very important when the JVM interprets and executes the program, so after the compiler compiles the source code into a class file, it will classify and store these unchanged code with some bytes, and these bytes are called constant pool. Runtime Constant Pool The runtime constant pool is the runtime representation of the constant pool of each class or interface. As we know, the loading process of a class will go through the processes of loading, connecting (validation, preparation, parsing) and initialization. At the stage of class loading, the following things need to be done: - Get the binary byte stream of a class through its fully qualified name. - Generate a java.lang.Class object in the heap memory, which represents the loading of this class and serves as the entry of this class. 
- The static storage structure of class byte stream is transformed into the runtime data structure of method area (meta space). The third point is that the process of transforming the static storage structure represented by the class byte stream into the runtime data structure of the method area includes the process of entering the runtime constant pool from the class file constant pool. Therefore, the function of the runtime constant pool is to store the symbol information in the class file constant pool. These symbol references will be converted into direct references (the memory address of the instance object) in the class parsing stage, and the translated direct references are also stored in the runtime constant pool. Most of the data of the class file constant pool will be loaded into the runtime constant pool. The runtime constant pool is saved in the method area (JDK1.8 meta space). It is shared globally. Different classes share a runtime constant pool. In addition, the runtime constant pool is dynamic, and its contents are not all from the compiled class file. You can also generate constants through code and put them into the runtime constant pool at runtime, such as String.intern() method. String constant pool String constant pool is simply a constant pool designed specifically for string types. There are two common ways to create string constant pools. String a="Hello"; String b=new String("Mic"); - a this variable, which has been determined during compilation, will enter the string constant pool. - b this variable is instantiated through the new keyword. New creates an object instance and initializes the instance. Therefore, this string object can be determined at runtime, and the created instance is in heap space. The string constant pool is stored in the heap memory space. The creation form is shown in the following figure. When using String a = "Hello" to create a string object, the JVM will first check whether the string object exists in the string constant pool. If so, it will directly return the reference of the string in the constant pool. Otherwise, a new string is created in the constant pool and a reference to the string in the constant pool is returned. (this method can reduce the repeated creation of the same string and save memory, which is also the embodiment of the meta sharing mode). As shown in the following figure, if you create a String through String c = "hello" and find that the String Hello already exists in the constant pool, you can directly return the reference of the String. (shared element pattern design in String) When using String b=new String("Mic") to create a String object, due to the immutability of String itself (subsequent analysis), Mic will be put into the constant pool of Class file during JVM compilation. When the Class is loaded, Mic will be created in the String constant pool. Next, use the new keyword to create a String object in heap memory and point to a reference to the Mic String in the constant pool. As shown in the following figure, if you create a String object through new String("Mic"), at this time, because Mic already exists in the String constant pool, you only need to create a String object in heap memory. A brief summary: the reason why the JVM designs the string constant pool separately is some optimizations of the JVM to improve performance and reduce memory overhead: - As an important data type in Java language, String object occupies the largest space in memory. 
Using strings efficiently can improve the overall performance of the system. - When creating a string constant, first check whether the string exists in the string constant pool. If so, directly return the reference instance. If not, instantiate the string and put it into the constant pool. String constant pool is a reference table of a string instance maintained by the JVM. In HotSpot VM, it is a global table called StringTable. The string instance reference is maintained in the string constant pool, and the underlying C + + implementation is a Hashtable. The string instances referred to by these maintained references are called "resident string" or "interned string" or "string entering the string constant pool"! Encapsulation class constant pool In addition to string constant pool, most of the encapsulated classes of Java basic types also implement constant pool. Including byte, short, integer, long, character and Boolean Note that the floating-point data types float and double have no constant pool. The constant pool of encapsulated classes is implemented in their own internal classes, such as integercache (internal class of integer). Note that these constant pools are scoped: - Byte,Short,Integer,Long : [-128~127] - Character : [0~127] - Boolean : [True, False] The test code is as follows: public static void main(String[] args) { Character a=129; Character b=129; Character c=120; Character d=120; System.out.println(a==b); System.out.println(c==d); System.out.println("...integer..."); Integer i=100; Integer n=100; Integer t=290; Integer e=290; System.out.println(i==n); System.out.println(t==e); } Operation results: false true ...integer... true false The constant pool of encapsulation classes is actually the cache instances implemented in each encapsulation class (not the JVM virtual machine level implementation). For example, in Integer, there is IntegerCache, which caches data instances between - 128 ~ 127 in advance. This means that the data in this interval adopts the same data object. This is why in the above program, the result obtained by = = judgment is true. This design is actually the application of the meta model. private static class IntegerCache {; } private IntegerCache() {} } The original design intention of encapsulation class constant pool is actually the same as String. It is also used to cache frequently used data intervals to avoid the memory overhead of frequently creating objects. Exploration on string constant pool In the above constant pool, there are still many problems to be explored in the design of String constant pool: If a string constant already exists in the constant pool, how does it refer to the same string constant when defining the literal quantity of the same string later? That is, the assertion result of the following code is true. String a="Mic"; String b="Mic"; assert(a==b); //true - How large is the string constant pool? - Why design a separate constant pool for strings? Why design a separate constant pool for strings? First, let's look at the definition of String. public final class String implements java.io.Serializable, Comparable<String>, CharSequence { /** The value is used for character storage. */ private final char value[]; /** Cache the hash code for the string */ private int hash; // Default to 0 } It can be found from the above source code. - The class String is modified by final, which means that the class cannot be inherited. 
- The member attribute value [] of the String class is also modified by final, which means that the member attribute cannot be modified. Therefore, String is immutable, that is, once a String is created, it cannot be changed. There are several advantages of this design. - Convenient implementation of string constant pool: in Java, because a large number of string constants are used, if a string object is created every time a string is declared, it will cause a great waste of space resources. Java puts forward the concept of string pool, which opens up a storage space string pool in the heap. When initializing a string variable, if the string already exists, it will not create a new string variable, but will return the reference of the existing string. If the string is variable and a string variable changes its value, the value of the variable it points to will also change. String pool cannot be implemented! - Thread safety: in a concurrent scenario, it is safe for multiple threads to read a resource at the same time, which will not cause competition. However, it is unsafe to write resources, and immutable objects cannot be written, so the safety of multiple threads is guaranteed. - Ensure that the hash attribute value will not change frequently. The uniqueness is ensured, so that similar HashMap containers can realize the corresponding Key value caching function. Therefore, when creating objects, their hashcode can be cached safely without recalculation. This is why Map likes to use String as Key, and the processing speed is faster than other Key objects. Therefore, the keys in HashMap often use String. Note that the immutability of String makes it easy to implement String constant pool, which is the premise of implementing String constant pool. String constant pool is actually the design of shared meta mode. It is similar to the cache design of encapsulated objects such as IntegerCache and Character provided in JDK, except that string is the implementation of JVM level. String allocation, like other object allocation, consumes a high cost of time and space. In order to improve performance and reduce memory overhead, the JVM makes some optimizations when instantiating string constants. In order to reduce the number of strings created in the JVM, the string class maintains a string pool. Whenever the code creates a string constant, the JVM will first check the string constant pool. If the string already exists in the pool, the instance reference in the pool is returned. If the string is not in the pool, a string is instantiated and placed in the pool. Java can perform such optimization because strings are immutable and can be shared without worrying about data conflicts. We regard the string constant pool as a cache. When defining a string constant through double quotation marks, we first look it up from the string constant pool. If we find it, we will directly return the reference of the string constant pool. Otherwise, we will create a new string constant and put it in the constant pool. How big is the constant pool? I think you must be as curious as I am. How many constants can the constant pool store? As we said earlier, the constant pool is essentially a hash table, which indicates that it cannot be dynamically expanded. This means that the linked list in a single bucket is very long, resulting in performance degradation. In JDK1.8, the number of fixed buckets in this hash table is 60013. 
We can configure the specified number through the following parameter -XX:StringTableSize=N You can add the following virtual machine parameter to print the data of the constant pool. -XX:+PrintStringTableStatistics After adding parameters, run the following code. public class StringExample { private int value = 1; public final static int fs=101; public static void main(String[] args) { final String a="ab"; final String b="a"+"b"; String c=a+b; } } When the JVM exits, the usage of the constant pool is printed as follows: SymbolTable statistics: Number of buckets : 20011 = 160088 bytes, avg 8.000 Number of entries : 12192 = 292608 bytes, avg 24.000 Number of literals : 12192 = 470416 bytes, avg 38.584 Total footprint : = 923112 bytes Average bucket size : 0.609 Variance of bucket size : 0.613 Std. dev. of bucket size: 0.783 Maximum bucket size : 6 StringTable statistics: Number of buckets : 60013 = 480104 bytes, avg 8.000 Number of entries : 889 = 21336 bytes, avg 24.000 Number of literals : 889 = 59984 bytes, avg 67.474 Total footprint : = 561424 bytes Average bucket size : 0.015 Variance of bucket size : 0.015 Std. dev. of bucket size: 0.122 Maximum bucket size : 2 You can see that the total size of the string constant pool is 60013, of which the literal is 889. When did the literal enter the string constant pool Unlike other basic types of literals or constants, String literals are not populated and reside in the String constant pool during the resolve phase of class loading, but are stored in the Run-Time Constant Pool in a special form. Instead, only when this String literal is called (such as executing the ldc bytecode instruction and adding it to the top of the stack), will HotSpot VM resolve it and create the corresponding String instance in the String constant pool. Specifically, it should be when the ldc instruction is executed (the instruction indicates that int, float or String constants are pushed from the constant pool to the top of the stack) In the HotSpot VM of JDK 1.8, this kind of string literal that is not really resolve d is called pseudo string, which is based on the JVM_CONSTANT_String is stored in the runtime constant pool, and no string instance is created for it at this time. During compilation, the string literal is stored in the Constant Pool of the class file in the form of "CONSTANT_String_info"+"CONSTANT_Utf8_info"; After the class is loaded, the string literal is stored in the runtime constant pool in the form of "jvm_constant_unresolved string (jdk1.7)" or "JVM_CONSTANT_String(JDK1.8)"; When a String literal is used for the first time, the String literal is stored in the String constant pool as a real String object. It can be proved by the following code. public static void main(String[] args) { String a =new String(new char[]{'a','b','c'}); String b = a.intern(); System.out.println(a == b); String x =new String("def"); String y = x.intern(); System.out.println(x == y); } The string constructed with new char [] {'a', 'b', 'c'} does not use the constant pool at compile time, but saves abc to the constant pool and returns the reference of the constant pool when calling a.intern(). intern() method In the valueOf method in Integer, we can see that if the passed value i is within the range of IntegerCache.low and IntegerCache.high, the cached instance object is returned directly from IntegerCache.cache. 
public static Integer valueOf(int i) { if (i >= IntegerCache.low && i <= IntegerCache.high) return IntegerCache.cache[i + (-IntegerCache.low)]; return new Integer(i); } So, in the String type, since there is a String constant pool, is there any method to implement functions similar to IntegerCache? The answer is: the intern() method. Since String pool is a virtual machine level technology, there is no object pool such as IntegerCache in the String class definition. The concept of cache / pool mentioned in the String class is only the intern() method. /** * Returns a canonical representation for the string object. * <p> * A pool of strings, initially empty, is maintained privately by the * class {@code String}. * <p> *. * (); The function of this method is to get the content of String and look up the table in Stringtable. If it exists, it returns a reference. If it does not exist, it saves the "reference" of the object in Stringtable. For example, the following program: public static void main(String[] args) { String str = new String("Hello World"); String str1=str.intern(); String str2 = "Hello World"; System.out.print(str1 == str2); } The result of running is: true. The implementation logic is shown in the figure below. str1 obtains the reference of Hello World string in the constant pool table by calling str.intern(), and then str2 declares a string constant in literal form. Since Hello world already exists in the string constant pool, the reference of the string constant Hello world is returned in the same way, so that str1 and str2 have the same reference address, So that the running result is true. Summary: the intern method will query whether the current string exists from the string constant pool: - If it does not exist, it will put the current string into the constant pool and return the local string address reference. - If it exists, the string address of the string constant pool is returned. Note that when initializing all string literals, the intern() method will be called by default. The reason why a==b in this program is that when declaring a, it will use the intern() method to find out whether the string Hello exists in the string constant pool. If it does not exist, it will create one. Similarly, the same is true for variable b. therefore, when declaring, b finds that hello's string constant already exists in the character constant pool, so it directly returns the reference of the string constant. public static void main(String[] args) { String a="Hello"; String b="Hello"; } OK, after learning here, do you feel you understand? I'll give you a question. What's the result of this program? public static void main(String[] args) { String a =new String(new char[]{'a','b','c'}); String b = a.intern(); System.out.println(a == b); String x =new String("def"); String y = x.intern(); System.out.println(x == y); } The correct answer is: true false It is understandable that the second output is false, because new String("def") does two things: - Create a string def in the string constant pool. - new keyword creates an instance object string and points to a reference to the string constant pool def. x.intern() is a reference to def obtained from the string constant pool. Their pointing addresses are different, which will be explained in detail later. Why is the first output true? Description of the intern() method in the JDK document: when calling the intern method, if the constant pool (built-in in the JVM) already contains the same String, the String in the pool will be returned. 
Otherwise, the String object is added to the pool and a reference to that String object is returned.

When a is built with new char[]{'a', 'b', 'c'}, intern() is not called automatically and the string only enters the constant pool lazily, so no "abc" instance exists in the string constant pool at that moment. When a.intern() is then called, the String object referenced by a is itself added to the string constant pool and its reference is returned. That is why a and b point to the same reference.

Question and answer

Interview question: String a = "ab"; String b = "a" + "b"; is a == b true?

Answer: a == b is true, for the following reasons:

- Both a and b are constant strings. Since there is nothing variable in the expression assigned to b, the compiler folds it to "ab" at compile time (this is a compiler optimisation; after compilation, "ab" is stored in the class constant pool).
- When a is initialised, the string "ab" is created in the string constant pool and its reference is returned.
- When b is assigned "ab", the string constant pool is checked first; since the same string already exists, the existing reference is returned.
- a and b therefore point to the same reference, so a == b holds.

Problem summary

A really thorough understanding of the constant pool takes time. If the sections above made you feel you have fully mastered the string constant pool, consider the following problem:

public static void main(String[] args) {
    String str = new String("Hello World");
    String str1 = str.intern();
    System.out.print(str == str1);
}

This code prints false: the literal "Hello World" is already in the pool, so str.intern() returns the pooled reference, which is different from the heap object created by new. Now let's transform the code slightly:

public static void main(String[] args) {
    String str = new String("Hello World") + new String("!");
    String str1 = str.intern();
    System.out.print(str == str1);
}

The output becomes true. Why? This is again an optimisation at the JVM compiler level. Because String is immutable, in theory splicing with + would take the string constant pointed to by the first variable, append the string constant pointed to by the second, and regenerate a new object each time. If we concatenated strings in a for loop this way, a large number of temporary objects would be created, and if they were not collected promptly a lot of memory would be wasted. After the compiler's optimisation, the concatenation is actually done through a StringBuilder: only one StringBuilder instance is created and the pieces are appended to it. The resulting "Hello World!" string is built at runtime and the literal "Hello World!" never appears in the source, so it is not yet in the string constant pool; str.intern() therefore records str's own reference in the pool and returns it, which is why str == str1 is true. To prove this, take a look at the bytecode of the code above.
public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=4, locals=3, args_size=1
         0: new           #3                  // class java/lang/StringBuilder
         3: dup
         4: invokespecial #4                  // Method java/lang/StringBuilder."<init>":()V
         7: new           #5                  // class java/lang/String
        10: dup
        11: ldc           #6                  // String Hello World
        13: invokespecial #7                  // Method java/lang/String."<init>":(Ljava/lang/String;)V
        16: invokevirtual #8                  // Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
        19: new           #5                  // class java/lang/String
        22: dup
        23: ldc           #9                  // String !
        25: invokespecial #7                  // Method java/lang/String."<init>":(Ljava/lang/String;)V
        28: invokevirtual #8                  // Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
        31: invokevirtual #10                 // Method java/lang/StringBuilder.toString:()Ljava/lang/String;
        34: astore_1
        35: aload_1
        36: invokevirtual #11                 // Method java/lang/String.intern:()Ljava/lang/String;
        39: astore_2
        40: getstatic     #12                 // Field java/lang/System.out:Ljava/io/PrintStream;
        43: aload_1
        44: aload_2
        45: if_acmpne     52
        48: iconst_1
        49: goto          53
        52: iconst_0
        53: invokevirtual #13                 // Method java/io/PrintStream.print:(Z)V
        56: return

As the bytecode shows, a StringBuilder is constructed first:

 0: new           #3                  // class java/lang/StringBuilder

The string operands are then spliced together with the append method, and finally toString() is used to obtain the resulting string:

16: invokevirtual #8                  // Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
28: invokevirtual #8                  // Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
31: invokevirtual #10                 // Method java/lang/StringBuilder.toString:()Ljava/lang/String;

The code is therefore equivalent to the following form.

public static void main(String[] args) {
    StringBuilder sb = new StringBuilder()
            .append(new String("Hello World"))
            .append(new String("!"));
    String str = sb.toString();
    String str1 = str.intern();
    System.out.print(str == str1);
}

Hence the result is true. There are many variants of this problem. For example, what is the output of the following program?

public static void main(String[] args) {
    String s1 = "a";
    String s2 = "b";
    String s3 = "ab";
    String s4 = s1 + s2;
    System.out.println(s3 == s4);
}

The answer is false. Because s1 and s2 are variables, s4 is built at runtime through a StringBuilder, so s3 and s4 point to different references and are naturally not equal. The program is equivalent to:

public static void main(String[] args) {
    String s1 = "a";
    String s2 = "b";
    String s3 = "ab";
    StringBuilder sb = new StringBuilder().append(s1).append(s2);
    String s4 = sb.toString();
    System.out.println(s3 == s4);
}

Conclusion: only when you clearly understand every detail of the string constant pool can you answer these questions accurately, no matter how the interviewer varies them.
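To tie the cases above together, here is a small self-contained program (a sketch written for this article, not taken from any particular source) that exercises the scenarios discussed; the expected output on a HotSpot JDK 8 VM is noted in the comments.

public class InternQuiz {
    public static void main(String[] args) {
        // Literals: both resolve to the same pooled instance.
        String a = "Hello";
        String b = "Hello";
        System.out.println(a == b);              // true

        // Compile-time folding of "a" + "b" into the literal "ab".
        String c = "ab";
        String d = "a" + "b";
        System.out.println(c == d);              // true

        // Runtime concatenation goes through StringBuilder, producing a new object.
        String e = "a";
        String f = e + "b";
        System.out.println(c == f);              // false

        // The literal "def" is already pooled, so intern() returns a different reference.
        String x = new String("def");
        System.out.println(x == x.intern());     // false

        // No "abc" literal exists, so intern() records and returns x2's own reference.
        String x2 = new String(new char[]{'a', 'b', 'c'});
        System.out.println(x2 == x2.intern());   // true
    }
}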
Auto component sourcing from India to increase: ASSOCHAM NEW DELHI, Apr 17: India could be a major beneficiary of increased sourcing in the auto component industry which is likely to flow contracts worth 700 billion dollars to low cost countries by 2015. According to an ASSOCHAM study, the size of global component industry is likely to go up to about 1.9 trillion dollars by 2015, which would present enormous business opportunities to India falling in the category of low cost country already supplying components to leading automobile companies globally. As per the study, auto component sourcing from low cost countries may increase the share of business in these countries from the present level of 65 billion dollars to about 375 billion dollars by 2015. For India, this would mean increased opportunity to scale up its component outsourcing business. Already india is becoming a hub for international auto majors for exporting Completely Built Units (CBUs) as well as for outsourcing components. Hyundai, Ford, Skoda, Suzuki and Mahindra have made India a manufacturing hub for particular models of cars. At the same time other MNCs such as Toyota, GM, and Daimler Chrysler are making India a hub for components. Mitsubishi and Yamaha have made India a hub for 125 cc motorcycles. Giving an account of the Indian auto component industry, the ASSOCHAM study said that the growth in the Indian auto ancillary sector is likely to be healthy between 15-16 per cent by the end of fiscal 2004-05. This is, however, lower than the growth of 22-24 per cent recorded in 2003-04, the study added. However, exports of auto components are expected to increase by 30-35 per cent, while replacement demand is likely to remain steady at 7-8 per cent in 2004-05, it added. The overall domestic market, including domestic light passenger vehicles, commercial vehicles, and two-wheeler markets is expected to grow at a modest rate of 10 per cent per annum over the next decade.
The small domestic market size places a ceiling on the extent of exports each player can undertake take and hence achieving scale in the domestic market will be critical to capture future export potential, the study said. It may be stated that in a total auto component trade of 185 billion dollars, India accounts for 0.4 per cent while China accounts for 1.2 per cent and Mexico accounts for 5.9 per cent. According to the study, growth in the industry will be driven by demand from Original Equipment Manufacturers (OEMs), since the passenger cars and utility vehicle segment is likely to expand by 10-12 per cent, commercial vehicles and two wheelers are likely to grow by 12-15 per cent and 10 per cent respectively, and the tractors segment is likely to decline. Overall, the oem segment is likely to post a growth of 13-14 per cent in 2004-05. While highlighting areas such as engineering, education labour, redesigning manufacturing process etc as the major strength of the Indian Auto Component Industry, the ASSOCHAM study also said that lack of component scale, inadequate and poor quality infrastructure, taxation system may discourage manufacturing here. (PTI) Hyderabad to emerge as a major aviation hub by 2008 HYDERABAD, Apr 17: With seven international airlines apart from national carriers Air India and Indian Airlines now operating from here, this historic city is emerging as a favourite hub among the international air carriers. There are fifty flights weekly being operated from here to various destinations in the world connecting the US, Europe, Far East, Middle East, Australia, New Zealand and Africa. It was former Chief Minister N Chandrababu Naidu, who came out with the idea of making Hyderabad an aviation hub between Europe and China and had successfully pleaded with the then Union Government for permitting more and more foreign airlines to operate from here. The previous Government also successfully got sanctioned a greenfield international airport at Shamshabad in neighbouring Ranga Reddy district to be functional by 2008. At present, about 15000 international passengers land and take off from the present airport situated in heart of the city. It, however, is not adequately equipped to handle the rush while shamsabad airport, which was named after late Prime Minister Rajiv Gandhi, is designed to handle any number of aircraft with automation and emerging technologies in the aviation field. Initially, there were only two airlines which were operating from Hyderabad to Middle East destinations Air India and Indian Airlines. Later, over a period of three years seven airlines started flying from various destinations including China, New Zealand and Australia. According to tour operators here, after launch of the flights by Emirates, Saudi Airlines, Silk Air, Lufthansa, Malysian Airlines, Qatar Airlines and Air Lanka, the tourist flow has increased and would grow further when more airlines, including British Airways and Air France, start operating. Aviation industry sources said Hyderabad would become the fifth largest airport after the four metros in number of flights, leaving behind Bangalore which now occupies that slot. (PTI) CII calls for review of anomalies in indirect tax provisions NEW DELHI, Apr 17: Commending the Governments move to bring customs duty to the east Asian level, Confederation of Indian Industry (CII) today demanded review of certain provisions in indirect taxes introduced in this years general budget. 
"Even though few anomalies in customs duty in case of aluminium and electricity meters had been removed in budget, yet there are cases where customs duty on inputs is higher than the duty on the final products and these anomalies need to be reviewed," CII said in a release. Anomalies exist in automotive tyres, chemicals, cutting tools - solid carbide drills/routers, equipments for inland container freight stations, glycerine refined, petrochemicals, refractories, soaps and toiletries. While customs duty on automotive tyre has been reduced from 20 per cent to 15 per cent, natural rubber smoked sheets and natural rubber technically specified continue to attract 20 per cent duty, the chamber said. Similarly in the case of chemicals, customs duty on various chloromethanes has been reduced from 20 per cent to 10 per cent whereas the duty has been reduced from 20 per cent to 15 per cent on the critical input methanol. CII also pointed out that solid carbide drills/routers for manufacture of PCBs is zero whereas composite blanks of stainless steel tungsten carbide with grain size less than one micron, stainless steel shanks and diamond grinding wheels required for manufacture have customs duty of 15 per cent. According to CII, one of the challenges faced by the Indian industry is the growing number of Free Trade Agreements (FTAs) and preferential trade agreements being signed by India with other countries. CII said these agreements have certain advantages for exporters in India but at the same time adversely affect the manufacturers if corresponding reduction in customs duty on inputs is not done. The chamber cited the India-Thailand Free Trade Agreement, which has an Early Harvest Scheme (EHS) covering 82 tariff lines. The EHS list includes air-conditioners, refrigerators, colour television, colour picture tubes, gear boxes for vehicles and customs duty on these was reduced to 12.5 per cent from September one, 2004. As per the agreement, duty would be further reduced to 6.2 per cent from September one, 2005 and complete duty elimination would be achieved in 2006, CII said. In order to enable the indigenous manufacturers to sucessfully compete with the imports from Thailand, CII said the import duties on their major inputs should also be reduced to commensurate levels to avoid inverted duty structure. It recommended that in future trade agreements, the minimum customs duty on manufactured goods should not be less than five per cent. (PTI) IT services, BPO contract sizes shrink in Q1 to 69 m NEW DELHI, Apr 17: The average deal size in the 600 billion dollar global IT and BPO services sector shrank for the third consecutive quarter to reach 68.9 million dollars during the three-month period ending March 31, 2005, Datamonitor has said. The deal size fell 18 per cent year-on-year during the reporting quarter, partly due to multi-sourcing, Datamonitors "IT services contracts tracker" said. Of the deals tracked in the quarter, worth 6 billion dollars, 15 BPO contracts had a value greater than 100 million dollar, the largest of which was Atos origins 1.6 billion dollar contract with the UK Department of Health. However, half of the BPO deals tracked had a value of between 10 million dollars to 50 million dollars, showing that most of todays bpo activity is happening around mid-size contracts, rather than mega-deals. A total of 456 IT services contracts were announced during the first quarter of 2005, worth a combined total of 31.4 billion dollars. 
This represents a 13.7 per cent decline on the first quarter of 2004 where the total value of contracts reached 36.4 billion dollars. The number of contracts tracked in Q1 2005 was up 5 per cent on the 435 recorded in the year-ago quarter, but the fall in average deal size suggests that clients are signing smaller, more focused deals with suppliers. Datamonitor lead analyst (global computing services) Nick Mayes said: "The decline in average deal value is partly due to the rise of multi-sourcing, where clients work with a number of best-of-breed IT services vendors, rather than a single outsourcer under a far-reaching mega-deal." The trend is particularly evident in the fast-growing hr outsourcing (HRO) sector. Outsourcing advisory firm everest group found that the majority of buyers of HRO services are in the mid-to-large segment (with between 10,000 to 25,000 employees), rather than the large-to-mega segment (more than 25,000 employees), Datamonitor added. (UNI) Developing countries need to have greater say in IMF: Rato WASHINGTON, Apr 17: The International Monetary Fund has said developing countries need to have a greater voice in the financial body and warned major economic powers that it was their "responsibility" to rectify widening imbalances. The International Monetary and Financial Committee (IFMC), the IMFs policy-setting body, at its twice-yearly meeting yesterday chaired by British Chancellor Gordon Brown agreed that developing nations should have a greater say in the world body. However, managing director Rodrigo Rato was cautious on the demands raised by developing nations that quotas or shares should be calculated on purchasing power parity on the basis of their gross national income, saying that will have to be a political decision. While taking note of the continuing global economic expansion, which is expected to remain robust in 2005, he said widening imbalances across regions and the continued rise in oil prices and oil market volatility have increased risks and called for steps to even out the pace of world economic activity. "If policies do not adapt, do not change to react to these imbalances, we run the risk of an abrupt correction of the markets...", he said. The IMF should focus on promoting policies for reducing global imbalances over time, addressing the impact of higher oil prices, in particular on the most vulnerable countries; managing the policy response to potential inflationary pressures and ensuring the sustainability of medium-term fiscal frameworks, he said. Rato observed that all countries have a shared responsibility to take advantage of the current favourable economic conditions to address key risks and vulnerabilities. He called for fiscal consolidation to increase national savings in the United States, supported by continued financial sector reform in emerging Asia, further structural reforms to boost growth and domestic demand in Europe and further structural reforms, including fiscal consolidation, in Japan. "A continuing increase in US net external liabilities will carry heightened risks of a disorderly adjustment. Global imbalances have widened further over the past year, and while the US external deficit has so far been financed relatively easily, the demand for us assets is not unlimited," Rato said. 
He said "an abrupt decline in investors appetite for dollar-denominated liabilities could engender a rapid dollar depreciation and a sharp increase in US interest rates, with potentially serious adverse consequences for global growth and international financial markets." Successful and ambitious trade liberalisation is Central to continued global growth and economic development, the committee said. The immediate priority was for WTO members to translate the mid-2004 framework agreement into a viable policy package in time for the December 2005 wto ministerial conference, it added. The committee said IMF surveillance should become more focused and selective in analyzing issues in an evenhanded way across the membership. The committee asked IMF to meet "the highest standards of internal management." (PTI) Inflation rises to 5.26 pc for week ended April 2 NEW DELHI, Apr 17: After falling for three consecutive weeks, the rate of inflation rose to 5.26 per cent during the week ended April 2 due to higher energy and manufactured product prices. The wholesale price index-based inflation rate was 5.05 per cent in the previous week and 4.51 per cent during the corresponding week a year ago. Meanwhile, the rate of inflation for the week ended February 5 has been corrected to 4.96 per cent as against 5.01 per cent and the final WPI for all commodities is 188.5 as against provisional 188.6. The Union Budget has estimated that inflation in 2005-06 would be in a 4.0-5.0 per cent range. The index for food products increased marginally by 0.1 per cent to 174.3 from 174.2 due to higher prices of rice bran oil (4 per cent) and butter, coconut oil, sugar, khandsari and suji (1 per cent). However, the prices of bran (5 per cent), sunflower oil and groundnut oil (2 per cent each) and rape and mustard oil, oil cakes and gingelly oil (1 per cent each) declined. The index for food articles rose by 0.6 per cent to 187.3 from 186.1 for the porevious week due to higher prices of poultry chicken (9 per cent), ragi and fruits and vegetables (3 per cent each), fish-inland, bajra and jowar (2 per cent each) and fish-marine, maize, moong and urad (1 per cent each). However, the prices of barley (6 per cent), wheat and eggs (3 per cent each), and gram and condiments and spices (1 per cent each) declined. For non-food articles, the index declined marginally by 0.1 per cent to 178.4 from 178.5 following lower prices of copra (7 per cent), linseed (5 per cent), castor seed (3 per cent), tobacco and rape and mustard seed (2 per cent each) and mesta (1 per cent). The prices of raw silk (3 per cent), raw jute (2 per cent) and raw cotton and safflower (1 per cent each) moved up. The index for the major group of fuel, power and lubricants rose by 0.1 per cent to 289.9 from 287.0 for the previous week due to higher prices of aviation turbine fuel (20 per cent), naphtha (17 per cent) and furnace oil (1 per cent). The prices of bitumen, however went down by 2 per cent. The index for beverages, tobacco and tobacco products group rose by 0.2 per cent to 221.6 from 221.1 for the previous week due to higher prices of pan masala (3 per cent). Paper and paper products group saw the index rise by 0.2 per cent to 177.0 from 176.7 due to a percentage increase in the prices of newsprint and map litho paper (1 per cent). The index for chemicals and chemical products group rose by 0.3 per cent to 185.6 from 185.1 due to higher prices of soda ash (10 per cent), caustic soda (5 per cent) and synthetic rubber (2 per cent). 
For non metallic mineral products group, the index rose by 0.2 per cent to 168.7 from 168.3 for the previous week due to rise in the prices of electrodes (4 per cent). However, the prices of building bricks declined by 4 per cent. The index for basic metal alloys and metal products group increased to 213.8 from 213.7 due to higher prices of bolts and nuts (4 per cent) and brass sheets and strips and aluminium ingots (1 per cent eeach). The prices of zinc, however, declined by a percentage point. The index for machinery and machine tools group rose by 0.2 per cent to 144.9 from 144.6 for the previous week due to higher prices of TV sets (b/w) (3 per cent) and complete engines (2 per cent). Higher prices of bicycles saw the index for transport equipment and parts group increase marginally by 0.1 per cent to 158.4 from 158.3. (UNI) Luxor sees 35 pc growth in exports; to expand junior range NEW DELHI, Apr 17: The Rs 160 crore Luxor Writing Instruments Pvt Ltd (LWIL) is looking to push exports by 35 per cent this year to Rs 80 crore. "We clocked Rs 50 crore worth of exports last year. This year we are confident of taking them up by 35 per cent," Pooja Jain, vice president of Luxor, told PTI. The company is also looking to close 2005 with about Rs 175-180 crore in sales turnover. Luxor is currently registered in 110 countries will be consolidating its business in Africa, East Europe, Australia and Latin America. It is also looking to expand in the kids category, which was launched six months back, and further strengthen its parker brand this year. "We are busy developing a new range of Luxor sketch pens and colour pens for kids to further expand this section. We are currently offering 10 products in this category and will add 20 more products this year," Jain said. Luxor will invest about Rs 10 crore in expanding its product range across different categories. "A new range of parker pens in the Rs 75-Rs 85 slot will be launched in August, and we will bring in an entirely new range of `pilot pens this year," Jain, who was recently promoted to the rank of vice president, said. She said this year will also see the launch of a designer Parker line of pens. Currently, the company is looking to promote its copyright junior brand which includes Colormax, sketch-o-matic and magic color, among schools across India. "We are not just offering colour pens for kids but a lot of thought has been given to the packaging and also to make the whole experience interactive and educative for the kids," Jain added. Luxor will also start a new advertising campaign with film star Amitabh Bachchan in August. "For now, we have shot stills with Bachchan which will be used in the new packaging for parker range of pens. TV campaign will come in the next quarter," she Pooja. For the Junior range, the company is consdering having a kid-celebrity as its Ambassador, but that will happen only next year, she said. The company is also hopeful of crossing the 10 million mark for Parker pens in 2006. Parker contributes about 70 per cent to the overall business of Luxor. (PTI) VAT rates on essential items differ widely across states NEW DELHI, Apr 17: VAT rates of essential items like atta, maida, tea, cereal and pulses are showing glaring differences across the states, according to trade body cait. Even capital goods, tools and stationary products show a difference of 8.5 per cent in states like Delhi, West Bengal, Maharashtra, Andhra Pradesh and Punjab. 
According to Confederation of All India Traders (CAIT), states like West Bengal and Delhi have exempted atta, maida, cereals and pulses fully while Punjab, Andhra Pradesh, Bihar and Jammu and Kashmir have imposed 4 per cent VAT on them. The empowered committee has given states the option to either exempt fully foodgrains or impose 4 per cent on them. However, the VAT panels chairman Asim Dasgupta is not clear whether to treat atta, maida, cereals and pulses as "foodgrains". Similarly, some states have taken the advantage of the option to impose either 4 or 12.5 per cent VAT on tea. As a result, tea attracts 4 per cent VAT in West Bengal, Delhi and Jammu and Kashmir, while it is attracting 12.5 per cent in Punjab, Maharashtra, Bihar and Andhra Pradesh. West Bengal, whose Finance Minister is Asim Dasgupta, imposed 4 per cent VAT on capital goods and used car. But Delhi, Punjab, Maharashtra, Andhra Pradesh, Bihar and J & K kept them under the 12.5 per cent slab. Similarly, computer consumables attract 4 per cent rate in Andhra Pradesh and Bihar, while they draw 12.5 per cent in West Bengal, Delhi, Punjab, Maharashtra and J&K. Dasgupta had yesterday claimed that variations in VAT rates among states are not significant. He had said only in a few states deviations have been noted by the empowered committee. Empowered committee, which met for two days on April 15 and 16, will remove the variations in VAT rates in its next meeting on 25th and 26th of this month. (PTI) Punjab biotechnology incubator to be set up at Dera Bassi CHANDIGARH, Apr 17: A Rs 11-crore Bio Technology Incubator (BTI) will be set up at the nearby Dera Bassi with Central assistance, according to a Punjab Government spokesman. The proposed project would be set up as a public-private partnership model between the union and Punjab Governments and beckons industries limited. Fifteen industries have already given their consent to set up industries in this park. The BTI would focus on medicinal and aromatic plants and quality assurance of agri produce/products which have been identified based on demand of the industries of Punjab. The incubator would include common extraction facility for medicinal and aromatic plants. In view of low returns from wheat and paddy, the farmers of the state diversify into cultivation of medicinal and aromatic plants. Hence, setting up of this facility was need of the hour to meet the extraction requirements for value addition. The facility would comprise state-of-the-art aloe vera gel extraction and powdering unit, distillation unit for aromatic crops and solvent extraction unit for medicinal plants. The incubator would house a testing and certification facility for agro products to meet the long-felt need of the farmers, of the processors and exporters from the northern region for various facilities, including quality testing and certification of agricultural raw materials and agro products, training and dissemination of latest information on quality and food safety aspects as well as providing the services pertaining to economic opportunities in post-harvest sector. The facilities of providing quality assurance services would give a big boost to agro-processing by making the sector more competitive in the international markets. (UNI) RINL announces expansion plans at an estimated Rs 8,500 cr CHENNAI, Apr 17: Rashtriya Ispat Nigam Ltd (RINL) embarking on a Rs 8,500 crore expansion programme this year will be setting up a seamless tube plant at an estimated cost of Rs 800 crore. 
RINL director personnel K Ayyappa Naidu, speaking of the companys expansion plans, said these 250-mm diametric tubes were ideal for high-pressure functionalities. Pointing to the tremendous potential offered by the gas corridor as part of the national gas grid project, he told reporters last evening that these were highly technology-intensive steel seamless tubes, which were currently being imported. These tubes could be used for gas applications, especially since these were capable of handling extremely high-pressure operations, he elaborated. This would be the first-of-its-kind seamless tube plant in the country. RINL had the complex technology and this could form as an import substitution for an integrated steel plant, Mr Naidu claimed. The setting up of the seamless tube plant, as part of the massive expansion programme that RINL was undertaking, would mean import substitution. The project report pertaining to RINLs aim of achieving an expansion of seven mt by 2008 had been submitted to the Government. With the initial assessment completed, the project was under the Governments active consideration and advanced stage of approval. RINL, the corporate entity of Visakhapatnam steel plant, recorded an impressive performance both in production and sales. Having a paid up capital of over Rs 7,800 crore, the company registered a turnover of Rs 8,181 crore, the highest since its inception, almost 32 per cent more than the turnover clocked in the last fiscal. (UNI) EU seeks policy intiatives from India to up bilateral trade NEW DELHI, Apr 17: The European Union (EU) has sought more policy initiatives from India to quadruple the total bilateral trade in services in the next five years. "There is much potential for EU and India to look forward to in terms of trade, job creation and economic progress. It is in this light, greater focus and policy initiatives are called for on both sides to realise this untapped potential," European Commission Ambassador Fancis D Camara Gomes said. He said in the EU, services were Central and accounted for a quarter of world trade. Nearly two-thirds of EUs GDP was accounted for by services, which clearly indicated that there was much potential for trade in services between EU and India. Referring to bilateral trade in services, he felt that there had been substantial growth in the recent years. In 2002, Indias exports of services to eu amounted to 2.4 euros billion while EUs services exports to India amounted to 2.6 billion euros. This, he felt was far below the potential that existed between the two countries. "There is a need to think in terms of at least quadrupling the total trade in services between EU and India the next five years or so given the inherent strength," he said at a workshop on "Indian services for EU countries" here. Italian Ambassador Antonio Armellini, said services liberalisation offered as great opportunities, if not greater, as goods liberalisation, since services was the fastest growing sector in world economy. He said italy was conscious of these opportunities and the ministerial delegation that accompanied the Italian presidential visit to India in February held several meetings to review possibilities of further cooperation. Several MoUs were also signed by Indian and Italian institutions. Mr Armellini said all were aware of the advantages and problems connected with services liberalisation under gats but given the latitude available to every single member country, the advantages ultimately outnumbered the problems. 
Dutch Ambassador E F C H Niehe said India had much to gain from liberalising trade in services, with its large pool of relatively cheap English speaking and skilled labour, which were of enormous competitive advantage. (UNI) I-T offices to remain open on Monday KOLKATA, Apr 17: Income-Tax offices in Kolkata will remain open on Monday (April 18), according to an order issued by the Chief Commissioner of I-T, Kolkata-I. It said today that I-T offices elsewhere in West Bengal would also remain open that day. (PTI)
This. Overview In this tutorial we're going to complete the first version of the LocalLibrary website by adding list and detail pages for books and authors (or to be more precise, we'll show you how to implement the book pages, and get you to create the author pages yourself!) The process is similar to creating the index page, which we showed in the previous tutorial. We'll still need to create URL maps, views, and templates. The main difference is that for the detail pages, we'll have the additional challenge of extracting information from patterns in the URL and passing it to the view. For these pages, we're going to demonstrate a completely different type of view: generic class-based list and detail views. These can significantly reduce the amount of view code needed, making them easier to write and maintain. The final part of the tutorial will demonstrate how to paginate your data when using generic class-based list views. Book list page The book list page will display a list of all the available book records in the page, accessed using the URL: catalog/books/. The page will display a title and author for each record, with the title being a hyperlink to the associated book detail page. The page will have the same structure and navigation as all other pages in the site, and we can, therefore, extend the base template (base_generic.html) we created in the previous tutorial. URL mapping Open /catalog/urls.py and copy in the line shown in bold below. As for the index page, this path() function defines a pattern to match against the URL ('books/'), a view function that will be called if the URL matches ( views.BookListView.as_view()), and a name for this particular mapping. urlpatterns = [ path('', views.index, name='index'), path('books/', views.BookListView.as_view(), name='books'), ] As discussed in the previous tutorial the URL must already have matched /catalog, so the view will actually be called for the URL: /catalog/books/. The view function has a different format than before — that's because this view will actually be implemented as a class. We will be inheriting from an existing generic view function that already does most of what we want this view function to do, rather than writing our own from scratch. For Django class-based views we access an appropriate view function by calling the class method as_view(). This does all the work of creating an instance of the class, and making sure that the right handler methods are called for incoming HTTP requests. View (class-based) We could quite easily write the book list view as a regular function (just like our previous index view), which would query the database for all books, and then call render() to pass the list to a specified template. Instead, however, we're going to use a class-based generic list view ( ListView) — a class that inherits from an existing view. Because the generic view already implements most of the functionality we need and follows Django best-practice, we will be able to create a more robust list view with less code, less repetition, and ultimately less maintenance. Open catalog/views.py, and copy the following code into the bottom of the file: from django.views import generic class BookListView(generic.ListView): model = Book That's it! The generic view will query the database to get all records for the specified model ( Book) then render a template located at /locallibrary/catalog/templates/catalog/book_list.html (which we will create below). 
Within the template you can access the list of books with the template variable named object_list OR book_list (i.e. generically " the_model_name_list"). Note: This awkward path for the template location isn't a misprint — the generic views look for templates in /application_name/the_model_name_list.html ( catalog/book_list.html in this case) inside the application's /application_name/templates/ directory ( /catalog/templates/). You can add attributes to change the default behaviour above. For example, you can specify another template file if you need to have multiple views that use this same model, or you might want to use a different template variable name if book_list is not intuitive for your particular template use-case. Possibly the most useful variation is to change/filter the subset of results that are returned — so instead of listing all books you might list top 5 books that were read by other users. class BookListView(generic.ListView): model = Book context_object_name = 'my_book_list' # your own name for the list as a template variable queryset = Book.objects.filter(title__icontains='war')[:5] # Get 5 books containing the title war template_name = 'books/my_arbitrary_template_name_list.html' # Specify your own template name/location Overriding methods in class-based views While we don't need to do so here, you can also override some of the class methods. For example, we can override the get_queryset() method to change the list of records returned. This is more flexible than just setting the queryset attribute as we did in the preceding code fragment (though there is no real benefit in this case): class BookListView(generic.ListView): model = Book def get_queryset(self): return Book.objects.filter(title__icontains='war')[:5] # Get 5 books containing the title war We might also override get_context_data() in order to pass additional context variables to the template (e.g. the list of books is passed by default). The fragment below shows how to add a variable named " some_data" to the context (it would then be available as a template variable). class BookListView(generic.ListView): model = Book def get_context_data(self, **kwargs): # Call the base implementation first to get the context context = super(BookListView, self).get_context_data(**kwargs) # Create any data and add it to the context context['some_data'] = 'This is just some data' return context When doing this it is important to follow the pattern used above: - First get the existing context from our superclass. - Then add your new context information. - Then return the new (updated) context. Note: Check out Built-in class-based generic views (Django docs) for many more examples of what you can do. Creating the List View template Create the HTML file /locallibrary/catalog/templates/catalog/book_list.html and copy in the text below. As discussed above, this is the default template file expected by the generic class-based list view (for a model named Book in an application named catalog). Templates for generic views are just like any other templates (although of course the context/information passed to the template may differ). As with our index template, we extend our base template in the first line and then replace the block named content. 
{% extends "base_generic.html" %} {% block content %} <h1>Book List</h1> {% if book_list %} <ul> {% for book in book_list %} <li> <a href="{{ book.get_absolute_url }}">{{ book.title }}</a> ({{book.author}}) </li> {% endfor %} </ul> {% else %} <p>There are no books in the library.</p> {% endif %} {% endblock %} The view passes the context (list of books) by default as object_list and book_list aliases; either will work. Conditional execution We use the if, else, and endif template tags to check whether the book_list has been defined and is not empty. If book_list is empty, then the else clause displays text explaining that there are no books to list. If book_list is not empty, then we iterate through the list of books. {% if book_list %} <!-- code here to list the books --> {% else %} <p>There are no books in the library.</p> {% endif %} The condition above only checks for one case, but you can test on additional conditions using the elif template tag (e.g. {% elif var2 %}). For more information about conditional operators see: if, ifequal/ifnotequal, and ifchanged in Built-in template tags and filters (Django Docs). For loops The template uses the for and endfor template tags to loop through the book list, as shown below. Each iteration populates the book template variable with information for the current list item. {% for book in book_list %} <li> <!-- code here get information from each book item --> </li> {% endfor %} While not used here, within the loop Django will also create other variables that you can use to track the iteration. For example, you can test the forloop.last variable to perform conditional processing the last time that the loop is run. Accessing variables The code inside the loop creates a list item for each book that shows both the title (as a link to the yet-to-be-created detail view) and the author. <a href="{{ book.get_absolute_url }}">{{ book.title }}</a> ({{book.author}}) We access the fields of the associated book record using the "dot notation" (e.g. book.title and book.author), where the text following the book item is the field name (as defined in the model). We can also call functions in the model from within our template — in this case we call Book.get_absolute_url() to get a URL you could use to display the associated detail record. This works provided the function does not have any arguments (there is no way to pass arguments!) Note: We have to be a little careful of "side effects" when calling functions in templates. Here we just get a URL to display, but a function can do pretty much anything — we wouldn't want to delete our database (for example) just by rendering our template! Update the base template Open the base template (/locallibrary/catalog/templates/base_generic.html) and insert {% url 'books' %} into the URL link for All books, as shown below. This will enable the link in all pages (we can successfully put this in place now that we've created the "books" URL mapper). <li><a href="{% url 'index' %}">Home</a></li> <li><a href="{% url 'books' %}">All books</a></li> <li><a href="">All authors</a></li> What does it look like? You won't be able to build the book list yet, because we're still missing a dependency — the URL map for the book detail pages, which is needed to create hyperlinks to individual books. We'll show both list and detail views after the next section. Book detail page The book detail page will display information about a specific book, accessed using the URL catalog/book/<id> (where <id> is the primary key for the book). 
In addition to fields in the Book model (author, summary, ISBN, language, and genre), we'll also list the details of the available copies (BookInstances) including the status, expected return date, imprint, and id. This will allow our readers to not only learn about the book, but also to confirm whether/when it is available.

URL mapping

Open /catalog/urls.py and add the 'book-detail' URL mapper shown below. This path() function defines a pattern, associated generic class-based detail view, and a name.

urlpatterns = [
    path('', views.index, name='index'),
    path('books/', views.BookListView.as_view(), name='books'),
    path('book/<int:pk>', views.BookDetailView.as_view(), name='book-detail'),
]

For the book-detail path the URL pattern uses a special syntax to capture the specific id of the book that we want to see. The syntax is very simple: angle brackets define the part of the URL to be captured, enclosing the name of the variable that the view can use to access the captured data. For example, <something> will capture the marked pattern and pass the value to the view as a variable "something". You can optionally precede the variable name with a converter specification that defines the type of data (int, str, slug, uuid, path). In this case we use '<int:pk>' to capture the book id, which must be a specially formatted string, and pass it to the view as a parameter named pk (short for primary key). This is the id that is being used to store the book uniquely in the database, as defined in the Book Model.

Note: As discussed previously, our matched URL is actually catalog/book/<digits> (because we are in the catalog application, /catalog/ is assumed).

Important: The generic class-based detail view expects to be passed a parameter named pk. If you're writing your own function view you can use whatever parameter name you like, or indeed pass the information in an unnamed argument.

Advanced path matching/regular expression primer

Note: You won't need this section to complete the tutorial! We provide it because knowing this option is likely to be useful in your Django-centric future.

The pattern matching provided by path() is simple and useful for the (very common) cases where you just want to capture any string or integer. If you need more refined filtering (for example, to filter only strings that have a certain number of characters) then you can use the re_path() method. This method is used just like path() except that it allows you to specify a pattern using a Regular expression. For example, the previous path could have been written as shown below:

re_path(r'^book/(?P<pk>\d+)$', views.BookDetailView.as_view(), name='book-detail'),

Regular expressions are an incredibly powerful pattern mapping tool. They are, frankly, quite unintuitive and can be intimidating for beginners. Below is a very short primer! The first thing to know is that regular expressions should usually be declared using the raw string literal syntax (i.e. they are enclosed as shown: r'<your regular expression text goes here>'). The main parts of the syntax you will need to know for declaring the pattern matches are:

- ^ — match the beginning of the text
- $ — match the end of the text
- \d — match a digit (0, 1, 2, ..., 9)
- \w — match a word character, i.e. an upper- or lowercase letter, a digit, or an underscore
- + — match one or more of the preceding character
- * — match zero or more of the preceding character
- ( ) — capture the part of the pattern inside the brackets
- (?P<name>...) — capture the pattern (indicated by ...) as a named variable ("name" here) that is passed to the view
- [ ] — match against one of the characters listed inside the brackets

Most other characters can be taken literally! As a real example, r'^book/(?P<pk>\d+)$' matches a URL such as book/1234 and passes the captured digits to the view as a parameter named pk. You can capture multiple patterns in the one match, and hence encode lots of different information in a URL.

Note: As a challenge, consider how you might encode a URL to list all books released in a particular year, month, and day, and the RE pattern that could be used to match it.
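As an illustrative sketch of the challenge in the note above (the view class and URL name below are hypothetical, not part of the LocalLibrary code), a date-based listing could be wired up in urlpatterns like this:

# Hypothetical example for the challenge above — not part of the tutorial's code.
# Matches URLs such as /catalog/books/released/2021/07/04 and passes year, month
# and day to the view as named string parameters.
re_path(r'^books/released/(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})$',
        views.BookReleasedListView.as_view(),   # hypothetical view
        name='books-released'),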
View (class-based) Open catalog/views.py, and copy the following code into the bottom of the file: class BookDetailView(generic.DetailView): model = Book That's it! All you need to do now is create a template called /locallibrary/catalog/templates/catalog/book_detail.html, and the view will pass it the database information for the specific Book record extracted by the URL mapper. Within the template you can access the list of books with the template variable named object OR book (i.e. generically " the_model_name"). If you need to, you can change the template used and the name of the context object used to reference the book in the template. You can also override methods to, for example, add additional information to the context. What happens if the record doesn't exist? If a requested record does not exist then the generic class-based detail view will raise an Http404 exception for you automatically — in production, this will automatically display an appropriate "resource not found" page, which you can customise if desired. Just to give you some idea of how this works, the code fragment below demonstrates how you would implement the class-based view as a function if you were not using the generic class-based detail view. def book_detail_view(request, primary_key): try: book = Book.objects.get(pk=primary_key) except Book.DoesNotExist: raise Http404('Book does not exist') return render(request, 'catalog/book_detail.html', context={'book': book}) The view first tries to get the specific book record from the model. If this fails the view should raise an Http404 exception to indicate that the book is "not found". The final step is then, as usual, to call render() with the template name and the book data in the context parameter (as a dictionary). Alternatively, we can use the get_object_or_404() function as a shortcut to raise an Http404 exception if the record is not found. from django.shortcuts import get_object_or_404 def book_detail_view(request, primary_key): book = get_object_or_404(Book, pk=primary_key) return render(request, 'catalog/book_detail.html', context={'book': book}) Creating the Detail View template Create the HTML file /locallibrary/catalog/templates/catalog/book_detail.html and give it the below content. As discussed above, this is the default template file name expected by the generic class-based detail view (for a model named Book in an application named catalog). {% extends "base_generic.html" %} {% block content %} <h1>Title: {{ book.title }}</h1> <p><strong>Author:</strong> <a href="">{{ book.author }}</a></p> <!-- author detail link not yet defined --> <p><strong>Summary:</strong> {{ book.summary }}</p> <p><strong>ISBN:</strong> {{ book.isbn }}</p> <p><strong>Language:</strong> {{ book.language }}</p> <p><strong>Genre:</strong> {{ book.genre.all|join:", " }}</p> <div style="margin-left:20px;margin-top:20px"> <h4>Copies</h4> {% for copy in book.bookinstance_set.all %} <hr> <p class="{% if copy.status == 'a' %}text-success{% elif copy.status == 'm' %}text-danger{% else %}text-warning{% endif %}"> {{ copy.get_status_display }} </p> {% if copy.status != 'a' %} <p><strong>Due to be returned:</strong> {{ copy.due_back }}</p> {% endif %} <p><strong>Imprint:</strong> {{ copy.imprint }}</p> <p class="text-muted"><strong>Id:</strong> {{ copy.id }}</p> {% endfor %} </div> {% endblock %} The author link in the template above has an empty URL because we've not yet created an author detail page. 
Once that exists, you should update the URL like this:

<a href="{% url 'author-detail' book.author.pk %}">{{ book.author }}</a>

Though a little larger, almost everything in this template has been described previously:

- We extend our base template and override the "content" block.
- We use conditional processing to determine whether or not to display specific content.
- We use for loops to loop through lists of objects.
- We access the context fields using the dot notation (because we've used the detail generic view, the context is named book; we could also use "object").

The one interesting thing we haven't seen before is the function book.bookinstance_set.all(). This method is "automagically" constructed by Django in order to return the set of BookInstance records associated with a particular Book.

{% for copy in book.bookinstance_set.all %}
  <!-- code to iterate across each copy/instance of a book -->
{% endfor %}

This method is needed because you declare a ForeignKey (one-to-many) field in only the "one" side of the relationship. Since you don't do anything to declare the relationship in the other ("many") model, it doesn't have any field to get the set of associated records. To overcome this problem, Django constructs an appropriately named "reverse lookup" function that you can use. The name of the function is constructed by lower-casing the model name where the ForeignKey was declared, followed by _set (i.e. so the function created in Book is bookinstance_set()).

Note: Here we use all() to get all records (the default). While you can use the filter() method to get a subset of records in code, you can't do this directly in templates because you can't specify arguments to functions.

Beware also that if you don't define an order (on your class-based view or model), you will see errors from the development server like this one:

[29/May/2017 18:37:53] "GET /catalog/books/?page=1 HTTP/1.1" 200 1637
/foo/local_library/venv/lib/python3.5/site-packages/django/views/generic/list.py:99: UnorderedObjectListWarning: Pagination may yield inconsistent results with an unordered object_list: <QuerySet [<Author: Ortiz, David>, <Author: H. McRaven, William>, <Author: Leigh, Melinda>]>
  allow_empty_first_page=allow_empty_first_page, **kwargs)

That happens because the paginator object expects to see some ORDER BY being executed on your underlying database. Without it, it can't be sure the records being returned are actually in the right order!

This tutorial hasn't covered pagination (yet!), but since you can't use sort_by() and pass a parameter (the same applies to filter(), described above) you will have to choose between three options:

- Add an ordering inside a class Meta declaration on your model.
- Add a queryset attribute in your custom class-based view, specifying an order_by().
- Add a get_queryset method to your custom class-based view and also specify the order_by().

If you decide to go with a class Meta for the Author model (probably not as flexible as customizing the class-based view, but easy enough), you will end up with something like this (only the relevant parts of the model are shown):

class Author(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    # ... other fields and methods as before ...

    class Meta:
        ordering = ['last_name']

Of course, the field doesn't need to be last_name: it could be any other. Last but not least, you should sort by an attribute/column that actually has an index (unique or not) on your database to avoid performance issues. Of course, this will not be necessary here (we are probably getting ahead of ourselves with so few books and users), but it is something worth keeping in mind for future projects.
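If you would rather keep ordering out of the model, here is a minimal sketch of the view-level alternatives (options 2 and 3 above), shown on the BookListView from earlier in this tutorial; ordering by title is just an illustrative choice.

class BookListView(generic.ListView):
    model = Book

    # Option 3: override get_queryset() and apply an explicit order_by().
    def get_queryset(self):
        return Book.objects.all().order_by('title')

# Option 2 would instead set a class attribute on the view:
#     queryset = Book.objects.all().order_by('title')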
What does it look like?

At this point, we should have created everything needed to display both the book list and book detail pages. Run the server (python3 manage.py runserver) and open your browser to http://127.0.0.1:8000/.

Warning: Don't click any author or author detail links yet — you'll create those in the challenge!

Click the All books link to display the list of books. Then click a link to one of your books. If everything is set up correctly, you should see something like the following screenshot.

Pagination

If you've just got a few records, our book list page will look fine. However, as you get into the tens or hundreds of records the page will take progressively longer to load (and have far too much content to browse sensibly). The solution to this problem is to add pagination to your list views, reducing the number of items displayed on each page. Django has excellent inbuilt support for pagination. Even better, this is built into the generic class-based list views so you don't have to do very much to enable it!

Views

Open catalog/views.py, and add the paginate_by line shown below.

class BookListView(generic.ListView):
    model = Book
    paginate_by = 10

With this addition, as soon as you have more than 10 records the view will start paginating the data it sends to the template. The different pages are accessed using GET parameters — to access page 2 you would use the URL /catalog/books/?page=2.

Templates

Now that the data is paginated, we need to add support to the template to scroll through the results set. Because we might want to paginate all list views, we'll add this support to the base template. Open /locallibrary/catalog/templates/base_generic.html and add the following pagination block immediately after the content block:

{% block content %}{% endblock %}

{% block pagination %}
  {% if is_paginated %}
    <div class="pagination">
      <span class="page-links">
        {% if page_obj.has_previous %}
          <a href="{{ request.path }}?page={{ page_obj.previous_page_number }}">previous</a>
        {% endif %}
        <span class="page-current">
          Page {{ page_obj.number }} of {{ page_obj.paginator.num_pages }}.
        </span>
        {% if page_obj.has_next %}
          <a href="{{ request.path }}?page={{ page_obj.next_page_number }}">next</a>
        {% endif %}
      </span>
    </div>
  {% endif %}
{% endblock %}

The page_obj is the current Page object (with a reference to its Paginator) that will exist if pagination is being used on the current page. It allows you to get all the information about the current page, previous pages, how many pages there are, etc. We use {{ request.path }} to get the current page URL for creating the pagination links. This is useful because it is independent of the object that we're paginating.

That's it!

What does it look like?

The screenshot below shows what the pagination looks like — if you haven't entered more than 10 titles into your database, then you can test it more easily by lowering the number specified in the paginate_by line in your catalog/views.py file. To get the below result we changed it to paginate_by = 2. The pagination links are displayed on the bottom, with next/previous links being displayed depending on which page you're on.

Challenge yourself

The challenge in this article is to create the author detail and list views required to complete the project. These should be made available at the following URLs:

- catalog/authors/ — The list of all authors.
- catalog/author/<id> — The detail view for the specific author with a primary key field named <id>.

The code required for the URL mappers and the views should be virtually identical to the Book list and detail views we created above. The templates will be different but will share similar behaviour.

Note:

- Once you've created the URL mapper for the author list page you will also need to update the All authors link in the base template. Follow the same process as we did when we updated the All books link.
- Once you've created the URL mapper for the author detail page, you should also update the book detail view template (/locallibrary/catalog/templates/catalog/book_detail.html) so that the author link points to your new author detail page (rather than being an empty URL). The line will change to add the template tag shown below. <p><strong>Author:</strong> <a href="{% url 'author-detail' book.author.pk %}">{{ book.author }}</a></p> A sketch of one possible solution for the author views and URL mappers appears after the summary below. When you are finished, your pages should look something like the screenshots below. Summary Congratulations, our basic library functionality is now complete! In this article, we've learned how to use the generic class-based list and detail views and used them to create pages to view our books and authors. Along the way we've learned about pattern matching with regular expressions, and how you can pass data from URLs to your views. We've also learned a few more tricks for using templates. Last of all, we've shown how to paginate list views so that our lists are manageable.
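Here is one possible sketch of the author list and detail views for the challenge. The regex URL style, the view names and the pagination value are assumptions chosen to mirror the book views, so treat it as a starting point rather than the canonical answer:

# catalog/urls.py - additions, mirroring the book URL mappers.
from django.conf.urls import url
from . import views

urlpatterns += [
    url(r'^authors/$', views.AuthorListView.as_view(), name='authors'),
    url(r'^author/(?P<pk>\d+)$', views.AuthorDetailView.as_view(), name='author-detail'),
]

# catalog/views.py - additions
from django.views import generic
from .models import Author


class AuthorListView(generic.ListView):
    model = Author
    paginate_by = 10  # reuse the pagination added for books


class AuthorDetailView(generic.DetailView):
    model = Author

The templates (author_list.html and author_detail.html) follow the same pattern as the book templates: loop over author_list in the list view, and use author.book_set.all in the detail view to list that author's books via the same reverse-lookup mechanism described earlier.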
https://developer.cdn.mozilla.net/he/docs/Learn/Server-side/Django/Generic_views
CC-MAIN-2020-10
refinedweb
4,153
63.29
So the qr code looks broken. Doing a reverse image search on the first image reveals that the image was taken at Cambridge North railway station. The cladding of the building features a pierced design derived from Rule 30. So I thought maybe generate a Rule 30 image, put it over the qr code and do some xoring. First I shrunk the QR code image to 33×33 pixels. As the hints state, that centering is hard, I wrote the following python script which generates a Rule 30 set with the same size as the qr code, shifts it left respectively right, does the xoring, and stores the resulting image in a subfolder. import sys MAX_TIME = 33 HALF_SIZE = MAX_TIME indices = range(-HALF_SIZE, HALF_SIZE+1) # initial condition cells = {i: '0' for i in indices} cells[0] = '1' # padding on both ends cells[-HALF_SIZE-1] = '0' cells[ HALF_SIZE+1] = '0' new_state = {"111": '0', "110": '0', "101": '0', "000": '0', "100": '1', "011": '1', "010": '1', "001": '1'} from PIL import Image xm = Image.open('hv33.png') rules = [[0 for _ in range(33)] for i in range(67)] for time in range(0, MAX_TIME): x = 0 for i in cells: if x < 65: rules[x][time] = (int(cells[i])) x += 1 # evolve patterns = {i: cells[i-1] + cells[i] + cells[i+1] for i in indices} cells = {i: new_state[patterns[i]] for i in indices} cells[-HALF_SIZE-1] = '0' cells[ HALF_SIZE+1] = '0' for x in range(1, 2 * 66): im = Image.new('RGB', (33, 33)) pixels = im.load() for i in range(33): for j in range(33): if xm.getpixel((i, j)) == 0: val = 1 else: val = 0 if i - 66 + x > 0 and i - 66 + x < len(rules): val = val ^ int(rules[i - 66 + x][j]) if val == 1: pixels[i,j] = (0, 0, 0) else: pixels[i,j] = (255, 255, 255) im.save('img/' + str(x) + '.png', 'PNG') One of the images turns out to be a valid qr code which reveals the flag: HV19{Cha0tic_yet-0rdered}.
https://blog.sebastianschmitt.eu/challenges/hackvent-2019/hv19-09-santas-quick-response-3-0/
CC-MAIN-2022-21
refinedweb
339
72.19
So this is my first time working with the mesh class in unity. I am trying to create a tile map so I can take a stab at procedural dungeon making. I have data structures made to create a blue print of a dungeon the challenge I am trying to tackle is drawing it. My idea was to use a single procedural plane made up of many vertices to create sub planes this is shown in the code below. My hope was was to raise and lower these vertices to create walls and chasms. I wrote a little test to see how raising certain vertices affected the mesh the issue I am having is that there vertices shared between sub planes so instead a vertical cube rising out of it its forming more of a hill. as shown in the image. I know I would have to increase the number of vertices but I was hoping for a nudge in the right direction of how to go about making sure there are no shared vertices . ![using UnityEngine; using System.Collections; [RequireComponent(typeof (MeshFilter))] [RequireComponent(typeof (MeshRenderer))] [RequireComponent(typeof (MeshCollider))] public class TileMap : MonoBehaviour { //number of tiles in the x direction public int size_x = 7*35; //number of tiles in the z direction public int size_z = 7*35; //size of each tile public float tileSize = 10f; // Use this for initialization void Start () { buildMesh(); } public void buildMesh(){ //number of tiles total on the map int numTiles = size_x*size_z; //number of triangles is double the number of tiles needed as 2 triangles make one square on the map. int numTris = numTiles*2; //number of vertices in the x dirextion int vsize_x = size_x + 1; //number of vertices in the z direction int vsize_z = size_z + 1; // number of vertices total in the map needed to make the plane. int numVerts = vsize_x * vsize_z; //Create mesh data. //array of vector 3's to store vertex data Vector3[] vertices = new Vector3[numVerts]; //array of vector 3's to store normal data Vector3[] normals = new Vector3[numVerts]; //array of vector 2's to store uv data. Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[numTris*3]; //One nested for loop to set up the vertices normals, and uv int x,z; for(z=0; z < vsize_z; z++){ for(x=0; x < vsize_x; x++){ vertices[z * vsize_x + x] = new Vector3(x*tileSize,0,z*tileSize); normals [z * vsize_x + x] = Vector3.up; uv[z * vsize_x + x] = new Vector2((float)x/size_x,(float)z/size_z); } } // a little test I wrote to see how elevating certain vetices affect the mesh. vertices[0] += new Vector3(0,10,0); vertices[1] += new Vector3(0,10,0); vertices[vsize_x] += new Vector3(0,10,0); vertices[vsize_x+1] += new Vector3(0,10,0); //end of test //one nested for loop to set up the triangle data for the mesh. for(z=0; z < size_z; z++){ for(x=0; x < size_x; x++){ int squareIndex = z * size_x + x; int triOffset = squareIndex*6; triangles[triOffset +0] = z * vsize_x + x + 0; triangles[triOffset +2] = z * vsize_x + x + vsize_x + 1; triangles[triOffset +1] = z * vsize_x + x + vsize_x + 0; triangles[triOffset +3] = z * vsize_x + x + 0; triangles[triOffset +5] = z * vsize_x + x + 1; triangles[triOffset +4] = z * vsize_x + x + vsize_x + 1; } } //Create a new mesh Mesh mesh = new Mesh(); //Populate mesh with data mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; //Assign our mesh to our filter/renderer/collider. 
MeshFilter mesh_filter = GetComponent<MeshFilter>(); MeshRenderer mesh_renderer = GetComponent<MeshRenderer>(); MeshCollider mesh_collider = GetComponent<MeshCollider>(); mesh_filter.mesh = mesh; } }
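One way to avoid the shared-vertex problem is to give every tile its own four vertices, so raising one tile never drags its neighbours along. The sketch below is only a nudge in that direction (array sizes and winding mirror the code above, but it is untested and the per-tile height is a placeholder):

// Each tile gets an independent quad: 4 vertices and 2 triangles, nothing shared.
Vector3[] vertices = new Vector3[size_x * size_z * 4];
Vector2[] uv = new Vector2[size_x * size_z * 4];
int[] triangles = new int[size_x * size_z * 6];

for (int z = 0; z < size_z; z++) {
    for (int x = 0; x < size_x; x++) {
        int tile = z * size_x + x;
        int v = tile * 4;
        float h = 0f; // per-tile height, e.g. 10f for a wall tile

        vertices[v + 0] = new Vector3( x      * tileSize, h,  z      * tileSize);
        vertices[v + 1] = new Vector3((x + 1) * tileSize, h,  z      * tileSize);
        vertices[v + 2] = new Vector3( x      * tileSize, h, (z + 1) * tileSize);
        vertices[v + 3] = new Vector3((x + 1) * tileSize, h, (z + 1) * tileSize);

        uv[v + 0] = new Vector2((float)x / size_x,       (float)z / size_z);
        uv[v + 1] = new Vector2((float)(x + 1) / size_x, (float)z / size_z);
        uv[v + 2] = new Vector2((float)x / size_x,       (float)(z + 1) / size_z);
        uv[v + 3] = new Vector2((float)(x + 1) / size_x, (float)(z + 1) / size_z);

        // Same winding order as the original code.
        int t = tile * 6;
        triangles[t + 0] = v + 0;
        triangles[t + 1] = v + 2;
        triangles[t + 2] = v + 3;
        triangles[t + 3] = v + 0;
        triangles[t + 4] = v + 3;
        triangles[t + 5] = v + 1;
    }
}
// Assign the arrays to the mesh as before and call mesh.RecalculateNormals() afterwards.

Note that four vertices per tile quadruples the vertex count, so a 245 x 245 map will blow past the 65,535-vertex limit of a single mesh; splitting the map into chunks (one mesh per, say, 32 x 32 tiles) is the usual workaround.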
https://answers.unity.com/questions/874219/splitting-shared-vertices-in-a-plane.html
CC-MAIN-2022-40
refinedweb
653
59.13
Error while accessing the uploaded website Hi,recently I deployed a website using godaddy.Com.But while accessing the website we r not getting internal pages.I m getting an error likean error has occurred while establishing a connection to the server. When connecting to SQL server 2005, this failure may be caused by the fact that under the... How many types of assemblies are there , wat are they? There are two types of assemblies 1.Private Assembly 2.Public/shared assembly -Private assemble that can be used only with in a software applications is called private assembly. -.Exe file extension... Explain the differences between server-side and client-side code? The code which running at the client web browser is called client side scripting whereas code which is running at the web server is known as server side scripting.. Examples for client side scripting is: javascript,VBScript .. Example for server side scripting is: ASP.NET,PHP,JavaEE .. The code which executed on server is called server side code( .ASPX file whis server side code) The code which is executed on client System is called client side code(Example: java script which is client side code) What is managed code and managed data? Managed code is code that is written to target the services of the common language runtime. In order to target these services, the code must provide a minimum level of information (metadata) to the runtime. All c#, visual basic .Net, and jscript .Net code is managed by default. Visual studio .Net... The code which is executed under the control of dotnet frame work which called as managed code The code which targets clr while execution is called as managed code and the code which takes operating system help while execution is called as unmanaged code Can we run dot.Net in UNIX plateform Yes you can run becoz dotnet is platform independent language there in no clr for unix to run......net became partially platform independent for this reason only What is meant by line item dimension? Line item dimension is a concept where the dimension precisely contains one characteristic.The line item dimension does not create a dimension table, instead the role of dimension table is taken by the characteristics of the SID table . Is .Net platform independent ? .net framework is partially dependent and partially independent language. This is because the .net applications works on windows OS and also Linux OS (Of course with help of framework called MONO fram... Because JVM supports all operating systems so Java is pure platform independent. How can retrieve the data from database? What is cell in dot frameworks? FOR CONNECT 1.establish the connection between the database 2.send the request(execute the command) 3.get the data form data base 4.close connection Copy/Paste the code and study it. For me it works."c# using System; using System.Collections.Generic; using System.Text; using System.Data.SqlClient; using System.Data; namespace... Garbage collection in .Net When we say that gc works on "heap" memory then how "stack" memory gets freed or deallocated? Can someone tell me the algorithm used by gc to free the memory? Garbage collection removes the objects from heap after certain time. .Net Frameworks -Memory Management Garbage Collector which manages allocation and release of memory from application.Now developers no need to worry about memory allocated for each object which is cre... What is "common type system" (cts)? cts provided common data type for all the languages ...like system.int 16 and system .int 32..... 
CTS defines all basic types that every language must follow the dotnet framework Example: In VB.net declare integer as "Dim i as integer" In C#.net declare integer as "integer i" Why is catch (exception) almost always a bad idea? in try block we write the code of error occurring coding.. To handle this error, program needs an handler to this error., So, we can handle this by using Catch handler While executing the program the program may contain error and it will catch the error of your program. What is ADO .Net? Define the data providers and classes of ADO.Net and its purpose with example? ADO.Net: Activex data obejct. By using ADO.NET data can be retrieved from one data source and saved in another. ADO.NET is a part of .NET framework architecture. A DATA PROVIDER is used for connect... ADO.Net: Activex data obejct. The most important section in ADO.NET architecture is "Data Provider". Data Provider provides access to datasource (SQL SERVER, ACCESS, ORACLE). ... What is a delegate, how many types of delegates are there A delegate is a reference type variable , which holds the reference of a method. This reference can be changed at run time , as desired. Two types of delegates :- 1. single-cast delegate - can call o... A delegate will allow us to specify what the function we'll be calling looks like without having to specify which function to call. The declaration for a delegate looks just like the declaration for a... What is the base class of .Net? Object class is the base class of dot net System.object is the base class for VB.NET How many classes can a single .Net dll contain? A single DLL contains many classes Many Differences between datagrid, datalist and repeater? 1. Datagrid has paging while datalist doesnt. 2. Datalist has a property called repeat. Direction = vertical/horizontal. (this is of great help in designing layouts). This is not there in datagrid. 3. A repeater is used when more intimate control over HTML generation is required. 4. When only... Differences between Datagrid, Datalist and Repeater? 1. Datagrid has paging while Datalist doesnt. 2. Datalist has a property called repeat. Direction = vertical/horizontal. (This is of great help in... Datagrid is most restrictive as regards to customization followed by DataList and finally Repeater is the most customizable.Datagrid has built in paging, sorting and editing capabilities which are not... What is delayed signing? How can we do a delayed signing for a dll which has to be shared? Delay signing allows you to place a shared assembly in the GAC by signing the assembly with just the public key. This allows the assembly to be signed with the private key at a later stage, when the d... Delay signing is the process of adding strong name to the assembly at the later stage of development.Signing an assembly means adding a strong name to the assembly.As the strong name is added at the l... What tag do you use to add a hyperlink column to the datagrid? By specifying tag between tag , we add a hyperlink column to the data grid ButtonColumn is used to add link to datagrid. & set the ButtonType to LinkButton How u can create XML file? 1.Right Click on your project in solution explorer 2.Select XMLfile in "add new itel list" and 3.Click on Add button Now u will found a XML file in your project XmlTextWriter textWriter=new XmlTextWriter("filename.xml"); //WriteStartDocument and WriteEndDocument methods open and close a document for writing textWriter.WriteStartDocument() /... Explain how the page is executed in ASP.Net ? 
Page execution in the ASP.NET life cycle is: 1. PageInit() 2. PageLoad() 3. PreRender() 4. PageUnload() An ASP.NET page consists of, at a minimum, a single .aspx file and can contain other files associated with the page. The .aspx file is called the content file as it has the visual content of the page. Check whether the connection is proper or not; for example, whether the connection string provided is correct. Check it by using a try/catch block. Also check whether you uploaded your SQL Server database or not.
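To make the last answer concrete, here is a minimal C# sketch of opening a connection and reading rows inside a try/catch; the connection string and table name are placeholders, not values taken from the question:

using System;
using System.Data.SqlClient;

class ConnectionCheck
{
    static void Main()
    {
        // Placeholder connection string; replace with your own.
        string connectionString = "Server=localhost;Database=MyDb;User Id=myUser;Password=myPassword;";

        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open(); // throws if the server or connection string is wrong

                using (SqlCommand command = new SqlCommand("SELECT TOP 10 * FROM Users", connection))
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader[0]);
                    }
                }
            }
        }
        catch (SqlException ex)
        {
            // Catch the specific exception type rather than catch (Exception),
            // as discussed in the earlier question.
            Console.WriteLine("Database error: " + ex.Message);
        }
    }
}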
http://www.geekinterview.com/Interview-Questions/Microsoft/DotNet
CC-MAIN-2014-15
refinedweb
1,286
60.92
Summary As Joe Nuxoll of JBuilder and JavaPosse fame will tell you (given the slightest provocation), one place where the Java designers completely dropped the ball is in Java's component model. This becomes especially clear when comparing it with a system like Flex which has full language support for components. Not only does Flex have built-in support for properties and events, it includes a higher-level abstraction language (MXML) for laying out and configuring components, which greatly simplifies the development process. You can even create new components directly in MXML, although that's more commonly done using ActionScript. This article shows how to create components in Flex and how clean the resulting applications can be that use those components. You'll see that ActionScript is quite similar to Java in many ways, but I hope you'll also see that ActionScript includes time-saving features that don't appear in Java. Although you can write all your Flex code by hand and compile it using mxmlc (the free command-line compiler), it's much easier to use FlexBuilder which is built on top of Eclipse and has command completion, context help, built-in debugging and more. Some of the examples in this article use AIR, a beta feature that allows you to create desktop applications that access files and other aspects of your local machine (rather than being limited to the web sandbox). You can download a beta of FlexBuilder including AIR support here. (These examples can also be built using the command-line compiler). The easiest way to create a component is using MXML. First, create a new application: from the FlexBuilder menu, choose File | New | Flex Project, and fill out the wizard. Then, from the FlexBuilder menu, choose File | New | MXML Component; this produces a little wizard that guides you through the process, including the choice of a base component to inherit from. I chose "Button" and called my new component "RedButton." The result is a file called RedButton.mxml (the name of the file determines the name of the component) containing the following: <?xml version="1.0" encoding="utf-8"?> <mx:Button xmlns: </mx:Button> This new component is based on Button and can be used in an application like this: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <local:RedButton </mx:Application> All the original capabilities of a button (including the ability to set a label, used here) are preserved. We can set the background color of the button to make the name of the component honest, add a tooltip and a default click action: <?xml version="1.0" encoding="utf-8"?> <mx:Button xmlns: <mx:Script> <![CDATA[ private function clicked():void { label = "Quite Red"; setStyle("fillColors", ['red', 'red']); } ]]> </mx:Script> </mx:Button> The clicked() function is defined inside a Script tag, within a CDATA block (inserted automatically by FlexBuilder) because you're in XML. Note that the function definition includes a return type, void -- optional static typing is an Actionscript feature that allows FlexBuilder to do a better job of context help and command completion. You can continue to add more sophistication to components this way, but MXML components are best used for simple things. More complex MXML components can become tedious and you're better off creating them directly in ActionScript (the compiler turns MXML into ActionScript anyway). FlexBuilder also has a wizard to help you create ActionScript components, found by selecting File | New | ActionScript Class. 
Because all classes must be in packages (like Java), you have the option of selecting a package name (if you don't, you get the default unnamed package). The wizard also allows you to select a superclass. In the following example I've told the wizard to automatically generate the framework for the constructor: package SimpleComponents { import mx.controls.Button; public class ASRedButton extends Button { public function ASRedButton() { super(); } } } The import statement was automatically created by FlexBuilder, and the file name is ASRedButton.as. Although the component is created in ActionScript, it is available as MXML: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <SimpleComponents:ASRedButton </mx:Application> When I began typing ASRedButton in FlexBuilder, it performed completion on the name, then added the XML namespace before the tag, and also included the xmlns attribute in the Application tag. Even if you ultimately plan to use the free command-line compiler, it's worth starting with the free download of FlexBuilder just to get initial help with details like this. With ActionScript it's easier to create more sophisticated components. We can give ASRedButton the same behavior as RedButton: package SimpleComponents { import mx.controls.Button; import flash.events.MouseEvent; public class ASRedButton extends Button { public function ASRedButton() { super(); label = "Reddish"; setStyle("fillColors", ['red', 'blue']); toolTip="A Red Button"; } override protected function clickHandler(event:MouseEvent):void { label = "Quite Red"; setStyle("fillColors", ['red', 'red']); } } } There are a number of ways to respond to events, but in an ActionScript component the easiest way is usually just to override a method. Note that the override keyword ensures that you don't accidentally create a new method (which would not produce the desired result). The import statements were automatically included by FlexBuilder. You can create an attribute that is readable and writable inside MXML by simply creating a public field, but ActionScript also contains get and set keywords to indicate methods for reading and writing a property. Here's a Label subclass where you see both approaches: package SimpleComponents { import mx.controls.Label; import flash.events.Event; public class MyLabel extends Label { public var labelStates:Array = ["Ontological", "Epistemological", "Ideological"]; private var state:uint = 0; // unsigned integer public function set textValue(newValue:String):void { text = newValue; } public function get textValue():String { return text; } public function onClick(event:Event = null):void { text = labelStates[state++ % labelStates.length]; } } } Here you see the use of optional static typing on labelStates, which is an Array. There are a number of ways to create and populate an Array, the simplest being the use of square brackets. Arrays are not typed; they simply hold Objects. However, you are not forced to downcast when pulling items out of an Array if the dynamic typing mechanism can handle the result. labelStates is public, so it can be accessed by setting an MXML attribute. textValue has both a get and set method, making it a property and allowing you to execute code when that property is changed; here I've just assigned it to the text field for demonstration purposes. I've also added the onClick() method which cycles the text field through the elements in the labelStates array. 
onClick() also accepts an event argument, which defaults to null so that it's optional -- this allows it to be used as an event listener, as you'll see shortly. Here's an example that makes use of the component: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <SimpleComponents:MyLabel <mx:Button </mx:Application> Note that I still have access to properties like fontSize in MyLabel. The assignment to textValue results in "Hello, World!" appearing as the label text. In order for click to be assigned to onClick(), the MyLabel instance must have an object identifier, so is given an id of display. The code assigned to click in the Button shows that you aren't required to just reference a function; you can define the code inline (although this can rapidly become messy). Here we add 'Logical' to the list if it isn't already there. Everything in Flash is event-based, and as a programmer you can hook into either the framework-generated events or user-generated events. If you want to insert some code at a particular point in the life cycle of an application, you find the appropriate framework event, such as creationComplete, which happens after the application has been constructed (there are many other framework events which you can look up in the online help system). As a simple example, we can drive MyLabel using a timer: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <SimpleComponents:MyLabel <mx:Script> <![CDATA[ private var timer:Timer = new Timer(1000); private function init():void { timer.addEventListener(TimerEvent.TIMER, display.onClick); timer.start(); } ]]> </mx:Script> </mx:Application> This demonstrates three different events: In general, event handling is exactly this simple; note the succinctness of code when compared with Java event handling. Flex also provides data binding, which allows one component to respond to a change in the data of another component. This was seen as a common activity which would ordinarily require a lot of event-handling code, so the Flex designers decided to do the work for the programmer. Here's a counter that uses data binding: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <mx:Label <mx:Script> <![CDATA[ private var timer:Timer = new Timer(1000); [Bindable] public var count:uint = 0; private function init():void { timer.addEventListener(TimerEvent.TIMER, function(event:Event):void {count++}); timer.start(); } ]]> </mx:Script> </mx:Application> In text="{count}", the curly braces bind the text field to the count var, which has been modified with the [Bindable] annotation so that the compiler wires up all the necessary event generation and handling for you. Every time the timer fires, count changes and the text field in the Label changes in response. This example also shows the use of an inline function (a.k.a. a lambda) as the target of the event listener. An aside: when you create inline code inside MXML like this, the compiler actually creates a class to hold the code, and an instance of that class. Although this is convenient for small blocks of code it can become confusing if you try to make that code too large or complex, in which case you should move it to a separate ActionScript file. I'll finish with a more sophisticated component that will also demonstrate the use of AIR to read local files from the disk. In addition, we'll explore more ActionScript syntax including regular expressions. 
This component was created to support the Flex Jams; in particular, for people who are using Flex for the first time and need step-by-step instructions. Programmed Learning is a fairly old practice; it focuses on learning through exercises and, instead of giving you the answer all at once, it guides you through by giving a series of hints and answers so that you get the sense of discovery and learning at every step. Flex includes the perfect component for programmed learning, the Accordion. This contains a set of sliding "windows" so that you can move each one up and expose more information as you go. Because I'm creating lots of exercises (and undoubtedly making lots of changes), it's far too much work to set them up as static MXML files; instead, I'll inherit a new component from Accordion and teach it to build itself by reading text files (which is where AIR comes in). First, I've created a little language that allows me to describe the exercise, hints, intermediate solutions and steps. Any line that begins with a '.' contains a command. Here's an example, Exercise1.txt in the directory 1. Basics: .ex Install Flexbuilder. Test it by creating "Hello world" in both AIR and Flex. .h1 Go to to get the public beta of FlexBuilder 3 (This includes support for AIR). .h2 Select File|New|Flex Project and follow the instructions to create a new Flex app. The second page allows you to specify either Flex or AIR. .h3 Place a Text component in the application. Type a '<' and begin typing "Text" and you'll see the context help pop up the possibilities. Choose and press Return to insert it, and close the tag with />. .s3 <mx:Text /> .h4 Now set the text property to "Hello, World!". With your cursor inside the tag, begin typing "text" and use command completion. .s4 <mx:Text .h5 You may also want to choose the Design button that you'll see in the application area, then click and drag a Text component to add it to your application Panel. Use Flex Properties to add text "Hello World." If you use design mode, be sure to switch back to Source mode and look at the generated MXML for your program. .final <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns: <mx:Text </mx:WindowedApplication> The .ex tag indicates the exercise description. Anything starting with .h is a hint followed by a hint number. A .s is an intermediate hint solution, which also has a number. The .final tag indicates the completed solution. Although I could certainly have written a Python program to translate this language into something (like XML) that would be easy for the Flex app to consume, the intermediate step would have reduced the interactivity of the solution. 
In the ExercisePresenter component, I use AIR to find and read the text file, and regular expressions to break it up into directives: package ProgrammedLearning { import mx.containers.Accordion; import mx.containers.VBox; import mx.controls.TextArea; import mx.controls.Alert; import flash.filesystem.File; import flash.filesystem.FileStream; import flash.filesystem.FileMode; import flash.net.FileFilter; import flash.events.Event; public class ExercisePresenter extends Accordion { public var dataDirectory:String; public var chapter:String; public var fileName:String; private var file:File = File.documentsDirectory; private var pathsResolved:Boolean = false; // Called when properties are set: protected override function commitProperties():void { super.commitProperties(); // Prevent multiple calls: if(!pathsResolved) { pathsResolved = true; resolvePaths([dataDirectory, chapter, fileName]); } } private function resolvePaths(paths:Array):void { for each(var path:String in paths) { var successfullyResolved:File = file; // Store as far as we've gotten file = file.resolve(path); if(file.exists) continue; else { // Files installed elsewhere var exFilter:FileFilter = new FileFilter("Exercise", "Exercise*.txt"); file = successfullyResolved; file.browseForOpen("File Not Found; Select File to Open", [exFilter]); file.addEventListener(Event.SELECT, parseFile); } return; // Wait for user to choose new path } parseFile(); } private function parseFile(event:Event=null): void { var stream:FileStream = new FileStream(); stream.open(file, FileMode.READ); var str:String = stream.readUTFBytes(stream.bytesAvailable); stream.close(); var entries:Array = []; // Each entry is delimited by a '.' at the start of a line: for each(var s:String in str.split(new RegExp("\\n\\.", "s"))) if(s.length > 0) entries.push(s); for each(s in entries) { // Split on first newline: // (Could do this with a regular expression, too!) var brk:uint = s.indexOf("\n"); var tag:String = s.substring(0, brk); var contents:String = s.substr(brk); // Strip leading newlines: while(contents.charAt(0) == '\n') contents = contents.substr(1); switch(tag.charAt(0)) { case 'e': addStep("Exercise", contents); break; case 'h': addStep("Hint " + tag.substr(1), contents); break; case 's': addStep("Hint solution " + tag.substr(1), contents); break case 'f': addStep("Completed solution", contents); break; default: } } } public function addStep(label:String, contents:String):void { // Create a new "fold" in the Accordion: var vbox:VBox = new VBox(); vbox.percentHeight = vbox.percentWidth = 100; vbox.label = label; var text:TextArea = new TextArea(); text.percentHeight = text.percentWidth = 100; text.text = contents; vbox.addChild(text); addChild(vbox); // Add to self (Accordion object) } } } The dataDirectory, chapter, and fileName fields tell the component where to find the example. Because these are public fields and we need to wait until they are set before trying to use them (otherwise they will contain incorrect data), we override the commitProperties() method which is automatically called by the framework. commitProperties() is typically called more than once, so the standard practice is to use a flag to keep track of whether you have performed your task; in this case we only want to call resolvePaths() once. resolvePaths() iterates through an array of sequential directories and verifies that each one is correct. 
If it gets all the way through the array, it calls parseFile() directly, but if it fails -- which means the path information is incorrect, possibly because the user hasn't installed the files in the expected location -- it allows the user to choose the directories and the file. browseForOpen() opens a file browser window in the native OS; note that the last successfully found directory is kept and used so the user doesn't have to start from scratch. Notice that calling browseForOpen() is almost like starting a separate thread (although Flex and Flash don't support programmer threads -- a good thing, since thread programming is virtually impossible to get right -- the new betas of the Flash VM use threads internally for speed and utilization of multiple cores). Once the user selects the new file, we still want to call parseFile(). This is accomplished by setting up an event listener, which effectively establishes a callback. You see this kind of programming -- passing control, then using an event listener as a callback -- quite a bit in Flex code, especially in network programming. The first few lines of parseFile() show the standard way to open and read a text file, which only works in an AIR application. Note that it looks slightly similar to Java code but is less verbose because the decorator pattern is not (mis)used as it is in Java. Once the file is read, a regular expression breaks it into "entries," each of which includes a dot command and the text that follows it (blank entries are discarded). Note that the regular expression uses similar syntax as Java does, in particular the double backslash when you actually want a single backslash. Discovering how to use ActionScript regular expressions took me a bit of time because the documentation and examples are lacking (or at least, I couldn't find them); you might have more luck using the regular expression section in the Strings chapter of Thinking in Java. Each entry is broken into its dot command and body text, then a switch statement adds each step as a window in the Accordion component by calling addStep(). To test the component, we can create and configure it in MXML: <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns: <ProgrammedLearning:ExercisePresenter </mx:WindowedApplication> However, this isn't the whole solution. The enclosing application (which I will eventually write) will construct a main page consisting of all the exercises, each of which you can select to bring up the programmed learning accordion. To accomplish this, I must be able to manipulate the component at runtime -- which I can, through both my own design choices and the structure of Flex: package ProgrammedLearning { public class DynamicTest extends ExercisePresenter { function DynamicTest() { percentHeight = percentWidth = 100; dataDirectory = "FlexJamProgrammedLearning"; chapter = "1. Basics"; fileName = "Exercise1.txt"; } } } One thing I'd really like to see in a future version of Flex is better string manipulation tools. A perfect model for this is the Python string library, which is tried and tested, and you even have the source code so creating an ActionScript string library is a matter of translation. Have an opinion? Readers have already posted 29 comments about this weblog entry. Why not add yours? If you'd like to be notified whenever Bruce Eckel adds a new entry to his weblog, subscribe to his RSS feed.
http://www.artima.com/weblogs/viewpost.jsp?thread=212818
CC-MAIN-2015-18
refinedweb
3,186
54.02
Associate a buffer with a stream

Synopsis:
#include <stdio.h>
int setvbuf( FILE *fp, char *buf, int mode, size_t size );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description: The setvbuf() function associates a buffer with the stream designated by fp. If you want to call setvbuf(), you must call it after opening the stream, but before doing any reading, writing, or seeking. If buf isn't NULL, the buffer it points to is used instead of an automatically allocated buffer.

Examples:
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    char *buf;
    FILE *fp;

    fp = fopen( "file", "r" );
    buf = malloc( 1024 );
    setvbuf( fp, buf, _IOFBF, 1024 );

    /* work with fp */
    ...

    fclose( fp );

    /* This is OUR buffer, so we have
     * to free it. Do that AFTER
     * you've closed the file. */
    free( buf );

    return EXIT_SUCCESS;
}

Classification: ANSI, POSIX 1003.1

See also: fopen(), setbuf()
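The mode argument takes one of the standard ANSI C buffering modes: _IOFBF (fully buffered), _IOLBF (line buffered) or _IONBF (unbuffered). As a further illustrative sketch (not part of the original example), you can also pass NULL for buf and let the library allocate the buffer itself:

#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    FILE *log_fp = fopen( "log.txt", "w" );

    if( log_fp == NULL ) {
        return EXIT_FAILURE;
    }

    /* Line-buffered, library-allocated buffer: every '\n' flushes the stream. */
    setvbuf( log_fp, NULL, _IOLBF, 1024 );

    /* Unbuffered stdout is handy while tracing a crash. */
    setvbuf( stdout, NULL, _IONBF, 0 );

    fprintf( log_fp, "started\n" );  /* flushed right away because of _IOLBF */
    printf( "tracing...\n" );        /* written straight through, no buffering */

    fclose( log_fp );
    return EXIT_SUCCESS;
}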
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/s/setvbuf.html
CC-MAIN-2021-10
refinedweb
148
76.42
Declarative 'Canvas' Animation With React and Kanva In this post, we'll learn how to bring React, Canvas, and Kanva together to create declarative animations. Read on for more! Join the DZone community and get the full member experience.Join For Free Today we're looking at declarative animation that renders on <canvas>: React for declarativeness, Konva as a canvas abstraction layer, and react-konva to make them work together. In theory, this combination has better performance than SVG+React but worse performance than raw canvas because of the additional abstraction layers. We have canvas as the rendering layer, then Konva gives us basic shapes and interactions, react-konva turns those into React components, and our own React code makes it work together. If that sounds complicated… it probably is. I barely know how to use it, and I have no idea how it actually works. We're building a marble simulation. You can pick up a marble and throw it, and it bounces around until it stops. I wanted to add collision detection as well, but N-body collisions are hard. Next time! You can see the code on Github and play with marbles on the live demo. The live demo looks better than the gif, I promise. We have two components: Marble, which renders each marble and deals with drag events. Collisions, which renders all the marbles and deals with the game loop logic. Yes, the game loop logic should totally be a Redux or MobX store instead of shoved into a component. This is fine. Marble The Marble component uses react-konva to render a <Circle>and listen for a dragend event. That's how you "throw." You can think of react-konva as a very thin abstraction layer on Konva. I looked at the source code once, and it just uses a bit of magic to translate all of Konva's classes into React components. Props are passed through unchanged as Konva attributes. That means you don't have to think about using react-konva. Focus on the Konva docs, it's all the same. We're rendering a <Circle> element at position (x, y) and giving it a radius of 15. Very similar to SVG, right? Here's where it gets crazy. To get the marble look, we use fillPattern* props and use a sprite for the background. We reposition and scale it to get the sprite to fit and make marbles look different. For the shine effect, we use shadow* props. Shadows get a color that matches each marble (I used Photoshop), some blur, and an opacity. This gives each marble a glow that makes the marbles look shiny. They're still shadows, though, so a bunch of things are wrong. Especially when the marbles get close together, you can see that the shadows look darker when combined. Real shines would look brighter. To get draggability, we turn it on. Konva handles the rest for us. onDragEnd we call the onShoot callback with the marble's new position and movement vector. This part is janky. I'll explain why later. Collisions The <Collisions> component is called collisions because this was meant to be a simulation of inelastic N-body collisions. High school physics stuff. But that's hard to do, so you get just the bouncing off of walls. This component has three major parts: - Calculating the initial positions to make a triangle. - The game loop that drives animation. - Declaratively rendering the marbles. Initial Positions and Sprite Loading Konva takes sprites as ES6 Image objects. We load one up, wait for the onload event to fire, add it to state, which triggers a re-render, and start the game loop timer. 
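The snippet itself isn't reproduced above, so the following is only a rough sketch of that sprite-loading step; the asset path, the use of componentDidMount and the gameLoop name are assumptions rather than the author's original code:

// Hypothetical sketch of the Collisions component's sprite loading.
import React, { Component } from 'react';
import { timer } from 'd3-timer';

class Collisions extends Component {
  constructor() {
    super();
    this.state = { sprite: null, marbles: [] };
  }

  componentDidMount() {
    const sprite = new Image();
    sprite.src = 'marble-sprite.png'; // assumed asset path

    sprite.onload = () => {
      // Putting the loaded Image into state triggers a re-render,
      // so the Marble circles can use it as their fill pattern...
      this.setState({ sprite });

      // ...and the d3.timer-driven game loop can start.
      this.timer = timer(() => this.gameLoop());
    };
  }

  // gameLoop(), the initial triangle of positions and render() are described next.
}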
If you're not familiar with the Image object, it's basically an in-memory representation of the image bytestream. Unless you're doing something very particular, you don't need to know the details. It loads an image into memory. This took me embarrassingly long to code. It's one of those interview question things: Render stuff in a triangle. Then you fumble for an hour, and they're like "LoL you're an idiot, pass." That was going through my mind the entire time. How the hell am I struggling this hard to put marbles in a triangle? Here's how it works: - Loop from 3 to 0 to create the rows. - In each row, go from the leftedge to the rightedge with a step of "marble size." - Add a position to the array. You get the left edge is y marble halves to the left, and the right is y marble halves to the right. This nested loop approach returns a nested array so you flatten it with a .reduce. Oh, and those range() functions are actually d3.range. I got them with import { range } from 'd3-array'. Game Loop Our game loop is a function that d3.timer calls on every requestAnimationFrame. It goes through our array of marbles, updates their positions, and triggers a re-render. Like this: See? Loop through marbles and update their positions by adding the speed vector to the position. We invert the speed vector when a marble is about to hit a wall in the next step. Nested ternary expressions are hard to read. I should refactor that. If x+vx is smaller than the left edge, invert vx. Otherwise if x+vx is bigger than the right edge, invert vx. Otherwise, leave it alone. The shoot() function is that dragend callback that <Marble> calls. It updates the particular marble with the new position and the new speed vector. Rendering After all that logic, rendering is the easy part. We loop through the marbles and declaratively add them to the Stage. The Stage is what Konva calls the canvas element. I don't fully understand why, but I'm sure there's a reason. See? Loop through marbles, put down Marble components. All this inside a Stage, which is the canvas, and Layer, which I think makes more sense when you have more than one, and Group which is the same concept as SVG's <g> element. It helps you think of groups of shapes as a single thing. A Stage must always have at least one Layer. So that part is important albeit seemingly useless. Why Is Your Demo so Janky, Swizec? Did you guess it yet? Why am I having so much trouble throwing marbles at 0:14 in the video? It's the game loop and Konva fighting each other. The game loop re-renders all our marbles every 16 milliseconds. Konva isn't telling React that they've moved, so the position is reset. That means you have to complete your throw within 16ms or it won't work. Now, while this looks really bad, it's not a fundamental limitation of the React-Konva-Canvas stack. Just an extra step to take care of before I tackle the N-body collisions. Gotta add a dragmove listener to Marble and make sure it updates the React state. Should be easy. Published at DZone with permission of Swizec Teller, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/declarative-canvas-animation-with-react-and-kanva
CC-MAIN-2022-27
refinedweb
1,204
76.62
ks_rememberShading.mel by Katrin Schmid 2007, 1/7 Updated to handle export of used shaders in 1 click 12/1 Updated to handle namespaces. This is useful if you use the script to reference already-shaded files: you can apply lambert1 in the referenced scenes so you don't get referenced shaders, and you don't have to re-apply shaders manually in the master file. Create sets in the file to be referenced, apply lambert1 to all geometry, then export the shaders and delete them from the scene. Import the shaders into the master scene and reference the scene file. Use the script with "ignore namespaces" checked to re-apply shaders in the master scene to the referenced geometry. Features: Tool that lets you store your shading information (button "Re/write shading information") for a scene by creating object sets (on a per-face basis) and adding tags to shaders. This lets you restore your stored beauty shading in 1 click (button "Re-apply beauty shading"). Useful if you want to temporarily apply different shaders to the scene (e.g. for lighting or diagnostic purposes) or define a "default shading" for the scene. "Assign lambert1" assigns the default shader. "Assign grey lambert" just assigns a new grey lambert shader. If you select the checkbox, it works on selected objects only. Note that you have to rebuild sets to save changes in shader assignment. Install: Put ks_rememberShading.mel in your script directory. Start the GUI by typing "ks_rememberShading" in the script editor. If you use this for a project, please drop me a line.
http://www.highend3d.com/maya/downloads/mel_scripts/rendering/misc/ks-rememberShading-4966.html
crawl-002
refinedweb
254
66.03
Y Combinator Collaboration: Deploying SailsJS to Azure Web Apps by Felix Rieseberg, Partner Catalyst Team This case study describes the first open-source collaboration between TED and Y Combinator, enabling applications developed with SailsJS to be deployed directly to Azure. Customer Problem SailsJS is an open-source real-time MVC framework for Node.js. It used widely by Node.js developers to develop enterprise-grade server solutions. In the Node world, SailsJS is to Node.js what Ruby on Rails is to Ruby. Its ensemble of small modules work together to provide simplicity, maintainability, and structural conventions to Node.js apps. The company behind SailsJS, Treeline.io, was a part of Y Combinator’s Summer 2015 class. During the first meetings between Treeline.io and TED, it became obvious that there’s great interest in SailsJS and equally great interest in Azure by companies using SailsJS, yet a reliable deployment path was not defined. The core of the issue was simple: How can Microsoft and Treeline.io best enable customers to deploy SailsJS apps as Azure Web Apps? Overview of the Solution We began our investigation into a solution by looking at other cloud providers – many offered tutorials, but most of them seemed cumbersome. SailsJS already comes with a command line interface and we decided that the best user experience would explain itself: If the command [sails run]starts SailsJS, wouldn’t the best deployment experience be a single command like [sails deploy azure]? The solution provides exactly that: from within the SailsJS CLI, users can deploy any Sails app within a few seconds. The solution consists of three Node Modules – a Node Machine implementing Azure APIs, an npm module extending the SailsJS CLI with a [sails deploy] command, and an npm module enabling [sails deploy] to directly deploy to Azure. Implementation SailsJS makes heavy use of Node Machines, an open standard for JavaScript functions. They are essentially meta-data wrappers for APIs, allowing other applications to consume the packages in a standardized way. We started by building a Node Machine for Azure, which is capable of setting up the local Azure account, creating and updating websites and uploading files. In total, the machine pack implements 13 APIs: · Checking for an active Azure subscription · Listing azure subscriptions · Registering an azure subscription · Setting a new active azure subscription · Checking if a website already exists for a given account · Setting deployment credentials for a website · Uploading files to a website · Uploading webjobs to a website · Fetching meta data information for a deployed webjob · Fetching stdout and stderr for a webjob · Listing available VM images for an account The machine pack is available on GitHub, node-machine.org and NPM independently of this solution, as it might be useful in other scenarios. For the purpose of this solution, we consumed the Node Machine to implement a deployment strategy for SailsJS. Two Node modules had to be created to enable the scenario: The SailsJS CLI needed to be extended with a [sails deploy] command. This proved to be surprisingly easy and only took about 50 lines of code. Then, the [sails deploy]command needed a descriptive strategy for deploying apps to Azure, which we encapsulated in the Node Module sails-deploy-azure. The module sails-deploy-azure in detail implements the following flow. (Note: The code is too long to include here, but each section is followed by links to relevant code locations.) 
1) The command sails deploy accepts a website name and deployment username/password as optional parameters. If the command is run without parameters, a new website has to be created. We start by checking the local environment for an Azure account. Users who have the Node-powered Azure-CLI installed won’t have to do anything, while users without it are asked to authenticate the local environment (they won’t have to install the CLI though, since we wrap and abstract the CLI as a dependency). If the command is run with parameters, we don’t have to create the Azure Website, and skip to step #3. [Code: L22-L51 in sails-deploy-azure/index.js, consuming machinepack-azure 2) Once we’re authenticated, we check if a website for the current SailsJS app already exists in the current account by comparing namespaces. If so, we could just deploy to it – if not, we create one. [Code: L76-L152 in sails-deploy-azure/index.js, consuming machinepack-azure] 3) We’re ready to ship things up, so we zip the local folder, excluding unnecessary files and folders. We could have used Git to deploy, but Treeline.io preferred to not take a Git dependency. We use a Node-native ZIP implementation to compress the whole app into a single file. Next, we upload it to a temporary folder on the web app’s virtual machine. [Code: L169-187 and L263-L340 in sails-deploy-azure/index.js, consuming machinepack-azure] 4) Now that the package is uploaded to Azure, we finish execution of local commands and commence remote execution. We upload a script containing commands to clean the site, unzip the package and run [npm install] to Azure, where it is automatically assigned a REST endpoint with API – allowing us to call the script, fetch its status and stderr/stdout output. Once the script is uploaded, we simply call its API, pipe through the output to the local machine and watch as Azure is setting up the SailsJS website. [Code: L188-246 in sails-deploy-azure/index.js, consuming machinepack-azure; sailsdeploy.ps1] Running the Code To try this out, you’ll need Node.js and NPM installed on your local machine. To try the actual deployment, you can either go the brave path of also setting up a SailsJS app or use a provided test app (). If you want to go with a provided test app, skip to step 5. 1) Install the latest version of Sails by running the following command. At the time of writing this tutorial, commit 8747a77273c949455a8a89d79abfd36383d10e73 was used. npm install -g balderdashy/sails 2) Create a new folder for your brand new SailsJS app and open up PowerShell/Terminal in said folder. Run the following command to scaffold the app: sails new . 3) Update the configuration to use the correct port by opening up config/env/production.js and updating it to look like this: module.exports = { port: process.env.port, } 4) It’s a good idea to disable Grunt for your website – it’s extremely useful while in development mode, but shouldn’t be part of a deployment. This step is not specific to Azure, but a good asset management practice for SailsJS. Open up package.json and remove everything that begins with “grunt-” except for Grunt itself. 5) Log into the Azure Portal and create a new web app. Ideally, the website should have some power to run the Sails installation process without trouble. To ensure enough resources, create the website either in a “Basic” or a “Standard” plan. Make also sure to set deployment credentials for your website. 6) Go back to your Sails app and create a file called “.sailsrc” in its root folder. 
Fill it with the following JSON. Make sure to set the sitename, deployment username, and deployment password to the right setting. { “commands”: { “deploy”: { “module”: “sails-deploy-azure” } }, “azure”: { “sitename”: “YOUR_SITENAME” “username”: “YOUR_USER”, “password”: “YOUR_PASSWORD” } } 7) You are ready for deployment! Run the following command from your app’s root folder to let your local Sails app deploy to Azure: node ./node_modules/sails/bin/sails.js deploy Challenges Azure does not allow non-organizational accounts to set deployment credentials via command line and instead forces users to set those credentials via website. For users with organizational accounts, the user experience is therefore better, since we can automate the setting of credentials – whereas our only option for ‘pay as you go’ customers is to point to the Azure Management Portal. Opportunities for Reuse The same code can be reused in large portions to enable deployments of similar frameworks – it is obvious that the implemented flow is not specific to SailsJS. Practically speaking, the ZIP package being deployed to Azure could be filled with any framework or application. The script being uploaded to Azure to finish the installation is equally agnostic, but could also be extended to include more framework-specific commands. The Node Machine can be reused by any Node application implementing Node machines. Since SailsJS itself as well as all three modules developed during this customer engagement are open source and licensed with the MIT license, other developers are free to reuse the created code. Felix Rieseberg is an Open Source Engineer in Microsoft’s Partner Catalyst team where he spends his time coding on the cutting edge of technology, trying to improve developer’s lives by releasing quality code on GitHub and touring the world presenting the lessons we’ve learned. Read his blog at, follow him on twitter @felixrieseberg, and check out his work on github at.
https://blogs.msdn.microsoft.com/partnercatalystteam/2015/07/16/y-combinator-collaboration-deploying-sailsjs-to-azure-web-apps/
CC-MAIN-2017-04
refinedweb
1,501
54.02
Dmitry Baryshkov <dbaryshkov@gmail.com> wrote:
> > > +#ifdef CONFIG_DEBUG_FS
> > > + struct dentry *dir;
> > > + struct dentry *info;
> > > +#endif
> >
> > Can't you hide this in the code, say by wrappering the
> > struct with something else when it is registered?
>
> It is allocated dynamically by drivers. I can move this to
> struct clk_private to specify that it's private, but it should be
> visible outside

Actually, I don't think it _should_ be private. Low-level clock drivers
might want to provide debugfs nodes on their own, and those nodes
naturally belong in the same directory as the clklib ones. So the
debugfs root node must be exposed somehow.

You can get rid of the "info" field if you apply this patch:
http://lkml.org/lkml/2008/7/4/56
CC-MAIN-2014-49
refinedweb
113
64.61
A store has three functions that you're going to be using in this course: - getState - fetches the current state of a store - dispatch - fires off a new action - subscribe - fires a callback every time a new action has been run through the reducers We'll have the App component subscribe to the state using one of React's life-cycle hooks called componentWillMount(). This function is called when the component is going to be rendered, so it's a good place to put initialization logic. We subscribe to the Redux store and call React's setState() function every time the store changes so we always get the newly reduced state. React calls the render() function every time the component's state changes, which "renders" the component. src/index.js class App extends React.Component { constructor() { super(); this.state = {}; } componentWillMount() { store.subscribe(() => this.setState(store.getState())); } render() { [...] Let's tweak the App component to display a checkbox whose state depends on the Redux store. src/index.js render() { return ( <div> <h1>To-dos</h1> <div> Learn Redux <input type="checkbox" checked={!!this.state.checked} /> </div> { this.state.checked ? (<h2>Done!</h2>) : null } </div> ); } What happens when you click on the checkbox? Absolutely nothing! What gives? One of the key ideas of React is that what gets displayed is a pure function of the component's state. In other words, since your state doesn't change, React will always render the checkbox as checked.
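The fix, sketched here with an assumed action type rather than one defined earlier in the course, is to dispatch an action when the checkbox changes, so the store (and therefore the component state) actually updates:

src/index.js
render() {
  return (
    <div>
      <h1>To-dos</h1>
      <div>
        Learn Redux
        <input
          type="checkbox"
          checked={!!this.state.checked}
          onChange={() => store.dispatch({ type: 'TOGGLE_CHECKED' })}
        />
      </div>
      { this.state.checked ? (<h2>Done!</h2>) : null }
    </div>
  );
}

The reducer would then flip checked whenever it sees that action, and the subscribe callback set up in componentWillMount() pushes the new store state back into the component, which re-renders the checkbox.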
https://thinkster.io/tutorials/learn-redux/displaying-the-state
CC-MAIN-2020-05
refinedweb
244
63.49
New Features of ZK 5.0 Timothy Clare, Technology Evangelist, Potix Corporation January 26th, 2010 ZK 5)[2].. EventQueue supports asynchronous listener With the EventQueue now supporting asynchronous listeners processing of listeners can be easily dealt with outside of the main thread. This means that if a listener takes a long time to execute then the user doesn』t need to wait for this operation to conclude before interacting with the system. Change to LGPL License To extend our reach beyond what a GPL-licensed framework can achieve, ZK 5 CE[3]. ZK 5 CE Update We are continuing our commitment to the Community and the Community Edition of ZK. To this end for the final release of ZK 5 we have added both the BorderLayout and Client-polling Server Push to the edition. The BorderLayout is a key layout component providing developers with the ability to place components in north, south, center, west and east areas. This affords developers greater control over the placement of components enabling the rapid creation of complex, enterprise-grade user interfaces. Client-polling is a famous method of implementing Server Push and is now included in ZK CE! Server Push is a communication technology which enables the server to send data to the client without a specific request client-side. This is extremely useful when building web applications such as web based instant messenger applications and stock monitoring systems.. For more information please refer to Introduction of FlashChart. Colorbox In ZK 5 foreign commands When a component is invisible or disabled, under normal circumstances you would not want to accept commands sent from the server. With ZK 5 this is now possible, for more information on how to implement this functionality please take a look at ZK Developer's Reference: Security Tips. at ZK Developer's Reference: Theme Providers. Reusing the Desktop In ZK 5 it is now possible to reuse a Desktop rather than creating a new one from scratch. This gives a performance benefit as the Desktop does not have to be recreated and disposed. For more information please take a look at ZK Developer's Reference: Performance Tips. demonstation please click> Enhanced Datebox The Datebox has nowbeen Controls & Molds! New rounded mold ZK 5 introduces a new mold for ZK 5 named 「rounded.」 The mold provides elegant and rounded controls as shown below. Functional Improvements Introduced variable resolver for CDI (Java EE 6) managed beans Having passed its final release stage on December 10th 2009, Weld offers dependency injection (CDI) as a standard part of the Java EE 6 platform. This is good news to developers everywhere! The ZK team has worked fast and has integrated CDI (Context and Dependency Injection) with ZK 5. For more information on the implementation please click here. Sizeable Panel The panel can now be resized so long as the attribute sizable is set to true. The example ZUL below shows a panel which can be resized and the image displays a panel which is in the process of being resized.[5] compatability: Before ZK 5 you could not access the textbox "username" from "MyComposer", however now you can. To do this you need to specify: public class MyComposer extends GenericAutowireComposer { Textbox id_of_include$control_id; ... } </syntax> In the case of our example this would be: <source lang="java"> public class MyComposer extends GenericAutowireComposer { Textbox i$username; ... } Readonly combobox supports keystroke selection of items The combobox now supports selection of items using keystrokes. 
Refer to: Combobox Enhancement.

Canvas support is provided by the zk.canvas package. Thus, you have to specify depends="zk.canvas" in your WPD file, or invoke zk.load(String, Function) explicitly. For example,

<package name="foo" language="xul/html" depends="zk.canvas">
...

or

zk.load('zk.canvas', function () {...});

For more details on dynamically loading and specifying packages in ZK 5 you can refer to this link.

Script directive now supports the if and unless attributes
The script directive now supports the if and unless attributes, giving the user the ability to control the inclusion or exclusion of a script depending upon a condition. For example, a developer could include a script only if a certain variable's value was 5. The code snippet below shows an example:

<?script src="myscript.js" if="myValue==5" ?>

Configuration

Control whether to allow content to be indexed by search engines
With ZK 5 [6], you can control whether the generated HTML page can be indexed by search engines (i.e., whether it is crawlable). By default, it is not crawlable.

- Download ZK 5 here
- Take a look at ZK 5's release notes here
- View the ZK 5: Upgrade Notes and a real case: Upgrading to ZK 5
https://www.zkoss.org/wiki/Small_Talks/2010/January/New_Features_of_ZK_5.0
CC-MAIN-2022-21
refinedweb
779
54.32
Applies to: CX51 C Compiler

QUESTION
I'm working with the Philips MX and my code generates the following warning when using a typedef:

MAIN.C(5): warning C258: 'a': mspace on parameter ignored

No warning is generated when using #define.

typedef bit bool;
//#define bool bit   // a #define doesn't generate the warning

bool test ( bool a )
{
  return (!a);
}

void main ( void )
{
  bool ida, retorno;
  retorno = test ( ida );
  while (1);
}

The output from this build is as follows:

Build target 'Target 1'
compiling main.c...
MAIN.C(5): warning C258: 'a': mspace on parameter ignored
linking...
Program Size: data=9.3 xdata=0 const=0 code=30
creating hex file from "test"...
"test" - 0 Error(s), 1 Warning(s).

Why is the compiler generating this warning only for the typedef? Am I missing something?

ANSWER
This is a problem detected under C51 V7.01. It is solved in the latest update. You may download the latest updates from the Keil Website. Request the files attached to this knowledgebase article.

Article last edited on: 2006-01-31 18:18:18
http://infocenter.arm.com/help/topic/com.arm.doc.faqs/ka10591.html
CC-MAIN-2017-39
refinedweb
195
68.87
A virtual method is a method that is declared as virtual in a base class. The defining characteristic of a virtual method is that it can be redefined in one or more derived classes. Thus, each derived class can have its own version of a virtual method.

Virtual methods are interesting because of what happens when one is called through a base class reference. In this situation, C# determines which version of the method to call based upon the type of the object referred to by the reference—and this determination is made at runtime. Thus, when different objects are referred to, different versions of the virtual method are executed. In other words, it is the type of the object being referred to (not the type of the reference) that determines which version of the virtual method will be executed. Therefore, if a base class contains a virtual method and classes are derived from that base class, then when different types of objects are referred to through a base class reference, different versions of the virtual method can be executed.

You declare a method as virtual inside a base class by preceding its declaration with the keyword virtual. When a virtual method is redefined by a derived class, the override modifier is used. Thus, the process of redefining a virtual method inside a derived class is called method overriding. When overriding a method, the name, return type, and signature of the overriding method must be the same as the virtual method that is being overridden. Also, a virtual method cannot be specified as static or abstract.

Method overriding forms the basis for one of C#'s most powerful concepts: dynamic method dispatch. Dynamic method dispatch is the mechanism by which a call to an overridden method is resolved at runtime, rather than compile time. Dynamic method dispatch is important because this is how C# implements runtime polymorphism.

Here is an example that illustrates virtual methods and overriding:

Example

using System;

namespace ConsoleApplication1
{
    using System;

    class Base
    {
        // Create virtual method in the base class.
        public virtual void Who()
        {
            Console.WriteLine("Who() in Base");
        }
    }

    class Derived1 : Base
    {
        // Override Who() in a derived class.
        public override void Who()
        {
            Console.WriteLine("Who() in Derived1");
        }
    }

    class Derived2 : Base
    {
        // Override Who() again in another derived class.
        public override void Who()
        {
            Console.WriteLine("Who() in Derived2");
        }
    }

    class OverrideDemo
    {
        static void Main()
        {
            Base baseOb = new Base();
            Derived1 dOb1 = new Derived1();
            Derived2 dOb2 = new Derived2();

            Base baseRef; // a base class reference

            baseRef = baseOb;
            baseRef.Who();

            baseRef = dOb1;
            baseRef.Who();

            baseRef = dOb2;
            baseRef.Who();
        }
    }
}

This program creates a base class called Base and two derived classes, called Derived1 and Derived2. Base declares a method called Who( ), and the derived classes override it. Inside the Main( ) method, objects of type Base, Derived1, and Derived2 are declared. Also, a reference of type Base, called baseRef, is declared. The program then assigns a reference to each type of object to baseRef and uses that reference to call Who( ). As the output shows, the version of Who( ) executed is determined by the type of object being referred to at the time of the call, not by the class type of baseRef.
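For reference, the output produced by this program is:

Who() in Base
Who() in Derived1
Who() in Derived2

Each line corresponds to one of the three calls made through baseRef, confirming that the version executed follows the type of the object currently being referenced, not the type of the reference itself.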
http://www.loopandbreak.com/virtual-methods-and-overriding/
CC-MAIN-2021-17
refinedweb
527
63.09
On Jan 13, 2008 6:49 PM, apfelmus <apfelmus at quantentunnel.de> wrote:
> K. Claessen. Poor man's concurrency monad.
>
> P. Li, S. Zdancewic. Combining events and threads for scalable
> network services.

Two great papers! Thanks for pointing them out!

> >?

Leaking Prompt(..) in the export list to the GUI code seems wrong to me, I
like 'runPromptM' because it hides the Prompt(..) data type from the user
[module]. But after some rest I think I found a nice corresponding function:

> contPromptM :: Monad m => (r -> m ())
>             -> (forall a. p a -> (a -> m ()) -> m ())
>             -> Prompt p r -> m ()
> contPromptM done _ (PromptDone r) = done r
> contPromptM done cont (Prompt p c) = cont p (contPromptM done cont . c)

This way all the Prompts get hidden so that 'lastAttempt' may be coded as

> lastAttempt' :: AttemptCode
> lastAttempt' showInfo entry button = guessGameNew >>= contPromptM done cont
>     where
>       cont :: forall a. GuessP a -> (a -> IO ()) -> IO ()  -- signature needed
>       cont (Print s) c = showInfo s >> c ()
>       cont Guess c = do
>           mfix $ \cid ->
>               onClicked button $ do {signalDisconnect cid;
>                                      guess <- entryGetText entry;
>                                      c (read guess)}
>           return ()
>       done attempts = showInfo $ "You took " ++ show attempts ++ " attempts."

Nice and clean, and much better to read as well.

Now the only question unanswered for me is if there are any relations between

(forall a. p a -> (a -> m ()) -> m ())    -- from contPromptM

and

(ContT r m a -> (a -> m r) -> m r)        -- from runContT

besides the fact that both carry a continuation. I have a feeling that I am
missing something clever here.

Cheers,

--
Felipe.
http://www.haskell.org/pipermail/haskell-cafe/2008-January/038127.html
CC-MAIN-2014-35
refinedweb
249
72.76
The chaos game is played as follows. Pick a starting point at random. Then at each subsequent step, pick a triangle vertex at random and move half way from the current position to that vertex. The result looks like a fractal called the Sierpinski triangle or Sierpinski gasket. Here's an example:

If the random number generation is biased, the resulting triangle will show it. In the image below, the lower left corner was chosen with probability 1/2, the top with probability 1/3, and the right corner with probability 1/6.

Update: Here's an animated version that lets you watch the process in action.

Here's Python code to play the chaos game yourself.

from scipy import sqrt, zeros
import matplotlib.pyplot as plt
from random import random, randint

def midpoint(p, q):
    return (0.5*(p[0] + q[0]), 0.5*(p[1] + q[1]))

# Three corners of an equilateral triangle
corner = [(0, 0), (0.5, sqrt(3)/2), (1, 0)]

N = 1000
x = zeros(N)
y = zeros(N)
x[0] = random()
y[0] = random()

for i in range(1, N):
    k = randint(0, 2)  # random triangle vertex
    x[i], y[i] = midpoint( corner[k], (x[i-1], y[i-1]) )

plt.scatter(x, y)
plt.show()

Update 2: Peter Norvig posted some Python code with variations on the game presented here, generalizing a triangle to other shapes. If you try the analogous procedure with a square, you simply get a square filled with random dots. However, you can get what you might expect, the square analog of the Sierpinski triangle, the product of a Cantor set with itself, if you make a couple modifications. First, pick a side at random, not a corner. Second, move 1/3 of the way toward the chosen side, not 1/2 way. Here's what I got with these changes:

Source: Chaos and Fractals

6 thoughts on "The chaos game and the Sierpinski triangle"

Here is my golfed-down JS version:

What fun! I played another few moves in the game:

As a complete ignoramus, I get a kick out of seeing really smart guys just having fun with math. Thanks!

I should have said this explicitly, but I got the game from the book referenced at the bottom of the post. I just illustrated it with the code and images.

You can also play the chaos game with the Julia set formula, z -> z² + c. Inverting this formula gives you √(z − c). There are two square roots in the complex plane. Pick one at random.
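A couple of follow-up sketches for the variations described above (these are not from the original post). For the biased triangle, the only change needed in the code is the vertex choice: replace k = randint(0, 2) with a weighted draw such as k = random.choices(range(3), weights=[3, 2, 1])[0], which matches probabilities 1/2, 1/3 and 1/6 for the lower-left, top and right corners. For the square variant from Update 2, the sketch below interprets "move toward the chosen side" as moving toward the nearest point on that side (only the coordinate perpendicular to the side changes); that interpretation, the helper name step, and the adjustable move fraction are my own choices, so treat this as something to experiment with rather than a faithful reproduction of the original figure.

import random
import matplotlib.pyplot as plt

# Chaos game on the unit square: pick a side at random and move part of the
# way toward it. "Toward the side" is taken to mean toward the nearest point
# on that side, so only one coordinate changes per step.
def step(x, y, side, frac):
    if side == "bottom":
        return x, y - frac * y            # move toward y = 0
    if side == "top":
        return x, y + frac * (1 - y)      # move toward y = 1
    if side == "left":
        return x - frac * x, y            # move toward x = 0
    return x + frac * (1 - x), y          # move toward x = 1 (right side)

N = 5000
frac = 1/3   # fraction of the distance moved each step; try other values too
xs, ys = [random.random()], [random.random()]

for _ in range(N):
    side = random.choice(["bottom", "top", "left", "right"])
    x, y = step(xs[-1], ys[-1], side, frac)
    xs.append(x)
    ys.append(y)

plt.scatter(xs, ys, s=1)
plt.show()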
https://www.johndcook.com/blog/2017/07/08/the-chaos-game-and-the-sierpinski-triangle/
CC-MAIN-2017-43
refinedweb
427
73.78
utmpname()
Change the name of the user-information file

Synopsis:
#include <utmp.h>

void utmpname( char * __filename );

Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The utmpname() function lets you change the name of the file examined from the default file (_PATH_UTMP) to any other file. If the file doesn't exist, this won't be apparent until the first attempt to reference the file is made. This function doesn't open the file. It just closes the old file if it's currently open and saves the new file name.

Classification:
Unix

See also:
endutent(), getutent(), getutid(), getutline(), pututline(), setutent(), utmp
login in the Utilities Reference
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/u/utmpname.html
CC-MAIN-2021-10
refinedweb
113
66.33