text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Introduction to Python Import Module Python import module is very similar to including files in programming languages like C/C++. It is used to include the code libraries so that we can use those pre-built functionalities into our current project. Python has huge libraries of codes and modules that have pre-built code. So while creating a project we need to write coding for each and everything, we just need to import the required module to inherit those features into our project. There are many functions in python that don’t require any module like print function. How to Import Module in Python? To import any module into python we need to use import statements with module name at the starting of the code. Syntax: Import module_name Example: import random Here random is the module name. When we are importing any module into our program, all the features of that module are available in our program. To access any method into that module we have to use the dot notation to access function. We have to write the module name then dot and then the method name. Example: random.randint() - Using From: We can also import our module using ‘from’ the keyword. It works the same as an import but gives us additional features. Now we don’t need to use dot notation while calling the methods from the module that we have imported. Now we can directly call any method. Example: randint() - Using Alias: Sometimes when we use importing our module, the module name is already being used in our project, in such case module importing will not work and our code will not work properly. So to avoid such scenarios we can give another name to our module i.e alias name. Now we can use this alias to access all the methods of the module. import [module] as [alias_name] Example: import random as rand rand.randint() Here rand is the alias for our random module. It is very helpful in big projects where we have so many modules imported. Example: import random print: import random print. Examples of Python Import Module Following are the different examples of Python Import Module. Example #1 Code: import math a = 3.4 print("Ceil of 3.4 is: ") print(math.ceil(a)) print ("The floor of 3.4 is: ") print (math.floor(a)) Output: We have used the math module in the above program and used two methods of the math module, ceil, and floor. Ceil method gives the next round off the value of the number. The floor method gives the integral value smaller than the value. If we pass the integral value then the same value is returned by both the function. Example #2 Code: import math a = 6 print("Factorial of 6 is: ") print(math.factorial(a)) Output: In the above program, we are importing the math module and using the factorial method of the math module. We just need to mention the module name dot method name. Example #3 Code: import math as m a = 6 print("Factorial of 6 is: ") print(math.factorial(a)) Output: As you can in the above program, we have imported the math module but the program is giving error as math is not defined. We have created the alias name as ‘m’ for the math module. So we have to use alias names only for calling the methods. Example #4 Code: import math as m a = 8 print("Factorial of 8 is: ") print(m.factorial(a)) Output: This time we have used an alias name for calling the method, and it works fine. Recommended Articles This is a guide to Python Import Module. Here we discuss the Introduction and how to Import Module in Python along with different examples and code implementation. You may also have a look at the following articles to learn more –
https://www.educba.com/python-import-module/
CC-MAIN-2020-24
refinedweb
645
74.49
The extract, transform, load (ETL) component of SQL Server is re-designed from the ground up in SQL Server 2005. The new ETL component, Integration Services, replaces the Data Transformation Services (DTS) included in SQL Server 2000. Integration Services introduces a rich set of tools to support the development, deployment, and administration of ETL solutions. The tools support the simplest solutions, in which you just want to perform tasks such as copying data from one location to another, to enterprise-level solutions, in which you develop a large number of complex packages in a team environment. This section describes the Integration Services tools and service in the context of the life cycle of the ETL solution: development and testing, deployment to the test or production environment, and finally, administration in the production environment. This chapter discusses the Integration Services tools for developing and configuring packages, the tools that are available in Business Intelligence Development Studio; as well as the Integration Services management tools that are available in Server Management Studio to import or export packages, assign roles that have read and write permissions on packages, and monitor running packages. The discussion also includes information about the Integration Services command prompt utilities that you use to run or manage packages outside the Studio environments. Business Intelligence Development Studio is the SQL Server 2005 studio for developing business intelligence solutions, including Integration Services packages, data sources, and data source views. In Business Intelligence Development Studio you perform the following tasks: Design and create new packages Design and create the data source objects that packages use Design and create the data source views that packages use Modify existing packages Debug package functionality Create the deployment bundle that you use to deploy packages Services projects. A project contains the items of a specific project type and provides the templates to build those items. The items are saved to disk, locally or remotely, as XML files. Business Intelligence Developments Studio provides the following project types: Analysis Services Project Integration Services Project Report Server Project Reporting Model Project In addition to starting a new project from scratch and manually constructing project items or adding existing items, you can launch the following tools with Business Intelligence Development Studio: Import Analysis Services 9.0 Database to create a new Analysis Services project by importing an existing SQL Server 2005 Analysis Services database Report Server Wizard to create a project and launch the Report Server Wizard automatically The options for project types and tools are presented to you in the New Project window, as shown in Figure 16-1. You use the Integration Services project type to create packages and the data sources and data source views that packages use. If you choose the Integration Services project type, Business Intelligence Development Studio creates a project with a Data Sources, Data Source Views, SSIS Packages, and Miscellaneous folder (see Figure 16-2). An empty package is also provided. Many of the windows that you use when building packages are part of Business Intelligence Development studio: the Toolbox that provides the items for building control flow and data flow in packages, the Properties window that lists the properties of a package or package object, and the Solution Explorer that manages projects and project items, including the packages, data sources, and data source views in an Integration Services project. Figure 16-3 shows the default layout of Business Intelligence Development Studio windows. The behavior and placement of windows are configurable. If you have used Microsoft Visual Studio, this environment is familiar to you and you probably already know how to customize the development environment. If you are new to Studios, see Chapter 13, "Inside the Analysis Services OLAP Tools," for more information about the features of Business Intelligence Development Studio. The SQL Server Import and Export Wizard is the simplest way to create an Integration Services package. The packages that you create with this wizard can extract data from a variety of data sources such as Excel spreadsheets, flat files, and relational databases, and load the data into a similar variety of data stores. For example, the package can select data from an Excel spreadsheet with a query and write the data into a SQL Server table. You can launch the SQL Server Import and Export Wizard from SQL Server Management Studio or an Integration Services project in Business Intelligence Development Studio. In SQL Server Management Studio, the primary use of the wizard is to create and run packages as is. Administrators typically use these packages to perform ad hoc imports and exports of data, or they save the packages to rerun as part of routine data maintenance. This chapter focuses on using the wizard in Business Intelligence Development Studio. The packages that you create with the SQL Server Import and Export Wizard can perform only very limited data transformation, such as changing column metadata. However, these packages provide a great way to get a jump start on creating more complex packages. If you run the wizard from Business Intelligence Development Studio, you cannot run the package as a step in completing the wizard. Instead, the wizard creates a package and adds it to the Integration Services project from which you launched the wizard. This package includes a basic workflow to extract and load data (see Figure 16-4) . Also, depending on the options that you selected on wizard pages, the package may include tasks that prepare destination data stores, such as dropping and re-creating tables or truncating table data. Once you have been through the wizard and the package is added to the Integration Services project, you can work with the package in SSIS Designer and enhance the package by adding other tasks, implementing advanced features such as logging and configurations and inserting transformations between the source and destination. SSIS Designer is the graphical tool for developing packages. When you first open the designer, it consists of the four tabs: Control Flow, Data Flow, Event Handlers, and Package Explorer (see Figure 16-5). When you run the package a fifth tab, named Progress, is added to the designer. After you stop the package, the Progress tab is renamed to Execution Results. When you open an Integration Services project in Business Intelligence Development Studio, the SSIS menu is added to the menu bar. At this time, the menu has only one option: Work Offline (see Figure 16-6). This option applies to an entire project. When you select the Work Offline option, you are working in an offline mode. This means that Integration Services skips the aspects of package validation that make a connection to data sources and other external components. When you open the first package in SSIS Designer, additional options become available in the SSIS menu (see Figure 16-7). From the options on the SSIS menu, you can access the tools for implementing more advanced features in your packages, specify whether to work in offline mode, or switch to a different tab within SSIS Designer. The Work Offline option applies to the current Integration Services project. This option can also be set before you open SSIS Designer. The following list describes the SSIS menu options: Logging opens the Configure SSIS Logs dialog box, in which you add new logs and select the events and information to log. Package Configurations opens the Package Configuration Organizer dialog box, from which you launch the Package Configuration Wizard to create configurations. Digital Signing opens the Digital Signing dialog box, from which you can select the certificate to use. Variables opens the Variables window, in which you add, change, and delete user-defined variables and view system variables. Log Events opens the Log Event window, which lists the log entries that the package generates in real time. New Connection opens the Add SSIS Connection Manager dialog box, in which you select the type of connection manager to create. View provides access to the Control Flow, Data Flow, and the Event Handlers design surfaces and Package Explorer. The Format menu becomes available when you open a package in SSIS Designer. This menu includes many options for sizing the control flow and data flow items that a package contains and refining the layout of the control and data flows (see Figure 16-8). By applying these options to packages, you can make packages more legible and the control and data flows easier to understand. Depending on the layout of the package and the items selected, different options are available. For all options, except for Auto Layout, you must select at least two items before the sub-options become available. The Control Flow tab provides the control flow designer, in which you construct the package control flow. The control flow consists of autonomous tasks and repeating sub control flows that are linked into an ordered workflow by precedence constraints. When the Control Flow tab is active, the Toolbox lists the tasks and containers that you can use to construct control flows. Figure 16-9 shows the control flow designer and the Toolbox when the Control Flow tab is active. The Toolbox window is in the default location. The "Common Environment Configuration Scenarios" section, later in this chapter, provides information about customizing the Toolbox and the behavior of control flow items. The Data Flow tab (see Figure 16-10) provides the data flow designer, in which you construct the data flows in the package. A package can include no, one, or multiple data flows. A data flow consists of one or more sources that extract data, transformations that modify the data, and one or more destinations that write data. The Toolbox lists the sources, transformations, and data control flow designer, and the default Toolbox when the Data Flow tab is active. The Common Environment Configuration Scenarios section, later in this chapter, provides information about customizing the Toolbox and the behavior of data flow items. The Event Handlers tab provides the event handler designer, in which you construct an event handler for an Integration Services event. An event handler is a workflow that runs in response to an event that the runtime raises. The event handler also consists of a control flow of autonomous tasks and repeating sub control flows that are linked into an ordered workflow. If the event handler includes a data flow, then you use the data flow designer to construct the data flow. The event handler designer is similar to the control flow designer. When the Event Handlers tab is active, the Toolbox lists the tasks and containers that you can use to construct control flows in event handlers. Figure 16-11 shows the control flow designer and the default Toolbox when the Event Handlers tab is active. The Common Environment Configuration Scenarios section, later in this chapter, provides information about customizing the Toolbox and the behavior of control flow items. The Package Explorer tab provides an Explorer (why the cap?)-type view of package content. The view is built as you construct the package and provides a great way to understand the structure of the package. Figure 16-12 shows the expanded view of a fairly basic package; it has only one executable (Run SQL Statement) one connection manager (LocalHost.DatabaseName), and no user-defined variables. You can imagine how important this view is to understanding and communicating to others the structure of complex packages! The Explorer on the Progress tab records the progress of package execution and provides a view of package execution while the package is running. The view is built as the package makes progress in the execution of the control flow (see Figure 16-13) in the package and event handlers and in the data flow. The explorer records the beginning and completion of validation, progress percentages, and the start and end times of each executable, tasks, containers, and event handlers in the package, as well as the package itself. Depending on the tasks that the package contains, the Progress tab shows different types of information. For example, the Data Flow task might report the number of rows inserted into the destination data store. If errors or warnings occur, they are also listed in the Progress window. In addition, the explorer on the Progress tab provides useful information about ways that you can improve the package. For example, if a data flow extracts columns from a data source and makes no subsequent use of the columns, a warning entry that identifies the unused column is written in Explorer window on the Progress tab. After you stop running the package, the name of the Progress tab changes to Execution Results. The results from the previous execution of a package remain available on the tab explorer until you rerun the package, run a different package, or exit SSIS Designer. The Connection Manager area (see Figure 16-15) contains the connection managers that a package uses. Connection managers connect to data stores. They are used by sources and destinations to extract and load data, as well as many tasks, containers, and transformations that require access to a data store to do their work. You can add and configure connection managers as a separate step in the construction of a package, or you can add and configure the connection managers as you construct the control and data flows or implement logging in the package. If you choose to add and configure the connection managers as you go, Integration Services automatically makes available only the connection manager types that a particular control flow item, data flow item, or log provider can use. Integration Services includes a wide variety of connection managers and provides a user interface to configure each type. You configure a connection manager as a step in adding the connection manager to the package. Later, you can modify the configuration by double-clicking the connection manager in the Connection Managers area. Figure 16-16 shows the right-click menu where you select the connection manager type and open the dialog box to configure that type. Variables are used in a million different ways in Integration Services packages. Integration Services supports system and user-defined variables. System variables are the read-only variables that Integration Services provides. User-defined variables are the variables that you define to support package functionality. You will soon find that you need to add variables to packages to support package functionality. The following are a few of the ways that packages can use variables: Provide values to input parameters in SQL statements and capture values from output parameters Serve the expressions that variables, precedence constraints, property expressions, and data flow components use. Provide values to use in scripts Capture the row count from the Row Count transformation Provide the SQL and XML (code) that tasks and data flow components use The Variables window for working with variables is not part of SSIS Designer, but variables exist within the context of a package and you must open the package in SSIS Designer before you can add, delete, and configure variables. To open the Variables window, click Variables on the SSIS menu. By default, the window is docked in the upper-left corner of the Business Intelligence Development Studio. Like other Business Intelligence Development Studio windows, you can move this window and configure it to be a dockable or floating window or a tabbed document and use auto-hide. The Variables window can add, delete, and list variables. By default the window contains columns for the name, scope, data type, and value of variables. Figure 16.17 shows the default Variables window. You can set additional variable properties in the Choose Variable Columns dialog box. In the Choose Variable Columns dialog box, you can add the less frequently configured variable properties, the namespace, and whether an event is raised when the variable changes value to the Variables window. Tip Variables have properties that are not accessible from the Variables window. These properties can be set in the Properties window instead. For example, if you want to use the evaluation result from an expression as the value of a variable, then you need to configure the variable, or at least this property, in the Properties window. Integration Services include a variety of log provider types that you can use to implement logging in your packages. The log provider types include types to log to text and XML files, SQL Server Profiler, SQL Server, and Windows Events Log. You use the Configure SSIS Logs dialog box to configure logging. In this dialog box, you can specify type of log provider to implement, the logs to use, and the log entries to write to the log. The Configure SSIS Logs dialog box (see Figure 16-18) is not part of SSIS Designer, but log providers exist within the context of a package and you must open the package in SSIS Designer before you can configure logging. To open the Configure SSIS Logs dialog box, click Logging on the SSIS menu. The logs are defined at the package level (see Figure 16-19). After you have defined the logs, the tasks and containers in the package can use them. A package is a hierarchical collection of objects with the package object at the top of the hierarchy. In this hierarchy, every executable (task or container), except the package itself, has a parent. If you do not want to configure logging for each executable, you can specify that an executable uses the logging specifications of its parent container. Integration Services supports the use of logging templates. If you need to impose a consistent logging strategy across multiple packages you should consider using logging templates. Integration Services provides a variety of tools for setting the properties of packages and the objects that packages contain. The tools include custom tools for configuring the properties of tasks, containers, sources, transformations, and destinations, as well as the generic Advanced Editor dialog box that you can use to configure most data flow components. The Properties window (see Figure 16-20), built into Business Intelligence Development Studio, provides an alternative way to configure package items. For packages and the Sequence container, the Properties window is the only tool available to set properties. In addition, the Properties window lists properties that are not available to configure in the custom tools such as properties that are read-only or properties for which the default values are often used. To view properties of an item in the Properties window, click the item in the package and then click Properties Window on the View menu. To show the properties of a package in the Properties window, click the background of the control flow designer. You can update the value of properties with the evaluation results of expressions by implementing property expressions on properties. A property expression is an expression that you write using the Integration Services expression language and assign to a property. You can access the tools for building property expressions from the Properties window (see Figure 16-21). To learn more about using property expressions, see the "Common Package Development Scenarios" section, later in this chapter. Package configurations update the values of packages and package objects at runtime. Each configuration is a name/value pair in which the name specifies the path of the property to update and the value specifies the property value. By implementing configurations in a package, you can tailor each deployment of the package to a specific environment. For example, you can update the connection string of a connection manager to point to a different server. Integration Services supports a variety of configuration types. You can store configurations in XML files, environment and parent package variables, Registry entries, or SQL Server tables. The XML file and SQL Server table can store multiple configurations; the other types only a single configuration. If you choose to use a SQL Server table, you can store the configurations for multiple packages in the table and specify a filter to identify configurations for different packages. Integration Services provides two tools for package configurations: the Package Configuration Organizer dialog box and the Package Configuration Wizard. In the Package Configurations Organizer dialog box (see Figure 16-22), you enable the package to use configurations and specify the order in which the configurations are loaded at runtime. The configurations are loaded in top-to-bottom order. If multiple configurations update the same property, the configuration that is loaded last wins. To launch the Package Configuration Wizard click Add. The Package Configuration Wizard guides you though the steps to create configurations. On the Select Configuration Type page, you select the type in the Configuration type list (see Figure 16-23). You can specify the configuration location directly or choose an existing environment variable to specify the location. Table 16-1 lists and describes the various Integration Services configuration types. Configuration Type Description XML configuration file Select an existing file or provide the name of a file to create. The file is created when you complete the wizard. Environment variable Select an existing environment variable from the list. Registry entry Type the name of an existing Registry key. The key must exist in HKEY_CURRENT_USER, and the key must include a value named Value. The value can be a string or a DWORD. Parent package variable Type the name of a user-defined variable with packagelevel scope. Variable names are case sensitive and the name you provide must be a case-sensitive match of an existing variable. SQL Server Select an existing OLE DB connection manager to use or create a new connection manager. The connection manager, in turn, specifies the SQL Server database that contains the table to store the configurations. You can select an existing table or create a new table. After you specify the table, you can select the filter to use for the configuration or type the name of a new filter. If a configuration already exists for a property that uses the specified filter, the configuration is overwritten. Depending on whether the configuration type supports multiple configurations, you can select one or multiple package and package object properties and then complete the wizard. The configuration is added to the bottom of the list of package configurations in the Package Configuration Organizer dialog box. You can use the up and down arrows to position the new configuration in its correct loading position. If you want to edit the package configuration, click Edit and then rerun the Package Configuration Wizard. The last page in the wizard contains lists of the paths of properties to configure. If you need to use paths when programming the Integration Services object model, you can copy them from the wizard page. Business Intelligence Development Studio provides the same debug windows as Visual Studio. If you have debugged applications in Visual Studio, you already know how to set breakpoints and how to use the windows. If you are new to the debug environment, the Microsoft Visual Studio documentation provides information about how you access and use the debug windows. The Integration Services breakpoints are similar to the breakpoints you may have used when writing code in Visual Studio. As in code, Integration Services breakpoints suspend execution to enable you to examine the values of variables, the call stack, and so forth to help you identify and correct errors. You can set breakpoints on packages, tasks, and other container types. To set a breakpoint you enable a break condition on the container. In addition to enabling a break condition, you further identify break conditions by specifying how many times the break condition occurs before execution is suspended. You use the Set Breakpoints <container name> dialog box to set breakpoints (see Figure 16-24). If a container in a package has a breakpoint, the breakpoint icon (a red dot) appears on the container shape in the control flow designer. The Control Flow tab represents the package, and if you enable breakpoints on the package, the breakpoint icon appears on the label of the Control Flow tab. To set breakpoints on the package, place the cursor anywhere in the background of the Control Flow tab, right-click, and click Edit Breakpoints. After package development is completed, you use the Build feature for Integration Services that Business Intelligence Development Studio provides to create a deployment bundle. The deployment bundle is the set of files you will copy to the target computer and then use to install the packages and their dependencies. The Build process creates a deployment manifest and includes it, the packages in the Integration Services project, package dependencies, and any files that you added to the Miscellaneous folder in the deployment bundle. After you have copied the deployment bundle to the computer on which you want to install the packages, you run the Package Installation Wizard on the target computer. The wizard guides you through the steps to install packages. On the wizard pages, you must make the following decisions: Install packages to the file system or an instance of the SQL Server Database Engine. Choose whether or not the packages are validated after installation. Specify the folder in which packages (if deploying to the file system) and package dependencies are installed. If packages use configurations, whether to update the value of properties in the configurations or not. SQL Server Management Studio is the SQL Server 2005 studio for managing Integration Services packages. In SQL Server Management Studio you perform the following tasks: Organize packages in folders Import and export packages Assign read and write permissions to packages Run packages Monitor running packages View summaries of packages properties After you connect to Integration Service, the Object Explorer in SQL Server Management Studio provides access to the folders for storing and running packages (see Figure 16-25). The Stored Packages folder and its subfolders list the packages saved to the package store; the packages can be saved to the sysdtspackages90 table in the msdb database or to the file system folders that the Integration Services service monitors. The package store is a logical store that can consist of msdb and specified folders in the file system. To learn more about which folders are by default part of the package store and how to add other folders to the package store, read the Common Package Management Scenarios section later in this chapter. From the right-click menus of folders you access the tools to perform various management tasks (see Figure 16-26). For example, expand the Stored Packages folder and its subfolders, right-click a package, and then click the menu option to import or export the package, run the package, assign roles to the packager, or delete the package. Integration Services provides two command prompt utilities for running and managing packages. You use the dtexec utility to run packages, and the dtutil utility to manage packages. In addition, Integration Services provides the Execute Package Utility, a graphical interface on dtexec. The SQL Server 2005 documentation provides detailed information about the options and option arguments for both utilities. This chapter does not include this information; instead, it includes samples of command lines that you might find useful to help you write command lines that fit your business needs. For usage scenarios, see the "Common Package Management Scenarios" section later in this chapter.
http://www.yaldex.com/sql_server_tutorial_3/ch16lev1sec2.html
CC-MAIN-2016-44
refinedweb
4,545
50.36
3299/converting-string-and-float-data-types How could I convert from float to string or string to float? In my case I need to make the assertion between 2 values string (value that I have got from table) and float value that I have calculated. String valueFromTable = "25"; Float valueCalculated =25.0; I tried from float to string: String sSelectivityRate = String.valueOf(valueCalculated ); but the assertion fails Using Java’s Float class. float f = Float.parseFloat("25"); String s = Float.toString(25.0f); To compare it's always better to convert the string to float and compare as two floats. This is because for one float number there are multiple string representations, which are different when compared as strings (e.g. "25" != "25.0" != "25.00" etc.) import java.io.BufferedWriter; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Paths; public class WriteFiles{ ...READ MORE You can also use Java Standard Library ...READ MORE Though both are used for comparison, but the ...READ MORE The best possible way of conversion between byte[] and String is to ...READ MORE Hadoop provides us Writable interface based data ...READ MORE Java 8 Lambda Expression is used: String someString ...READ MORE Java supplies a way of doing this ...READ MORE If you're looking for an alternative that ...READ MORE Basically, StringBuffer methods are synchronized while StringBuilder ...READ MORE Try this: Date in = new Date(); LocalDateTime ldt ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/3299/converting-string-and-float-data-types
CC-MAIN-2021-10
refinedweb
248
70.29
Last year and a half I was working in research. This position was about, among other things, to port a programming language (two, actually, Hop and Bigloo) to the Antroid platform. I already had written something about it, but this time I want to show my high level impressions of the platform. What follows is part of a report I wrote at the end of that job, which includes the things I wanted to say. The Android port can be viewed as four separate sub-tasks. Hop is software developed in the Scheme language; more particularly, it must be compiled with the Bigloo Scheme compiler, which in turn uses gcc for the final compilation. That means that we also needed to port Bigloo first to the platform, not because we were planning to use it in the platform, but because we need the Bigloo runtime libraries ported to Android, as Hop and any other program compiled with Bigloo uses them. The other three subtasks, which are discussed later, are porting Hop itself; developing libraries to access devices and other features present in the platform; and, we'll see later the reasons, make the port work with threads. When we started to investigate how to port native code to the platform we found that there wasn't much support. At fisrt the only documentation we could find was blog posts of people trying to do it by hand. They were using the compiler provided in Android's source code to compile static binaries that could be run on the platform. Because Bigloo uses dinamic libraries to implement platform dependent code and modules, we aimed to find a way to compile things dinamically. After 3 or 4 weeks we found a wrapper written in Ruby that managed all the details of calling gcc with the proper arguments. With this we should be able to port anything that uses gcc as the compiler, just like Bigloo does. At the same time, the first version of Android's NDK (Native Development Kit) appeared, but it wasn't easy to integrate in our build system. (Note: Actually I think most of the problems we faced doing this port stem from this. The NDK forces you to write a new set of Makefiles, but our hand-made build system and build hierarchy made such an effort quite big. Also, that mean supporting a parallel build system, while it should not be so crazy to spect a cleaner way to integrate the toolchain into an existing build system, not only in hand-made like in this case, but also the most common ones, like autotools, cmake, etc.) Even having the proper compiler, we found several obstacles related to the platform itself. First of all, Bigloo relies heavily on the C and pthread libraries to implement lowlevel functionality. Bigloo can use both glibc, GNU's implementation, or µlibc, an implementation aimed for embedded aplications. Bigloo also relies on Boehm's Garbage Collector ( GC) for its memory management. The C library implementation in Android is not the glibc or the µlibc, but an implementation developed by Google for the platform, called Bionic. This version of the C library is tailored to the platform's need, with little to no regards to native application development. The first problem we found is that GC compiled fine with Bionic, but the apllications that used GC did not link: there was a missing symbol that normally is defined in the libc, but that Bionic did not. We tried cooperating with the GC developers, we tried inspecting a Mono port to Android, given that this project also uses GC, trying to find a solution that could be useful for everyone, but at the end we just patched our sources to fake such symbol with a value that remotely made sense. We also found that Bionic's implementation of pthreads is not only incomplete, but also has some glitches. For instance, in our build system, we test the existence of a function like everybody else: we compile a small test program wich uses it. With this method we found at least one function that is declared but never defined. That means that Bionic declares that the function exists, but then it never implements it. Another example is the declaration and definition of a function, but the lack of definition of constants normally used when calling this function. Also, because most of the tests also must be executed to inform about the peculiarities of each implementation, we had to change our build system to be able to execute the produced binaries in the Android emulator. Google also decided to implement their own set of tools, again, trimmed down to the needs of the platform, instead of using old and proven versions, like Busybox. This means that some tools behave differently, with no documentation about it, so we mostly had to work around this differences everytime a new one apperared. All in all, we spent two and a half months just getting Bigloo to run in Android, dismissing the problem that Boehm's GC, using its own build system, detected that the compiler declared to not support threads, and refused to compile with threads enabled. This meant that Bigloo itself could not be compiled with pthreads support. With this caveat in mind, we tackled the second subtask, porting Hop itself. This still raised problems with the peculiarities of the platform. We quickly found that the dinamic linker wasn't honoring the LD_LIBRARY_PATH environment variable, which we were trying to use to tell the system where to find the dynamic libraries. The Android platform installs new software using a package manager. The package manager creates a directory in the SD card that it's only writable by the applilcation being installed. Within this directory the installer puts the libraries declared in the package. Bigloo, besides the dinamic libraries, requieres some aditional files that initialize the global objects. This files are not extracted by the installer, so we had to make a frontend in Java that opens the package and extract them by hand. But the installer creates the directory for the libraries in such a way that the application later cannot write in it. Also, we found that the dinamic linker works for libraries linked at runtime, but does not for dlopen()'ing them, so we also had to rewrite a great part of our build system for both Bigloo and Hop to produce static libraries and binaries. This also needed disabling the dynamic loading of libraries, and with them, their initialization, so we had to initialize them by hand. To add more unsuspected work, the Android package builder, provided with the SDK, ignores hidden files, which Bigloo uses to map Scheme module names to dynamic libraries. We had to work around this feature in the unpacking algorithm. Then we moved to improve the friendliness of the frontend. So far, we could install Hop in the platform, either in a phone or in the emulator, but we could only run it in the emulator, because we were using a shell that runs as root on the emulator, but that runs as a user in a real device. This user, for the reasons given above, cannot even get into Hop's install dir. Even when Android has a component interface that allows applications to use components from other apps, none of the terminal apps we found at that time declared the terminal itself as a reusable component. We decided to use the code from the most popular one, which was based on a demo available on Android's source code, but not installed in actual devices. We had to copy the source code and trimm it down to our needs. Having a more or less usable Hop package for Android, we decided to try and fix the issue we mentioned before: GC didn't compile with threads enabled. This means that we can't use the pthreads library, which is very useful for Hop. Hop uses threads to attend several requests at the same time. Bigloo implements two threads APIs, one based on pthreads and another which implements fair threads. Hop is able to use 5 different request schedulers, but works better with the one based on pthreads. For these reasons we decided to focus in getting GC to use threads with the Android platform. GC's build system tests the existence of a threading platform checking the thread model declared by gcc. The gcc version provided with Android's SDK declares to have a 'single thread model', but we couldn't find what does this mean in terms of the code produced by gcc or how this could affect to GC's execution. (Note: we didn't manage to make GC compile with threads.) With a threadless Hop running, we had to add code to the server so we could talk between the server and the frontend while at the same time it is attending the requests from a web client. After several attempts to attack this problem, we decided that the best solution was to make this interface another service served by Hop. This meant less modifications to Hop itself, but a bigger one to the frontend we already had. During these changes we found out a problem with JNI. The terminal component we imported into our code uses a small C library for executing the application inside (normaly a shell, in the original code, but Hop in our case) which is accessed from Java using JNI. The original Term application exported this class as com.android.term.Exec, but our copy exported it as fr.inria.hop.Exec. Even with this namespace difference JNI got confused and tried to use the Exec class from the original Term app. This is just another example how the platform is hard to work with. We found that the community support is more centered around Java and that very few people know about JNI, the NDK or any other native related technologies. We couldn't find an answer to this problem, so we worked around this by renaming the class. So that's it. I can provide all the technical details for most the assertions I postulated above, but that would make this post unreadbal for its length. If you have any question about them, just conact me.
http://www.grulic.org.ar/~mdione/glob/posts/porting-big-codebases-to-android/
CC-MAIN-2014-41
refinedweb
1,728
57.91
PANEL_HIDDEN(3) NetBSD Library Functions Manual PANEL_HIDDEN(3)Powered by man-cgi (2021-06-01). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias. NAME hide_panel, show_panel, panel_hidden -- visibility of panels LIBRARY Z-order for curses windows (libpanel, -lpanel) SYNOPSIS #include <panel.h> int hide_panel(PANEL *p); int show_panel(PANEL *p); int panel_hidden(PANEL *p); DESCRIPTION Panels are initially created visible. The function hide_panel() can be used to hide a panel. The panel is removed from the deck. A panel can be made visible again with a call to show_panel(). The panel is returned to the top of the deck. The current visibility status of a panel can be queried with panel_hidden(). IMPLEMENTATION NOTES The show_panel() function will return an error if the panel is already visible. Use top_panel(3) to change z-order of an already visible panel. This is the behaviour specified by the original AT&T System V UNIX panel library. In the ncurses implementation of the panel library show_panel() and top_panel() are identical and handle both visible and hidden panels. This may be a source of bugs in programs tested only against ncurses. RETURN VALUES The panel_hidden() function returns TRUE or FALSE. It will return ERR if passed a null pointer. Other functions will return one of the following values: OK The function completed successfully. ERR An error occurred in the function. SEE ALSO panel(3) NetBSD 9.99 October 28, 2015 NetBSD 9.99
https://man.netbsd.org/panel_hidden.3
CC-MAIN-2021-39
refinedweb
243
52.15
- NAME - VERSION - SYNOPSIS - DESCRIPTION - ARGUMENTS - RATIONALE - CAVEATS - ACKNOWLEDGEMENTS - SEE ALSO - AUTHOR NAME namespace::sweep - Sweep up imported subs in your classes VERSION version 0.006 SYNOPSIS package Foo; use namespace::sweep; use Some::Module qw(some_function); sub my_method { my $foo = some_function(); ... } package main; Foo->my_method; # ok Foo->some_function; # ERROR! DESCRIPTION Because Perl methods are just regular subroutines, it's difficult to tell what's a method and what's just an imported function. As a result, imported functions can be called as methods on your objects. This pragma will delete imported functions from your class's symbol table, thereby ensuring that your interface is as you specified it. However, code inside your module will still be able to use the imported functions without any problems. ARGUMENTS The following arguments may be passed on the use line: - -cleanee If you want to clean a different class than the one importing this pragma, you can specify it with this flag. Otherwise, the importing class is assumed. package Foo; use namespace::sweep -cleanee => 'Bar' # sweep up Bar.pm - -also This lets you provide a mechanism to specify other subs to sweep up that would not normally be caught. (For example, private helper subs in your module's class that should not be called as methods.) package Foo; use namespace::sweep -also => '_helper'; # sweep up single sub use namespace::sweep -also => [qw/foo bar baz/]; # list of subs use namespace::sweep -also => qr/^secret_/; # subs matching regex You can also specify a subroutine reference which will receive the symbol name as $_. If the sub returns true, the symbol will be swept. # sweep up those rude four-letter subs use namespace::sweep -also => sub { return 1 if length $_ == 4 } You can also combine these methods into an array reference: use namespace::sweep -also => [ 'string', sub { 1 if /$pat/ and $_ !~ /$other/ }, qr/^foo_.+/ ]; RATIONALE This pragma was written to address some problems with the excellent namespace::autoclean. In particular, namespace::autoclean will remove special symbols that are installed by overload, so you can't use namespace::autoclean on objects that overload Perl operators. Additionally, namespace::autoclean relies on Class::MOP to figure out the list of methods provided by your class. This pragma does not depend on Class::MOP or Moose, so you can use it for non-Moose classes without worrying about heavy dependencies. However, if your class has a Moose (or Moose-compatible) meta object, then that will be used to find e.g. methods from composed roles that should not be deleted. In most cases, namespace::sweep should work as a drop-in replacement for namespace::autoclean. Upon release, this pragma passes all of namespace::autoclean's tests, in addition to its own. CAVEATS This is an early release and there are bound to be a few hiccups along the way. ACKNOWLEDGEMENTS Thanks Florian Ragwitz and Tomas Doran for writing and maintaining namespace::autoclean. Thanks to Toby Inkster for submitting some better code for finding meta objects. SEE ALSO namespace::autoclean, namespace::clean, overload AUTHOR Mike Friedman <friedo@friedo.com> This software is copyright (c) 2011 by Mike Friedman. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
https://metacpan.org/pod/namespace::sweep
CC-MAIN-2017-17
refinedweb
543
54.02
![: The example works ok with/without the UART framework as I mentioned. It's just my actual project when I add the UART it doesn't display anything on the board when loading the software to it =/ I tried changing the priority to 5 on the UART and no luck so far. In reply to zfeng: Hi adboc, I've been comparing the configurations used for my project and the blinky Thread X example w/ uart /* * User_BSP_WarmStart.c * * Created on: 13 Jan 2016 * Author: bakerj */ #include "bsp_api.h" #if defined(__ICCARM__)/* Need to installed the source code for the IAR Runtime library *//* Need to add $TOOLKIT_DIR$\src\lib to include path */#include "xcstartup.h"#pragma section = "SHT$$PREINIT_ARRAY" const#pragma section = "SHT$$INIT_ARRAY" const#endif extern void __libc_init_array (void). */ } else if (BSP_WARM_START_POST_C == event) { /* C runtime environment, system clocks, and pins are all setup. */ /* Static Constructor for C++ */#if defined(__GNUC__) __libc_init_array();#elif defined(__ICCARM__) void const * pibase = __section_begin("SHT$$PREINIT_ARRAY"); void const * ilimit = __section_end("SHT$$INIT_ARRAY"); __call_ctors((__func_ptr *)pibase, (__func_ptr *)ilimit); /* End - Static Constructor */#endif } else { /* Unhandled case. */ }} Note also that sometimes when I am using synergy configuration on my project I get this exception. Please let me know if this helps with finding what might be missing on my project. Thanks. Hi zfeng, "For ports: the sample has the followed enabled but my project doesn't: P100-P103...SPI0 is enabled on the sample but not my project" This is possible reason why the screen is blank. The SPI driver on channel 0 is required to initialize the display on SK-S7G2 and PK-S5D9 boards. I'm not sure, if we're looking at the same example project, because in my project there is User_BSP_WarmStart.c. I also have SPI driver on SCI, so SCI0 is configured, not SPI0. Regards,adboc My project is configured for SCI0 although its operation mode is set as Asynch UART. The sample project I used was this one: It uses the Blinky Thread X template. Which project example are you using that utilizes the LCD and UART? Are you using the demo under TouchGFX_Renesas_Synergy_BSP\app\demo\Renesas_SK_demo? And then adding the UART stack? Update: I loaded the SK_demo_2014 and added the same uart stack configuration as the tutorial and the same thing happens (nothing loads on the screen). So I decided to change the RX/TX to pins 100/101 as you had it. There appears to be a conflict when using 410/411. I compiled and loaded the software and this time it boots! BUT, the uart application is not working. I tried to get a reply back on the terminal (tera term) after typing 'open' and it doesn't trigger anything. When I was just testing the uart alone in a project it was working fine when using 410/411 pins on the breakout headers: For P101/100 I'm using the pins 3 and 5 on PMODA: Here's my uart application: uart_thread_entry.cpp extern "C" {#include "uart_thread.h"}void uart_thread_entry(void); uint8_t Message[]="Damper is opening \n\r";static uint8_t uart_buffer[8];static uint8_t read_byte;static uint8_t indx;static bool status; /* uart entry function */void uart_thread_entry(void){ g_sf_comms0.p_api->open(g_sf_comms0.p_ctrl, g_sf_comms0.p_cfg); while (1) { status = g_sf_comms0.p_api->read(g_sf_comms0.p_ctrl, &read_byte,1,TX_WAIT_FOREVER); if(indx>sizeof(uart_buffer)) indx=0; else { indx++; } uart_buffer[indx]=read_byte; for(int in=0; in <sizeof(uart_buffer); in++) { if(uart_buffer[in]=='o') { if(uart_buffer[in+1]=='p') { if(uart_buffer[in+2]=='e') { if(uart_buffer[in+3]=='n') { //g_sf_comms0.p_api->write(g_sf_comms0.p_ctrl, "Renesas Synergy !!!! \r\n ",24,100); g_sf_comms0.p_api->write(g_sf_comms0.p_ctrl, Message,sizeof(Message),0); indx=0; for(int i=0;i<sizeof(uart_buffer);i++) { uart_buffer[i]=0; } } } } } } if(status) while(1); }} Now to figure out why TX/RX on PMODA aren't functioning. " PMODB's SPI pins are not shared, so it's easier to get them working." What's the difference between pins 411/410 on the J22 header vs. 411/410 on the PMOD J14 header? When using the UART example tutorial, it only works when using 411/410 on J22. But if you remember, using 411/410 on SCI0 on the sample project with LCD stacks doesn't boot up the software on the board. So right now I'm at a crossroads. This is how the uart project works on its own. I type 'open' and it acknowledges and it echoes back the results: So far I've tried PMODA and PMODB with no luck even on the stand-alone uart example. For display initialization the SPI driver on RSPI can be used. In you would like to use UART on pins P410, P411 I suggest doing the following: 1. Exchange "g_spi_lcdc" SPI Driver on r_sci_spi for SPI Driver on r_rspi - please use the same name, bitrate, callback name You should remove this driver: And add this one: 2. Change pin configuration for SCI0 and SPI0: 3. Now SCI on channel 0 will use pins P410 and P411, so configure the UART driver to use channel 0: Thanks for the info. I did the modifications and the UART application works but the GUI is still not loading. r_spi driver SCI0 config SPI0 config UART application response Thread stacks If possible to move along quicker, can you use my project file? I've uploaded here: Thanks. I encountered the same issue as you did. It seems the supplied display driver (inc/lcd.h and lcd_setup.c) is not working properly with the SPI driver on RSPI. Please find the attached updated driver and remove inc/lcd.h and lcd_setup.c files and all references. Put display_init folder and its contents in src/ directory of your project. In g_spi_lcdc (on RSPI) set callback to NULL, Data sampling on even edge, high when idle. Remember to set these pins as output: P115, P610, P611. In Main_thread_entry.c, include "display_init/display_init.h" and instead of);} just call: display_init(&g_spi_lcdc); display_init.zip There appears to be activity on the GUI thread stack when pausing the debug session. The UART app is still functioning. However, still nothing on the LCD board. For lcd.h and lcd_setup.c I excluded the files from the build as you can see from the screenshots below. r_spi properties: P115/P610/P611: main thread entry.c includes and void function: GUI thread stack Sorry, I forgot to mention that RSPI driver should have bitrate set to 500000. Please find your project with required modifications, however note that it uses TouchGFX 4.9.0. Please rename these files to Renesas_SK_demo2p.zip.001 and Renesas_SK_demo2p.zip.002, these are two parts of one zip archive. Renesas_SK_demo2p.zip.001.zip Renesas_SK_demo2p.zip.002.zip I cannot upload a whole workspace here, but here's the project which contains proper configuration (configuration.xml, S7G2-SK.pincfg), display_init driver and updated Main_thread_entry.c file. SK_demo_2014_TouchGFX.zip
https://renesasrulz.com/synergy/f/synergy---forum/9793/sks7g2---using-uart-stack-on-a-touchgfx-application/33757
CC-MAIN-2019-43
refinedweb
1,145
66.64
BytesIO File Uploads to Django Using Requests It's very easy to post file data to Django using requests: import requests requests.post(url, files={'cover': open('imgpath.jpg', 'rb')}) However I was having a hard time getting that to work using BytesIO. The reason I wanted to use BytesIO was because I was reading the file binary data located in S3, from AWS Lambda. I didn't want to write the file to disk first and then do something like the code shown above. Here's how to achieve that: import io import boto3 import requests # Object in S3 s3_file = boto3.resource('s3').Object('my-bucket', 'key') # Read Bytes data into BytesIO file_bytes = io.BytesIO(s3_file.get()['Body'].read()) # Post file using requests files = {'avatar': ('myimage.jpg', file_bytes)} requests.post('someurl', files=files) The trick here in order for Django to recognize this file upload, is to define the files dictionary with a tuple that contains a dummy file name along with the binary data. As you can see, the approach that uses open() does not need this, and this is what took me a while to figure out. If in your Django application you assign a particular file name to the uploaded file, then this dummy name will be discarded after the file is posted.
https://aalvarez.me/posts/bytesio-file-uploads-to-django-using-requests/
CC-MAIN-2020-10
refinedweb
217
63.39
: Lambdas and SQL If you’re used to writing Groovy, this may appear “so 2003″ to you. We know. Groovy has known a very useful way to write string-based SQL since its early days. Here’s an example written in Groovy (see the official docs here): import groovy.sql.Sql sql = Sql.newInstance( 'jdbc:h2:~/test', 'sa', '', 'org.h2.Driver' ) sql.eachRow( 'select * from information_schema.schemata' ) { println "$it.SCHEMA_NAME -- $it.IS_DEFAULT" } Note also Groovy’s built-in String interpolation, where you can put expressions into strings. But we’re in Java land, and with Java 8, things get better in the Java / SQL integration as well, if we’re using third-party libraries, instead of JDBC directly. In the following examples, we’re looking at how to fetch data from an H2 database and map records into custom POJOs / DTOs using these three popular libraries: - jOOQ. (Shocker, I know) - Spring Data / JDBC - Apache Commons DbUtils As always, the sources are also available from GitHub. For these tests, we’re creating a little POJO / DTO to wrap schema meta-information: class Schema { final String schemaName; final boolean isDefault; Schema(String schemaName, boolean isDefault) { this.schemaName = schemaName; this.isDefault = isDefault; } @Override public String toString() { return "Schema{" + "schemaName='" + schemaName + '\'' + ", isDefault=" + isDefault + '}'; } } Our main method will get an H2 connection through DriverManager: Class.forName("org.h2.Driver"); try (Connection c = getConnection( "jdbc:h2:~/test", "sa", "")) { String sql = "select schema_name, is_default "+ "from information_schema.schemata "+ "order by schema_name"; // Library code here... } Now, how does Java 8 improve upon the jOOQ API, when using String-based SQL? Greatly! Check out the following little query: DSL.using(c) .fetch(sql) .map(r -> new Schema( r.getValue("SCHEMA_NAME", String.class), r.getValue("IS_DEFAULT", boolean.class) )) .forEach(System.out::println); This is how things should be, right? Note that jOOQ’s native APIs are also capable of mapping the database Record onto your POJO directly, as such: DSL.using(c) .fetch(sql) .into(Schema.class) .forEach(System.out::println); Things look just as nice when doing the same with Spring JDBC and RowMapper (note, the following still throws checked SQLExceptions): new JdbcTemplate( new SingleConnectionDataSource(c, true)) .query(sql, (rs, rowNum) -> new Schema( rs.getString("SCHEMA_NAME"), rs.getBoolean("IS_DEFAULT") )) .forEach(System.out::println); … and if you’re using Apache DbUtils, you can do almost the same: new QueryRunner() .query(c, sql, new ArrayListHandler()) .stream() .map(array -> new Schema( (String) array[0], (Boolean) array[1] )) .forEach(System.out::println); Conclusion All three solutions are more or less equivalent and quite lean. The point here, again, is that Java 8 will improve all existing APIs. The more unambiguous (few overloads!) methods accepting SAM arguments (single abstract method types), the better for a Java 8 integration. Next week, we’re going to see a couple of things that will greatly improve when using the java.util.Map API.
https://www.javacodegeeks.com/2014/02/java-8-friday-goodies-lambdas-and-sql.html
CC-MAIN-2017-09
refinedweb
475
58.38
03 May 2012 20:56 [Source: ICIS news] (updates with Canadian and Mexican chemical railcar traffic data) ?xml:namespace> Canadian chemical railcar loadings for the week totalled 11,760, up from 11,673 in the same week in 2011, the Association of American Railroads (AAR) said. The previous week, ended 21 April, saw a year-on-year increase of 9.4% year in chemical shipments. The weekly chemical railcar loadings data are seen as important real-time measures of chemical industry activity and demand. From 1 January to 28 April, Canadian chemical railcar loadings were down by 9.6% year on year to 180,685. The AAR said weekly chemical railcar traffic in US chemical railcar traffic fell by 1.4% year on year for the week ended 28 April, marking their tenth decline so far this year. There were 31,775 chemical railcar loadings last week, compared with 32,226 in the corresponding week of 2011. In the previous week, ended 21 April, US weekly chemical railcar loadings rose by 5.4% year on year - the second increase in a row after five consecutive declines. From 1 January to 28 April, US chemical railcar loadings were down by 0.8% to 516,512, compared with the same period of last year. Meanwhile, overall US weekly railcar loadings for the 19 high-volume freight commodity groups tracked by the AAR fell by 4.1% year on year to 283,080
http://www.icis.com/Articles/2012/05/03/9556483/canada-weekly-chemical-railcar-traffic-rises-0.7.html
CC-MAIN-2015-06
refinedweb
240
65.93
Dart Sealed Class Generator Generate sealed class hierarchy for Dart and Flutter. Features - Generate sealed class with abstract super type and data sub-classes. - Static factory methods. for example Result.success(data: 0). - Cast methods. for example a.asSuccess, a.isSuccess for null-safety. - Generate toString for data classes. - Generate 6 types of different matching methods. like when, maybeWhenand map. Usage Add dependencies in your pubspec.yaml file. dependencies: sealed_annotations: ^latest.version dev_dependencies: sealed_generators: ^latest.version Import sealed_annotations. import 'package:sealed_annotations/sealed_annotations.dart'; Add part pointing to a file which you want classes be generated in. with .sealed.dart extension. part 'weather.sealed.dart'; Add @Sealed annotation, and an abstract private class as a manifest for generated code. For example: @Sealed() abstract class _Weather { void sunny(); void rainy(int rain); void windy(double velocity, double? angle); } Then run the following command to generate code for you. If you are developer for flutter: flutter pub run build_runner build And if you are developing for pure dart: dart run build_runner build The generated code will look like: (the following code is summarised) abstract class Weather { const factory Weather.rainy({required int rain}) = WeatherRainy; bool get isRainy => this is WeatherRainy; WeatherRainy get asRainy => this as WeatherRainy; WeatherRainy? get asRainyOrNull { /* ... */ } /* ... */ R when<R extends Object?>({ required R Function() sunny, required R Function(int rain) rainy, required R Function(double velocity, double? angle) windy, }) { /* ... */ } R maybeWhen<R extends Object?>({ R Function()? sunny, R Function(int rain)? rainy, R Function(double velocity, double? angle)? windy, required R Function(Weather weather) orElse, }) { /* ... */ } void partialWhen({ void Function()? sunny, void Function(int rain)? rainy, void Function(double velocity, double? angle)? windy, void Function(Weather weather)? orElse, }) { /* ... */ } R map<R extends Object?>({ required R Function(WeatherSunny sunny) sunny, required R Function(WeatherRainy rainy) rainy, required R Function(WeatherWindy windy) windy, }) { /* ... */ } R maybeMap<R extends Object?>({ R Function(WeatherSunny sunny)? sunny, R Function(WeatherRainy rainy)? rainy, R Function(WeatherWindy windy)? windy, required R Function(Weather weather) orElse, }) { /* ... */ } void partialMap({ void Function(WeatherSunny sunny)? sunny, void Function(WeatherRainy rainy)? rainy, void Function(WeatherWindy windy)? windy, void Function(Weather weather)? orElse, }) { /* ... */ } } class WeatherSunny extends Weather { /* ... */ } class WeatherRainy extends Weather with EquatableMixin { WeatherRainy({required this.rain}); final int rain; @override String toString() => 'Weather.rainy(rain: $rain)'; @override List<Object?> get props => [rain]; } class WeatherWindy extends Weather { /* ... */ } Notes: - Prefer using factories in super class instead of sub-class constructors. like Whether.rainy()instead of WhetherRainy() - Minimize usage of cast methods, most of the time they can be replaced with a match method. Equality and generated class names useful); } Common Fields Sometimes you need some fields to be present in all of your sealed classes. For example consider making a sealed class for different types of errors, and all of them are required to have code and message. It is very annoying to add code and message to all of sealeds manually. Also if you have an error object you are unable to get its code or message without using cast or match methods. Here you can use common fields. To declare a common field you can add a getter or a final field to a manifest class, and it will automatically be added to all of your sealed classes. for example: @Sealed() abstract class _ApiError { // using getter String get message; // using final field final String? code = null; // code and message will be added to this automatically void internetError(); void badRequest(); void internalError(Object? error); } common fields are available on ApiError objects as well as it's sub-classes. If you specify common fields in your seaeld classes it has no effect. for example: @Sealed() abstract class _Common { Object get x; // one and two will have identical signatures void one(Object x); void two(); } You can use sub-class of common field type in sealed classes. For example: @Sealed() abstract class _Common { Object get x; // x has type int void one(int x); // x has type String void one(String x); // x has type Object void three(); } common fields also works with other constructs of dart_sealed like generics and @WithType. for example: @Sealed() abstract class _Common { @WithType('num') dynamic get x; // you can omit dynamic // x has type int void one(@WithType('int') dynamic x); // you can omit dynamic // x has type num void two(); } and, for example: @Sealed() abstract class _Result<D extends num> { Object? get value; void success(D value); void error(); } Ignoring Generated Files It is recommended to ignore generated files on Git. Add this to your .gitignore file: *.sealed.dart It is NOT recommended to exclude generated files from analysis. But if you decide to do so, add this to your analysis_options.yaml file: analyzer: exclude: - **.sealed.dart Libraries - sealed_annotations - Sealed Annotations Library
https://pub.dev/documentation/sealed_annotations/latest/
CC-MAIN-2021-39
refinedweb
781
52.05
30 August 2013 23:59 [Source: ICIS news] ?xml:namespace> An increase of €25-30/tonne ($33-39/tonne) was agreed with customers in northwest Europe and the Mediterranean, while buyers in the ?xml:namespace> August prices for European PVC pipe grades are, €955-990/tonne FD (free delivered) NWE (northwest Europe), €890-975/tonne FD in the Mediterranean and £845-875/tonne FD UK. The ranges were increased by €25/tonne in Europe and £25/tonne in the Numbers below the ranges were also heard, but they were not widely confirmed. Sellers were able to achieve a higher price increase of €35-40/tonne in eastern Europe because a series of planned and unplanned outages resulted in tighter supply in the region. Some European sellers said that they intend to pass down the increase in feedstock ethylene if it gets more expensive next month. PVC buyers accept that they might have to absorb higher prices in September, but are cautious about raising prices too sharply as they may struggle to pass down their higher costs to downstream customers, sources added. Significantly higher European PVC prices may compel PVC customers to consider buying lower-priced product from outside Europe, such as the While some PVC customers said they plan to buy more ($1 = €0.76, $1 = £0.65, €1 = £0
http://www.icis.com/Articles/2013/08/30/9702088/buyers-agree-to-price-increase-for-august-pvc.html
CC-MAIN-2014-41
refinedweb
220
68.1
Verify your Setup and Get Started¶ In order to verify your computer’s setup, we are going to run a program from it, and see if the robot answers as expected. Note Before verifying your setup, be sure that your physical robot (or simulation) is turned on. Firstly, go in the folder of your choice and create an empty file named “pyniryo_test.py”. This file will contain the checking code. Edit this file and fill it with the following code: from pyniryo import * robot_ip_address = "10.10.10.10" # Connect to robot & calibrate robot = NiryoRobot(robot_ip_address) robot.calibrate_auto() # Move joints robot.move_joints(0.0, 0.0, 0.0, 0.0, 0.0, 0.0) # Turn learning mode ON robot.set_learning_mode(True) # Stop TCP connection robot.close_connection() Attention Replace the third line with your Robot IP Address if you are not using Hotspot Mode. Still on your computer, open a terminal, and place your current directory in the same folder than your file. Then, run the command: python pyniryo_test.py Note If you are using Python 3, you may need to change python to python3. If your robot starts calibrating, then moves, and finally, goes to learning mode, your setup is validated, you can now start coding!
https://docs.niryo.com/dev/pyniryo/v1.0.5/en/source/setup/verify_setup.html
CC-MAIN-2021-43
refinedweb
205
66.44
. typedef struct { int x; } Foo; Then each class needs a constructor named like fits. #define RAII_INIT typedef void DtorFn( void* ); struct DtorNode { DtorFn* dtor; void* object; struct DtorNode* next; } * dtorHead__ = NULL #define RAII_CTOR( x__, T__, ... ) RAII_CTOR_WITH_LINE( __LINE__, x__, T__, __VA_ARGS__ ) #define RAII_CTOR_WITH_LINE( L__, x__, T__, ... ) struct DtorNode dtor_##T__##_##L__; if( T__##_ctor( x__, __VA_ARGS__ ) ) { dtor_##T__##_##L__.dtor = (DtorFn*)T__##_dtor; dtor_##T__##_##L__.object = x__; dtor_##T__##_##L__.next = dtorHead__; dtorHead__ = &dtor_##T__##_##L__; } else { goto failureTrap__; } #define RAII_TRAP failureTrap__: while( dtorHead__ != NULL ) { dtorHead__->dtor( dtorHead__->object ); dtorHead__ = dtorHead__->next; } there won’t be a collision in the global namespace. RAII_CTOR macro is used to invoke an object constructor. The real work is done by the RAII_CTOR_WITH_LINE, which this: bool f( /* whatever */ ) { RAII_INIT; // some code RAII_CTOR( ... ); // one or more ctor(s) return true; // everything was fine RAII_TRAP; // code below is executed only in case of error. return false; } doesn’t fit my macro requirements. Don’t worry, it is quite straightforward to hammer malloc/ free and fopen/ fclose in the _ctor/ _dtor schema. It is as simple as: #define malloc_ctor(X__,Y__) (((X__) = malloc( Y__ )) != NULL) #define malloc_dtor free #define fopen_ctor(X__,NAME__,MODE__) (((X__) = fopen( NAME__, MODE__ ))!= NULL ) #define fopen_dtor fclose Here is an example of how the code that employs my RAII macros could look: bool f( void ) { RAII_INIT; Foo foo; FILE* file; void* memory; RAII_CTOR( memory, malloc, 100 ); RAII_CTOR( file, fopen, "zippo", "w" ); RAII_CTOR( &foo, Foo, 0 ); return true; RAII_TRAP; return false; } This code has some great advantages over the solutions I presented in my old post. First, it has no explicit goto (the goto is hidden, as much as it is in any other structured statement). Then you don’t have to care about the construction order and explicitly beginning). I think that the function pointer and argument pointer juggling are a bit on (or maybe beyond) the edge of the standard compliance. I tested the code on a PC, but maybe it fails on more exotic architectures. So, what do you think?
https://www.maxpagani.org/2010/03/
CC-MAIN-2022-27
refinedweb
344
56.45
Events Events are behaviors or happenings that occur when the program is running. Events are often used in visual programming like windows and web forms. Some examples of events are clicking a mouse, typing a text in a text box, changing the selected item in a list and many more. In console applications, we can manually trigger events. You must subscribe to an event which means adding event handlers or code blocks that will be executed when a particular event happens. Multiple event handlers can subscribe to an event. When an event triggers, all the event handlers will execute. The following example shows a simple usage of events. using System; namespace EventsDemo { public delegate void MessageHandler(string message); public class Message { public event MessageHandler ShowMessage; public void DisplayMessage(string message) { Console.WriteLine(message); } public void ExecuteEvent() { ShowMessage("Hello World!"); } } public class Program { public static void Main() { Message myMessage = new Message(); myMessage.ShowMessage += new MessageHandler(myMessage.DisplayMessage); myMessage.ExecuteEvent(); } } } Example 1 – Events Demo Hello World! An event handler is a method that matches the signature of a delegate. For example, we defined a delegate that has a void return type and one string parameter. This will be the signature of the event handler method. After defining the delegate, we created a class that contains an event. To define an event, the following syntax is used. accessSpecifier event delegateType name; The delegate type is the type of delegate to be used. The delegate is used to determine the signature of the event handler that can subscribe to it. We created an event handler DisplayMessage that has the same signature as the delegate. We also created another method that will trigger the event manually. Events cannot be triggered outside the class the contains it. Therefore, we will use this method to indirectly execute it. Inside our Main method, we created a new instance of the Message class. On the next line, an event handler subscribed to our event. Notice we used the += operator which means, we are adding the event handler to the list of event handlers for an event. We created a new instance of MessageHandler delegate and inside it, we passed the name of the event handler. We then call the ExecuteEvent method to trigger the event and show the message. You will see the usefulness of events when you work with windows forms or web forms in ASP.NET.
https://compitionpoint.com/events/
CC-MAIN-2021-21
refinedweb
399
58.28
commands: 1. This command will create a service account for dashboard in the default namespace $ kubectl create serviceaccount dashboard -n default 2. This command will add the cluster binding rules to your dashboard account $ kubectl create clusterrolebinding dashboard-admin -n default \ --clusterrole=cluster-admin \ --serviceaccount=default:dashboard 3. This command will give you the token required for your dashboard login $ kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode Now copy the token and paste it in the token field on your dashboard login page. This completely depends on what roles you ...READ MORE if you need to do this is ...READ MORE One way is Init Containers. These are for ...READ MORE This article will help you. You have Kube-proxy can get out of the loop ...READ MORE Hey, You can edit or recreate it ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/19317/how-do-i-sign-in-to-kubernetes-dashboard
CC-MAIN-2022-27
refinedweb
172
67.55
2015-03-26) Private state continued KS: the "nested stuff" part of the private state implementation might be more controversial, but please note that it's separable. KS: Modified the V8 self hosted Promise to see what it would look like with "private state" AWB: the most interesting use of private state is in the typed array hierarchy, especially in terms of subclassing KS: presents zenparsing/es-private-fields/blob/master/examples/promise-before.js KS: After: zenparsing/es-private-fields/blob/master/examples/promise-after.js ARB: The implicit this in @name is not something we have done before in JS. BE: CoffeeScript set the precedence. Are you suggesting that you always have to write AWB: The semantics does not change. DD: The name is in a lexical scope AWB: We could do the same with non private. That would not be a good idea. JM: should there be symmetry between private and public properties? BE: it's part of our job to make the ergonomics sweet and competitive. KS: moving on to helper functions. If the helper function itself needs access to the private state, where can I put it? I can't put it outside the class anymore (like in promise-before.js), because then it doesn't have lexical access to the private state. So that seems to want to push that function declaration of the helper function into the class body. In the example here I'm mixing my proposals (with this::_resolve(x)), but you could have done it just as a function. ARB: Might be cleaner to introduce @name methods. Both static and prototype methods. JM: three alternatives here. What you have here, a lexically confined closure. Or, a static method that is "private". A third way is a private instance method (on the prototype). KS: so you're talking about this kind of setup class C { @x() { [email protected](); } static @x() { } } I tried to go down this path. It seems weird that I have to do ) to call they method fromx`. AWB/ARB/others: no, that seems perfectly fine to me. BE/AWB: The @x method is on the prototype. MM: The visibility is that it is only visible inside the class body. AK: I don't think [email protected]() means the right thing, because we didn't want private field access to go through prototype chains. (general acceptance that this is a deep issue) MM: I now think they should be on the instance DD: on the instance is bad ... memory-inefficient ... BE: implementation doesn't have to work that way, even if semantics do DD: I see ... so since they're private, we can lock them down enough that such an optimization is not observable EA: These methods do not need to be properties anywhere. MM: even in the weak map explanation, it's not like the closure pattern with separate-method-per-instance; it's a separate weak map per instance, but each of those weak maps points to the same function object. (discussion of how this is related to vtables) ARB: hold on, if this is private, why do you need a vtable at all... they're completely static AWB: well ... what if you reify them ... MM: reifying them is not what we're considering today; that'll be a much bigger fight. ARB: it's basically a private scope. MM: Compile time symbol table. AWB: When you start to combine subclasses the tables makes things simpler because you can combine these tables. ARB: not for methods ... for private methods it completely doesn't matter BE: yes, it boils away. When we're talking about vtable it's purely about future things like interfaces ARB: doesn't even matter for interfaces... EA: what about call and apply? ARB: I think you can do that. And that's part of why I don't think we should allow a shorthand that omits this. AWB: why would you ever do that? DD: can you use super inside private methods? MM: The home object for a @m() {} method needs to be the prototype ( C.prototype) and similarly the constructor for a static at-method. JM: Would it make more sense to think of these as functions that are set as instance properties during the instance property initialization. MM: We want class private ARB/MM: The --weak-- map explanation vs the private symbol is that the map case the function is associated with the object ....??? AWB: Or the vtable explanation. EA: What if you had C. class C { @a; m() { let somethingElse = C; // <-- [email protected](); } @x() { [email protected](); } static @x() { } } ARB: that is a very good point; I think it means we can't have the same name as both static and instance. MM: It would force internal polymorphism; it's not good to pay the cost of this when it's essentially accidental. ARB: given a random object o and you do [email protected] on it, you have no way of knowing which type of private that is referencing statically (i.e. instance method private or static method private or instance data private), and that defeats the point. MM/ARB: Do not allow them to have the same name. DD: now this is bizarre. They act too different from public methods/public static methods. EA: Kevin's original (nested declarations) idea is seeming more attractive after this exercise. KS: (presents original nested declarations idea) class C { @a; function xHelper(obj) { [email protected]; } m() { xHelper(this); } } (general admiration) ARB: Weird about this is that it looks like a public declaration. AWB: In this case you are not talking about dispatch. ARB: at least it should not be function, maybe private xHelper() { } KS: the concern is that if you use private then people would think it's a private method... BE: private function xHelper() { } might help DD: Not intuitive to programmers coming from other languages. People already use external helper functions. KS: Feedback has not been too positive. The external helper function is used in Python for example. ARB: fear that most programmers will want to use method syntax and will make helpers public just to get that Immutable Records, Tuples, Maps and Sets (Sebastian Markbåge and Lee Byron) sebmarkbage/ecmascript-immutable-data-structures LB: ImmutableMap reutrns a new Map when you mutate them. LB: Wants value semantics. Especially for == and ===. MM: As well as Object.is. BE: "NaNomali" LB: Wants deep equality so that a Map of Maps or vector/records works too. LB: Wants to work across realms and shared memory for workers. LB: Based on Typed Objects. Wants new syntax. const xy = #{x: 1, y: 2}; const xyz = #{...xy, z: 3}; const xy = #[x, y]; const xyz = #[...xy, z]; LB: Record is the realm specific constructor/wrapper (like String) BE: For value types we didn't want the wrappers. JHD: What happens if you pass these primitives into the Object function? (like Object('string')) LB: Works the same. It creates a wrapper of the correct type. EA: Are the keys always sorted? LB: Wants to allow implementations to do whatever what they want but... MM: I think you are doing the right thing since then the comparison is cleaner. AWB: Module namespace exports are sorted with the default comparison. ARB: When you implement immutable maps you do not use hash tables but ordered trees. MM: You cannot define an order for opaque object. MM: Can an immutable map refernce a non mutable object? ARB: You can specifiy that keys have to be deeply immutable. LB: Would like to have same keys as in ordinary maps. ARB: Then how would you implement this? LB: You would use a hash function/method and a trie. The hash is optional. That is why we want it implementation dependent. LB: Would have to define what order to use. ARB: That means undeterministic behavior. LB: In immutable.js the order is part of the equality. MM: If you want you could canoninicalize the order before comparing. LB: Most people do not depend on the order. MM: You could do insertion order and make === be equal but make Object.is return false if the order is not the same. ARB: Another option is to use value order where possible but use insertion order otherwise. MM: You could use hash consing. ARB: Does not help for ordering. MM: Need to make sure that you do not use the real physical address, due to mutually suspicious code in the same realm. Need to be unguessable or use the hash to achieve information about code it should not know about. YK: Maybe you want lower level primitives so that user code can implement this. LB: No. We need to define the semantics so that VM can optimize this. LB: A VM can use shared memory. It can use low level structs and memory to achieve. ARB: A VM can have efficient per object hash codes. JH: Also syntax. AWB: Per object hash code is the big missing building block that we do not want to provide. EA: It is not possible to make a user exposed hash code that is going to be as efficient as the internal hash code which might just be the address of the object. MM: For ordering you might get away with a not very good pseudo random generator. LB: It seems simpler and good enough to just stick with the insertion order for immutable maps. MM: Implied cost. The precise equality would have to take the ordering into consideration. LB: Batteries included philosophy. AWB: You can still have a standard library that can be implemented in JS. LB: Libraries can always provide more data structures. AWB: I don't think we should provide new libraries without providing the primitives that allows these to be implemented in user code. BE: What primitives are missing? LB: Assuming we have typed objects and value types, value semantics MM: Cryptographically safe pseudo-random numbers as a hash code. DH: Sharing immutable objects across workers. MM: Cryptographic pseudo random number generators. MM: The number has to be large because collision is fatal LB: Another issue is that we are creating a new value type for every record. Conclusion/Resolution - Identify the requirements. - Progress report at a future meeting. Composition Functions (Jafar presents slides) (Lots of discussion. Notes lost to a grue.) Conclusion/Resolution - Wide agreemeent that async functions are worth generalizing - Wide agreement that promises are the dominant use case and should be the default - Tricky problems with hoisting semantics, need more time to woodshed that problem - async/await is wanted/needed so we should urgently figure out if this can ever be made to work (discussion about wtf woodshed means - behind woodshed = to kill, to/inside woodshed = to spank) Additional export __ from statements (Lee Byron presents leebyron/ecmascript-more-export-from ) General agreement, suggestion to add Babel transpilation and fill out spec text, bring back at next committee meeting for a fast-track through stages. 64-bit math (Brendan Eich presents gist.github.com/BrendanEich/4294d5c212a6d2254703) Moves to stage
https://esdiscuss.org/notes/2015-03-26
CC-MAIN-2020-50
refinedweb
1,843
66.84
Now that the Office Interop API Extensions have been released, I thought I would post a complete walkthrough of a simple LINQ to DASL application. Let’s start with my fictitious Outlook calendar: This calendar shows that I have four appointments today. The appointments have been categorized as either “Work” (blue) or “Personal” (green). Suppose I would like to create an Outlook add-in that displays my personal appointments on startup. I will first create a new C#-based Outlook 2007 Add-in project in Visual Studio 2008. Next I’ll add a reference to one of the Outlook extension assemblies, which were installed as part of the Office Interop API Extensions. I’ll select version 12.0.0.0 of the assembly because I’m using Outlook 2007. Version 11.0.0.0 of the assembly would be used with Outlook 2003. Before they can be used, I have to tell the compiler to look for the extensions during build. I’ll do that through a set of using statements at the beginning of my source file. Two using statements are required: using Microsoft.Office.Interop.Outlook.Extensions; This statement brings in the Items.AsQueryable<T>() extension method that I’ll use in my LINQ expression. using Microsoft.Office.Interop.Outlook.Extensions.Linq; This statement brings in the LINQ to DASL types that form the basis for my LINQ expression. With that, I can then write my LINQ expression in the startup event handler of the add-in: private void ThisAddIn_Startup(object sender, System.EventArgs e) { Outlook.Folder folder = (Outlook.Folder) Application.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar); var appointments = from item in folder.Items.AsQueryable<Appointment>() where item.Categories.Contains(“Personal”) select item.Item; var builder = new StringBuilder(); builder.AppendLine(“Personal Appointments:”); builder.AppendLine(); foreach (var appointment in appointments) { builder.AppendLine(appointment.Subject); } MessageBox.Show(builder.ToString()); } Let’s look at this query more closely. The first clause “from item in folder.Items.AsQueryable<Appointment>()” uses the new Items.AsQueryable<T> extension method. This extension method simply returns a new instance of the ItemsSource<T> class, which implements the LINQ interfaces IQueryProvider and IQueryable. I know this folder contains only appointments, so I specify the Appointment class for the generic type. The Appointment class is the LINQ to DASL class associated with the AppointmentItem interface in Outlook. If the folder contained a mixture of Outlook item types (such as both appointments and meetings), I would either need to use the more generic OutlookItem class or use the MessageClass property in my query to restrict the types of the items returned. The second clause “where item.Categories.Contains(“Personal”)” is the “meat” of the query. This is the expression translated into a DASL query string and passed to Outlook. Outlook then returns a collection of Items matching the query string. In this case, I want Outlook to return only items where the categories property contains the string “Personal”. The where clause can contain a number of different types of expressions: - The typical set of comparisons (==, !=, <, <=, >=, >, &&, ||) - Negation (!) - Method calls on properties using String.Contains(), String.StartsWith(), and String.EndsWith() - Expressions involving user properties (e.g. item.UserProperties[“Foo”].Value) The last clause “select item.Item” specifies what items to return from the query. LINQ to DASL will wrap each item returned by Outlook with an instance of the type specified in the AsQueryable<T>() extension method. That instance can be returned as-is, or a projection on that instance can be returned instead. I want the original AppointmentItem instance returned by Outlook so I specify a simple projection that returns the Item property on the Appointment class. The select clause also determines the ultimate type of the returned data, IEnumerable<Outlook.AppointmentItem> in this case. This is what I iterate over in my foreach loop. Finally, I can hit ‘F5’ and see the results. Hopefully this helps people get started with LINQ to DASL. (If it doesn’t, please let me know what else I can cover to make things more clear.) This sample can be found on Code Gallery here.
https://blogs.msdn.microsoft.com/philliphoff/2008/02/25/linq-to-dasl-walkthrough/
CC-MAIN-2016-36
refinedweb
680
59.9
I’m dissecting a Max patch and still learning much of the basics. I’m confused by the Sends and Receives in this patch. There’s an object [s ---rec]. To my understanding that should be sending to another object that has the same argument name [r ---rec]. However in the screenshot attached has a live.text object set up as a toggle called "Mute". This toggle is sending to [s ---rec] but there are no corresponding receive objects with the same argument name. Any explanations? There is a patcher with the name "rec" though. The "Mute" button is sending information to that patch but why and how? Thanks in advance. There is probably a [r ---rec] inside the [p rec] patcher, otherwise the [s ---rec] would make no sense. Oh yes. Thank you. So you can send and receive to sub-patches. I guess I didn’t realize that. So the "—" is not some special kind of syntax in Max? Named objects (like send, receive, buffer etc.) share a global namespace. That means, if you have a [s foo] object in one device, any [r foo] object in any m4l device will receive the messages you send to [s foo]. Even if it’s a completely different m4l device (let’s say the [send] is inside a max-midi-effect and the [receive] inside a max-audio-effect. To prevent that, you can put "—" at the beginning of the name. The "—" gets replaced by a different number for each m4l device, for example the [s ---foo] will become [s 001foo] in the first device [s 002foo] in the second and so on. There’s a very short explanation here : Thanks a lot for that clarification. When Thé patch is locked, click (or cmd-click maybe)on any send / receive and you will get a list of all objects in that namespace in your patch or sub patches. Choosing one in the list will select it. oh, didn’t know Live had its own unique identifiers that --- comes from the old pluggo times, and they were just leaving it like it was. --- Log in to reply OUR PRODUCTS C74 ELSEWHERE C74 RSS Feed | © Copyright Cycling '74 | Terms & Conditions
https://cycling74.com/forums/topic/understanding-sends-and-receives/
CC-MAIN-2015-14
refinedweb
368
82.54
Routing in Phoenix Umbrella Apps Miguel Palhas Updated on ・7 min read Umbrella apps are an awesome way to structure Elixir projects. Behind the curtains, they are a very thin layer that just compiles everything to a single package. Instead of building a single large monolith, you can structure your code with multiple isolated contexts. They all get compiled and run under the same BEAM instance, so they still have access to each other. Meanwhile the conceptual separation ensures you have separate OTP apps for each of your umbrella children. And it allows you to work on each of them with a certain degree of isolation. Think of this as a poor man’s microservices solution. You don’t need to add a messaging queue or send HTTP requests between each service since they’re all actually running under the same process, but you still get some of the benefits. If you want to know more about umbrella applications, I suggest the official guide as a starter, as it clearly outlines the advantages and caveats of umbrella apps. Now let's look at a real life example where I've implemented an umbrella app. A Real Example Let’s say I’m building a website for Magic: The Gathering (MTG) cards. Which… well, I am. The idea is to create an interface where users can browse and search a database of cards. There’s also an admin panel where some administrative tasks can be performed. Clearly, each of these frontend interfaces has different requirements: - The main frontend is public while the admin side only has private access. - The admin panel may even have its own UI requirements. In this case, I’m using ex_admin for convenience. This means, even UI assets are not shared. - They mostly have completely different back-end logic as well. Only a small subset of the queries and operations can be shared between the two. - I may also want to access both of them through different URLs (e.g. use an adminsubdomain for the Admin frontend). The multiple differences between the two make it clear that it would be better for these to be two separate phoenix apps—each with its own setup. Something like this: apps/ client/ admin/ shared/ Looks Easy Enough. What’s the Issue? The problem comes when you try to figure out how to actually implement this. How do you route requests from the admin subdomain to another Phoenix app while routing other requests to the main Phoenix app? One solution would be to run each of those apps on a different port. But then, you'll either be left accessing admin.mydomain.com:4001, or you’ll need some other middle layer to abstract away that port distinction from your browser. While this may be fine for an admin page that only you will access, it doesn’t work as well for a general solution. The old school solution is to put a reverse proxy between your clients and your server. nginx does this job pretty well. But in reality, you know all this is a single Elixir application. It seems weird to need a third party server to be able to route requests to different parts of it. It also doesn’t solve the problem of local development, unless you want to run nginx locally as well, which is less than ideal. We’re Elixir developers after all, and we’re pretty smart. So let’s do this the Elixir way: Introducing a Proxy App The solution I came up with (i.e. read suggestions from similar use cases on Stack Overflow) was to create an additional umbrella child, which will be the main point of contact to the outside world. This app, which we’ll call proxy, will receive all incoming HTTP requests and forward them to the appropriate Phoenix app, based on a few simples rules. In our simple use case, requests to admin.mydomain.com will be forwarded to the admin app, and all others will be forwarded to the client app. This is a very simple phoenix app, which you can generate with mix phx.new like all the others. Dependencies will be kept to a minimum here. We only have phoenix & cowboy as external dependencies (to set up our web server), as well as the client and admin apps to which we’ll be forwarding requests: def deps do [ {:client, in_umbrella: true}, {:admin, in_umbrella: true}, {:phoenix, "~> 1.3.2"}, {:cowboy, "~> 1.0"} ] end Since this app will be the actual web server, we should disable the server setting in the other two: # apps/client/config/config.exs config :client, Client.Web.Endpoint, server: false # apps/admin/config/config.exs config :admin, Admin.Web.Endpoint, server: false # apps/proxy/config/config.exs config :proxy, Proxy.Endpoint, server: true This ensures that only the proxy app will be listening to a port. This is not mandatory but it saves you the trouble of having to define different ports for each one (remember: only one listener per port is allowed) and ensures all requests actually go through the proxy app—which is indeed the expected behavior. Leaving server: true might be useful in development or testing mode, depending on how you want to set up your environment. Setting up the Endpoint The entry point of a Phoenix app is the Endpoint module. In this case, we’ve set this to Proxy.Endpoint. Since this app really has no other responsibility, there’s no need to nest it under the Web module, as is common practice in Phoenix. However, we can strip down most things from the Endpoint module created for us by the Phoenix generator and end up with a very simple module: defmodule Proxy.Endpoint do use Phoenix.Endpoint, otp_app: :proxy @base_host_regex ~r|\.?mydomain.*$| @subdomains %{ "admin" => Admin.Web.Endpoint, "client" => Client.Web.Endpoint } @default_host Client.Web.Endpoint def init(opts), do: opts def call(conn, _) do with subdomain <- String.replace(conn.host, @base_host_regex, ""), endpoint <- Map.get(@subdomains, subdomain, @default_host) do endpoint.call(conn, endpoint.init()) end end end Let’s go over this one step at a time: @base_host_regex ~r|\.?mydomain.*$| This is used to extract the subdomain part of the host URL of every request. So for admin.mydomain.com we want to get the string admin and for mydomain.com we will end up with an empty string (meaning, we’ll forward this to the default app. More on that later). Notice that this doesn’t exactly match the .com part. This is a convenience change I made for local development. Matching on mydomain.* allows me to use admin.mydomain.lvh.me when working on my local machine, and still have this whole logic working without making development-specific changes. If you don’t know what lvh.me is, this article might be helpful (TL;DR: It’s a development service that resolves its name to localhost). With the above regex in mind, the next part should be easy to understand: @subdomains %{ "admin" => Admin.Web.Endpoint, "client" => Client.Web.Endpoint } @default_host Client.Web.Endpoint For every subdomain, we want to match a particular Phoenix endpoint belonging to the app that we want to forward the request to. @default_host is what we’ll use if the subdomain is missing (the empty string scenario we talked above). def call(conn, _) do with subdomain <- String.replace(host, @base_host_regex, ""), endpoint <- Map.get(@subdomains, subdomain, @default_host) do endpoint.call(conn, endpoint.init()) end end When this endpoint—which is actually not much more than an Elixir Plug—is called, we just grab the subdomain from the request host, then find the matching endpoint from our mapping (defaulting to @default_host), and call endpoint.call/2 on it. This is essentially delegating the call down to the appropriate app. Now client and admin both have to only worry about their corresponding requests and authentication. All logic related to the multiple subdomains & clients we may need is abstracted away in this app. Want a new client in the same umbrella? Add it here! Want the same endpoint to respond to additional subdomains? Add it here! Taking the routing even further By adding a smart router to our umbrella application, we’re now able to serve requests to different subdomains to different apps in our umbrella application. I first implemented this pattern on a pet project of mine, but have since used and improved it on a few production projects as well. We could take this much further. For example, if you’re migrating an existing service from Ruby to Elixir. You can have this proxy application route all requests made to the Ruby version of your service redirected back to the Ruby application, ensuring backward-compatibility. Or you may want the opposite scenario, where you’re creating a new API service and want to forward matching requests to a different client or even to a different web server altogether. We can also take the routing complexity to another level. Routing was done here based solely on the subdomain of the request. But depending on your needs, you can create more complex routing rules using HTTP headers or query parameters. All of this can be done while keeping your actual web services completely oblivious to it. We had a blast 💥thinking about all the possibilities of Umbrellas and Routing. We hope it set your mind on 🔥fire as well.. Thanks for stopping by ❤️ I love this post! I'm so excited to implement this myself - the "poor man's microservice" is sweet. One question - the first withstatement uses a variable hostwhich doesn't appear to be in the scope. Can you explain please? Thanks for the feedback! It appears you caught a typo in the post. It was suposed to be conn.host. Already edited with a fix :) Darn, I was hoping there was some Elixir magic I was unaware of!
https://dev.to/appsignal/routing-in-phoenix-umbrella-apps-3g0j
CC-MAIN-2019-35
refinedweb
1,649
66.74
Maintains a trail of topology or spatial objects in time-order. More... #include <vcl_vector.h> #include <vcl_deque.h> #include <vtol/vtol_topology_object_sptr.h> #include <vsol/vsol_spatial_object_2d_sptr.h> Go to the source code of this file. Maintains a trail of topology or spatial objects in time-order. The idea is to display video segmentation for a given number of frames in a finite queue. The most recent frame result is added and the oldest result is removed. Creates the impression of a feature-time trail in the current frame display Modifications: J.L. Mundy March 15, 2003 Initial version. Definition in file vvid_frame_trail.h.
http://public.kitware.com/vxl/doc/release/contrib/brl/vvid/html/vvid__frame__trail_8h.html
crawl-003
refinedweb
102
54.59
Adding a license to source files July 6, 2012 Leave a comment A few years ago I remember seeing a blog post by a person interviewing potential new developers. He said that when the prospect featured Java prominently on their resume, he would make sure to give them a programming test that would be easy to solve in a different language but hard in Java, just to see how they handled it. He normally used some sort of file manipulation as an example, because Java makes that particularly challenging while scripting languages often make those problems simple. About a week ago, someone contacted me about the source code from my book, Making Java Groovy. The source code is located at my GitHub repository,. The requester noted that I didn’t have any sort of license file on my code and wondered what the terms were. Leaving aside the wonder that (1) somebody found the book code useful (ack! humblebrag alert!) and (2) he actually asked permission to use it, it occurred to me I really ought to have something in place for that eventuality. I asked my editor at Manning about it and she didn’t answer right away, so I interpreted that as freedom to do whatever I wanted. 🙂 A friend on a mailing list suggested that the Apache 2 license is appropriate if I don’t care too much how the code is used (and I don’t), so I decided to add that license to each source file. That brings me, at long last, to the original subject of this post: how do I add a license statement to the top of a large number of source files nested in many subdirectories? I thought I would solve the problem with the eachFileRecurse method that Groovy adds to Java’s java.io.File class. I quickly realized, though, that there were directories I wanted to skip, and that lead me to the traverse method, which takes a Map of parameters. Here’s the result: import static groovy.io.FileType.* import static groovy.io.FileVisitResult.* String license = '''/* =================================================== * Copyright 2012 Kousen. * ========================================================== */ ''' dir = '/Users/kousen/mjg' new File(dir).traverse( type : FILES, nameFilter : ~/.*(java|groovy)$/, preDir : { if (it.name == '.metadata') return SKIP_SUBTREE }) { file -> // only add license if not already there if (!file.text.contains(license)) { def source = file.text file.text = "$license$source" } assert file.text.contains(license) } I used static imports for the FileType and FileVisitResult classes. The FILES constant comes from FileType, and the SKIP_SUBTREE constant comes from FileVisitResult. The parameters I used return only files whose name ends in either ‘java’ or ‘groovy’ and aren’t in any directory tree including ‘.metadata’. Ultimately everything is based on the getText and setText methods that the Groovy JDK adds to the java.io.File class. Both are called using the text property. The getText method returns all the existing source code, and the setText method acts as an alias for the write method, which automatically closes (and therefore flushes) the file when finished. I used a multiline string for the license and included the training carriage return, so writing the license followed by the source did the trick. The documentation for these methods is, shall we say, a little thin. I therefore did what I normally do in these situations: I found the test case in the Groovy source distribution. The test in question is called FileTest and can be viewed in the Groovy GitHub repository here. The test cases showed how to use all of the methods, including traverse, so it was just a question of looking for the right example. (Incidentally, one of the less publicized but really sweet features of GitHub is the code browser. Just find the project you want and dig into the directories until you find a file, and then GitHub provides syntax highlighting and everything. It’s a great, great feature, especially if you don’t want to clone the source of every project you care about onto your own local disk.) Since I hadn’t known about the traverse method ahead of time, and I messed up the regular expressions for a while (sigh), solving the problem took longer than I expected. Still, it’s hard to beat a solution that takes less than a dozen lines. Hopefully someone else will find this helpful as well. And regarding the job interview situation described above, to paraphrase John Lennon in the rooftop concert at the end of Let It Be, on behalf of Groovy and myself, I hope I passed the audition. Recent Comments
https://kousenit.org/2012/07/06/adding-a-license-to-source-files/
CC-MAIN-2017-34
refinedweb
761
69.92
<1774691105.20030306192847@...> > * On the subject of "where is our 61440 dimensions ?" > * Sent on Thu, 6 Mar 2003 19:28:47 +1000 > * Honorable Arseny Slobodjuck <ampy@...> writes: > > Does anybody knows where 61440 dimensions gone ? Clisp for Win32 > 2.30 reports array-rank-limit 65536 while current CVS only 4096. CLISP pushes all dimensions on the STACK in some functions, so it cannot be more than the permissible stack depth. See my 2003-01-23 patch: (arrayrank_limit_1): set to lp_limit_1 because array_dimensions() pushes the dimensions on the STACK -- Sam Steingold () running RedHat8 GNU/Linux <> <> <> <> <> Those who don't know lisp are destined to reinvent it, poorly. > * In message <1214614365.20030306192730@...> > * On the subject of "pathname.d patch" > * Sent on Thu, 6 Mar 2003 19:27:30 +1000 > * Honorable Arseny Slobodjuck <ampy@...> writes: > > I repaired assure-dir-exists for win32 (remember the failed test I > mentioned recently). I can commit it now or wait for when Peter > finishes his great effort. please do not wait. these things are orthogonal. (just make sure you do not introduce things like label: var type name; that he eliminated) -- Sam Steingold () running RedHat8 GNU/Linux <> <> <> <> <> Sex is like air. It's only a big deal if you can't get any. Hi, Does anybody knows where 61440 dimensions gone ? Clisp for Win32 2.30 reports array-rank-limit 65536 while current CVS only 4096. I personally currently not need it, but is this intended or this is an accident ? -- Best regards, Arseny Hi, My primary mail box is broken, so I repeating the messages: I repaired assure-dir-exists for win32 (remember the failed test I mentioned recently). I can commit it now or wait for when Peter finishes his great effort. -- Best regards, Arseny Hi, Sam Steingold wrote: >the impnotes say: > 30.3.5. Operations on foreign places >it appears that (SLOT var foo) would not work - it will require > (SLOT (FOREIGN-VALUE var) foo) > >what is correct? the impnotes or the sources? I realized after writing my article on variable-length FFI arrays that I probably did not understand what you mean here. The documentation is always correct (at least its spirit, if not the letter :). You do not provide enough information on VAR to answer your question precisely. If VAR denotes a foreign place, as e.g. defined by DEF-C-VAR, then (SLOT VAR foo) is the right thing. If it does not denote a foreign place, then what is it? If var is a foreign-variable object , then it looks completely weird to be abliged to write (slot (foreign-value fv) foo). Mentally this could be translated as "dereference the whole thing in order to keep only the one slot" - plain crazy. By contrast, (slot* fv foo) would not have this mental interpretation, just "extract slot of thing referenced by foreign-variable". Regards, Jorg Hohle. > * In message <002901c2e2ea$49ed5370$9880520c@...> > * On the subject of "MSVC7 (Visual Studio .NET) makefile changes" > * Sent on Tue, 4 Mar 2003 23:38:59 -0800 > * Honorable "Jay Kint" <jkint@...> writes: > > Included is a diff between the current version of makefile.msvc5 > necessary to make CLISP compile and link against the Microsoft Visual > C++ 7 toolchain. They were minor. these makefiles are generated by src/makemake.in. I will modify it to produce a proper makefile.msvc7, according to the patch you sent us. > Also necessary was a source code change to a Microsoft header file in > order to avoid a potentially more sweeping change to the CLISP source. > This change will not affect any current VC++ source code as it was > simply an inert variable in a structure. (See for yourself). Yuk. please try the appended patch instead. > Peter, as to the C99 compiler compliance, I don't know, but I can find > out for you. I imagine it isn't. Microsoft is not know for strict > compliance, although this compiler is much more C++ compliant than any > other to date they've produced. Could you please also check that the MS CPP is now good enough and we do not need to carry cccp around? makemake.in carries the following note: "The MSVC4 preprocessor is not usable because of its treatment of empty macro arguments". Another note it carries says "-O1 and -O2 are buggy in msvc5.0". Could you please check whether this still applies to msvc7? Please try replacing "-Os -Oy -Ob1 -Gs -Gf -Gy" with "-O2", "-O1" ("-O3"?:-) and see how it affects the performance. Thanks! -- Sam Steingold () running RedHat8 GNU/Linux <> <> <> <> <> History doesn't repeat itself, but historians do repeat each other. --- win32.d.~1.31.~ 2003-03-03 12:22:53.000000000 -0500 +++ win32.d 2003-03-05 20:03:00.000000000 -0500 @@ -33,6 +33,10 @@ #undef ULONG #undef ULONGLONG #define unused (void) + #elif _MSC_VER >= 1300 + #undef unused + #include <windows.h> + #define unused (void) #else #include <windows.h> #endif I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=200303&viewday=6
CC-MAIN-2017-13
refinedweb
852
67.86
Cause a window to share buffers which have been created for or attached to another window. #include <screen/screen.h> int screen_share_window_buffers(screen_window_t win, screen_window_t share) The handle of the window that will be sharing the buffer(s) owned by another window. The handle of the window whose buffer(s) is to be shared.() or attached with screen_attach_window_buffer()... Any window property, such as SCREEN_PROPERTY_FORMAT, SCREEN_PROPERTY_USAGE, and SCREEN_PROPERTY_BUFFER_SIZE, which was set prior to calling screen_share_window_buffers(), is ignored and reset to the values of the parent window. 0 if the windows are sharing buffers, or -1 if an error occurred (errno is set; refer to /usr/include/errno.h for more details). Note that the error may also have been caused by any delayed execution function that's just been flushed.
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.screen/topic/screen_share_window_buffers.html
CC-MAIN-2018-13
refinedweb
129
56.25
Last month young publisher glasshaus released "Practical XML for the Web," a book "for Web professionals of all levels who want to know how XML can be put to great practical use in their work today." I was already pleasantly surprised when the package arrived... Rather than choosing the prevalent cumbersome format of a 1000+ page bible with pointless appendixes, a slim padded envelope revealed a 440 page book, thinner than my ThinkPad. The appendixes are real case studies, not XML references or Javadoc printouts. A first look at the Table of Contents shows a logical flow of topics from a concise and brief introduction of XML, to its implementation and use in the contemporary browsers, all the way to server-side applications. Most chapters set out by defining the ground they intend to cover, and close with a summary of the addressed topics. The introduction covers XML syntax, rules, constructs, namespaces and definition through DTDs and Schemas in a mere 36 pages, with many pointers to external resources for the more esoteric details. Another roughly 20 pages are dedicated to introducing the main areas of concern for XML on the Web: Parsing, displaying and linking XML documents. Light reading for everybody who has ever ploughed through books with hundreds of pages of reproduced material from the W3C standards. Chapter 2 covers the most important display-related standards XHTML, MathML, and SVG in some depth, combined with a quick mentioning of RSS, XForms, VoiceXML and database formats. Again, pointers to Web resources and other books give more depth to the presented content where required. The next chapter deals with the support of XML in the current versions of the major Web browsers, being defined as While some warnings are present regarding compatibility with other browsers, the obvious question for every Web developer is neither answered nor even mentioned: "How do I design XML on the Web for ALL to see?" While there certainly is no easy answer, this vital topic for "practical" XML should not be overlooked. The situation became even more complicated in recent days, with the arrival of new browsers: Netscape 7, and more significantly Apple's Safari browser for OS X, derived from the Konqueror Web browser of the KDE desktop for Linux, which is a whole different codebase altogether. While I have only one easy answer (ignore the low-marketshare browsers, if you had to ask) I want to at least list the problems you face, should you choose the hard way: But back to the challenges of XML development... Produced by Michael Claßen URL: Created: Feb 03, 2003 Revised: Feb 03, 2003
http://www.webreference.com/xml/column74/
crawl-002
refinedweb
437
52.43
Hello I want to array with key and valu in vb.net as like in php $array['name']="xyz"; $array['surname']="abc"; so is this possible ? please any suggestion. Thanks in advance >I want to array with key and valu in vb.net as like in php. so is this possible ? No You may use Hashtable class from System.Collections namespace for that purpose. Hashtable instance represents data in key-value pair form. Here is a sample: ..... Dim rec As New System.Collections.Hashtable rec.Add("roll", 10) rec("name") = "Ronny" rec("bdate") = Date.Parse("12-31-1990") For Each n In rec.Values Console.WriteLine(n) Next .....
https://www.daniweb.com/programming/software-development/threads/197048/array-in-vb-net
CC-MAIN-2019-30
refinedweb
108
73.13
It’s been a long time coming, but we have now published updated and new NuGet packages for RIA Services, including an update to RIAServices.EntityFramework that adds support for EntityFramework 5.0.0 (including all versions >= 4.1). Each of the following NuGet packages was uploaded to nuget.org today with the version number of 4.2.0. Aside from improving the EntityFramework support with a detailed README, NuGet packaging updates, and the creation of a couple of new packages from existing Toolkit assets, there are no further updates across the features. I do hope the new and updates packages are helpful though! RIAServices.EntityFramework provides the DbDomainService<T> class which can be used to create Domain Service classes for use with a DbContext from the EntityFramework package. This package supports versions of EntityFramework starting with EF 4.1, including EntityFramework 5.0.0. Special thanks to the EF Team’s Brice Lambson, as this package would not have been possible without his work on it. The package includes a README.txt that NuGet will open automatically, as there are some known installation issues with the package, but they all have straight-forward solutions within your web.config file. For more information, Varun Puranik’s blog post from when this package was initially published is still applicable. Note that while the NuGet packages are now compatible, there may be features in EF 5.0+ that will not be compatible with RIA Services due to how it serializes entities. Unless a property’s data type is known to RIA Services, it is unable to serialize the property. Therefore, properties such as Geometry will not be supported.. RIAServices.LinqToSql provides the LinqToSqlDomainService<T> class which can be used to create Domain Service classes for use with a DataContext class from Linq to Sql. RIAServices.WindowsAzure provides the TableDomainService<T> class, as well as the TableEntity and TableEntityContext classes that can be used to create Domain Services backed by Windows Azure Table Storage. For more information, Kyle McClellan’s blog has some good documentation: RIAServices.Server provides the System.ServiceModel.DomainServices.Hosting and System.ServiceModel.DomainServices.Server assemblies in place of the GAC references, making bin-deployment easier. The web.config.transform adds the necessary configSections, HTTP modules, and system.serviceModel elements to make your Domain Services available at runtime. The package also includes a targets file and tools assembly that provides build-time validation of your DomainService classes. This package can be used instead of going through the Add New Domain Service wizard to prime your ASP.NET web application with the configuration necessary to utilize WCF RIA Services on the Server. There is one known issue, where some users have experienced duplicate web.config entries getting created as a result of installing this package. To resolve this issue, simply eliminate one of the duplicate entries -- they vary only by character case. RIAServices.Endpoints provides the Microsoft.ServiceModel.DomainServices.Hosting assembly, which serves the SOAP and JSON endpoints for Domain Services. The necessary web.config entries are added with this package as well. RIAServices.T4 provides the CSharpClientCodeGenerator class, as well as many supporting classes, which can be used to override the existing code generation pattern for RIA Services. For more information, Varun Puranik’s blog has a couple of posts: RIAServices.UnitTesting provides a DomainServiceTestHost that can be used to unit test your Domain Services. For more information, Kyle McClellan has a few blog posts on this package: RIAServices.WebForms contains the ASP.NET DomainDataSource and DomainValidator controls for using a DomainService in an ASP.NET Web Forms application. Monday, December 10, 2012 5:28 PM You mixed EF 5.0 with 4.0 didn't you? @Shimmy,I'm not sure what you mean. We now have support for EF 5.0, but we also create a new NuGet package for EF 4.0 support to make it clear that one package only supports DbContext models while the other only supports ObjectContext models. So, I can use Enums in RiaServices ?? I second that last comment, can I now use enums with RiaServices?This toolkit update is very helpful anyway as it enables us to update to EF5 and .NET 4.5, so thank you! Thank you Jeff!!! Long live RIA Services Awesome work! I have a huge project that I converted in 6 hours. Everything is testing out beautifully. Thanks for all the hard work. Thanks a lot! I converted my project during day. You made me happy! Hi Jeff,I'm so happy to hear the new version of WCF RIA Services. (renamed to RIA Services?)When I try to install RIAServices.Toolkit. All from NuGet I get the following error:'RIAServices.Server' not installed. Attempting to retrieve dependency from source...Done.'RIAServices.WindowsAzure' not installed. Attempting to retrieve dependency from source...Done.The element 'dependencies' in namespace 'schemas.microsoft.com/packaging/2010/07/nuspec.xsd" title="schemas.microsoft.com/packaging/2010/07/nuspec.xsd">schemas.microsoft.com/packaging/2010/07/nuspec.xsd' has incomplete content. List of possible elements expected: 'dependency' in namespace 'schemas.microsoft.com/packaging/2010/07/nuspec.xsd" title="schemas.microsoft.com/packaging/2010/07/nuspec.xsd">schemas.microsoft.com/packaging/2010/07/nuspec.xsd'.What is the case of this error? Can you help me?Thanks,Tom Yes, Enums are supported!Here is a sample project showing Enums: @Tom, It sounds like you need to update the version of NuGet you have on your machine.In Visual Studio, in the Extensions and Updates, you should see that there's an update available for NuGet.Alternatively, you can go to and click the button there to install the latest version.Hope this helps,Jeff Any documentation for this release? I dont know what the deal was, but I was having some issues after the upgrade so I rolled back the updates and went back to object context and my problem went away. The problem to be more specific was just trying to update a DateTime? field. It would go through all the motions after calling SaveChanges, but nothing would get updated and no errors. Just simply returned to the client like everything was good but nothing actually happened. Guess I will just go with what we have a take a look at this some other time. Thanks though. @Ron - I recommend trying to narrow it down to EntityFramework vs. RIA Services. It sounds like this might be an EntityFramework issue and not related to RIA Services. Once you get that narrowed down, it might be easier to figure out the root cause. I noticed that the package RIAServices.EntityFramework says that it is for use the DbContext code first. Is there any reason why this same package wouldn't work with database first?ThanksPhil With this new package, can we use the DomainDataSource control in XAML to connect to a DataContext that was built as a DbDomainService? I am trying a simple example using this to load a DataGrid and it appears that the context is coming up NULL and nothing is loaded in the DataGrid. Should it be working? I got it working - it was a mis-type on my part that was hard to detect when buried in my XAML. Instead of:<riaControls:DomainDataSource.DomainContext>I had typed:<riaControls:DomainDataSource.DataContext>Now it works! D'Oh! Who do I contact with a question about how to configure the web.config in a RIA Services Class Library that is referencing 2 other projects, which contain my DbContext and my POCO classes for a Code-First EF5 project? I read the readme.txt and still can't figure out why the Add New Domain Service wizard won't pick up my Code First DbContext. I am wondering if I need to change the Connection String in the Web.Config in some way for the wizard to see it?Thank you, Hi Jeff, I'm starting to have a very uninformative error every time I make a mistake on my database code the error is: "Failed to get the MetadataWorkspace for the DbContext type" then I have to spend 10 minutes or more trying to guess which is the real error. Is there any way I can get back my nice old real error messages? The error I'm getting are compile time errors.Thanks.P.S.: Already tried on the forum with no results. I want to update my Silverlight code to new RIA but I can’t find Silverlight assemblies on Nuget (System.ServiceModel.DomainServices.Client.dll and System.ServiceModel.DomainServices.Client.Web.dll). This updates only applies to server side or I’m missing something? Regarding: Failed to get the MetadataWorkspace for the DbContext typeTry running VS2012 as Administrator. What are your plans for supporting EF6? It seems to me that DbDomainService doesn't validate IValidatableObject. You have mentioned in the past jeffhandley.com/... that IValidatableObject is run on the server side. I filed a question and a test case here stackoverflow.com/... Apologies, I think I've resolved it. IValidatableObject is only checked by DomainServices if there is at least one dataannotation attribute on the POCO's metadata class or on any of its members. I think this is a bug, as DbContext does not have this requirement. Thanks! I have downloaded and quickly verifed that I can use EF4.4 with DbContext. Now that this has been out for a while, are there any problems that has been discovered or is it safe to migrate an enterprise application to DbContext?Does switching to DbContext mean that any feature in RIA Services will not work or behave differently?Best regards,Mikael to get the Business Template, can I just install some Nuget package? Or has it gone
http://jeffhandley.com/archive/2012/12/10/RIA-Services-NuGet-Package-Updates-ndash-Including-Support-for-EntityFramework.aspx
CC-MAIN-2017-13
refinedweb
1,626
59.7
Having mentioned it last time and had someone ask in the newsgroup I thought it would be a good idea to post a quick summary of the syntax available with text templating in the March release of the DSL tools. To set the scene, static class which the templating system generates. This class has at least one static method, which when executed writes out the desired output of the transformation of the template. In the parlance of the current March release we’d call this the generated code, but we’re moving the naming over to be ‘Text Templating’ and “Transformation” as it’s not just code you can generate, rather it is any text-based artefact. If you do nothing else in your template, the static method will write out all of the text in literal blocks by simply writing out the raw text using a simple WriteLine() type statement. I’m sure that all this is very familiar to users of ASP/ASP.Net, but not everyone has a web development background, so I’ll persevere. Control blocks You can add three types of control block in this release: <% %> – Regular control block Embeds some control code in the static method. Regular control blocks affect the literal blocks that they surround by means of the flow of control within them. This happens because the regular blocks’ code is written verbatim into the static method. So if you have a regular block that begins a for loop that loops five times, then have a static block, then have another regular block that closes the for loop, you’ll get the contents of the static block written into the final output text five times. You’d typically use some expression blocks inside this loop to do something interesting with the loop variable. You can put anything in a regular block that you can put in the body of a static method. <%# %> – Expression control block Embeds the value of a C# expression. The expression will automatically have .ToString() appended to it with no culture specified. <%! %> – Class features control block Embeds control code at the static class scope. This block allows you to add new static methods, static fields, static properties etc to the static class. You can then use these from regular and expression control blocks. Trying to write all of your control code inside one static function can be tiresome if you have a big model to traverse – especially if the model has a recursive structure like a tree in it. Class features blocks allow you to add further static methods to the static class to make this sort of thing easier and to structure your control code well. You can also add static fields and properties to support those methods if you wish. Limitations There can only be one class feature block and it must come immediately after the directives. Code in control blocks has to be C#. We’ll add VB.Net support in our next release. Directives Directives provide instructions to the templating engine. Their syntax is: <%@ directiveName parameter=”Value” %> The current set for any template file is: <%@ generatedFile extension=”.cs” %> Specify the extension of the file that gets generated. <%@ assembly name=”System.Drawing.dll” %> Reference the assembly in compilation of control blocks. <%@ import namespace=”System.Collections” %> Import the namespace in compilation of control blocks. (i.e. a C# using statement) For .mdfomt templates: <%@ modelFile path=”UtilitiesModel.dmd” %> Load the given domain model and provide a reference to it in the property Model on the context object. For .mdfddt templates: <%@ modelFile path=”..\\SimpleArchitectureChart.dd” %> Load the given designer definition and provide a reference to it in the property Definition on the context object. Phew – OK, so maybe that wasn’t quite such a quick post! Visual Studio Team System Richard Campbell has announced that he and Carl Franklin will tape an episode… I was in Barcelona all last week, so I’m behind on email and blogs. I missed a couple of great posts… Hello, You might not have mentioned it, but custom attributes are a vital "class feature" to be able to generate. Plans?
https://blogs.msdn.microsoft.com/garethj/2005/04/06/quick-summary-of-current-dsl-tools-text-templating-syntax/
CC-MAIN-2017-09
refinedweb
687
62.78
README ¶ Casing A small Go library to switch between CamelCase, lowerCamelCase, snake_case, and kebab-case, among others. Features: - Unicode support 👩💻 - Intelligent Splitfunction - Camel, lowerCamel, snake, and kebab casing built-in - Extensible via simple composition - Optional handling of initialisms like IDand HTTP Alternatives I wrote this library because the below ones don't support unicode, or have strange edge cases, or used a different implementation approach that was less flexible / extensible. They are good projects worth linking to nonetheless, and inspired me to create this library. Due to the different approaches I decided to create a new library rather than submitting a PR to completely rewrite their projects, potentially introducing unexpected behavior or breaking their existing users. Usage The library is easy to get started with: import "github.com/danielgtaylor/casing" input := "AnyKind of_string" // Convert to different cases! fmt.Println(casing.Camel(input)) fmt.Println(casing.LowerCamel(input)) fmt.Println(casing.Snake(input)) fmt.Println(casing.Kebab(input)) // Unicode works too! fmt.Println(casing.Snake("UnsinnÜberall🎉")) If you would rather convert unicode characters to ASCII approximations, check out rainycape/unidecode as a potential preprocessor. Transforms Each casing function can also take in TransformFunc parameters that apply a transformation to each part of a split string. Here is a simple identity transform showing how to write such a function: // Identity returns the same part that was passed in. func Identity(part string) string { return part } You may notice the transform function signature matches some existing standard library signatures, for example strings.ToLower and strings.ToUpper. These can be used directly as transform functions without needing to wrap them. This supports any number of transformations before the casing join operation. For one example use-case, imagine generating Go code and you want to generate a public variable name that might contain an initialism that should be capitalized: // Generates `UserID` which passes go-lint name := casing.Camel("USER_ID", strings.ToLower, casing.Initialism) Transformations are applied in-order and provide a powerful way to customize output for many different use-cases. If no transformation function is passed in, then strings.ToLower is used by default. You can pass in the identity function to disable this behavior. Split, Join & Merge The library also exposes intelligent split and join functions along with the ability to merge some parts together before casing, which are used by the high-level casing functions above. // Returns ["Any", "Kind", "of", "STRING"] parts := casing.Split("AnyKind of_STRING") // Returns "Any.Kind.Of.String" casing.Join(parts, ".", strings.ToLower, strings.Title) There is also a function to merge numbers in some cases for nicer output. This is useful when joining with a separator and you want to prevent separators between some parts that are prefixed or suffixed with numbers. for example: // Splits into ["mp", "3", "player"] parts := casing.Split("mp3 player") // Merges into ["mp3", "player"] merged := casing.MergeNumbers(parts) // Returns "MP3.Player" casing.Join(merged, ".", strings.Title, casing.Initialism) Together these primitives make it easy to build your own custom casing if needed. Documentation ¶ Overview ¶ Package casing helps convert between CamelCase, snake_case, and others. It includes an intelligent `Split` function to take almost any input and then convert it to any type of casing. Index ¶ - func Camel(value string, transform ...TransformFunc) string - func Identity(part string) string - func Initialism(part string) string - func Join(parts []string, sep string, transform ...TransformFunc) string - func Kebab(value string, transform ...TransformFunc) string - func LowerCamel(value string, transform ...TransformFunc) string - func MergeNumbers(parts []string, suffixes ...string) []string - func Snake(value string, transform ...TransformFunc) string - func Split(value string) []string - type TransformFunc Constants ¶ This section is empty. Variables ¶ This section is empty. Functions ¶ func Camel ¶ func Camel(value string, transform ...TransformFunc) string Camel returns a CamelCase version of the input. If no transformation functions are passed in, then strings.ToLower is used. This can be disabled by passing in the identity function. func Initialism ¶ Initialism converts common initialisms like ID and HTTP to uppercase. func Join ¶ func Join(parts []string, sep string, transform ...TransformFunc) string Join will combine split parts back together with the given separator and optional transform functions. func Kebab ¶ func Kebab(value string, transform ...TransformFunc) string Kebab returns a kabob-case version of the input. func LowerCamel ¶ func LowerCamel(value string, transform ...TransformFunc) string LowerCamel returns a lowerCamelCase version of the input. func MergeNumbers ¶ MergeNumbers will merge some number parts with their adjacent letter parts to support a smarter delimited join. For example, `h264` instead of `h_264` or `mp3-player` instead of `mp-3-player`. You can pass suffixes for right aligning certain items, e.g. `K` to get `MODE_4K` instead of `MODE4_K`. If no suffixes are passed, then a default set of common ones is used. Pass an empty string to disable the default. func Snake ¶ func Snake(value string, transform ...TransformFunc) string Snake returns a snake_case version of the input. Types ¶ type TransformFunc ¶ TransformFunc is used to transform parts of a split string during joining.
https://pkg.go.dev/github.com/danielgtaylor/casing
CC-MAIN-2021-39
refinedweb
819
50.23
#include <gromacs/restraint/manager.h> Manage the Restraint potentials available for Molecular Dynamics. A simulation runner owns one manager resource to hold restraint objects used in the simulation. In the case of thread MPI simulations, multiple runner instances will have handles to the same underlying resource. With further factoring of the mdrun call stack, this facility can be combined with others into a simulation context object from which simulation code can retrieve support code for a user-configured simulation. Calling code provides the manager with a means to access the various required input data to be used when restraints are computed. Obtain the ability to create a restraint MDModule. Though the name is reminiscent of the evolving idea of a work specification, the Spec here is just a list of restraint modules. Get the number of currently managed restraints. Get a copy of the current set of restraints to be applied. This function is to be used when launching a simulation to get the restraint handles to bind, so it is not performance sensitive. A new vector is returned with each call because it is unspecified whether the set of handles point to the same objects on all threads or between calls to getRestraints.
https://manual.gromacs.org/current/doxygen/html-lib/classgmx_1_1RestraintManager.xhtml
CC-MAIN-2021-17
refinedweb
204
53
> Do not know why the patch is still not installed, although from the > discussion thread nobody oppose it indeed. Sorry, I can't remember seeing your patch before, I must have overlooked it. Is it related to? This said, I don't know much of anything about the font drivers, so I can't really comment on the substance of the code, but while waiting for someone like Jan or Handa to look at it, I can give you some trivial cosmetic recommendations (most of them documented in the "GNU Coding Standards"). If you resubmit a new patch, I recommend you send it to address@hidden (so that it gets a tracking number) or directly to address@hidden if it's related to that bug. > + struct frame *frame; /* hold frame ptr, cjk double width fix need it */ Please capitalize and punctuate your comments. > + int is_cjk; /* Flag to tell if it is CJK font or not. */ Thanks for capitalizing and punctuating this comment, this one is good. We usually prefer to two spaces after the final ".", in case you feel like polishing it yet a bit more. > + because Korean fonts may not have any Chinese characters at all. > + codes from xterm.*/ Here we do need spacing (ideally two spaces) between "." and "*/" I don't understand exactly what is meant by "codes from xterm". Maybe that should be "Code inspired by similar logic in XTerm"? > +static int > +xftfont_is_cjk_font(struct xftfont_info *xftfont_info) Always put a space before the open parenthesis. > +{ > + if(XftCharExists(xftfont_info->display, xftfont_info->xftfont, 0x4E00) || > + XftCharExists(xftfont_info->display, xftfont_info->xftfont, 0xAC00)) Same here. Additionally, please but the line before "||" rather than after. > + return 1; > + return 0; Please make the return type "bool", then. And use "true" and "false" rather than 1 and 0. Also, you can apply eta-reduction to the above code and just write return (XftCharExists (xftfont_info->display, xftfont_info->xftfont, 0x4E00) || XftCharExists (xftfont_info->display, xftfont_info->xftfont, 0xAC00)); [ Tho that would step over the 80 columns limit, so you may then want to introduce a local var to hold xftfont_info->display, maybe. ] > + if(half_width_cjk) > + *half_width_cjk = 0; I think half_width_cjk should be a pointer to "bool" and we should use "false" here. > + if( default_width == 0 || /* something wrong */ Please don't put a space after an open paren (but do put one before). > + if( char_width < default_width) { The "{" should be on a line of its own. > + } else /* get the padding, all cjk symbols is DOUBLE width */ And the "}" should also be on its own line. > + xftfont_info->is_cjk = xftfont_is_cjk_font(xftfont_info); `is_cjk' should be a `bool' field. Stefan
https://lists.gnu.org/archive/html/emacs-devel/2014-04/msg00407.html
CC-MAIN-2019-35
refinedweb
423
72.87
In MatLab if I want the first column of a matrix I simply do: - Code: Select all first = b(:,1); which will yield a new matrix with dimensions of [r,c]. However, if I try something similar in Python: - Code: Select all from numpy import * first = b[:,0] I get a new array with dimensions [r,], a one-dimensional array. This is ok, until I try to join this new array with another of same length: - Code: Select all new = hstack( (b[:,0], b[:,100:106]) ) This elicits a very angry response from Python regarding a dimension mis-match. This can be overcome, in a somewhat ugly fashion by: - Code: Select all new = hstack( (b[:,0:1], b[:,100:106]) ) Whereas in MatLab this is a very simple operation: - Code: Select all new = [b(:,1),b(:,100:105)] Question 1: Is there a more elegant way of stacking the arrays in Python? I doubt it, and this is not a big deal since I can get around this minor inconvenience. Now, when I continue on with my array work, I am hitting another issue with dimensions, this time using mean(). If I take my array and want to perform a mean over the columns (i.e. axis = 1): - Code: Select all new = mean(old,axis=1) I get, wait for it, a 1-d array! Again, I run into the same problem as above when trying to stack this "new" array with another array. I can force the array into 2-d via: - Code: Select all new = atleast_2d( mean(old,axis=1) ) # Transposes (not really a transpose since 1-d) from shape of [r,] to [1,r] new = new.T # Fixes transpose issue Again, this can be handled, but is a non-elegant way. I can't understand WHY numpy would default everything to 1-d as this is super annoying. Question 2: Is there any way to preserve the two dimensions of my array after applying the mean, without invoking atleast_2d and .T?
http://www.python-forum.org/viewtopic.php?f=6&t=1748
CC-MAIN-2015-11
refinedweb
334
64.75
Hi , Im converting a doc tat contains some latex equations (attached with this post )to HTML through aspose .Im getting extra symbols like box in between every latex equation.How can i remove tat ? do help me . Thanks in advance. Hi , Hi <?xml:namespace prefix = o Thanks for your inquiry. I can’t reproduce the problem on my side. I use the latest version of Aspose.Words (6.6.0) for testing. You can download it from here: Please attach also your HTML here for testing? Best regards, Hi , Thanks for ur reply .Im using Java .Anyway i attached my HTML with this post .wat im using is Aspose 3 .Here is my java code . " Document doc = new Document(“Latex1.doc”); doc.save("/home/anbu/latex.html",SaveFormat.HTML); " Thanks in advance. Hi Thank you for additional information. I also tried converting your document to HTML using Aspose.Words for Java, and still no problems on my side. I use the latest version of testing (3.1.1.1). As I can see, you use 3.1.0.0 version of Aspose.Words in your application, so please try upgrading to the latest version. Best regards, Hi ! Thanks for ur prompt response .Its working great in aspose 3.1.1 . Actually is it a fix in 3.1.1 ?? Bec previously i were worked in Aspose 3.1.0 and i couldn get this result . How can i get the lateset upgrade information from Aspose ? So tat i can upgrade it to latest version whenever Aspose release new version Regards, Anbu.S Hi Thanks for your inquiry. You can enable e-mail subscription here: And you will receive e-mail notification when new releases are published. Best regards, Thanks Andrey , Can you answer for the question that i asked earlier ?? " Is it a fix in 3.1.1.1 ? " I have gone through release notes of Aspose 3.1.1 and 3.1.1.1,but i could not find this fix . Hi <?xml:namespace prefix = o There were no reports regarding such problems earlier. Maybe the problem was resolved by some recent changes in the latest version. Best regards, Hi , Aspose 3.1.1.1 worked for me . Thanks .But this problem still occurs in few of my Linux machines .Is there any third-party dependency ? Should i install any package for this to work ? Thanks in advance Hi<?xml:namespace prefix = o It is perfect, that you 3.1.1 works for you. (currently there is also 3.2.0 version, so you can download and try it). No, there are no third-party dependencies in Aspose.Words. Maybe there are some problems with caching, maybe in some of your environments the older version is still used. Best regards. Hi, I did few analysis over this and come up with some points. I am doing the DOC->HTML conversion in linux machine by using aspose(we are using aspose java api’s. Since java is platform independent one). In that machine, there is no symbol.ttf font installed. This is a root cause for this problem. After i installed the font the equation comes(PNG) properly in the converted html. What is the third party you are using for the WMF->PNG conversion. It seems, the third party which is expecting symbol.ttf while doing this conversion. The symbol.ttf fonts is from microsoft. By default which is not be there in any of the linux flavour. Why do not you tell us, the dependencies which aspose needs including the third party’s dependencies even the fonts too. After spending plenty of time then only we can found this. Anyway the below forum post is helped me a lot. Hi <?xml:namespace prefix = o Thanks for your inquiry. MS Word uses symbol font in metafiles, which are used to display your equations. So it is not dependency of Aspose.Words it is dependency of your metafiles. For WMF to PNG conversion we do not use third party components. We use our own converter. Best regards. Hi , I agree with you .But on above link you have mentioned that Aspose using some third party to convert wmf to png . Between, I came to this link they says about the font embedded with the document. I embedded the font in the same document and tried. But still the problem is not solved with out installed the symbol.ttf. I am not sure about the microsoft word document format and where they keep the fonts with it. Also why the WMF->PNG not getting the fonts from the document itself. May be the fonts not available in the WMF file i guess. Aspose.Words does not support embedded fonts upon rendering metafiles. I linked your request to the appropriate issue, you will be notified as soon as this feature is supported. <?xml:namespace prefix = o Best regards. Hi , I am trying to convert doc to html containing latex equation image .But am getting broken image. Please do help me to solve this problem Regards, Anbu.S Hi <?xml:namespace prefix = o Thanks for your request. As I can see the image is not broken. Just quality of an image is poor. You can try setting resolution of an image to improve its quality. See the following code: Document doc = new Document(@"Test001\pro1.doc"); doc.SaveOptions.HtmlExportImageResolution = 200; doc.Save(@"Test001\out.html"); Hope this helps. Best regards. Hi , I am unable to solve this problem even after trying the same thing in Aspose.words.java latest build, but i am getting the same broken image. Can you update me with the version of Aspose.words.java that you are using to fix this problem. Regards, Anbu Chezhian.S Hi <?xml:namespace prefix = o Thank you for additional information. I managed to reproduce this problem with Java version of Aspsoe.Words. Your request has been linked to the appropriate issue. You will be notified as soon as it is resolved. Best regards. A fix for the issue(s) you've reported (filed as 7626) will be released in the next release at the end of this month. You will be notified. This message was posted using Notification2Forum from Downloads module by aspose.notifier. (7) The issues you have found earlier (filed as WORDSNET-2103) have been fixed in this .NET update and in this Java update. This message was posted using Notification2Forum from Downloads module by aspose.notifier.
https://forum.aspose.com/t/problem-while-importing-latex-equations/86909
CC-MAIN-2021-21
refinedweb
1,077
70.8
New submission from Raymond Hettinger <rhettinger at users.sourceforge.net>: In r59144 , a bunch of internal-use constants were dumped in the main namespace. These all need to be prefixed with an underscore. They should be fixed right-away before people start using them. Since they are externally undocumented and since the internal notes describe them as being only for internal-use, I think this can go in as a bugfix. ---------- assignee: facundobatista components: Library (Lib) messages: 78893 nosy: facundobatista, rhettinger priority: high severity: normal status: open title: Junk in the decimals namespace versions: Python 2.6, Python 2.7, Python 3.0, Python 3.1 _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________
https://mail.python.org/pipermail/new-bugs-announce/2009-January/003703.html
CC-MAIN-2018-26
refinedweb
115
60.92
: Libtool paradigm, Up: Top [Contents][Index] In the past, if you were a source code package developer and wanted to take advantage of the power of shared libraries, you needed to write custom support code for each platform on which your package ran. You also had to design a configuration interface so that the package installer could choose what sort of libraries were built. GNU Libtool simplifies your job by encapsulating both the platform-specific dependencies, and the user interface, in a single script. GNU Libtool is designed so that the complete functionality of each host type is available via a generic interface, but nasty quirks are hidden from the programmer. GNU Libtool’s consistent interface is reassuring… users don’t need to read obscure documentation to have their favorite source package build shared libraries. They just run your package configure script (or equivalent), and libtool does all the dirty work.. Next: Issues, Up: Introduction [Contents][Index]::. Previous:. Next: Using libtool, Previous: Introduction, Up: Top [Contents][Index]. Next: executables, Previous: Creating object files, Up: Using libtool [Contents][Index] Without libtool, the programmer would invoke the ar command to create a static library: burger$ ar cru libhello.a hello.o foo.o burger$ But of course, that would be too simple, so many systems require that you run the ranlib command on the resulting library (to give it better karma, or something): burger$ ranlib libhello.a burger$ It seems more natural to use the C compiler for this task, given libtool’s “libraries are programs” approach. So, on platforms without shared libraries, libtool simply acts as a wrapper for the system ar (and possibly ranlib) commands. Again, the.) So, let’s try again, this time with the library object files. Remember also that we need to add -lm to the link command line because foo.c uses the cos math library function (see Using libtool). Another complication in building shared libraries is that we need to specify the path to the directory wher they will (eventually) be installed (in this case, /usr/local/lib)1: a23$ libtool --mode=link gcc -g -O -o libhello.la foo.lo hello.lo \ -rpath /usr/local/lib -lm ar cru .libs/libhello.a foo.o hello.o ranlib .libs/libhello.a creating libhello.la (cd .libs && rm -f libhello.la && ln -s ../libhello.la libhello.la) a23$ Now, let’s try the same trick on the shared library platform: burger$ libtool --mode=link gcc -g -O -o libhello.la foo.lo hello.lo \ -rpath /usr/local/lib -lm rm -fr .libs/libhello.a .libs/libhello.la ld -Bshareable -o .libs/libhello.so.0.0 .libs/foo.o .libs/hello.o -lm ar cru .libs/libhello.a foo.o hello.o ranlib .libs/libhello.a creating libhello.la (cd .libs && rm -f libhello.la && ln -s ../libhello.la libhello.la) burger$ Now that’s significantly cooler… Libtool just ran an obscure ld command to create a shared library, as well as the static library. Note how libtool creates extra files in the .libs subdirectory, rather than the current directory. This feature is to make it easier to clean up the build directory, and to help ensure that other programs fail horribly if you accidentally forget to use libtool when you should. Again, you may want to have a look at the .la file to see what Libtool stores in it. In particular, you will see that Libtool uses this file to remember the destination directory for the library (the argument to -rpath) as well as the dependency on the math library (‘-lm’). Next: Debugging executables, Previous: Linking libraries, Up: Using libtool [Contents][Index]. Up:. Next: Installing libraries, Previous: Linking executables, Up: Using libtool [Contents][Index]$ gdb hell GDB is free software and you are welcome to distribute copies of it under certain conditions; type "show copying" to see the conditions. There is no warranty for GDB; type "show warranty" for details. GDB 4.16 (i386-unknown-netbsd), (C) 1996 Free Software Foundation, Inc. "hell": not in executable format: File format not recognized (gdb) quit burger$ Sad. It doesn’t work because GDB doesn’t know where the executable lives. So, let’s try again, by invoking GDB directly on the executable: burger$ gdb .libs /home/src/libtool/demo/.libs/hell: can't load library 'libhello.so.0' Program exited with code 020. (gdb) quit burger$ Argh. Now GDB complains because it cannot find the shared library that hell is linked against. So, we must use libtool to properly set the library path and run the debugger. Fortunately, we can forget all about the .libs directory, and just run it on the executable wrapper (see Execute mode): burger$ libtool --mode=execute gdb Breakpoint 1, main (argc=1, argv=0xbffffc40) at main.c:29 29 printf ("Welcome to GNU Hell!\n"); (gdb) quit The program is running. Quit anyway (and kill it)? (y or n) y burger$ Next:# Previous: Installing executables, Up: Using libtool [Contents][Index]. Next:. DESTDIR. For instance, if prefix. Next: Clean mode, Previous: Finish mode, Up: Invoking libtool [Contents][Index]. Previous: Uninstall mode, Up: Invoking libtool [Contents][Index] Clean mode deletes uninstalled libraries, executables, objects and libtool’s temporary files associated with them. The first mode-arg is the name of the program to use to delete files (typically /bin/rm). The remaining mode-args are either flags for the deletion program (beginning with a ‘-’), or the names of files to delete. Next: Other languages, Previous: Invoking libtool, Up: Top [Contents][Index] This chapter describes how to integrate libtool with your packages so that your users can install hassle-free shared libraries. There are several ways that Libtool may be integrated in your package, described in the following sections. Typically, the Libtool macro files as well as ltmain.sh are copied into your package using libtoolize and aclocal after setting up the configure.ac and toplevel Makefile.am, then autoconf adds the needed tests to the configure script. These individual steps are often automated with autoreconf. Here is a diagram showing how such a typical Libtool configuration works when preparing a package for distribution, assuming that m4 has been chosen as location for additional Autoconf macros, and build-aux as location for auxiliary build tools (see The Autoconf Manual in The Autoconf Manual): libtool.m4 -----. .--> aclocal.m4 -----. ltoptions.m4 ---+ .-> aclocal* -+ +--> autoconf* ltversion.m4 ---+--+ `--> [copy in m4/] --+ | ltsugar.m4 -----+ | ^ | \/ lt~obsolete.m4 -+ +-> libtoolize* -----' | configure [ltdl.m4] ------+ | | `----------------------------------' ltmain.sh -----------> libtoolize* -> [copy in build-aux/] During configuration, the libtool script is generated either through config.status or config.lt: .--> config.status* --. configure* --+ +--> libtool `--> [config.lt*] ----' ^ | ltmain.sh --------------------------------' At make run time, libtool is then invoked as needed as a wrapper around compilers, linkers, install and cleanup programs. There are alternatives choices to several parts of the setup; for example, the Libtool macro files can either be copied or symlinked into the package, or copied into aclocal.m4. As another example, an external, pre-configured libtool script may be used, by-passing most of the tests and package-specific setup for Libtool. Next:: Next: Configuring, Previous: Makefile rules, Up: Integrating libtool [Contents][Index] The Automake Manual in The Automake Manual, for more information. Next: Distributing, Previous: Using Automake, Up: Integrating libtool [Contents][Index] Libtool requires intimate knowledge of your compiler suite and operating system to be able to create shared libraries and link against them properly. When you install the libtool distribution, a system-specific libtool script is installed into your binary directory. However, when you distribute libtool with your own packages (see Distributing), you do not always know the compiler suite and operating system that are used to compile your package. For this reason, libtool must be configured before it can be used. This idea should be familiar to anybody who has used a GNU configure script. configure runs a number of tests for system features, then generates the Makefiles (and possibly a config.h header file), after which you can run make and build the package. Libtool adds its own tests to your configure script to generate a libtool script for the installer’s host machine. Next: Configure notes, Up: Configuring [Contents][Index] LT_INITmacrog++’, you can run all three configure scripts. Aside from disable-static. This option. Provision must be made to pass -no-undefined to libtool. Change the default behaviour of libtool to try to use only non-PIC objects. The user may still override this default by specifying --with-pic to configure. This macro is deprecated, the ‘dlopen’ option to LT. Program to use rather than checking for mt, the Manifest Tool. Only used on Cygwin/MS-Windows at the moment.. With 1.3 era libtool, if you wanted to know any details of what libtool had discovered about your architecture and environment, you had to run the script with --config and grep through the results. This idiom was supported up to and including 1.5.x era libtool, where it was possible to call the generated libtool script from configure.ac as soon as LT_INIT had completed. However, one of the features of libtool 1.4 was that the libtool configuration was migrated out of a separate ltconfig file, and added to the LT_INIT macro (nee AC_PROG_LIBTOOL), so the results of the configuration tests were available directly to code in configure.ac, rendering the call out to the generated libtool script obsolete. Starting with libtool 2.0, the multipass generation of the libtool script has been consolidated into a single config.status pass, which happens after all the code in configure.ac has completed. The implication of this is that the libtool script does not exist during execution of code from configure.ac, and so obviously it cannot be called for --config details anymore. If you are upgrading projects that used this idiom to libtool 2.0 or newer, you should replace those calls with direct references to the equivalent Autoconf shell variables that are set by the configure time tests before being passed to config.status for inclusion in the generated libtool script.. Because of these changes, and the runtime version compatibility checks Libtool now executes, we now advise against including a copy of libtool.m4 (and brethren) in acinclude.m4. Instead, you should set your project macro directory with AC_CONFIG_MACRO_DIRS. When you libtoolize your project, a copy of the relevant macro definitions will be placed in your AC_CONFIG_MACRO_DIRS, where aclocal can reference them directly from aclocal.m4. Previous:. Next:. Next: Autoconf and LTLIBOBJS, Up: Distributing [Contents][Index] libtoolize: Copy files from the libtool data directory rather than creating symlinks. Dump a trace of shell script execution to standard output. This produces a lot of output, so you may wish to pipe it to less (or more) or redirect to a file. Don’t run any commands that modify the file system, just print them out. Replace existing libtool files. By default, libtoolize won’t overwrite existing files. Display a help message and exit.. Normally, Libtoolize tries to diagnose use of deprecated libtool macros and other stylistic issues. If you are deliberately using outdated calling conventions, this option prevents Libtoolize from explaining how to update your project’s Libtool conventions.: ## libltdl/ltdl.mk/ltdl.mk Work silently. ‘libtoolize --quiet’ is used by GNU Automake to add libtool files to your package if necessary... Work noisily! Give a blow by blow account of what libtoolize is doing. Print libtoolize version information and exit._DIRS (see The Autoconf Manual in The Autoconf Manual) in your configure.ac, it will put the Libtool macros in the specified directory. In the future other Autotools will automatically check the contents of AC_CONFIG_MACRO_DIRS, but at the moment it is more portable to add the macro directory to ACLOCAL_AMFLAGS in Makefile.am, which is where the tools currently look. If libtoolize doesn’t see AC_CONFIG_MACRO_DIRS, it too will honour the first ‘-I’ argument in ACLOCAL_AMFLAGS when choosing a directory to store libtool configuration macros in. It is perfectly sensible to use both AC_CONFIG_MACRO_DIRS. Previous:. Previous: Distributing, Up: Integrating libtool [Contents][Index]7 distribution README: The GIMP. Next:.:. Next: Libtool versioning, Up: Versioning [Contents][Index] Interfaces for libraries may be any of the following (and more): Note that static functions do not count as interfaces, because they are not directly available to the user of the library. Next:: Release numbers, Previous: Libtool versioning, Up: Versioning [Contents][Index]. Previous: Updating version info, Up: Versioning [Contents][Index] Often, people want to encode the name of the package release into the shared library so that it is obvious to the user what package their programs are linked against. This convention is used especially on GNU/Linux: trick$ ls /usr/lib/libbfd* /usr/lib/libbfd.a /usr/lib/libbfd.so.2.7.0.2 /usr/lib/libbfd.so trick$ On ‘trick’, /usr/lib/libbfd.so is a symbolic link to libbfd.so.2.7.0.2, which was distributed as a part of ‘binutils-2.7.0.2’. Unfortunately, this convention conflicts directly with libtool’s idea of library interface versions, because the library interface rarely changes at the same time that the release number does, and the library suffix is never the same across all platforms. So, to accommodate both views, you can use the -release flag to set release information for libraries for which you do not want to use -version-info. For the libbfd example, the next release that uses libtool should be built with ‘-release 2.9.0’, which will produce the following files on GNU/Linux: trick$ ls /usr/lib/libbfd* /usr/lib/libbfd-2.9.0.so /usr/lib/libbfd.a /usr/lib/libbfd.so trick$ In this case, /usr/lib/libbfd.so is a symbolic link to libbfd-2.9.0.so. This makes it obvious that the user is dealing with ‘bin. Next:. Up:. Next: Dlopened modules, Previous: Library tips, Up: Top [Contents][Index] By definition, every shared library system provides a way for executables to depend on libraries, so that symbol resolution is deferred until runtime. An inter-library dependency is where, to guarantee that all the required libraries are found. This restriction is only necessary to preserve compatibility with static library systems and simple dynamic library systems. Some platforms, such as Windows, do not even allow you this flexibility. In order to build a shared library, it must be entirely self-contained or it must have dependencies known at link time (that is, have references only to symbols that are found in the . Next:: Dlpreopening, Up: Dlopened modules [Contents][Index]$ Next:. Previous: Finding the dlname, Up: Dlopened modules [Contents][Index]. Next:: Modules for libltdl, Up: Using libltdl [Contents][Index] The libltdl API is similar to the POSIX dlopen interface, which is very simple but powerful. To use libltdl in your program you have to include the header file ltdl.h: #include <ltdl.h> The early releases of libltdl used some symbols that violated the POSIX namespace conventions. These symbols are now deprecated, and have been replaced by those described here. If you have code that relies on the old deprecated symbol names, defining ‘LT_NON_POSIX_NAMESPACE’ before you include ltdl.h provides conversion macros. Whichever set of symbols you use, the new API is not binary compatible with the last, so you will need to recompile your application to use this version of libltdl.’ (that The following macros are defined by including ltdl.h: LT_PATHSEP_CHAR (see Dlpreopening). libltdl provides the following functions: Initialize libltdl. This function must be called before using libltdl and may be called several times. Return 0 on success, otherwise the number of errors. Shut down libltdl and close all modules. This function will only then shut down libltdl when it was called as many times as lt_dlinit has been successfully called. Return 0 on success, otherwise the number of errors. Open the module with the file name filename and return a handle for it. lt_dlopen is able to open libtool dynamic modules, preloaded static modules, the program itself and native dynamic modules10. Unresolved symbols in the module are resolved using its dependency libraries look in the following search paths for the module (in the following order): lt_dlsetsearchpath, lt_dladdsearchdirand lt_dlinsertsearchdir. LTDL_LIBRARY_PATH. LD_LIBRARY_PATH). Each search path must be a list of absolute directories separated by LT_PATHSEP_CHAR, for example, "/usr/lib/mypkg:/lib/foo". The directory names may not contain the path separator. If the same module is loaded several times, the same handle is returned. If lt_dlopen may contain additional hints to the underlying system module loader. The advise parameter is opaque and can only be accessed with the functions documented below. Note that this function does not change the content of advise, so unlike the other calls in this API takes a direct lt_dlad with this hint set causes it to try to append different file name extensions like lt_dlopenext. The following example is equivalent to calling lt_dlopenext (filename): lt_dlhandle my_dlopenext (const char *filename) { lt_dlhandle handle = 0; lt_dladvise advise; if (!lt_dladvise_init (&advise) && !lt_dladvise_ext (&advise)) handle = lt_dlopenadvise (filename, advise); lt_dladvise_destroy (&advise); return handle; } On failure, lt_dladvise_ext will return NULL. Decrement the reference count on the module handle. If it drops to zero and no other module depends on this module, then the module is unloaded. Return 0 on success. Return the address in the module handle, where the symbol given by the null-terminated string name is loaded. If the symbol cannot be found, NULL is returned. Return a human readable string describing the most recent error that occurred from any of libltdl’s functions. Return NULL if no errors have occurred since initialization or since it was last called. Append the search directory search_dir to the current user-defined library search path. Return 0 on success. Insert the search directory search_dir into the user-defined library search path, immediately before the element starting at address before. If before is ‘NULL’, then search_dir is appending as if lt_dladdsearchdir had been called. Return 0 on success. Replace the current user-defined library search path with search_path, which must be a list of absolute directories separated by LT_PATHSEP_CHAR. Return 0 on success. Return the current user-defined library search path. In some applications you may not want to load individual modules with known names, but rather find all of the modules in a set of directories and load them all during initialisation. With this function you can have libltdl scan the LT_PATHSEP_CHAR-delimited directory list in search_path for candidates, and pass them, along with data to your own callback function, func. If search_path is ‘NULL’, then search all of the standard locations that lt_dlopen would examine. This function will continue to make calls to func for each file that it discovers in search_path until one of these calls returns non-zero, or until the files are exhausted. ‘lt_dlforeachfile’ returns the value returned by the last call made to func. For example you could define func to build an ordered argv-like vector of files using data to hold the address of the start of the vector. Mark a module so that it cannot be ‘lt_dlclose’d. This can be useful if a module implements some core functionality in your project that would cause your code to crash if removed. Return 0 on success. If you use ‘lt_dlopen (NULL)’ to get a handle for the running binary, that handle will always be marked as resident, and consequently cannot be successfully ‘lt_dlclose’d. Check whether a particular module has been marked as resident, returning 1 if it has or 0 otherwise. If there is an error while executing this function, return -1 and set an error message for retrieval with lt_dlerror. Next:: User defined module data, Previous: Modules for libltdl, Up: Using libltdl [Contents][Index] Libltdl provides a wrapper around whatever dynamic run-time object loading mechanisms are provided by the host system, many of which are themselves not thread safe. Consequently libltdl cannot itself be consistently thread safe. If you wish to use libltdl in a multithreaded environment, then you must mutex lock around libltdl calls, since they may in turn be calling non-thread-safe system calls on some target hosts. Some old releases of libtool provided a mutex locking API that was unusable with POSIX threads, so callers were forced to lock around all libltdl API calls anyway. That mutex locking API was next to useless, and is not present in current releases. Some future release of libtool may provide a new POSIX thread compliant mutex locking API.: ()); Previous: Module loaders for libltdl, Up: Using libltdl [Contents][Index]12' Next: Troubleshooting, Previous: Trace interface, Up: Top [Contents][Index] This chapter covers some questions that often come up on the mailing lists. When. Next:. Next: Reporting bugs, Up: Troubleshooting [Contents][Index] Libtool comes with two integrated sets of tests to check that your build is sane, that test its capabilities, and report obvious bugs in the libtool program. These tests, too, are constantly evolving, based on past problems with libtool, and known deficiencies in other operating systems. As described in the README file, you may run make -k check after you have built libtool (possibly before you install it). The tests/cdemo subdirectory contains a demonstration of libtool convenience libraries, a mechanism that allows build-time static libraries to be created, in a way that their components can be later linked into programs or other libraries, even shared ones. The tests matching (‘--disable-shared’), and cdemo-shared.test builds only shared libraries (‘--disable-static’). The test cdemo-undef.test tests the generation of shared libraries with undefined symbols on systems that allow this. These programs check to see that the tests/demo subdirectory of the libtool distribution can be configured, built, installed, and uninstalled correctly. The tests/demo subdirectory contains a demonstration of a trivial package that uses libtool. The tests matching. The tests/depdemo subdirectory contains a demonstration of inter-library dependencies with libtool. The test programs link some interdependent libraries. The tests matching). These programs check to see that the tests/mdemo subdirectory of the libtool distribution can be configured, built, installed, and uninstalled correctly. The tests/mdemo subdirectory contains a demonstration of a package that uses libtool and the system independent dlopen wrapper libltdl to load modules. The library libltdl provides a dlopen wrapper for various platforms (POSIX) including support for dlpreopened modules (see Dlpreopening). The tests matching. The tests/f77demo tests test Fortran 77 support in libtool by creating libraries from Fortran 77 sources, and mixed Fortran and C sources, and a Fortran 77 program to use the former library, and a C program to use the latter library. These programs check to see that the tests/fcdemo subdirectory of the libtool distribution can be configured, built, and executed correctly. The tests/fcdemo is similar to the tests/f77demo directory, except that Fortran 90 is used in combination with the ‘FC’ interface provided by Autoconf and Automake. The new, Autotest-based test suite uses keywords to classify certain test groups: before the expansion of the INNER_TESTSUITEFLAGS variable (without an intervening space, so you get the chance for further delimitation). Test groups with the keyword ‘recursive’ should not be denoted with keywords, in order to avoid infinite recursion. As a consequence, recursive test groups themselves should never require user interaction, while the test groups they invoke may do so. There is a convenience target ‘check-noninteractive’ that runs all tests from both test suites that do not cause user interaction on Windows. Conversely, the target ‘check-interactive’ runs the complement of tests and might require closing popup windows about DLL load errors on Windows. Previous: Test descriptions, Up: Libtool test suite [Contents][Index] that contains information about failed tests. You can pass options to the test suite through the make variable TESTSUITEFLAGS (see The Autoconf Manual in The Autoconf Manual). Previous:). to make platform-specific changes to the configuration process. You should search that file for the PORTME keyword, which will give you some hints on what you’ll need to change. In general, all that is involved is modifying the appropriate configuration variables (see. Previous: Information sources, Up: New ports [Contents][Index] ‘$deplibs’ is included in ‘$archive_cmds’ somewhere and also sets the variable ‘$deplibs_check_method’, and maybe ‘$file_magic_cmd’ when ‘deplibs_check_method’ is file_magic. ‘deplibs_check_method’ can be one of five things: looks in the library link path for libraries that have the right libname. Then it runs ‘$file_magic_cmd’ on the library and checks for a match against the extended regular expression regex. When file_magic_test_file is set by libtool.m4, it is used as an argument to ‘$file_magic_cmd’ to verify whether the regular expression matches its output, and warn the user otherwise. just checks whether it is possible to link a program out of a list of libraries, and checks which of those are listed in the output of ldd. It is currently unused, and will probably be dropped in the future. will pass everything without any checking. This may work on platforms where code is position-independent by default and inter-library dependencies are properly supported by the dynamic linker, for example, on DEC OSF/1 3 and 4. It causes deplibs to be reassigned ‘deplibs=""’. That way ‘archive_cmds’ can contain deplibs on all platforms, but not have deplibs used unless needed. is the default for all systems unless overridden in libtool.m4. It is the same as ‘none’, but it documents that we really don’t know what the correct value should be, and we welcome patches that improve it. Then in ltmain.in we have the real workhorse: a little initialization and postprocessing (to setup/release variables for use with eval echo libname_spec etc.) and a case statement that decides the method that. Next: Platform quirks, Previous: New ports, Up: Maintaining [Contents][Index]'. Next: libtool script contents, Previous: Tested platforms, Up: Maintaining [Contents][Index]. Next: Compilers, Up: Platform quirks [Contents][Index] The following is a list of valuable documentation references: Next: Reloadable objects, Previous: References, Up: Platform quirks [Contents][Index]:. Next: Cross compiling, Previous: Multiple dependencies, Up: Platform quirks [Contents][Index] On all known systems, building a static library can be accomplished by running ar cru libname.a obj1.o obj2.o …, where the .a file is the output library, and each .o file is an object file. On all known systems, if there is a program named ranlib, then it must be used to “bless” the created library before linking against it, with the ranlib libname.a command. Some systems, like Irix, use the ar ts command, instead. Next:: Native MinGW File Name Conversion, Up: File name conversion [Contents][Index]. Next:.. Next:. Previous: LT_CYGPATH, Up: File name conversion [Contents][Index]. Previous: File name conversion, Up: Platform quirks [Contents][Index] This topic describes a couple of ways to portably create Windows Dynamic Link Libraries (DLLs). Libtool knows how to create DLLs using GNU tools and using Microsoft tools. A typical library has a “hidden” implementation with an interface described in a header file. On just about every system, the interface could be something like this: Example foo.h: #ifndef FOO_H #define FOO_H int one (void); int two (void); extern int three; #endif /* FOO_H */ And the implementation could be something like this: Example foo.c: #include "foo.h" int one (void) { return 1; } int two (void) { return three - one (); } int three = 3; When using contemporary GNU tools to create the Windows DLL, the above code will work there too, thanks to its auto-import/auto-export features. But that is not the case when using older GNU tools or perhaps more interestingly when using proprietary tools. In those cases the code will need additional decorations on the interface symbols with __declspec(dllimport) and __declspec(dllexport) depending on whether the library is built or it’s consumed and how it’s built and consumed. However, it should be noted that it would have worked also with Microsoft tools, if only the variable three hadn’t been there, due to the fact the Microsoft tools will automatically import functions (but sadly not variables) and Libtool will automatically export non-static symbols as described next. With Microsoft tools, Libtool digs through the object files that make up the library, looking for non-static symbols to automatically export. I.e., Libtool with Microsoft tools tries to mimic the auto-export feature of contemporary GNU tools. It should be noted that the GNU auto-export feature is turned off when an explicit __declspec(dllexport) is seen. The GNU tools do this to not make more symbols visible for projects that have already taken the trouble to decorate symbols. There is no similar way to limit what symbols are visible in the code when Libtool is using Microsoft tools. In order to limit symbol visibility in that case you need to use one of the options -export-symbols or -export-symbols-regex. No matching help with auto-import is provided by Libtool, which is why variables must be decorated to import them from a DLL for everything but contemporary GNU tools. As stated above, functions are automatically imported by both contemporary GNU tools and Microsoft tools, but for other proprietary tools the auto-import status of functions is unknown. When the objects that form the library are built, there are generally two copies built for each object. One copy is used when linking the DLL and one copy is used for the static library. On Windows systems, a pair of defines are commonly used to discriminate how the interface symbols should be decorated. The first define is ‘-DDLL_EXPORT’, which is automatically provided by Libtool when libtool builds the copy of the object that is destined for the DLL. The second define is ‘-DLIBFOO_BUILD’ (or similar), which is often added by the package providing the library and is used when building the library, but not when consuming the library. However, the matching double compile is not performed when consuming libraries. It is therefore not possible to reliably distinguish if the consumer is importing from a DLL or if it is going to use a static library. With contemporary GNU tools, auto-import often saves the day, but see the GNU ld documentation and its --enable-auto-import option for some corner cases when it does not (see Options specific to i386 PE targets in Using ld, the GNU linker). With Microsoft tools you typically get away with always compiling the code such that variables are expected to be imported from a DLL and functions are expected to be found in a static library. The tools will then automatically import the function from a DLL if that is where they are found. If the variables are not imported from a DLL as expected, but are found in a static library that is otherwise pulled in by some function, the linker will issue a warning (LNK4217) that a locally defined symbol is imported, but it still works. In other words, this scheme will not work to only consume variables from a library. There is also a price connected to this liberal use of imports in that an extra indirection is introduced when you are consuming the static version of the library. That extra indirection is unavoidable when the DLL is consumed, but it is not needed when consuming the static library. For older GNU tools and other proprietary tools there is no generic way to make it possible to consume either of the DLL or the static library without user intervention, the tools need to be told what is intended. One common assumption is that if a DLL is being built (‘DLL_EXPORT’ is defined) then that DLL is going to consume any dependent libraries as DLLs. If that assumption is made everywhere, it is possible to select how an end-user application is consuming libraries by adding a single flag ‘-DDLL_EXPORT’ when a DLL build is required. This is of course an all or nothing deal, either everything as DLLs or everything as static libraries. To sum up the above, the header file of the foo library needs to be changed into something like this: Modified foo.h: #ifndef FOO_H #define FOO_H #if defined _WIN32 && !defined __GNUC__ # ifdef LIBFOO_BUILD # ifdef DLL_EXPORT # define LIBFOO_SCOPE __declspec (dllexport) # define LIBFOO_SCOPE_VAR extern __declspec (dllexport) # endif # elif defined _MSC_VER # define LIBFOO_SCOPE # define LIBFOO_SCOPE_VAR extern __declspec (dllimport) # elif defined DLL_EXPORT # define LIBFOO_SCOPE __declspec (dllimport) # define LIBFOO_SCOPE_VAR extern __declspec (dllimport) # endif #endif #ifndef LIBFOO_SCOPE # define LIBFOO_SCOPE # define LIBFOO_SCOPE_VAR extern #endif LIBFOO_SCOPE int one (void); LIBFOO_SCOPE int two (void); LIBFOO_SCOPE_VAR int three; #endif /* FOO_H */ When the targets are limited to contemporary GNU tools and Microsoft tools, the above can be simplified to the following: Simplified foo.h: #ifndef FOO_H #define FOO_H #if defined _WIN32 && !defined __GNUC__ && !defined LIBFOO_BUILD # define LIBFOO_SCOPE_VAR extern __declspec (dllimport) #else # define LIBFOO_SCOPE_VAR extern #endif int one (void); int two (void); LIBFOO_SCOPE_VAR int three; #endif /* FOO_H */ This last simplified version can of course only work when Libtool is used to build the DLL, as no symbols would be exported otherwise (i.e., when using Microsoft tools). It should be noted that there are various projects that attempt to relax these requirements by various low level tricks, but they are not discussed here. Examples are FlexDLL and edll. Next: Cheap tricks, Previous: Platform quirks, Up: Maintaining [Contents][Index] libtoolscript contents Since version 1.4, the libtool script is generated by configure (see Configuring). In earlier versions, configure achieved this by calling a helper script called ltconfig. From libtool version 0.7 to 1.0, this script simply set shell variables, then sourced the libtool backend, ltmain.sh. ltconfig from libtool version 1.1 through 1.3 inlined the contents of ltmain.sh into the generated libtool, which improved performance on many systems. The tests that ltconfig used to perform are now kept in libtool.m4 where they can be written using Autoconf. This has the runtime performance benefits of inlined ltmain.sh, and improves the build time a little while considerably easing the amount of raw shell code that used to need maintaining. The convention used for naming variables that hold shell commands for delayed evaluation, is to use the suffix _cmd where a single line of valid shell script is needed, and the suffix _cmds where multiple lines of shell script may be delayed for later evaluation. By convention, _cmds variables delimit the evaluation units with the ~ character where necessary. Here is a listing of each of the configuration variables, and how they are used within ltmain.sh (see Configuring): The name of the system library archiver. The name of the compiler used to configure libtool. This will always contain the compiler for the current language (see Tags). An echo program that does not interpret backslashes as an escape character. It may be given only one argument, so due quoting is necessary. The name of the linker that libtool should use internally for reloadable linking and possibly shared libraries. The name of the C compiler and C compiler flags used to configure libtool. The name of a BSD- or MS-compatible program that produces listings of global symbols. For BSD nm, the symbols should be in one the following formats: address C global-variable-name address D global-variable-name address T global-function-name For MS dumpbin, the symbols should be in one of the following formats: counter size UNDEF notype External | global-var counter address section notype External | global-var counter address section notype () External | global-func The size of the global variables are not zero and the section of the global functions are not "UNDEF". Symbols in "pick any" sections ("pick any" appears in the section header) are not global either. Set to the name of the ranlib program, if any. The flag that is used by ‘archive_cmds’ to declare that there will be unresolved symbols in the resulting shared library. Empty, if no such flag is required. Set to ‘unsupported’ if there is no way to generate a shared library with references to symbols that aren’t defined in that library. Whether libtool should automatically generate a list of exported symbols using export_symbols_cmds before linking an archive. Set to ‘yes’ or ‘no’. Default is ‘no’. Commands used to create shared libraries, shared libraries with -export-symbols and static libraries, respectively. Specify filename containing input files for AR. If the shared library depends on a static library, ‘old_archive_from_new_cmds’ contains the commands used to create that static library. If this variable is not empty, ‘old_archive_cmds’ is not used. If a static library must be created from the export symbol list to correctly link with a shared library, ‘old_archive_from_expsyms_cmds’ contains the commands needed to create that static library. When these commands are executed, the variable soname contains the name of the shared library in question, and the ‘$objdir/$newlib’ contains the path of the static library these commands should build. After executing these commands, libtool will proceed to link against ‘$objdir/$newlib’ instead of soname. Set to ‘yes’ if the extraction of a static library requires locking the library file. This is required on Darwin. Set to the specified and canonical names of the system that libtool was built on. Whether libtool should build shared libraries on this system. Set to ‘yes’ or ‘no’. Whether libtool should build static libraries on this system. Set to ‘yes’ or ‘no’. Whether the compiler supports the -c and -o options simultaneously. Set to ‘yes’ or ‘no’. Whether the compiler has to see an object listed on the command line in order to successfully invoke the linker. If ‘no’, then a set of convenience archives or a set of object file names can be passed via linker-specific options or linker scripts. Whether dlopen is supported on the platform. Set to ‘yes’ or ‘no’. Whether it is possible to dlopen the executable itself. Set to ‘yes’ or ‘no’. Whether it is possible to dlopen the executable itself, when it is linked statically (-all-static). Set to ‘yes’ or ‘no’. List of symbols that should not be listed in the preloaded symbols. Compiler link flag that allows a dlopened shared library to reference symbols that are defined in the program. Commands to extract exported symbols from libobjs to the file export_symbols. Commands to extract the exported symbols list from a shared library. These commands are executed if there is no file ‘$objdir/$soname-def’, and should write the names of the exported symbols to that file, for the use of ‘old_archive_from_expsyms_cmds’. Determines whether libtool will privilege the installer or the developer. The assumption is that installers will seldom run programs in the build tree, and the developer will seldom install. This is only meaningful on platforms where shlibpath_overrides_runpath is not ‘yes’, so fast_install will be set to ‘needless’ in this case. If fast_install set to ‘yes’, libtool will create programs that search for installed libraries, and, if a program is run in the build tree, a new copy will be linked on-demand to use the yet-to-be-installed libraries. If set to ‘no’, libtool will create programs that use the yet-to-be-installed libraries, and will link a new copy of the program at install time. The default value is ‘yes’ or ‘needless’, depending on platform and configuration flags, and it can be turned from ‘yes’ to ‘no’ with the configure flag --disable-fast-install. On some systems, the linker always hardcodes paths to dependent libraries into the output. In this case, fast_install is never set to ‘yes’, and relinking at install time is triggered. This also means that DESTDIR installation does not work as expected. How to find potential files when deplibs_check_method is ‘file_magic’. file_magic_glob is a sed expression, and the sed instance is fed potential file names that are transformed by the file_magic_glob expression. Useful when the shell does not support the shell option nocaseglob, making want_nocaseglob inappropriate. Normally disabled (i.e. file_magic_glob is empty). Commands to tell the dynamic linker how to find shared libraries in a specific directory. Same as finish_cmds, except the commands are not displayed. A pipeline that takes the output of NM, and produces a listing of raw symbols followed by their C names. For example: $ eval "$NM progname | $global_symbol_pipe" D symbol1 C-symbol1 T symbol2 C-symbol2 C symbol3 C-symbol3 … $ The first column contains the symbol type (used to tell data from code) but its meaning is system dependent. A pipeline that translates the output of global_symbol_pipe into proper C declarations. Since some platforms, such as HP/UX, have linkers that differentiate code from data, data symbols are declared as data, and code symbols are declared as functions. Either ‘immediate’ or ‘relink’, depending on whether shared library paths can be hardcoded into executables before they are installed, or if they need to be relinked. Set to ‘yes’ or ‘no’, depending on whether the linker hardcodes directories if a library is directly specified on the command line (such as ‘dir/libname.a’) when hardcode_libdir_flag_spec is specified. Some architectures hardcode "absolute" library directories that cannot be overridden by shlibpath_var when hardcode_direct is ‘yes’. In that case set hardcode_direct_absolute to ‘yes’, or otherwise ‘no’. Whether the platform supports hardcoding of run-paths into libraries. If enabled, linking of programs will be much simpler but libraries will need to be relinked during installation. Set to ‘yes’ or ‘no’. Flag to hardcode a libdir variable into a binary, so that the dynamic linker searches libdir for shared libraries at runtime. If it is empty, libtool will try to use some other hardcoding mechanism. If the compiler only accepts a single hardcode_libdir_flag, then this variable contains the string that should separate multiple arguments to that flag. Set to ‘yes’ or ‘no’, depending on whether the linker hardcodes directories specified by -L flags into the resulting executable when hardcode_libdir_flag_spec is specified. Set to ‘yes’ or ‘no’, depending on whether the linker hardcodes directories by writing the contents of ‘$shlibpath_var’ into the resulting executable when hardcode_libdir_flag_spec is specified. Set to ‘unsupported’ if directories specified by ‘$shlibpath_var’ are searched at run time, but not at link time. Set to the specified and canonical names of the system that libtool was configured for. List of symbols that must always be exported when using export_symbols. Whether the linker adds runtime paths of dependency libraries to the runtime path list, requiring libtool to relink the output when installing. Set to ‘yes’ or ‘no’. Default is ‘no’. Permission mode override for installation of shared libraries. If the runtime linker fails to load libraries with wrong permissions, then it may fail to execute programs that are needed during installation, because these need the library that has just been installed. In this case, it is necessary to pass the mode to install with -m install_override_mode. The standard old archive suffix (normally ‘a’). The format of a library name prefix. On all Unix systems, static libraries are called ‘libname.a’, but on some systems (such as OS/2 or MS-DOS), the library is just called ‘name.a’. A list of shared library names. The first is the name of the file, the rest are symbolic links to the file. The name in the list is the file name that the linker finds when given -lname. Whether libtool must link a program against all its dependency libraries. Set to ‘yes’ or ‘no’. Default is ‘unknown’, which is a synonym for ‘yes’. Linker flag (passed through the C compiler) used to prevent dynamic linking. The release and revision from which the libtool.m4 macros were taken. This is used to ensure that macros and ltmain.sh correspond to the same Libtool version. The approximate longest command line that can be passed to ‘$SHELL’ without being truncated, as computed by ‘LT_CMD_MAX_LEN’. Whether we can dlopen modules without a ‘lib’ prefix. Set to ‘yes’ or ‘no’. By default, it is ‘unknown’, which means the same as ‘yes’, but documents that we are not really sure about it. ‘no’ means that it is possible to dlopen a module without the ‘lib’ prefix. Whether versioning is required for libraries, i.e. whether the dynamic linker requires a version suffix for all libraries. Set to ‘yes’ or ‘no’. By default, it is ‘unknown’, which means the same as ‘yes’, but documents that we are not really sure about it. Whether files must be locked to prevent conflicts when compiling simultaneously. Set to ‘yes’ or ‘no’. Specify filename containing input files for NM. Compiler flag to disable builtin functions that conflict with declaring external global symbols as char. The flag that is used by ‘archive_cmds’ to declare that there will be no unresolved symbols in the resulting shared library. Empty, if no such flag is required. The name of the directory that contains temporary libtool files. The standard object file suffix (normally ‘o’). Any additional compiler flags for building library object files. Commands run after installing a shared or static library, respectively. Commands run after uninstalling a shared or static library, respectively. Commands necessary for finishing linking programs. postlink_cmds are executed immediately after the program is linked. Any occurrence of the string @OUTPUT@ in postlink_cmds is replaced by the name of the created executable (i.e. not the wrapper, if a wrapper is generated) prior to execution. Similarly, @TOOL_OUTPUT@ is replaced by the toolchain format of @OUTPUT@. Normally disabled (i.e. postlink_cmds empty). Commands to create a reloadable object. Set reload_cmds to ‘false’ on systems that cannot create reloadable objects. The environment variable that tells the linker what directories to hardcode in the resulting executable. Indicates whether it is possible to override the hard-coded library search path of a program with an environment variable. If this is set to no, libtool may have to create two copies of a program in the build tree, one to be installed and one to be run in the build tree only. When each of these copies is created depends on the value of fast_install. The default value is ‘unknown’, which is equivalent to ‘no’. The environment variable that tells the dynamic linker where to find shared libraries. The name coded into shared libraries, if different from the real name of the file. Command to strip a shared ( striplib) or static ( old_striplib) library, respectively. If these variables are empty, the strip flag in the install mode will be ignored for libraries (see Install mode). Expression to get the run-time system library search path. Directories that appear in this list are never hard-coded into executables. Expression to get the compile-time system library search path. This variable is used by libtool when it has to test whether a certain library is shared or static. The directories listed in shlibpath_var are automatically appended to this list, every time libtool runs (i.e., not at configuration time), because some linkers use this variable to extend the library search path. Linker switches such as -L also augment the search path. Linker flag (passed through the C compiler) used to generate thread-safe libraries. ‘func_convert_file_noop’, libtool will autodetect most cases where other values should be used. On rare occasions, it may be necessary to override the autodetected value (see Cygwin to MinGW Cross). If the toolchain is not native to the build platform (e.g. if you are using some Unix to drive the scripting together with a Windows toolchain running in Wine) this variable describes how to convert file names from the format used by the build platform to the format used by the toolchain. Normally set to ‘func_convert_file_noop’. The library version numbering type. One of ‘libtool’, ‘freebsd-aout’, ‘freebsd-elf’, ‘irix’, ‘linux’, ‘osf’, ‘sunos’, ‘windows’, or ‘none’. Find potential files using the shell option nocaseglob, when deplibs_check_method is ‘file_magic’. Normally set to ‘no’. Set to ‘yes’ to enable the nocaseglob shell option when looking for potential file names in a case-insensitive manner. Compiler flag to generate shared objects from convenience archives. The C compiler flag that allows libtool to pass a flag directly to the linker. Used as: ${wl}some-flag. Variables ending in ‘_cmds’ or ‘_eval’ contain a ‘~’-separated list of commands that are evaled one after another. If any of the commands return a nonzero exit status, libtool generally exits with an error message. Variables ending in ‘_spec’ are evaled before being used by libtool. Previous: libtool script contents, Up: Maintaining [Contents][Index] Here are a few tricks that you can use) 2014. Next: Combined Index, Previous: Maintaining,] If you don’t specify an rpath, then libtool builds a libtool convenience archive, not a shared library (see Static libraries).. And why should we? main.o doesn’t directly depend on -lm after all. Don’t strip static libraries though, or they will be unusable. LT_INIT requires that you define the Makefile variable top_builddir in your Makefile.in. Automake does this automatically, but Autoconf users should set it to the relative path to the top of your build directory (../.., for example). GNU Image Manipulation Program, for those who haven’t taken the plunge. See. We used to recommend __P, __BEGIN_DECLS and __END_DECLS. This was bad advice since symbols (even preprocessor macro names) that begin with an underscore are reserved for the use of the compiler. LIBPATH on AIX, and SHLIB_PATH on HP-UX. Some platforms, notably Mac OS X, differentiate between a runtime library that cannot be opened by lt_dlopen and a dynamic module that can. For maximum portability you should try to ensure that you only pass lt_dlopen objects that have been compiled with libtool’s -module flag..
https://www.gnu.org/software/libtool/manual/libtool.html
CC-MAIN-2018-51
refinedweb
8,223
55.84
Keyboard hiding TextView I have a view with a canvas size of small portrait (iPhone) containing a TextViewin the lower half. When I enter text into the view the keyboard completely covers the TextView. It accepts text but it is not visbible. How do I make the canvas automatically shift upward in such a case? I thought that the iOS framework would do this for me. Thanks for your help! Automatic content inset adjustment can only shift the text within the limits of the TextView, which will not help in your case. The approach I use is to implement keyboard_frame_did_changeon the containing view to move the TextView back and forth to stay in view. Small problem there is that the keyboard frame provided to the method is in screen coordinates, and your view is not, and the ui.convert_pointis somehow still messed up, giving wildly unpredictable results. @JonB has implemented a replacement method, but I have found that if your views are not transformed, simply walking up the view hierarchy to adjust the y value of the top of the keyboard to your view's coordinates works well. And sample code: def keyboard_frame_did_change(self, frame): kb_y = frame[1] if self.kb_y == 0: # move TextView back to orig pos else: dy = 0 view = self while view: dy += view.y view = view.superview kb_y -= dy # Move TextView to be all above kb_y @mikael Thanks, Mikael! That was a very helpful hint. I've proceeded quite a bit with my implementation. I've created a view called EnhancedViewwhich can serve as a drop-in replacement for ui.Viewin layouts that have at least one ui.TextViewor ui.TextFieldin them. See here. The view does the following: - It overwrited the method keyboard_frame_did_changeto hook into the changes of the keyboard frame (as suggested by you). - Upon first call of the method it scans all subviews recursively to find all ui.TextView's and ui.TextField's. - The default delegatemethod of all these subviews is replaced by a delegate listening to the *_did_*_editinghooks. - When editing is started in one of the subviews, this subview stores itself as current view in the EnhancedView. - When editing is finished the view removes itself again. - For each call to keyboard_frame_did_changethe EnhancedViewchecks if a) the keyboard is visible, b) it has an active editing subview and c) the subview is at least partially hidden by the keyboard. If so, it computes a delta y offset for its boundsattribute and thus moves itself upward. In all other cases it restores the boundsto its original value (which is y=0). The only little problem left is that even the recursive scan of the delta offsets for the y coordinate does not yield the correct value on the iPad. For some reason the topmost view has an offset of y=0according to its frame. However, it is NOT at the top of screen but underneath the status bar which seems to have a height of 20 pixels. If I add those 20 extra pixels on the iPad it works fine. But on my iPhone, it actually shifts the the view to hight, although the status bar is visible there, too. Is there a way to distiguish between these cases? Thanks a lot! Is the problem tied to orientation or iPad/iPhone? If former, you can check the dimensions. If latter, use cclauss' script to find the model. @mikael For the time being I'm just checking the screen dimensions and add the offset when I find out that the app is running on an iPad. This is probably wrong but for my two cases it works. It did, however, add some more functionality to my EnhancedView: In case the app is running on an iPhone which does not offer the hide keyboard key the view will automatically show a ButtonItemon the upper right which will call end_editing()on the open view. After closing it will restore any ButtonItem's previously replaced by the keyboard ButtonItem. See this screen shot. Looks very useful. What assumptions are you making regarding the view, i.e. existence of a top bar where you can place the extra button? Was thinking about a further option where the app is fullscreen and you would maybe need to show an auxiliary keyboard key instead. You can use ui.convert_point((0,0),rootview,None) to get the screen position of the top frame. However, this has long been broken. in my uicomponents library, I had a RootView class with a replacement for this function. Quite honestly, never been tested on iPhone, but you might find it useful. Another thing you might try: use convert_point to get the relative y of your textview relative to the root view. This does work. Then simply move the rootview.frame up by that y value. Should always bring the component in question to the top of the screen. For extra credit, account for the content_offset to keep the current line at the top. @JonB Thanks for the hint! ui.convert_point((0,0),rootview,None)works for me. Whatever was wrong with it was either fixed or does not seem to bother me in my context. :-) @mikael I don't make any assumptions but the existence of the title row which is usually shown when the view is presented using View.present(). Of course, there's no way to offer the ButtonItemwhen there's no title row, but then again there's no close button either.. @marcus67: To be clear, my comment was intended for the case where you e.g. want to publish your app on the App Store, and the title bar and the close button no longer make sense. Or you don't like them for esthetic reasons, and are willing to put up with the swipe to close. @mikael Ok, now I get it. Sorry for being so slow. You're right. I haven't thought about this. I guess I would need to put the button somewhere else since I would definitely still need one until Apple finally decides to let the user dismiss the keyboard on the iPhone, too. Is the title row actually a feature which would automatically be hidden when the app is run as an appstore app? Or will I still have control over it? Disclaimer: I have not published anything to App Store. You probably have control over it, but what would the close button do? Thus I am guessing that if you publish a stand-alone app and you want the title bar, you use e.g. a NavigationView for it, or roll your own. iirc convert point was broken in some orientations in fullscreen. I had a little tester that showed this behavior. i think i tested this in 2.0. @JonB is this the beg reports that you are referring to? I just ran comparisons between doing a manual walk up the view hierarchy and using ui.convert_point. I got the same results regardless of orientation, full screen or not, title bar or not - but all on iPhone 6s Plus. try this On ipad, the problem is that in fullscreen, using convert_point with None thinks 0,0 is at the physical bottom left corner of the screen. also ui.get_keyboard_frame returns nonsense values. things work normally if converting between two views, just not view to None it seems my code to fix this issue is not working either though... so somewhere along the line the problem changed.
https://forum.omz-software.com/topic/2701/keyboard-hiding-textview
CC-MAIN-2018-05
refinedweb
1,244
74.29
In this tutorial we will check how to interact with the two buttons of the Micro:bit board, using MicroPython. Introduction In this tutorial we will check how to interact with the two buttons of the Micro:bit board, using MicroPython. The board has two buttons in the front (button A and button B) which can be used in the development of the application that will run on the Micro:bit [1]. MicroPython offers a very simple to use interface to interact with both these buttons and check if they are /were pressed. As we will see below, we have available very high level methods, which means we don’t need to worry about handling bouncing effects of the buttons or interrupts. You can check here the documentation for these functionalities. The code We will start by importing the microbit module, which will make available the objects we need to interact with the buttons from the Micro:bit board. import microbit In this module there are two instances of the Button class that we can use to obtain the status of the buttons. Since the Micro:bit board has two buttons (button A and button B), there are two instances of the Button class representing them (named button_a and button_b, respectively) [2]. To check if a button is currently being pressed, we simply need to call the is_pressed method on the Button instance we want to analyze. This method takes no arguments and returns a Boolean value indicating if the button is currently being pressed. microbit.button_a.is_pressed() microbit.button_b.is_pressed() Figure 1 shows all the possible outcomes of calling this method for both buttons. In the first invocation for button A, the button was not being pressed at the time, which is why it returned the value False. On the second invocation I was already pressing button A, which is why the is_pressed method returned True. Then I’ve done the same for button B. In the first invocation I was not pressing the button, which is why the value False was obtained, and in the second invocation I was pressing the button, which is why the value True was returned. There’s also another very interesting method on the Button class which is the was_pressed. This method takes no arguments and it will return True if the button was pressed since the Micro:bit started or since the last time the method was called. Otherwise, it will return False [3]. When we call this method, the “pressed state” will be cleared. This means that we need to press the button again for the method to return True again [3]. In other words, if we press the button once and we call the was_pressed method twice, only the first invocation will return True and the second one will return False. microbit.button_a.was_pressed() Figure 2 illustrates a test to this method. The first thing I did before sending any command was pressing button A. Then I called the was_pressed method and, as can be seen, it returned True. After that I’ve invoked this method two more times without pressing the button before. Consequently, the method now returned False both times. This happened because the first invocation cleared the “pressed state”, like we already mentioned. Finally, I’ve clicked button A again and called the method one last time. Then it returned True again, as expected. To finalize, there’s one additional method mentioned in the documentation that can be useful. This method is called get_presses and it returns the total number of button presses since this method was last called [4]. Like before, a call to the method will reset the counter and next calls without clicking the button will return 0. microbit.button_b.get_presses() Figure 3 illustrates an example of calls to this method. The first thing I did was clicking the button 3 times. Then, I have called the get_presses method, which returned exactly the number 3. After that I called the method again, without pressing the button again. This time, it returned the value 0. Then I’ve pressed the button 2 more times and called the method afterwards. It returned the value 2, as expected. References [1] [2] [3] [4]
https://techtutorialsx.com/2019/05/15/microbit-micropython-getting-the-status-of-the-buttons/?shared=email&msg=fail
CC-MAIN-2020-34
refinedweb
706
71.44
23 May 2012 15:45 [Source: ICIS news] TORONTO (ICIS)--Canadian Pacific (CP) has shut down its freight train operations across ?xml:namespace> The workers withdrew their services early on Wednesday, CP said. The suspension of CP's freight service will also impact many of the connecting railways with whom the company does business, it said. CP and labour union Teamsters Canada will continue to meet, with the assistance of a federal government conciliation and mediation service, on Wednesday, the company added. The labour dispute is mainly about work rules and pension plans. Officials at Prime Minister Stephen Harper’s government has been quick to end recent strikes – including a strike at Air CP and its competitor, Canadian National,
http://www.icis.com/Articles/2012/05/23/9563042/canadian-pacific-shuts-freight-rail-service-as-workers-begin.html
CC-MAIN-2015-14
refinedweb
119
52.19
I'm a total newbie to Java and was wondering if you can save the incoming parameter object of a web service to am XML file. I'm using netbeans and have created the following: 1) an xml schema to describe the parameters being passed by the soap message. 2) WSDL to describe my web service using the complex type defined in my schema. 3) Created my web sevice from the WSDL. Here is a snippet of my web service.... public class OrderService implements WebOrderPortType { public String getWebOrder(webservices.weborderschema.OrderType inOrder) All I want the web service to do is save the incoming object (inOrder) as an xml file in a directory. Do I need to parse the object to a DOM and then save ?? Or is there an easier way to do it? Any help would be appreciated.
https://www.javaprogrammingforums.com/java-theory-questions/3089-web-service-wsdl.html
CC-MAIN-2021-39
refinedweb
140
66.23
division 4676 Dividing two large longs while retaining precision and accuracy 2975 integer type long and division 5346 Why is 0 divided by 0 an error? 4471 Java Dividing Two Floats Getting 0 6919 Problems with division in c++ 9879 right shift a binary for division 6070 division of binary numbers with zeros 9597 How to implement long division for enormous numbers (bignums) 4509 Overflow Exception when dividing two decimals in .NET 5877 drag and drop a copy of div into another div « 1 » Hot Questions Google Drive API, can't open standard sharing dialog via JS (x-frame-options error) Symfony 1.4 backend two forms in one Method will be shown automatically with the try catch block in android studio execute code when tomcat server turns on Does ColdSpring work with MX 6? How to make multiple user types when signing up? How do you delete an Azure Web Job? Run jQuery function stored in variable What design pattern to implement in order to use a REST API? In App Purchase (Sandbox) - Receipt Validation Fails Because of curl / SSL Can't get Core Bluetooth to work in Swift iOS playground Abstracting a Method that Returns a List of Objects How to get string data in main form from second from, when button on second form is clicked in C# .net? printf format specifiers for uint32_t and size_t LAMP / Laravel - Report generation maxing out single CPU What's the current best practice for creating child objects on creation of the parent in Rails ActiveRecord models? Joining a column from another complex table in sql checking input type and work according to that MySQL query returning 0 rows assign multiple value to javaScript array object SSL - invalid CN after installing certificate (getting CN *.herokuapp.com) JQuery Ajax Binding Batch crawling youtube search Add specific value to a data.frame column by matching a pattern Input parameter passing: is there a size threshold for efficient pass-by-value? Unable to install rbenv correctly AnonymousEndpoint When send email in WSO2 ESB dealing with async code on frontend HTML unicode ☰ not detected in mobile web application menu in android chrome browser Windows XP updates still available after support ends? Get Internet Time instead of pc time in vb.net Android - Button disappear from the layout when it has transparent background handling appDelegate in my app Auto generate grid using mvccontrib by passing datatable in ASP.net MVC 2 application Creating a Kafka topic results in no leader Javascript not Firing in Chrome and Safari Lync (2013) Client SDK does not work with Skype for Business Online How to index a multidimensional R array dynamically? Can't run Chrome from Jenkins with Cygwin (de)Serialization of a custom type Cant see controls inside User Control in the VisualTreeHelper How to create rss feeds (.xml) for our own dynamic site while using php and mysql? Choppy result from DeltaTime function in JS Passing Parameters into JavaScript object events using jQuery Play Framework: How do I change active class on route change gcc gives error while using fmod() Sort Column Based on Another Column Order of Values Get list of users from SQL Anywhere v10+ without knowing password JSP java io file last date modified is epoch eventhough in FTP the file date is yesterday- tomcat6 How to check if a class or module is namespaced? How do I make a function that usually runs after an entry update the previous ones? creating a reference for script How to run multiple commands in Linux? Make a programm? network_site_url() from wp-login page WordPress not working jQuery click() not firing on tap until hover state is active iOS Background app : best way to run app load into the input file the image that was chosen before Python: how to identify existing field value and avoid updating while writing Unable to install opencv 3.1 with python 3.5, works with 2.7 only Syncing IndexedDB with Sql Server Redirect to a domain and to show it in address bar HEROKU: What to do with my dump file (PGBACKUPS) struct with list<..> in 2 dim. dynamic array segfaults on delete How to save selected rows from UITableview to NSString variable? How do I properly call methods in views? How can I tell a shortcut from a file in a C# drag and drop operation? Languages / stacks for deployment how many joins can a sql query have Whitelist Forms in HTML Purifier Configuration Generate C# code from XAML Sending data through a socket from another thread does not work in Python Odiya Language Support In Apache FOP Android Tablet: Automatically install/update app when connected to wifi Sharepoint 2010 JSOM getEnumerator 'The collection has not been initialized. It has not been requested...' Escape Sequences with Perl Unix Commands IOS Box2d version 2.1 DebugDraw not working How to disable KDE Marble map movement? Chrome extension keyboard command firing twice when popup is open Display loading message while Exporting a results set to Excel Check MySql Space How to display the the full content of a database on clicking on 'Read More' button Is it possible to use an NSMutableString inside a UITextField to optimize the setting of text? Symfony: sfError404Exception: This request has been forwarded to a 404 error page by the action Ember.js: make current user globally accessible Run TestNG XML suite from runnable jar with test classes in specific folder Query for Work Items that contain attachments of a specific type Working with Tortoise SVN plugins in C# not work Multi AJAX code Can Eclipse give me warning when casting Boolean to boolean Javascript Match on parentheses inside the string Can modern compilers unroll `for` loops expressed using begin and end iterators printing rdlc using report viewer 10.0.0.0, print button not showing How to force MSVS 2010 to use qt debug libraries in a C++ project? Any good client-server data sync frameworks available for iPhone? Wait on the Database Engine recovery handle failed!! SQL Server 2012 installation how to make iPhone app asks to register device for push notification multiple times PHP Function with Optional Parameters Ajax tool kit AutoCompleteExtender and javascript How to call a php function from javascript using ajax Binding in WPF style causes inexplicable "Cannot find governing FrameworkElement"
http://www.brokencontrollers.com/tags/division/page1.shtml
CC-MAIN-2019-30
refinedweb
1,050
54.56
I'm sure there is a smart and simplistic way to do this but I'm stuck. I simply want to extract the transcript_id field using the import re (re.findall) from the lines in the GTF (FASTA homo_sapiens). This is what I have so far: import re f = open ('Homo_sapiens.GRCh38.89.gtf', 'r') # Feed the files into findall(); it returns a list of all the found strings string = re.findall(r'transcript_id, f.read()) print transcript_id Where did I go wrong? Actually I do like this approach, thank you to share Devon. Your solution is a bit slower than regex, but definitively safer. Thanks! Hey Ryan If I want to parse the GRCH38 or 37 transcripts (model transcripts) just from chr22, to use them in kallisto analisys for reads quantifications. I could use your deeptools calling which functions? Paulo I'd use awk instead, it'd be easier for filtering.
https://www.biostars.org/p/269867/
CC-MAIN-2022-40
refinedweb
152
75.5
AnyEvent::XMPP::Ext::OOB - XEP-0066 Out of Band Data my $con = AnyEvent::XMPP::Connection->new (...); $con->add_extension (my $disco = AnyEvent::XMPP::Ext::Disco->new); $con->add_extension (my $oob = AnyEvent::XMPP::Ext::OOB->new); $disco->enable_feature ($oob->disco_feature); $oob->reg_cb (oob_recv => sub { my ($oob, $con, $node, $url) = @_; if (got ($url)) { $oob->reply_success ($con, $node); } else { $oob->reply_failure ($con, $node, 'not-found'); } }); $oob->send_url ( $con, 'someonewho@wants.an.url.com', "", "Yaww!!! Hot like SUN!", sub { my ($error) = @_; if ($error) { # then error } else { # everything fine } } ) This module provides a helper abstraction for handling out of band data as specified in XEP-0066. The object that is generated handles out of band data requests to and from others. There is are also some utility function defined to get for example the oob info from an XML element: This function extracts the URL and optionally a description field from the XML element in $node (which must be a AnyEvent::XMPP::Node). $node must be the XML node which contains the <url> and optionally <desc> element (which is eg. a <x xmlns='jabber:x:oob'> element)! (This method searches both, the jabber:x:oob and jabber:iq:oob namespaces for the <url> and <desc> elements). It returns a hash reference which should have following structure: { url => "", desc => "That was a party!", } If nothing was found this method returns nothing (undef). This is the constructor, it takes no further arguments. This method replies to the sender of the oob that the URL was retrieved successfully. $con and $node are the $con and $node arguments of the oob_recv event you want to reply to. This method replies to the sender that either the transfer was rejected or it was not fount. If the transfer was rejectes you have to set $type to 'reject', otherwise $type must be 'not-found'. $con and $node are the $con and $node arguments of the oob_recv event you want to reply to. This method sends a out of band file transfer request to $jid. $url is the URL that the otherone has to download. $desc is an optional description string (human readable) for the file pointed at by the url and can be undef when you don't want to transmit any description. $cb is a callback that will be called once the transfer is successful. The first argument to the callback will either be undef in case of success or 'reject' when the other side rejected the file or 'not-found' if the other side was unable to download the file. These events can be registered to whith reg_cb: This event is generated whenever someone wants to send you a out of band data file. $url is a hash reference like it's returned by url_from_node. $con is the AnyEvent::XMPP::Connection (Or AnyEvent::XMPP::IM::Connection) the data was received from. $node is the AnyEvent::XMPP::Node of the IQ request, you can get the senders JID from the 'from' attribute of it. If you fetched the file successfully you have to call reply_success. If you want to reject the file or couldn't get it call reply_failure.
http://search.cpan.org/~mstplbg/AnyEvent-XMPP/lib/AnyEvent/XMPP/Ext/OOB.pm
CC-MAIN-2014-52
refinedweb
520
62.88
I have a Django app called "publisher", it connects to various signals in my django project, and when it receives them it sends a message to a rabbitmq queue. What I want to do is to be able to test that my setup code is connecting to the correct signals. My app structure looks like: publisher - __init__.py - signals.py - tests.py import signals def receiver_function(*args, **kwargs): #Does rabbitmq stuff my_interesting_signal.connect(receiver_function) class SignalsTeste(TestCase): def_test_connection(self): with patch('publisher.signals.receiver_function') as receiver_mock: my_interesting_signal.application_created.send(None) self.assertEquals(receiver_mock.call_count, 1) I ran into the same mocking problem you describe. My solution is to reach into Django's signal registry and assert that my function was registered with the correct signal. Here's my test: def test_signal_registry(self): from foo.models import bar_func # The function I want to register. from django.db.models import signals registered_functions = [r[1]() for r in signals.pre_delete.receivers] self.assertIn(bar_func, registered_functions) A little explanation about that list comprehension: "pre_delete" is the instance of django.dispatch.dispatcher.Signal that I cared about in this case. You would be using your own "my_interesting_signal" in your example. Signals have an internal property called "receivers" that's a list of two-tuples, where the second element is a weakref to the function you register (hence r[1]). Calling a weakref returns the referent. I had to play around with weakrefs to figure that much out: import weakref def foo(): pass w = weakref.ref(foo) w() == foo Hope this helps.
https://codedump.io/share/At0a66cVnkts/1/testing-that-i-have-connected-to-a-particular-signal-in-django
CC-MAIN-2017-30
refinedweb
256
50.63
Tools, Tips, and Tweaks Manipulating XML at the command line with xmlstarlet In the world of open-source software, where open data formats are a necessity, XML is poised to become the de facto standard. A number of popular open-source applications already use XML as their primary data format, and many developers utilize it extensively in specialized, personal-use applications. There is a clear need for powerful and effective like to introduce this powerful utility and show you a few ways that you can use it to simplify some basic, everyday tasks. For these examples, I have constructed a simple XML file that contains information about several of the astronaut monkeys launched into space by NASA. Each monkey element contains a name attribute that specifies the name of the individual monkey, a date element that contains the date of the monkey's first flight, and a species element that describes the monkey's species. monkeys.xml > The xmlstarlet command enables users to extract information from XML content with simple XPath queries. Xmlstarlet can generate plain text or filtered XML. Let's start with a simple data extraction experiment. We will use xmlstarlet to determine how many monkeys are described in the monkeys.xml file: $ xmlstarlet sel -t -v "count(//monkey)" monkeys.xml 4 The sel instruction tells xmlstarlet that we plan to extract or filter data. The -t parameter indicates that the following parameters are part of the output template, and the -v parameter is used to output the value of an xpath expression. In this case, our xpath expression will count all the monkey element nodes. The xpath syntax is beyond the scope of this brief introduction, and interested readers can learn the entire xpath language from this helpful tutorial at the Zvon web site. Now we will generate a table that lists the name of each monkey as well as its species: $ xmlstarlet sel -t -m "//monkey" -v "species" -o " " -v "@name" -n monkeys.xml Squirrel Gordo Rhesus Able Squirrel Baker Rhesus Sam In this example, we iterate over each monkey element in the XML file, and display the relevant data. The -m parameter tells xmlstarlet to iterate over all nodes that match the provided xpath expression, which is "//monkey" in this case. The template parameters that follow the xpath expression will be evaluated and output for each matched node. In this example, we display the species element of each monkey element, as well as the name attribute. Note that the value xpath expressions all assume that the current context is the matched node, rather than the top level of the xml document: "species" is used instead of "//monkey[x]/species" . The -o parameter tells xmlstarlet to output a text string, and it is used in this example to include a space between the two values associated with each monkey. At the end of our template, we include the -n parameter, which tells xmlstarlet to include a new line character. If we omitted the -n parameter in this example, all the data would appear on one line of text. Xmlstarlet can also operate on remote XML content. Let's abandon our monkey example, and try to extract some content from the Ars Technica RSS feed: $ xmlstarlet sel --net -t -m "//item" -o "Title: " -v "title" -n -o "Author: " -v "author" -n Title: Microsoft server software to go 64-bit only Author: jeremy@arstechnica.com (Jeremy Reimer) Title: Firefox 1.5 release expected soon Author: segphault@sbcglobal.net (Ryan Paul) Title: Online DVD rentals have bright future Author: eric@arstechnica.com (Eric Bangeman) ... In this example, we include the --net parameter to tell xmlstarlet to download the XML content from a remote location. The example iterates over every item element in the XML document, and displays the title and author elements for each item. Xmlstarlet can also process remote html content. If you use the --html parameter in addition to the --net parameter, you can extract data from web sites. To generate a list of image files used in a web page, simply iterate over each img element and display the src attribute: $ xmlstarlet sel --net --html -t -m "//img" -v "@src" -n img/xmlstarlet.png /img/libxml2-logo.png Now let's try a more sophisticated example. As many of you know, the Open Document Format, which is utilized by OpenOffice 2 and other open source office applications, is based on XML. With a little bit of clever trickery, you can use xmlstarlet to extract content from your OpenOffice documents right at the command line. Open Document files are essentially compressed zip archives that contain all the relevant files associated with a document. The actual document text is stored in a file called content.xml within the archive. In order to use xmlstarlet to extract data from content.xml , you have to use the the unzip command to pipe the contents of content.xml into the xmlstarlet utility. In our next example, we will list all the headings in the document and the associated heading level values, a technique that could be used to automatically generate outlines of open documents. The Open Document format uses many different XML namespaces for different kinds of content. Various text elements use the "urn:oasis:names:tc:opendocument:xmlns:text:1.0" namespace, so we will need to use that one to get the headings. Xmlstarlet allows you to establish namespace keywords with the -N parameter. In our example, we will assign the Open Document text namespace with the keyword text: $ unzip -p test.odt content.xml | xmlstarlet sel -N text="urn:oasis:names:tc:opendocument:xmlns:text:1.0" -t -m "//text:*[@text:outline-level]" -v "@text:outline-level" -o " " -v . -n Our example iterates over every text element that has an outline-level attribute, and it displays the associated level value and the text of the node itself. Note that we do not tell xmlstarlet which file it should use for this operation, because we pipe in the relevant content. As you can see, xmlstarlet is an extremely useful tool for command line XML operations. There are many other features that I have not presented here, and interested users should take a look at the documentation for additional examples. Cool App of the Week SuperTux I don't know about you folks, but I'm a hardcore Mario fanatic. My obsession with Super Mario World for the SNES borders on religion, I have played Mario 3 so many times that I can probably beat the first world with my eyes closed, and I have unraveled virtually every hidden feature in Mario RPG. My dreams are filled with plumbers, mushrooms, and funky flying turtle things that inexplicably pursue my destruction. For all those reasons, I have way through tricky levels filled with walking bombs and evil snowballs. The current release contains all the content associated with Milestone 1, which includes 9 different enemies and 26 playable levels that feature the obligatory winter theme. Milestone 2, which is currently under active development, will add new enemies, up to 30 new levels with a forest theme, support for penguin "flapping" (doesn't that sound cute?), and internationalization support. SuperTux in action I have now beaten every level in the first world, and about a third of the levels on the bonus island. Despite a few subtle bugs and the amatuer quality art, this game is highly entertaining and woefully addictive. The developers are very creative, and some of the concept art illuminates other features planned for future releases. If you are a Mario fan, or you are looking for a fun way to waste some time on your Linux system, you might want to check out SuperTux. Warning: it will decimate your productivity, so play at your own risk. /dev/random - Gaim-vv developers claim that Google has too much control over Gaim development. - Oooh shiny! OSDir has a screenshot tour of KDE 3.5 RC 1. - Microsoft's Charles Fitzgerald thinks that open source users are "dorks." - MIT turns down free copies of OS X for its US$100 laptop project because Apple isn't willing to distribute its operating system under an open-source license. - OSTG announced a patent pledge web site that makes it easy for open source developers to find out which patents companies like IBM have made available for royalty-free usage. - Linux.com has a tutorial that introduces netcat, the hacker swiss army knife. You must login or create an account to comment.
http://arstechnica.com/information-technology/2005/11/linux-20051115/2/
CC-MAIN-2015-06
refinedweb
1,417
53.21
Non-Programmer's Tutorial for Python 3/Using Modules Here's this chapter's typing exercise (name it cal.py ( import actually looks for a file named calendar.py and reads it in. If the file is named calendar.py and it sees a "import calendar" it tries to read in itself which works poorly at best.)): import calendar year = int = int cancelled it (or the output at least continues until Ctrl+C is pressed). The program just does. Contents Exercises[edit] Rewrite the high_low.py program from section Decisions to use = int(input ("Guess a number: ")) if guess > number: print("Too high") elif guess < number: print("Too low") print("Just right") Other modules[edit] Sometimes you want to use a python module that does not come with the Python installation. You can also import those, but you have to have them installed on your computer. Creating your own module[edit] When python reads the import command, it first checks files in your directory, then site-packages or pre installed modules. To make your own module, just create a .py file in the current directory and use the command: import module This will try to import the file module.py from your current directory and if not found, from site-packages and prepackaged modules. Changing module to the name of the .py file you created will import that file. However, when it imports the module, it will basically start the file as a program, so any code on there will be run. You want to group all code into functions. The __name__ == __main__ trick[edit] In python, the variable __name__ will give you the current name of the program. If a module you import prints the __name__ variable, then it will print the name of the module. If the current file prints the __name__ variable, it will print __main__, to show it is the main program. If an if statement checks the name variable and runs code if the program is main, it can bypass the unintentional run problem created when a module is imported. Say for example you have a file, which runs some code. It also has a function you want to use in another program. However, you only want the function, not to run the code. By setting up the code below, it will only run the code if it is the file that was clicked on or started, not if it was imported. if __name__ == '__main__': pass In this instance, if the file is run but not imported, it will run the pass command. You can replace the pass command with the code you want to be run when not imported. Just remember to indent the code. The pip module[edit] The pip module is a module that comes with the python installation and acts as a module downloader/manager. You can download other modules from the internet with pip. The pip module is not used in the python interpreter, but is run through the command line. To use it, open up your command line interpreter (for Windows it is Command Prompt, for Mac/Linux it is Terminal) and type in the following code: py3 -m pip install module or the alternate code pip install module This will try to download and install module from the user-submitted python modules database. Module can be changed to the name of the module.
https://en.wikibooks.org/wiki/Non-Programmer's_Tutorial_for_Python_3.0/Using_Modules
CC-MAIN-2018-13
refinedweb
565
72.26
0.83 is a Special AUC Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. 0.83 (or more precisely 5/6) is a special Area Under the Curve (AUC), which we will show in this note. For a classification problem a good probability model has two important properties: - The model is well calibrated. When the model says there is a p-probability of being in the class, the item is in the class with a frequency close to p. - The model is useful, or is a strong signal. It doesn’t place most of its predictions near a constant such as the training prevalence. In general good probability models are much more useful that mere classification rules (for some notes on this, please see here). An ideal model would always return a score of zero or one, and always be right (items with a score of zero never being in the class, and items with a score of one always being in the class). Of course, this is unlikely to be achieved for real world problems. Now let’s consider a model that is perfectly calibrated, but only somewhat useful. Instead of the model scores being concentrated near zero and one, they are uniformly distributed in the interval between zero and one. Let’s also assume our class prevalence is 0.5. This model has a decent looking Receiver Operating Characteristic (ROC) plot, as we can see using R. library(WVPlots) d_uniform <- data.frame(x = runif(1000)) d_uniform$probabilistic_outcome <- d_uniform$x >= runif(nrow(d_uniform)) ROCPlot( d_uniform, 'x', 'probabilistic_outcome', truthTarget = TRUE, title = 'well calibrated probability model, uniform density') In the limit the Area Under the Curve (AUC) of this ROC plot is going to converge to: 5/6 or about 0.83, which we will derive later. Slowing down this plot a bit is useful]. ThresholdPlot( d_uniform, 'x', 'probabilistic_outcome', truth_target = TRUE, title = 'well calibrated probability model, uniform density') DoubleDensityPlot( d_uniform, 'x', 'probabilistic_outcome', truth_target = TRUE, title = 'well calibrated probability model, uniform density') ShadowHist( d_uniform, 'x', 'probabilistic_outcome', title = 'well calibrated probability model, uniform density') Back to the AUC. One interpretation of the AUC is: it is how often a uniformly selected positive example gets a higher score than a uniformly selected negative example (for example, please see here). So we are interested in the probability densities d[score|positive] and d[score|negative]. By Bayes’ Law we have d[score|positive] = P[postive|score] d[score] / P[positive] = score 1 / (1 / 2) = 2 * score d[score|negative] = P[negative|score] d[score] / P[negative] = (1 - score) 1 / (1 / 2) = 2 * (1 - score) (In the above the d[score] = 1 is because score is uniformly distributed in the unit interval, and we are only claiming this relation for scores in the unit interval. The P[positive] = P[negative] = 1/2 is from our prevalence 1/2 assumption.) So we are interested in the area of where a score of a negative example sneg is no more than the score of a positive example spos. This is the following nested integral. We substitute in our formulas for the conditional densities to get: And we finish the calculation in Python/sympy. from sympy import * spos, sneg = symbols('spos sneg') integrate( 2 * (1 - sneg) * integrate(2 * spos, (spos, sneg, 1)), (sneg, 0, 1)) # 5/6 And we get the claimed 5/6, which is about 0.
https://www.r-bloggers.com/2020/09/0-83-is-a-special-auc/
CC-MAIN-2021-31
refinedweb
572
51.68
Didj and Explorer SDL image How to install SDL_Image library A short tutorial on how to build, install and get started with the SDL_Image handling library, adding .jpg and .png support to your apps/games. I'd like to thank Nirvous, NullMoogleCable, PhillKll, Claude, JKent, Jburks, GrizzlyAdams and anyone I may have forgotten for their help :) Prerequisites A working toolchain LX Kernel sources and ThirdParty tarball unpacked to your harddrive correct environment variables set zlib (installed as part of this tutorial) libpng (installed as part of this tutorial) libjpg (installed as part of this tutorial) libSDL (should already be installed) SDL_Image requires a few libraries to be installed, namely the image formats that you want supported and any libraries they are dependent on. I have chosen libjpg as its a very popular format and libpng as we already have .png files on the Didj/LX. Luckily for us, all 3 of these libraries are provided for us, zlib and libpng are in the packages directory and libjpeg is part of the Third Party tarball. Installing zlib and libpng first we need to install zlib, this is the same procedure used for all /package apps. go to your /packages directory, make sure your env vars are set correctly and do: ./install.sh you should end up with the headers and lib files in their appropriate directories under ROOTFS_PATH/usr/include and /lib respectively. Do the same for libpng: ./install.sh Installing libjpeg go to where you unpacked the ThirdParty folder, I unpacked mine into the same folder that /scripts and /packages are in the LX kernel sources. Next go into the /libJPEG/Source folder. Now do the usual to start the installation, slightly different filename this time though: CLEAN=1 ./setup-libjpeg.sh the libJPEG lib and header files should be in ROOTFS_PATH/usr/lib and /include folders respectively. Now that we have the libs and headers installed we can go ahead and make our SDL_image library, to do this we will need a folder in PROJECT_PATH/packages and an install.sh file, so make the directory and enter it: mkdir $PROJECT_PATH/packages/SDLimage cd $PROJECT_PATH/packages/SDLimage Copy the following and paste it into a text file, save it as install.sh in your $PROJECT_PATH/packages/SDLimage folder: #!_image-1.2.10 rm ./SDL_image-1.2.10.tar.gz wget tar -xf SDL_image-1.2.10.tar.gz fi cd ./SDL_image-1.2.10 ./configure CPPFLAGS="-I${ROOTFS_PATH}/usr/include" --prefix=$ROOTFS_PATH/usr --build=`uname -m` --host=arm-linux --enable-shared --with-sdl-prefix=${ROOTFS_PATH}/usr --libdir=${ROOTFS_PATH}/usr/lib --includedir= ${ROOTFS_PATH}/usr/include --enable-jpg make -j3 make -j3 install next you just have to run the install.sh script (make sure your environment variables are set): CLEAN=1 ./install.sh this should now build and install SDL_Image to your ROOTFS_PATH/usr/lib and include folders, if you want to add other libraries, then compile and install them (use the install.sh files for libjpg and libpng for hints on what to do), then re-run the SDLimage/install.sh file again You can now build an app with SDL_image support by adding the following to your .c file: #include "SDL/SDL_image.h" and to compile an app with SDL_image support, try the following: arm-linux-uclibcgnueabi-gcc -o mysdlApp mysdlApp.c -I${ROOTFS_PATH}/usr/include -L${ROOTFS_PATH}/usr/lib -lSDL -lSDL_image -ljpeg -lpng -lz -lpthread Obviously change the arm-linux-uclibcgnueabi-gcc to your chosen cross compiler. If you added extra image formats don't forget to add a -l for each extra lib and support lib that is needed.
https://elinux.org/index.php?title=Didj_and_Explorer_SDL_image&direction=prev&oldid=46897
CC-MAIN-2017-43
refinedweb
602
52.7
So I just started out with C++ and using Visual C++ Express 2008 and I am following the starter tutorials on the site and one the second lesson. I would rewrite the code and change it so I can make sure I know. But I am completly stuck on one thing, I know that this was not in the lesson, but I tried it anyway and it failed, but I won't go on until I know this. I want to put the input for an answer of a + b, but I am not sure how to put. I knew cin<< a + b; wouldn't work but I tried anyways. Anyone can help me, or if you think is better and I should see a specific lesson. The code is not done for sure.The code is not done for sure.Code:#include <iostream> using namespace std; int main() { int a , b, c, d; cout<<"I am going to test you! Just put random numbers, don't fail now!"; cin.ignore(); cout<<"Your first number: "; cin>> a; cin.ignore(); cout<<"Your second number: "; cin>> b; cin.ignore(); cout<<"Your third number: "; cin>> c; cin.ignore(); cout<<"Your forth number: "; cin>> d; cin.ignore(); cout<<"Q1 - What is "<< a <<" plus "<< b <<"\n"; cin>> a + b ; if ( a + b == a + b ) { cout<<"Good, next question."; } else { cout<<"You FAILED, NOW GTFO!!"; } cin.get(); }
http://cboard.cprogramming.com/cplusplus-programming/118985-input-addition-problem.html
CC-MAIN-2015-48
refinedweb
231
91.92
The. Getting Started You’ll generally start a debugging session in Visual Studio.NET in one of three ways. If you are inside of Visual Studio with an open project, you can use the Debug menu to get started. The “Start” and “Step Into” commands will both launch your application and begin a debugging session. A second option is to use the “Attach to Process…” command on the Debug menu. The Attach command will let you break into a running application and begin a debug session. The Attach command is popular for web application developers who want to debug an already running instance of a web application or web service hosted by IIS. ASP.NET applications will run in either an aspnet_wp.exe process (for IIS 5.x), or a w3wp.exe process (for IIS 6.0). A final method for beginning a debugging session is to wait for Just-In-Time debugging to step in during an application crash. For managed .NET code, a crash (or program termination, if you prefer the term), will occur if the main application thread throws an exception and there is no exception handler to catch the exception. On a regular user’s computer the application will terminate. On a computer with debugging tools installed, the JIT debugger will step in and allow you to start a Visual Studio debugging session. If the application has source code and debug symbols present, Visual Studio can take you directly to the line of code responsible for the unhandled exception. Note: ASP.NET developers won’t be able to take advantage of JIT debugging. The ASP.NET runtime will not let an unhandled exception terminate the application. Instead, the runtime will catch exceptions and return an HTTP error code to the client along with an error message. Debug Symbols What are debug symbols? Debug symbols are vital for a successful debugging session. Debug symbols help the debugger correlate instructions in your application back to file names and line numbers in your source code. Debug symbols are stored in a program database file with a .pdb extension. Without a .pdb file present, you won’t be able to step through the lines of code in your application. When you create a new project with Visual Studio .NET, you’ll find the IDE creates two build configurations for your application (unless your project is a web project). You can see these configurations under the Configuration option of the Build menu. There is a Release configuration, which compiles your application so that the application will run as efficiently as possible. There is also a Debug configuration. The debug configuration compiles your application for the best debugging experience. In addition to creating PDB files, a debug build will also disable optimizations. Compiler optimizations will often rearrange instructions to increase performance or reduce memory consumption. The rearrangement of instructions can confuse the debugger. For ASP.NET web applications and web services, you can enable a debug build in the compilation settings of web.config. Alternatively, if you pre-compile your application, you can use flags with the pre-compilation tool to generate .pdb files with debugging symbols. Breakpoints Debugging is all about collecting information and finding out what is wrong in your application. Part of the trick in collecting information is pausing, or breaking into the execution of your application at just the right spot. When the debugger is in break mode, you can examine objects and local variables to see what is going awry. We use breakpoints to tell the debugger where and when we want to pause the execution of our application. A common method to set a breakpoint is to press F9 while the cursor is on the line of code you want to break on. When a breakpoint is set, a round, red glyph will appear in the left margin of the editor. You can also click in this area of the editor to add a breakpoint. Right click on an existing breakpoint glyph to set properties, disable, or delete the breakpoint. When execution reaches a breakpoint, the debugger pauses all of the application’s threads and allows you to inspect the state of your application, as we’ll see later. However, we might not want the debugger to pause the app every time we hit the breakpoint, but only in certain circumstances… Conditions and Hit Counts Most of the time you’ll add a plain breakpoint to an application and not set any special properties. If you open up the breakpoint window (from the Debug menu, select Windows -> Breakpoints), you’ll notice a breakpoint can use conditions and hit counts. Conditions and hit counts are useful if you don’t want the debugger to halt execution every time the program reaches the breakpoint, but only when a condition is true, or a condition has changed, or execution has reached the breakpoint a specified number of times. Conditions and hit counts are useful when setting breakpoints inside of a loop. For example, if your code iterates through a collection of Customer objects ith a for each loop, and you want to break on the 10th iteration of the loop, you can specify a hit count of 10. If something bad only happens when the Customer object’s Name property is equal to “Scott”, you can right click the breakpoint glyph, select Condition from the context menu, and enter the expression customer.Name == “Scott” into the breakpoint condition textbox. Intellisense is available in this textbox to ensure you are using the correct syntax. Breaking On Exceptions Another way to halt execution of a program is to ask the debugger to break when an exception occurs. By default, the debugger will only break if the exception goes unhandled, but this behavior is configurable. Select Exceptions from the Debug menu and you’ll see a tree display of all possible exceptions alongside checkboxes to indicate if the debugger should break when an exception “is thrown”, or only break if the exception is “user-unhandled”. You can ask the debugger to break on a broad category of exceptions (such as break on all Common Language Runtime Exceptions), or a category of exceptions from a namespace (such as any System.Data.SqlClient exception), or on a specific exception (like System.IO.IOException). Breaking on exceptions can be useful when you are trying to track down an exception that occurs, but have not determined under what condition the exception occurs, or where the exception occurs. Stepping Through Code Once you pause execution you have the ability to step through code, in other words, execute code one line at a time. From the debug menu there is a Step Into command (F11) . If you are currently in break mode on a line of code that contains a method call, the Step Into command will enter the method and break again on the first line of code inside the mehtod. In contrast, the Step Over command will execute the entire method call and break on the next line of code in the current method. Use Step Into if you want to see what happens inside a method call; use Step Over if you only want to execute the entire method and continue in the current code block. If the instruction pointer is currently on a line of code that does not contain a method call, Step Over and Step Into will both move the instruction pointer to the next line of code. The debug menu also contains a Step Out command, which you can use when you want to execute the rest of the current method and return to the calling method. Step Out will break execution at the return point in the calling function. Two more tips: right-clicking on a line of code and selecting the ‘Run To Cursor’ will put the application into run mode until execution reaches the specified line of code. Also, you can click and drag the yellow instruction point with the mouse to skip code, or to re-execute code. Viewing StateAnother crucial feature to have in a debugger is the ability to see and visualize the data in your application. Fortunately, Visual Studio offers plenty of options to view data, and to customize the views of data. DataTips One of the common techniques to view the data inside a variable is to place the mouse cursor over the variable in code and allow Visual Studio to display a DataTip. DataTips are only available when the program is in break mode. If the object you are inspecting is a complex object, structure, or array, there will be a plus sign (+) to the left of the tip. If you hover over the + you can expand the DataTip to view additional fields and properties of the object in a tree like view. If the object you are inspecting has a property that itself represents another complex object, you can continue to expand the nodes of the tree and drill further and further into the object. When viewing a DataTip you can edit writeable values of the object by right clicking the tip and selecting “Edit Value”, or by left clicking the value itself. Press the Ctrl key to temporarily hide the DataTip. Variable Windows DataTips are a transient display of information – once your mouse leaves the DataTip area the DataTip display will disappear. If you want a permanent display of an object’s value you can use one of the many variable windows. A Watch Window will display an object’s state until you explicitly remove the object from the window. You can add a variable to the watch window by right clicking a variable and selecting “Add Watch”. The watch window supports the ability to “drill down” into a complex object with the same sort of tree view capability that a DataTip window has. You can also open a locals window from the Debug menu (Windows -> Locals). The locals window will automatically display all local variables in the current block of code. If you are inside of a method, the locals window will display the method parameters and locally defined variables. The Autos window (Debug -> Windows -> Autos) will display variables and expressions from the current line of code, and the preceding line of code. Visualizers A visualizer is a new way to view data in Visual Studio 2005. Some dataypes are too complex to display in a DataTip or watch window. For example, a DataSet is a tremendously complex and hierarchical object. Trying to drill into the 5th column of the 10th row of the 2nd table of a DataSet is cumbersome. Fortunately, a Visualizer exists for DataSet objects that will display the data in a more natural environment, namely an editable data grid. Whenever a magnifying glass appears in a DataTip or watch window there is a visualizer available for the object. Double click on the magnifying glass to use the default visualizer. If multiple visualizers are available, you can click on the drop down chevron next to the magnifying glass and select which visualizer you’d like to use. For instance, a string will have three visualizers available: a text visualizer, an HTML visualizer, and an XML visualizer. There is also a visualizer for the DataSet and DataTable classes. Wrap Up When your program is buggy, Visual Studio will give you all the tools you need to track down the error. Control the execution of your application with breakpoints, and use variable windows, data tips, and visualizers to inspect state along the way.
http://odetocode.com/Articles/425.aspx
CC-MAIN-2014-10
refinedweb
1,925
60.55
Spaghetti code comes from all of your objects needing to know where all your other objects are so they can communicate. I used to attempt to alleviate this problem by creating a switchboard object that acted as a message bus. That helped surprisingly little, as I had to deal with long object paths; and the bus would fill up with a ton of simplistic pass-through functions, making for tedious coding and bloating. Then I discovered dojo.event.topic, which already is the switchboard I needed, and beyond that, helps make the application act as more of an event-driven system. The Dojo 0.9 Topic system has been greatly simplified over that found in Dojo 0.4. Actually, Topic itself is gone (now a private object), and the system has been rolled into dojo._base.connect, where it can be utilized without having to specifically require it. The two primary methods are publish and subscribe, with a third being unsubscribe. The Dojo 0.9+ Topic (pub/sub) System The current application I’m working on has the user’s name in several places. A crude way to design this would be: on application load, build the widget, access the userData object, find the name, and create the HTML. If the user changes his name, the userData object (or the switchboard) would have to know every place in the app that uses the name, and call those functions that would rebuild the HTML with the update. Not only is this extra work, but problems arise when function names change, arguments change, or object paths change. The methodology for refactoring this as an event driven system: on application load, build the widget without the username HTML, using a placeholder if necessary. Then subscribe to an event of the user’s name changing: dojo.subscribe("/app/name/update", this, "buildNameFunc"); This tells the Dojo system to listen to the event (topic) of /app/name/ update, and if it occurs, access the scope of this and call the function buildNameFunc. The topic can be any opaque string, but the convention is to break it down into: / namespace / item / event. Now in the userData object, when the user logs in, publish an event: dojo.publish("/app/name/update", [ "Mike" ] ); The arguments are put into an array slot, but this is for Dojo code simplification. They are received as a single argument, unless you comma delineate them. When the user logs in, “Mike” is published to every object that subscribed to this event, and Charlie’s name is populated throughout. This is great for code maintainability. Say the client has requested a feature that the user be able to change his name. Now it’s simply a matter of building the widget that handles the input with the published event. Done. Wrapping Topics There’s one problem with the pub/sub system. If the subscribe happens after the publish, it essentially missed the event and won’t update. Because of this, my code started turning to spaghetti again, as the objects still needed to access the initial data. I considered calling publish after a subscribe, but this would trigger all of the previous subscribes multiple times. If you have ten subscribes to a topic, the first one would get called ten times. Then I realized that the subscribe function is essentially passing in a callback. It seemed a simple matter of utilizing that callback for initialization. I created a global object, and since Dojo wasn’t using it anymore, I named it Topic. I changed all of my subscribe/publish functions to talk to this object instead of the Dojo object. The first version of the code used the same functions, acting as pass-throughs to the Dojo functions. After getting that working properly (with a lot of existing code in the app), I added the mechanism to save the published arguments, a hash map of the unique topic strings: this.current[topic] = args; Next, when the subscribe function is called, it uses the passed scope and method to return argument: scope[method].call(scope, arg); And that’s it in a nutshell. The final code:); } }) The final code also fixes the argument, which as you recall, is passed in as an array. I also added an acceptNull argument, because some subscribers would want to know if the current argument is null, while a null argument could blow up others. A couple notes to using this wrapper. It needs to be initialized very early; I actually do so before the app itself (Hence, I chose to make it a global object). You also need to aware that these subscribes will fire immediately on initialization, so there may need to be changes in the order that they initialize. The Dojo pub/sub system is a major step toward writing event-driven, maintainable code. If you have an application that uses devices like deferred loading, or “parse on demand”, wrapping it with an initializer helps keep it that way.
https://www.sitepen.com/blog/2007/08/10/wiring-a-dojo-app/
CC-MAIN-2016-50
refinedweb
832
62.38
I found a very interesting JavaScript Date() wrong result when using XDK. Setting a Date in Date(year,month,day) format will add one more month on result. I am not sure it is only on XDK or Javascript itself. (It seems from the Javascript itself.) The test code is as below for your test. <!DOCTYPE html> <html> <body> <p id="demo"></p> <script> var d1 = new Date("2016/01/01"); var d2 = new Date(2016,01,01); document.getElementById("demo").innerHTML = "Date('2016/01/01') = "+d1 +"<br>It shows the right date.<br>" +"<br>Date(2016,1,1) = "+d2+"<br>It will add one month to show the wrong date."; </script> </body> </html> /* You will get result as below --------------------------------------------------------------- Date('2016/01/01')=Fri Jan 01 2016 00:00:00 GMT+0800 (CST) It shows the right date. Date(2016,1,1)=Mon Feb 01 2016 00:00:00 GMT+0800 (CST) It will add one month to show the wrong date. -----------------------------------------------------------------------------------------------------*/ Link Copied Its likely because of the following: "2016/01/01" is already a "real date", so javascript wil (try to) convert the string date to a date object. Date(2016,1,1) wil make you a new date, with month zero based: 0 =Jan, 1 = Feb, 2 = March, 3 = April etc.. Dear Wesley, Thank for your reply. You may be right in technically. But Month should start from one, right? It is wrong or right action of Javascript on XDK? Should Javascript or XDK need to improve it? Matrix Hi Matrix, It is the way the Javascript library is. In Javascript programming (and many other languages), array references used 0 based counting. So 0 is the index of the first item, etc. As a side effect of that, the Date library is functioning as you discover. Weirdly, days are numbered "normally". It is not a bug in the XDK or Javascript. It is merely a questionable design decision that was made a long time ago. When you titled this forum thread you named it "JavaScript Date() wrong result when using XDK." But did you actually compare it to any other Javascript installations, like in Chrome or Firefox or Node? Dear Chris, Yes, just trying on Chrome, Firefox and Safiri. There are all have the same problem. Just as your saying, it is not consistency in the rules of month and day. May be it is time to suggest Javascript to refine. And Date() is not convenient to do compare operation like DateA>DateB, may be XDK can build a JS Dates library to improve all these problems. The Date() comparison method could like below for reference. // Source: var dates = { convert:function(d) { // Converts the date in d to a date-object. The input can be: // a date object: returned without modification // an array : Interpreted as [year,month,day]. NOTE: month is 0-11. // a number : Interpreted as number of milliseconds // since 1 Jan 1970 (a timestamp) // a string : Any format supported by the javascript engine, like // "YYYY/MM/DD", "MM/DD/YYYY", "Jan 31 2009" etc. // an object : Interpreted as an object with year, month and date // attributes. **NOTE** month is 0-11. return ( d.constructor === Date ? d : d.constructor === Array ? new Date(d[0],d[1],d[2]) : d.constructor === Number ? new Date(d) : d.constructor === String ? new Date(d) : typeof d === "object" ? new Date(d.year,d.month,d.date) : NaN ); }, compare:function(a,b) { // Compare two dates (could be of any type supported by the convert // function above) and returns: // -1 : if a < b // 0 : if a = b // 1 : if a > b // NaN : if a or b is an illegal date // NOTE: The code inside isFinite does an assignment (=). return ( isFinite(a=this.convert(a).valueOf()) && isFinite(b=this.convert(b).valueOf()) ? (a>b)-(a<b) : NaN ); }, inRange:function(d,start,end) { // Checks if date in d is between dates in start and end. // Returns a boolean or NaN: // true : if d is between start and end (inclusive) // false : if d is before start or after end // NaN : if one or more of the dates is illegal. // NOTE: The code inside isFinite does an assignment (=). return ( isFinite(d=this.convert(d).valueOf()) && isFinite(start=this.convert(start).valueOf()) && isFinite(end=this.convert(end).valueOf()) ? start <= d && d <= end : NaN ); } }
https://community.intel.com/t5/Software-Archive/JavaScript-Date-wrong-result-when-using-XDK/td-p/1088372
CC-MAIN-2021-10
refinedweb
716
75
Specifically: (1) import foo # where foo is a dynamically loaded module reload(foo) causes a core dump, at least on systems using SVR4 shared libraries for dynamic module loading. (2) import sys del sys.modules['sys'] import sys dumps core. (3) import math reload(math) raises ImportError: No module named math (4) import stdwin import sys del sys.modules['stdwin'] import stdwin does not actually seem to fail but looks very scary because stdwin cannot be initialized twice in the same process. In Python version 1.0.1++ (to be released as 1.0.2) I've added the following checks: - Calling reload() for dynamically loaded modules (case 1) is forbidden and raises an ImportError - Ditto for built-in modules (case 3) - Attempts to force a reinitialization of a few privileged built-in modules like sys, __main__ and __builtin__ by deleting them from sys.modules (case 2) raises ImportError - Similar attempts for non-privileged built-in modules (case 4) are not caught. This may still dump core depending on whether the module's init*() function can be called more than once or not. (Forbidding this would require more work as it would require maintaining a table of flags telling whether a particular module was already initialized.) - Calling reload() for frozen modules is fixed and should now work as expected (except I haven't tested it yet :). --Guido van Rossum, CWI, Amsterdam <Guido.van.Rossum@cwi.nl> URL: <>
https://legacy.python.org/search/hypermail/python-1994q2/0211.html
CC-MAIN-2021-43
refinedweb
237
63.19
! For years, developers using popular dynamic languages such as JavaScript, Ruby or Python, have benefited from the ability of using their favorite language in a scripting context. This allowed them to apply their knowledge of the language and the overall ecosystem to scenarios way beyond “regular” application development – automation, quick experiments, API experimentation or batch tasks – just to name a few. Through the new Roslyn compiler, scripting has also finally come to C#. While the sound of such concept – after all we are talking about scripting for a compiled, object oriented (class based) programming language – might sound counter intuitive, just bear with me for a moment, as we’ll try to make sense of it in this article. Background C# scripting was introduced into .NET community together with the Roslyn CTP back in October 2011. The primary idea behind C# scripting was to allow for code to be dynamically evaluated at runtime. While there have been other technologies allowing that in the past (Reflection.Emit, CodeDOM etc.), Roslyn took this concept to new heights by introducing scripting – using not the regular strict C#, but a relaxed version of C# language semantics. As Roslyn matured, in one of the later preview releases, the scripting bits were actually pulled from Roslyn altogether, as the Microsoft Language Team opted for a redesign of the scripting APIs. As a result, scripting didn’t make it into the stable Roslyn 1.0.0 release (which was also the version that shipped with Visual Studio 2015). However, since then, scripting has made a return to Roslyn and is currently available on Nuget, in 1.1.0 version of Roslyn. It is also part of Visual Studio 2015 Update 1 – the stable version of which was released yesterday (30 November 2015). Aside from the official Roslyn packages from Microsoft, the C# community has been enjoying a community-driven way of scripting with C# via a very popular scriptcs open source project for a few years now. scriptcs was built around the initial Roslyn CTP with the goal to provide excellent, rich experience for C# scripting – further enhancing the capabilities of Roslyn. It even introduced support for cross platform – Linux, OS X – C# scripting using Mono.CSharp. This is particularly useful, because Roslyn, while it has an ultimate goal of supporting x-plat scripting, even at this day, with the latest 1.1.0, only works on Windows. What does it all mean in practice? Well, C# scripting allows you to write C# code you know and love, but in a way that’s much friendlier for non-Visual Studio usage. Because scripted C# is characterized by looser syntax requirements, the overall experience can be really low-ceremony. Below are a few things to remember: – the entry point to your script is the first line of your script (no mandatory Main method) – any code allowed from the first line (no top level mandatory Program class) – global functions allowed – no namespaces – no project or solution file – your script can be entirely self-contained – using statements and references can be imported implicitly by the hosting application (responsible for invoking the script) Think about it for a second – because it is a very enticing idea – being able to author C# in any text editor (no need for Visual Studio), without having to worry about the heavy structures of solution and project files, build tasks, or traditional entry point constructs like “static void Main”, holds tremendous power. To get started with C# scripting, you can either install the Roslyn scripting package from Nuget or opt to use scriptcs instead. Getting started with Roslyn scripting You can grab the C# scripting package using the following Nuget installation command (there is also a corresponding one for Visual Basic scripting, Microsoft.CodeAnalysis.Scripting.VisualBasic): This will add the Roslyn scripting libraries to your project and allow you to execute C# scripts from your application./p> The simplest thing you can do is to use the static EvaluateAsync method of CSharpScript class. It allows you to quickly compile and invoke C# scripted code. For example: Of course your script code can be more elaborate – as mentioned earlier, you can define classes, loose methods and global variables. Moreover, your C# script can actually return a value to the hosting application if needed. The last line of your script, without a semicolon, is an instruction for Roslyn to evaluate an expression on that last line and return it. For example: In the above snippet, the last line is a call to the Add method (notice the lack of semicolon) – this means that this statement will be evaluated and the value returned to the host application, which conveniently captures it into result variable. This is a very easy way of marshaling the data back from the script. If you want to do the opposite – that is, pass data into your script, you can do so using an instance of a so called globals object. This concept, also known as a host object, will expose all of the public members of the host class as ambient global properties and methods, that can simply be accessed anywhere in the script. Host object can be any type you wish, but it must itself be declared as public. For example: In the snippet above, an instance of ScriptHost is used as host object, and therefore its property Number can be used inside the script to read the data (number 5) being passed in. Finally, in a more advanced scenario, you can also pre-seed your script context with assembly references (this way your script will have access to types defined there) and using statements (this way your script will automatically have access to types from those namespaces without having to manually import them, of course as long as the relevant assembly is referenced too). This is a great way to simplify your scripting experience. The following snippet adds a reference to System.Xml DLL from the GAC and imports the using statements for System.Xml in order to process an XML file (obviously that file has to exist on your machine in the first place – I just used a generic test file I had). This is controlled by the ScriptOptions object. Notice that in the above example, both the XmlDocument and Console classes are available, because the relevant namespaces have been imported into the script context. Additionally, the XmlDocument is only resolved correctly, because a reference to its assembly (the aforementioned System.Xml DLL) was done. Finally, there are two new extra feature worth mentioning, and these are new compiler directive, #load and #r. They are only allowed to be used in C# script code (would not work with “classic” C# syntax), and allow you to reference a script from another script (#load) and import an assembly reference from GAC or from a path (#r). This is particularly useful for code sharing between files. Let’s take our earlier example and extend it with a #load and #r sample: The above snippet is similar to the previous one, but we introduced a few changes. First of all, System.Xml is no longer referenced from the host application level, via ScriptOptions. Instead, the script itself is asking for the DLL to be referenced. The end result is the same as earlier, meaning that the assembly is available to be used in the script, but the responsibility of making that decision has been shifted. Moreover, we actually replaced the path to the XML report file with a variable reportPath – and you might be wondering where is it coming from? Well, since at the second line, we load a C# script file from c:\test\setup.csx, that’s a good bet to look. And indeed, whatever you declare in the #loaded script (variables, methods, classes) – all of that will be available in the other script. In our case, the setup.csx happened to be a one line CSX file: By the way, CSX is the C# script file extension convention. REPL In case you are wondering how all of this works under the hood, Roslyn will create a so called submission from your script code. A submission is an in memory assembly containing the types generated around your script code, which can be identified among the assemblies in the current AppDomain by a ℛ prefix in the name. The precise implementation details are not important here (though, for example, scriptcs heavily relies on understanding in detail how Roslyn works to provide its extra features), but it’s important to know that submissions can be chained together. When they are chained, variables, methods or classes defined in an earlier submission are available to use in subsequent submissions, creating a feature of a C# REPL (read-evaluate-print loop). The following example can illustrate how a super simple C# REPL could be built in a few lines of C# code: In this very basic example, we enter into a forever loop, and create a null ScriptState variable. We then wait for user input. To initialize a C# REPL, we call CSharpScript.RunAsync and pass in user’s input, which results in the user code being invoked and the script state being populated from this first submission. On subsequent runs, we call ContinueWithAsync method on the ScriptState itself, which will effectively result in new submissions being chained after the original one. Here’s a sample output from the above snippet: * using System; * var msg = "Hello"; * Console.WriteLine(msg); Hello * Going further Now, this is all excellent and very exciting – but the biggest productivity gains from C# scripting probably do not involve calling C# script code from within your application. Instead, what is likely of most value for us developers, is the ability to write that scripted C# as standalone CSX files, and rely on a stable, established script runner to run them, just like it’s the case with all scripting languages. Sure, you could write such a runner yourself, but that would be an unnecessary overkill. What you can do, is one of two things: – install scriptcs, which has been the go-to community driven script runner for quite a while now (scriptcs installation instructions) – install Visual Studio 2015 Update 1, which ships with CSI, a command line script runner which can be accessed from Visual Studio Developer Prompt (it’s also located under C:\Program Files (x86)\MSBuild\14.0\Bin if you need to access the EXE directly) They are both very powerful, and aside from being able to execute scripts, they also ship with built-in REPLs. We’ll look into using both in the next post of this series! If anyone else is getting an error message trying to do a: Install-Package Microsoft.CodeAnalysis.Scripting.CSharp I had to: 1. First do a Install-Package Microsoft.DiaSymReader.Native 2. add -Pre to Install-Package Microsoft.CodeAnalysis.Scripting.CSharp Hope this helps Mark It's not only dynamically typed languages that have REPLs. F#, OCaml and Haskell all had REPLs right from their inception. Install-Package Microsoft.CodeAnalysis.Scripting.CSharp should be Install-Package Microsoft.CodeAnalysis.CSharp.Scripting Looks like they changed the namespace ordering between 1.1.0-rc1 and 1.1.1 I got an error on the first: Install-Package Microsoft.CodeAnalysis.Scripting.CSharp for me: Install-Package Microsoft.CodeAnalysis.Scripting worked Great post, thanks a lot! C# scripting is a great addition to "classic" C# Hi I'm unable to find the scripting package on NuGet for VB.NET. Waiting for Part 2! I have a problem with using CompilationErrorExeption in my code. try { CSharpScript.EvaluateAsync(_scriptEntryPoint); } catch (CompilationErrorException ex) { /* Print exception*/ return false; } Now. My dll will compile ok, but when executing my program the program will crash to CompilationErrorException. When I comment out the CompilationErrorException try catch frame and execute my program, it will execute just fine. So does anyone have any hints what could cause this problem? When I just create a simple console application. It will execute just fine with this try catch frame. Br, Timo Found the problem. DLL not found. Br, Timo
https://blogs.msdn.microsoft.com/cdndevs/2015/12/01/adding-c-scripting-to-your-development-arsenal-part-1/
CC-MAIN-2017-26
refinedweb
2,019
60.55
Simple Also this mini IDE is suitable for games with GIMP and study its internal structure. -- Installing instructions for windows: 1) Install Gimp 2.6 (if not installed) into c:\bin\gimp\ 2) Install Python 2.5 (if not installed (check in Gimp)) into c:\bin\gimp\python 3) Install PyGTK.......... 4) Install GTK Binaries or add c:\bin\gimp\bin\ into PATH environment variable (check you path! path must be, a ptah to Gimp bin directory) 5) unpack dll.zip into c:\bin\gimp\bin\ (check you path). this archive contents: libglade-2.0-0.dll libgtksourceview-2.0-0.dll 6) unpack python.zip into c:\bin\gimp\python\lib\site-packages\ (check you path) this archive contents: gimpcolor.pyd _gimpenums.pyd gimpfu.pyc gimpplugin.pyc gimpshelf.py gimpthumb.pyd _gimpui.pyd gtksourceview2.la gimpenums.py gimpenums.pyo gimpfu.pyo gimpplugin.pyo gimpshelf.pyc gimpui.py gimpui.pyo gtksourceview2.pyd gimpenums.pyc gimpfu.py gimpplugin.py gimp.pyd gimpshelf.pyo gimpui.pyc gtksourceview2.dll.a pygimp-logo.png If you Gimp distribution setup with enabled python supprt, you only need: gtksourceview2.dll.a gtksourceview2.pyd and gtksourceview2.la files 7) Unpack gimpide in some place and check set-up: set PATH=%PATH%;C:\bin\GIMP\bin C:\bin\GIMP\Python\python.exe gimpplug.py (check you path) if you see errors - let me know 8) If you Gimp distribution setup with enabled python supprt, skip this item. Unpack pygimp.interp.zip into c:\bin\GIMP\lib\gimp\2.0\interpreters this file contain path to python interpreter. Edit him in notepad if you need (check paths). 9) If you Gimp distribution setup with enabled python supprt, skip this item. Unpack plug-ins.zip into c:\bin\gimp\lib\gimp\2.0\plug-ins\ 10) Unpack gimpide.zip into c:\bin\gimp\lib\gimp\2.0\plug-ins\ THE END :-) -- Example script (simple paset it into GIMP IDE editor and run): # -*- coding: utf8 -*- #Create border with bg and fg color #based on import gimp from gimpfu import pdb from gimpfu import * #BORDER WIDTH width = 20 #Edit this ^^^^^^^ images = gimp.image_list() image = images[0] image.undo_group_start() old_background_color = pdb.gimp_context_get_background() old_foreground_color = pdb.gimp_context_get_foreground() image_width = pdb.gimp_image_width(image) image_height = pdb.gimp_image_height(image) # get a copy of background background_copy = pdb.gimp_image_get_active_layer (image) border = pdb.gimp_layer_copy(background_copy,0) pdb.gimp_image_add_layer(image, border, -1) #select and make stroke pdb.gimp_rect_select(image, width, width, image_width - 2*width , image_height - 2*width , CHANNEL_OP_REPLACE, 0, 0.0) drawable = pdb.gimp_image_active_drawable (image) # TODO somehow set the width for the stroke pdb.gimp_edit_stroke(drawable) # fill by foreground color pdb.gimp_selection_invert(image) pdb.gimp_context_set_foreground(old_background_color) drawable = pdb.gimp_image_active_drawable (image) pdb.gimp_bucket_fill(drawable,FG_BUCKET_FILL,NORMAL_MODE,100,0,0,width/2,width/2) pdb.gimp_layer_set_opacity(border,50) pdb.gimp_selection_none (image) # megre visible layers pdb.gimp_image_merge_visible_layers(image,EXPAND_AS_NECESSARY) pdb.gimp_context_set_foreground(old_foreground_color) pdb.gimp_context_set_background(old_background_color) image.undo_group_end() gimp.displays_flush() -- Known issues: * Under Windows syntax highlight not work. (May be mime detection broken) * The strange игп is lost when the contents of global dict: if you write import aaa def a(test): print aaa.somefunction(test) a("foo") Python will swear that the aaa module is not loaded workaround: import aaa def a(test): import aaa print aaa.somefunction(test) a("foo") -- Changes: 31.09.2009 - Russian translation added Thank you for your participation In the near future I will try to make a correction Gimp IDE suggestions Again, thanks for sharing. These are suggestions for improvements: Register under "/Filters/Python-Fu/_IDE". Discussion: it should be beside "Python-Fu/Console. IDE should be capitalized. Gimp is an extra word, not needed. Don't register twice, once as an extension. Discussion: I don't think it is necessary. If you want to have two menu items, use gimp_plugin_menu_register() to add a second menu item. Enable the menu item even if no image is open. Discussion: the console is enabled even if no image is open. I don't know how to do that. An empty menu item parameter to register() doesn't work. Change the name of the file gimpplug.py. Discussion: A more conventional name would be python-foo-IDE.py or plugin-python-IDE.py. Use white background in editing window. Discussion: Black text on white background is best for a GUI. I can help with the English throughout the program, but that should be offline. It seems to me that the IDE is a candidate for a new open source project, maybe under Launchpad. Maybe this should be discussed on the gimp developer mailing list. Pessimistically, I think most people would say that the IDE is not necessary since the population of users is small: Gimp plugin programmers using Python. But I think you could improve it so that it also could be used by ordinary GIMP users to create extremely simple scripts: those that are just lists of actions without any parameters or variables or control statements (for, if, etc.) For example, there was a recent post that "Convert selection to path" followed by "Path to selection" was a useful script (it rounds the selection). It would be nice if a user could automate that and put it in their menus. plashless, off banks of noon how to register plugin always enabled (regardless image open) In Python, to enable a gimp plugin menu item whether or not an image is open: In the call to register: 1) 'params' param empty (omitting image and drawable etc.) 2)'menupath' param just the menu item, not the path 3) 'imagetypes' param empty 4) use keyword arg 'menu=path' In the plugin function, omit the image and drawable parameters. I am not sure why, but this gets around the gimpfu module wrapping that automatically handles the image and drawable parameters. If you don't get around that wrapping, then you can have the menu item always enabled, but a dialog pops up asking the user to choose an image and drawable. Example snippets for installing a new PDB Browser alongside the Python console: def browse(): # no image or drawable parameters app = TutorialApp() gtk.main() # enabled even if no image open "python_fu_pdb_browser", "Browse the Plugin Database", "TBD", "author", "2009", "New _Browse PDB", # just the menu item "", # all image types [], # no parameters [], menu="/Filters/Languages/Python-Fu") # domain=("gimp20-python", gimp.locale_directory)) plashless, off banks of noon Gimp IDE installation notes Dmitry: Great idea. Follows a discussion about Gimp IDE installation that may help others. The installation instructions are for Windows. Most of the installation instructions are for packages (software) that the plugin depends on (and can be ignored by most people). Most recent Linux distributions already have installed the packages which the Gimp IDE plugin depends on. (I can't say for Windows installations.) The GIMP IDE plugin itself is installed in the "normal place". However, unlike many plugins, it is not a single file, but comprises the plugin itself (file gimpplug.py) and a module it depends on (imports), which is in the folder named gimpide, which can be installed alongside the plugin itself. In my case, I downloaded and extracted just the gimpide.zip file to ~/.gimp-2.6/plug-ins and it seems to work. I am running Ubuntu 9.04 Jaunty. (But I also do Gimp plugin development and might have previously downloaded packages that the Gimp IDE plugin depends on.) Note that there are two files named gimpide.zip. I believe the second one includes a Russian language translation (in file gimpide/locale/ru/LC_MESSAGES. I downloaded the first one and it seems to be in English (except for the IDE's menu bar? which might be a mistake.) (I am not too familiar with Internationalization.) See also another post named "Gimp IDE suggestions." (For programmers: The GIMP IDE plugin depends on: Python support in Gimp GTK (GUI) support in Python libglade (separate, XML GUI resources) support in Python Sourceview (text editor) support in GTK for Python The Gimp IDE plugin uses libglade, which is a way of building the GUI using a GUI designer application (in this case Glade which produced the XML gimpide.glade file?) That software architecture is fairly state-of-the-art and has been around for several years (at least in the Linux world.) But the Python language and the world around it does change. plashless, off banks of noon How to use? Installation instructions for GIMP IDE Windows notes part 1 Installer
http://registry.gimp.org/comment/4239
CC-MAIN-2015-11
refinedweb
1,395
50.43
For some reason I thought that calling pthread_exit(NULL) at the end of a main function would guarantee that all running threads (at least created in the main function) would finish running before main could exit. However when I run this code below without calling the two pthread_join functions (at the end of main) explicitly I get a segmentation fault, which seems to happen because the main function has been exited before the two threads finish their job, and therefore the char buffer is not available anymore. However when I include these two pthread_join function calls at the end of main it runs as it should. To guarantee that main will not exit before all running threads have finished, is it necessary to call pthread_join explicitly for all threads initialized directly in main? #include <stdlib.h> #include <stdio.h> #include <pthread.h> #include <unistd.h> #include <assert.h> #include <semaphore.h> #define NUM_CHAR 1024 #define BUFFER_SIZE 8 typedef struct { pthread_mutex_t mutex; sem_t full; sem_t empty; char* buffer; } Context; void *Reader(void* arg) { Context* context = (Context*) arg; for (int i = 0; i < NUM_CHAR; ++i) { sem_wait(&context->full); pthread_mutex_lock(&(context->mutex)); char c = context->buffer[i % BUFFER_SIZE]; pthread_mutex_unlock(&(context->mutex)); sem_post(&context->empty); printf("%c", c); } printf("\n"); return NULL; } void *Writer(void* arg) { Context* context = (Context*) arg; for (int i = 0; i < NUM_CHAR; ++i) { sem_wait(&context->empty); pthread_mutex_lock(&(context->mutex)); context->buffer[i % BUFFER_SIZE] = 'a' + (rand() % 26); float ranFloat = (float) rand() / RAND_MAX; if (ranFloat < 0.5) sleep(0.2); pthread_mutex_unlock(&(context->mutex)); sem_post(&context->full); } return NULL; } int main() { char buffer[BUFFER_SIZE]; pthread_t reader, writer; Context context; srand(time(NULL)); int status = 0; status = pthread_mutex_init(&context.mutex, NULL); status = sem_init(&context.full,0,0); status = sem_init(&context.empty,0, BUFFER_SIZE); context.buffer = buffer; status = pthread_create(&reader, NULL, Reader, &context); status = pthread_create(&writer, NULL, Writer, &context); pthread_join(reader,NULL); // This line seems to be necessary pthread_join(writer,NULL); // This line seems to be necessary pthread_exit(NULL); return 0; } If that is the case, how could I handle the case were plenty of identical threads (like in the code below) would be created using the same thread identifier? In that case, how can I make sure that all the threads will have finished before main exits? Do I really have to keep an array of NUM_STUDENTS pthread_t identifiers to be able to do this? I guess I could do this by letting the Student threads signal a semaphore and then let the main function wait on that semaphore, but is there really no easier way to do this? int main() { pthread_t thread; for (int i = 0; i < NUM_STUDENTS; i++) pthread_create(&thread,NULL,Student,NULL); // Threads // Make sure that all student threads have finished exit(0); }
http://ansaurus.com/question/3330048-pthreads-in-c-pthread_exit
CC-MAIN-2017-34
refinedweb
452
50.46
Welcome back. In the first article of this series, you were introduced to version control and learned some of its concepts. The second article gave you a chance to apply this knowledge by using CVS from the command line, as well as under Project Builder, on a simple Cocoa program -- MyPing. In this final article of this series, we will look at creating software releases using the CVS tag and branch commands, as well as some Mac OS X GUIs for interacting with a CVS repository. tag branch The first version of MyPing contained minimal functionality. It enabled you to ping a host, control the number of pings, the delay between pings, and view ping's output. These features are useful, but we can do better. ping Let's add two new features that enable you to log the output of ping to a file and clear the output of the text view area between pings. I've added these features to MyPing, which you can download and integrate into your latest version. The new additions are notated in the code with comments. You will also need to update your Nib file to get the interface changes. Once integrated, make sure you commit your changes to CVS. Before using the CVS commands to create a new release, let's look briefly at the basics of release management. Against this backdrop, you will have a better sense for the software release process and get a feel for how it works. If you already understand release management, feel free to skip this section. Related Reading Essential CVS By Jennifer Vesperman As you recall from the first article, version control is a process, supported by software, that helps you manage aspects of the software development process. These include recording changes to your project's source files, controlling access to shared files, and managing releases. Even though the software release process varies from project to project and industry to industry, we can abstract some general properties that enable us to understand the process better. There are typically three players in the release process: developers, maintainers, and release managers (though more can exist, such as testers). A developer is responsible for writing code, fixing bugs, and adding features to components of the software system. Maintainers are accountable for a particular part (or parts) of the software package. They take contributions from developers, verify additions, and integrate them into the module's code base. Additionally, they can also develop code for a module. The release manager performs the necessary steps to put together and release the software package. This typically involves tasks such as testing, communicating with maintainers, enforcing a code freeze, reviewing code/feature, etc. The release manager can be one person, or a group. A code freeze is a policy, usually enforced by the release manager, that stops some, or all, types of development on the software system. Freezing can include feature freeze, where no new functionality is added to the software (though small additions like minor bug fixes can be applied) and hard freeze, where nothing new is added to the system. On many projects, the release policy functions as follows. Developers submit code (bug fixes, patches, features, etc.) to the maintainer of a module. The maintainer accepts or rejects submissions and integrates accepted code into the working code base for the module. Once the principal project members decide to release a new version (the policy varies from project to project), the release manager enforces a code freeze, gets updates from module maintainers, accepts/rejects submissions, performs or coordinates testing, reviews code, and makes sure that the new release is stable. Once the release is ready to go, the release manager announces the new release and makes it available to the public. Remember, this is the general case; each project can, and often will, have a different release model (for example, the Linux kernel or the Apache server). See "Release Management Within Open Source Projects" by Justin R. Erenkrantz for more information on release management on open source projects. For the MyPing project, there is a single developer, you, who will take on all of the roles. Nonetheless, it is useful to understand the usual model so you can see the different roles of the release cycle, even though you are performing all tasks. CVS contains two commands that help you create software releases: tag and branch. A tag is a label you give to a set of revisions, or files, enabling you to snapshot a fixed point in the project. As you continue editing tagged files, and committing them to CVS, the tag you created remains fixed with the state of the files when they were tagged. At any point, you can return to the past simply by checking out the files by their tag name. We will use the tag command to create a release of MyPing. The branch command helps you create and manage different versions of your project -- such as release versions, debugged versions, and optimized versions -- without disturbing the main trunk. The trunk is the current state of development; a branch is a specialized version, or diversion. Each branch has a root (the initial state of the branch) and a tip (the current state of the branch). If you create a branch, modify files, and commit the changes, the root and the tip will contain different versions of your files. Further, all commits to a branch do not affect the trunk, only the branch. We will use the branch command to create a specialized version, in our case a bug fix version, of MyPing. The following diagram shows an example of three branches rooted on the trunk, as well as branches rooted on branches. The following steps outline how to create a new release of MyPing. Institute a code freeze for the MyPing project, where nothing new is added to the system. At this point, you should commit all code to CVS. Once the code freeze is in place, you can begin creating a new release. First, make sure everything works correctly by performing your predefined tests. For many projects, this means running a script that executes regression testing to verify that the release is stable. If any code is changed, make sure you commit your changes to CVS. Note: This version of MyPing contains an intentional bug. I did this to demonstrate how to use the CVS branch command to perform bug fixes in a past release. I discuss this technique in the section "Updating a Released Version with CVS Branch." The next step is to record the new release. To do this, you use the CVS tag command to associate a label with all files that make up the release. % cd MyPing % cvs tag release_1_0-public-release Since the tag command does not support adding comments (like import and commit), it's a good idea to include as much information as possible in the tag name. import commit Finally, notify users that the new release is ready and tell them where they can download the.
http://www.macdevcenter.com/pub/a/mac/2003/08/29/version_control_two.html
CC-MAIN-2016-36
refinedweb
1,179
61.36
#include <sys/errno.h> More... #include <Standard_Mutex.hxx> #include <sys/errno.h> Mutex: a class to synchronize access to shared data. This is simple encapsulation of tools provided by the operating system to syncronize access to shared data from threads within one process. Current implementation is very simple and straightforward; it is just a wrapper around POSIX pthread librray on UNIX/Linux, and CRITICAL_SECTIONs on Windows NT. It does not provide any advanced functionaly such as recursive calls to the same mutex from within one thread (such call will froze the execution). Note that all the methods of that class are made inline, in order to keep maximal performance. This means that a library using the mutex might need to be linked to threads library directly. The typical use of this class should be as follows: Note that this class provides one feature specific to Open CASCADE: safe unlocking the mutex when signal is raised and converted to OCC exceptions (Note that with current implementation of this functionality on UNIX and Linux, C longjumps are used for that, thus destructors of classes are not called automatically). To use this feature, call RegisterCallback() after Lock() or successful TryLock(), and UnregisterCallback() before Unlock() (or use Sentry classes). Constructor: creates a mutex object and initializes it. It is strongly recommended that mutexes were created as static objects whenever possible. Destructor: destroys the mutex object. Method to lock the mutex; waits until the mutex is released by other threads, locks it and then returns. Method to test the mutex; if the mutex is not hold by other thread, locks it and returns True; otherwise returns False without waiting mutex to be released. Method to unlock the mutex; releases it to other users.
https://dev.opencascade.org/doc/occt-7.1.0/refman/html/class_standard___mutex.html
CC-MAIN-2022-33
refinedweb
288
52.39
On Mon, Mar 22, Zwane Mwaikambo wrote:> On Mon, 22 Mar 2004, Olaf Hering wrote:> > > On Mon, Mar 22, Zwane Mwaikambo wrote:> >> > > In the absence of /init and other nice boot goodies, we fall through to> > > prepare_namespace() so we shall require initmem to complete boot.> >> > Andrew, please restore the previous version of the patch. The 3 liner is> > much more obvious.> > Olaf, what does the previous patch look like?.../people/akpm/patches/2.6/2.6.5-rc1/2.6.5-rc1-mm1/broken-out/initramfs-search-for-init.patchFrom: Olaf Hering <olh@suse.de>initramfs can not be used in current 2.6 kernels, the files will never beexecuted because prepare_namespace doesn't care about them. The only way toworkaround that limitation is a root=0:0 cmdline option to force rootfs asroot filesystem. This will break further booting because rootfs is not thefinal root filesystem.This patch checks for the presence of /init which comes from the cpio archive(and thats the only way to store files into the rootfs). This binary/scripthas to do all the work of prepare_namespace().--- 25-akpm/Documentation/early-userspace/README | 26 ++++++++++++++++++++++++++ 25-akpm/init/main.c | 7 +++++++ 2 files changed, 33 insertions(+)diff -puN init/main.c~initramfs-search-for-init init/main.c--- 25/init/main.c~initramfs-search-for-init Tue Mar 9 17:00:46 2004+++ 25-akpm/init/main.c Tue Mar 9 17:00:46 2004@@ -604,6 +604,13 @@ static int init(void * unused) sched_init_smp(); do_basic_setup(); + /*+ * check if there is an early userspace init, if yes+ * let it do all the work+ */+ if (sys_access("/init", 0) == 0)+ execute_command = "/init";+ else prepare_namespace(); /*diff -puN Documentation/early-userspace/README~initramfs-search-for-init Documentation/early-userspace/README--- 25/Documentation/early-userspace/README~initramfs-search-for-init Tue Mar 9 17:00:46 2004+++ 25-akpm/Documentation/early-userspace/README Tue Mar 9 17:00:46 2004@@ -71,5 +71,31 @@ custom initramfs images that meet your n For questions and help, you can sign up for the early userspace mailing list at +How does it work?+=================+
http://lkml.org/lkml/2004/3/22/33
CC-MAIN-2015-06
refinedweb
348
54.52
1, Advanced file processing interface shutil Is a high-level file operation tool It is similar to advanced API, and its main strength lies in its better support for copying and deleting [root@python ~]# mkdir /tmp/demo [root@python ~]# cd /tmp/demo/ [root@python demo]# mkdir -p dir1 [root@python demo]# touch a.txt b.txt c.txt [root@python demo]# touch sh.py cc.py 001.jpg 002.jpg 003.jpg //Create required files [root@python demo]# ipython //Open ipython You can also create files in pycharms for implementation. 1. Copy files and folders shutil.copy(file1,file2) #file shutil.copytree(dir1,dir2) #folder (1) Copy file In [1]: import shutil In [2]: shutil.copy('a.txt','aa.txt') Out[2]: 'aa.txt' //You can check whether there are generated files in the corresponding paths of PyCharm and Linux In [3]: ls 001.jpg 003.jpg a.txt cc.py sh.py 002.jpg aa.txt b.txt c.txt (2) Copy folder In [5]: shutil.copytree('dir1','dir11') Out[5]: 'dir11' In [6]: ls 001.jpg 003.jpg a.txt cc.py dir1/ sh.py 002.jpg aa.txt b.txt c.txt dir11/ (3) Copy the contents of the file to another file # _*_ coding:utf-8 _*_ __author__ = 'junxi' import shutil # Copy the contents of the file to another file shutil.copyfileobj(open('old.txt', 'r'), open('new.txt', 'w')) # Copy files shutil.copyfile('old.txt', 'old1.txt') # Copy permission only. Content, group and user remain unchanged shutil.copymode('old.txt', 'old1.txt') # Copy permission, last access time, last modification time shutil.copystat('old.txt', 'old1.txt') # Copy a file to a file or directory shutil.copy('old.txt', 'old2.txt') # On the basis of copy, copy the last access time and modification time of the file shutil.copy2('old.txt', 'old2.txt') # Copy olddir to newdir. If the third parameter is True, the symbolic connection under the folder will be maintained when copying the directory. If the third parameter is False, a physical copy will be generated under the copied directory to replace the symbolic connection shutil.copytree('C:/Users/xiaoxinsoso/Desktop/aaa', 'C:/Users/xiaoxinsoso/Desktop/bbb') # Move directory or file shutil.move('C:/Users/xiaoxinsoso/Desktop/aaa', 'C:/Users/xiaoxinsoso/Desktop/bbb') # Move aaa directory to bbb directory # Delete a directory shutil.rmtree('C:/Users/xiaoxinsoso/Desktop/bbb') # Delete bbb directory 2. Rename and move files and folders shutil.move(filel, file2) shutil.move(file, dir) (1) Rename of file In [7]: shutil.move('aa.txt','dd.txt') Out[7]: 'dd.txt' In [8]: ls 001.jpg 003.jpg b.txt c.txt dir1/ sh.py 002.jpg a.txt cc.py dd.txt dir11/ (2) File move to folder In [9]: shutil.move('dd.txt','dir1') Out[9]: 'dir1/dd.txt' In [11]: ls dir1 dd.txt 3. Delete directory shutil.rmtree(dir) # Delete directory os.unlink(file) # Delete file Delete directory In [15]: shutil.rmtree('dir1') In [16]: ls 001.jpg 003.jpg b.txt c.txt sh.py 002.jpg a.txt cc.py dir11/ 2, Document content management 1. Directory and file comparison The filecmp module contains operations to compare directories and files. filecmp can realize the difference comparison function of files, directories and traversal subdirectories. With filecmp module, no installation is required. (1) Directory structure The contents of files a_copy.txt, a.txt and c.txt in directory dir1 are the same, but the contents of b.txt are different [root@python demo]# mkdir compare [root@python demo]# cd compare/ [root@python compare]# mkdir -p dir1 dir2 [root@python compare]# mkdir dir1/subdir1 [root@python compare]# ls dir1 dir2 [root@python compare]# touch dir1/a_copy.txt dir1/a.txt dir1/b.txt dir1/c.txt [root@python compare]# touch dir2/a.txt dir2/b.txt dir2/c.txt [root@python compare]# mkdir -p dir2/subdir1 dir2/subdir2 [root@python compare]# touch dir2/subdir1/sb.txt //Create required files [root@python compare]# ipython //Open ipython filecmp provides three operation methods: CMP (single file comparison), cmpfile (multi file comparison), and dircmp (directory comparison). (2) Example code: Use the cmp function of filecmp module to compare whether two files are the same. If the files are the same, return True, otherwise False In [1]: import filecmp In [2]: filecmp.cmp('a.txt','b.txt') Out[2]: False In [3]: filecmp.cmp('a.txt','c.txt') Out[3]: True In [4]: filecmp.cmp('a.txt','a_copy.txt') Out[4]: True (3) Compare two files There is also a function named cmpfiles in the filecmp directory, which is used to compare multiple files in two different directories at the same time, and return a triple containing the same file, different files and files that cannot be compared. An example is as follows: In [9]: filecmp.cmpfiles('dir1','dir2',['a.txt','b.txt','c.txt','a_copy.txt']) Out[9]: (['b.txt'], ['a.txt', 'c.txt'], ['a_copy.txt']) # Returns a triple. The first is the same. The third is different. The third is not comparable (without this file or for other reasons) (4) Compare multiple files The cmpfiles function is used to compare files in two directories at the same time, or it can be used to compare two directories. However, when comparing two directories, you need to specify possible files by parameters, so it is cumbersome. There is also a function called dircmp in filecmp to compare two directories. After calling the dircmp function, an object of dircmp class will be returned. This object holds many properties. We can get the differences between directories by looking at these properties. As follows: In [11]: d = filecmp.dircmp('dir1','dir2') #Set test directory In [12]: d.report() diff dir1 dir2 Only in dir1 : ['a_copy.txt'] Only in dir2 : ['subdir2'] Identical files : ['b.txt'] Differing files : ['a.txt', 'c.txt'] Common subdirectories: ['subdir1'] (5) Direct comparison directory does not specify file Directory comparison: create A directory comparison object through filecmp (a,b[,ignore[,hide]]) class to compare folders. By comparing two folders, you can get some detailed comparison results (such as the list of files only existing in folder A), and support recursive comparison of subfolders. In [17]: d.left_list #View dir1 directory structure Out[17]: ['a.txt', 'a_copy.txt', 'b.txt', 'c.txt', 'subdir1'] In [18]: d.right_list #View dir2 directory structure Out[18]: ['a.txt', 'b.txt', 'c.txt', 'subdir1', 'subdir2'] In [19]: d.left_only #Only the Out[19]: ['a_copy.txt'] In [20]: d.right_only #Only directory dir2 exists Out[20]: ['subdir2'] 2. MD5 checksum comparison The check code is calculated by hash function, which is a method of creating small digital "fingerprint" from any data. The hash function compresses the message or data into a summary, making the data smaller and easier to compare. MDS is the most official MD5 hashes are generally used to check the integrity of files, especially to check the correctness of files in case of file transfer, disk error or other situations. Under Linux, the MD5 check code of a file is calculated as follows: [root@192 demo]# md5sum a.txt d41d8cd98f00b204e9800998ecf8427e a.txt It is also very simple to calculate the MD5 check code of a file in Python. You can use the standard library hashlib module. As follows: import hashlib d = hashlib.md5() with open('b.txt') as f: for line in f: d.update(line.encode('utf-8')) print(d.hexdigest()) # Or you can (the most common way of writing, often used to name pictures) >>> import hashlib >>> hashlib.md5(b'123').hexdigest() '202cb962ac59075b964b07152d234b70' # You can also use the general method of hash.new(), hashlib.new(name[, data]). Name passes in the name of the hash encryption algorithm, such as md5 >>> hashlib.new('md5', b'123').hexdigest() '202cb962ac59075b964b07152d234b70' Remember to create the b.txt file 3, Python Management Pack 1,tarfile Since there is a compression module zipfile, it is natural to have an archive module tarfile. The tarfile module is used to unpack and package files, including those compressed by gzip, bz2 or lzma. If it is a file of type. zip, it is recommended to use the zipfile module. For more advanced functions, please use the shutil module. Defined classes and exceptions tarfile.open(name=None, mode='r', fileobj=None, bufsize=10240, \kwargs) Returns an object of type TarFile. In essence, it is to open a file object. Python can see this kind of file object type design everywhere. You can easily understand it, can't you? Name is the file name or path. bufsize is used to specify the size of the data block, which is 20 * 512 bytes by default. Mode is the open mode. A string similar to filemode[:compression] format can have the combination shown in the following table. The default is "r" If the current mode does not properly open the file for reading, a ReadError exception will be thrown, in which case use the "r" mode. If the specified compression method is not supported, a CompressionError exception is thrown. In the mode of w:gz,r:gz,w:bz2,r:bz2,x:gz,x:bz2, the tarfile.open() method accepts an additional compression level parameter, compresslevel, with the default value of 9. (1) Read file Compress files Extraction code: 0418 import tarfile with tarfile.open('tengine-2.3.2.tar.gz') as t: # getmember() to view the list of files for member in t.getmembers(): print(member.name) with tarfile.open('tengine-2.3.2.tar.gz') as t: t.extractall('a','tengine-2.3.2/man') t.extract('tengine-2.3.2/man','b') Common method description: - getmembers(): get the list of files in the tar package - member.name: get the file name of the file in the tar package - extract(member, path): extract a single file - Extract all (path, memebers): extract all files (2) Create tar package Remember to create the read.txt file import tarfile with tarfile.open( 'readme.tar',mode='w') as out : out.add('read.txt') You can check whether there is a readme.tar file in the corresponding location (3) Read and create compressed package import tarfile with tarfile.open('tarfile_add.tar ',mode='r:gz') as out: pass with tarfile.open('tarfile_add.tar ',mode='r:bz2') as out: pass (4) Back up the specified file to a compressed package import os import fnmatch import tarfile import datetime def is_file_math(filename, patterns): '''Find files of a specific type''' for pattern in patterns: if fnmatch.fnmatch(filename, pattern): return True return False def find_files(root, patterns=['*']): for root, dirnames, filenames in os.walk(root): for filename in filenames: if is_file_math(filename, patterns): yield os.path.join(root, filename) patterns = ['*.txt','*.md'] now = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S') filename = 'backup_all_file_{0}.tar.gz'.format(now) with tarfile.open(filename, 'w') as f: for item in find_files('.', patterns): f.add(item) You can check whether there is a readme.tar file in the corresponding location 2,zipfile zipfile is used for compression and decompression of zip format in python. Because it is a very common zip format, this module is also used frequently. Zipfile has two very important classes, zipfile and ZipInfo. In most cases, you only need to use these two classes. - ZipFile is the main class used to create and read zip files; - ZipInfo is the information for each file of the stored zip file. (1) Read zip file import zipfile demo_zip = zipfile.ZipFile('read.zip') print(demo_zip.namelist()) demo_zip.extractall('1') demo_zip.extract('a.jpg','2') //Remember to create a directory named 2. Of course, the path of the first field must also be correct. Common method description: - namelist(): returns a string list of all files and folders contained in the zip file - extract(filename, path): extract a single file from a zip file - Extract all (path): extract all files from the zip file (2) Create zip file import zipfile newZip = zipfile.ZipFile( 'new.zip', mode='w' ) newZip.write('a.jpg') #File must exist newZip.close() (3) Python command line calls zipfile #Create zip file python -m zipfile -c new1.zip b.txt #View the contents of the zip file python -m zipfile -l new1.zip File Name Modified Size b.txt 2020-04-26 14:35:12 0 #Extract the zip file to the specified directory python -m zipfile -e new1.zip / Options included in the command line interface provided by the zipfile module: - -1: Display the list of files in zi p package - -e: Extracting z i p compressed packets - -c: Create a zip package - -t: Verify that the file is a valid zi p (4) Properties of zipfile import zipfile, os zipFile = zipfile.ZipFile(os.path.join(os.getcwd(), 'duoduo.zip')) zipInfo = zipFile.getinfo('Files in files.txt') print ('filename:', zipInfo.filename) #Get file name print ('date_time:', zipInfo.date_time) #Gets the last modification time of the file. Returns a tuple containing six elements: (year, month, day, hour, minute, second) print ('compress_type:', zipInfo.compress_type) #Compression type print ('comment:', zipInfo.comment) #Document description print ('extra:', zipInfo.extra) #Extension data print ('create_system:', zipInfo.create_system) #Gets the system that created the zip document. print ('create_version:', zipInfo.create_version) #Gets the PKZIP version of the zip document created. print ('extract_version:', zipInfo.extract_version) #Get the PKZIP version required to extract the zip document. print ('extract_version:', zipInfo.reserved) # Reserved field. The current implementation always returns 0. print ('flag_bits:', zipInfo.flag_bits) #zip flag bit. print ('volume:', zipInfo.volume) # Volume label for the header. print ('internal_attr:', zipInfo.internal_attr) #Internal properties. print ('external_attr:', zipInfo.external_attr) #External properties. print ('header_offset:', zipInfo.header_offset) # File header offset. print ('CRC:', zipInfo.CRC) # CRC-32 for uncompressed files. print ('compress_size:', zipInfo.compress_size) #Gets the compressed size. print ('file_size:', zipInfo.file_size) #Gets the uncompressed file size. zipFile.close() # 3. shutil creates and reads compressed packages Shutil can be simply understood as sh + util, shell tool. The shutil module is a supplement to the os module, mainly for copying, deleting, moving, compressing and decompressing test import shutil print(shutil.get_archive_formats()) The output results are as follows: [('bztar', "bzip2'ed tar-file"), ('gztar', "gzip'ed tar-file"), ('tar', 'uncompressed tar file'), ('xztar', "xz'ed tar-file"), ('zip', 'ZIP file')] (1) Create a compressed package import shutil # Parameter 1: name of the generated package file # Parameter 2: format of compressed package # Parameter 3: compressed directory shutil.make_archive('a.jpg','gztar', 'ddd') You can check whether there are generated files in the corresponding location (2) Unzip import shutil # Parameter 1: the compressed package to be decompressed # Parameter 2: extracted directory print(shutil.unpack_archive('a.jpg.tar.gz','jpg')) You can check whether there are generated files in the corresponding location
https://programmer.ink/think/methods-of-python-processing-files-and-files-shutil-filecmp-md5-tarfile-zip.html
CC-MAIN-2021-39
refinedweb
2,435
60.61
Help and tutorials will be online very soon! Please check back in a few days. For now you can check out the references. If you just received your board and want to get started straight away, try typing the following code into the main.py file on the board. Then save and reset the board. import pyb switch = pyb.Switch() leds = [pyb.LED(i+1) for i in range(4)] accel = pyb.Accel() i = 0 while not switch(): y = accel.y() i = (i + (1 if y > 0 else -1)) % len(leds) leds[i].toggle() pyb.delay(10 * max(1, 20 - abs(y)))
https://www.micropython.org/help/
CC-MAIN-2020-34
refinedweb
102
89.45
Advanced Namespace Tools blog 06 March 2018 History of ANTS, part 2 After describing how I arrived in the promised land of namespaces, I will move on to the process of writing the software that became ANTS. The first piece is hubfs, which is still the most generally-useful thing I have done in Plan 9, and the program in which I have invested the most time in continuing improvements. Pre-hubfs: scripting pipe devices, and iosrv During my years of using linux, I always found "screen" to be a useful part of my workflow. I got in the habit of pretty much always working within a screen session, not only when SSH'd to a remote system, but locally as well. Decoupling my work from the outer layer of the user interface seemed to improve just about everything. Given Plan 9's nature as a distributed OS, I was rather surprised that there didn't seem to a screen-equivalent program - how was I supposed to avoid losing state on remote systems if I needed to disconnect? There was a weird old program called "consolefs" which was focused on serial lines, but didn't seem like an optimal general solution. Things originated from me playing around with having two machines mounting the same window system, and then using the pipe device to create share input/output to a shell. This old blog post and one from the next day show me playing around with sharing a single rio between multiple machines and using the pipe device and tee and shell directions to create a rough system for sharing a single shell. I had been learning C from k&r along with Nemo's "Intro to OS Abstractions" Plan 9 book, and the fact that I really wanted a usable "screen" type program prompted me to begin development in earnest. During the next couple weeks I hacked for about 16 hours a day on a program that was the direct forerunner to hubfs: iosrv. It is almost exactly like hubfs, except with a hackier implementation and a much more awkward interface. It creates pipes with the pipe device, wraps them into a /srv with exportfs, and then creates an internal data buffers for each pipe/file descriptor. The abstraction called a "hub" which extends the idea of a pipe originated here. I recall spending seemingly infinite hours trying to get the locking correct. Looking back at the iosrv code it is clear that it really is the same program as hubfs, just implemented more awkwardly. The same basic idea of providing a mux-buffer for each shell fd and having an outer shell-client program that sits between the user and the shell 'inside' the io ubs, and a lot of the control interface and internal data structures are the same. Hubfs I proceeded immediately from iosrv into rewriting it as hubfs. I was mostly displeased with the hacky mechanism of creating pipes with the pipe devices and then running an exportfs into /srv. I knew things should be done as a real 9p fs. Since I hadn't written a 9p fs before, there was a lot of learning involved in making the translation. It took me awhile to grok the basic concept of how lib9p worked, that the flow of control was set by the lib9p service loop and my fs functions were being called from within it. I think this "design pattern" is very beautiful and powerful and I feel like it could probably be used more often than it seems to be. From checking sourcesdump, I see that I uploaded hubfs.tgz on August 23, 2009. That makes just over a month of development from "rc script experiments" through iosrv to hubfs. I remember I was doing absolutely nothing except coding during this entire time, working with absolute and complete obsessive focus. Most of my previous experience of creative flow had been in the context of piano playing and musical performance and the feeling of programming when "in the zone" had certain similarities, but also strong contrasts. In addition to the emotional/logical divide between music and coding, programming seems to have the capacity to take over my brain for a longer span of time, up to several weeks for a single problem. Because ANTS is "namespace tools" it seems relevant to explain just how hubfs fits in. In a trivial sense, any 9p fs is in a certain sense a 'namespace tool' because it works by creating an fs interface within your namespace, but this is vacuous. More relevantly, persisting a shell provides a means of saving and moving between namespaces. The main example in ANTS is that a hubfs is started within the independent "rootless" namespace created during boot, allowing that namespace to be accessed via the hubfs from the main namespace. This is a specific example of a general principle - one purpose of hubfs is to allow data to flow easily between divergent namespaces.
http://doc.9gridchan.org/blog/180306.ants.history.pt2
CC-MAIN-2021-21
refinedweb
831
55.58
[explanation] Uva10529 dumbbones dominoes meaning of the title You try to line up some dominoes and push them down. But if you accidentally knock down the newly placed dominoes when you put them, it will knock down all the adjacent strings of dominoes, and your work will be partially destroyed. For example, you've set your dominoes as DD__ DxDDD_ The shape (side view) of D (where d stands for dominoes, underline and x Stand for empty spaces where dominoes have not been placed), and you want to put another dominoes in the position of x. It may knock down one dominoes on the left or three dominoes on the right, and you will have to rearrange them. Give the number of dominoes you want to put, and the probability that it will fall to the left and right when you put dominoes (each dominoes is the same). In order to minimize the number of dominoes expected to be placed, you can use a specific placement order. Find the minimum number of dominoes expected to be placed. Algorithm 1 Probability + optimal solution, interval DP can be considered Consider an interval. Enumerate the positions of the last dominoes placed in this interval, and divide the original interval into left and right sub intervals. Let the expected minimum placement times of the original section be \ (E \), the left section be \ (E_1 \), and the right section be \ (E_2 \). At this time, the last domino has not been put on, so the position of this vacancy is \ (x \) It is known that the probability of each domino falling to the left is \ (P_l \) and to the right is \ (P_r \) If you look at the dominoes \ (x \) alone, the probability that it will not fall is \ (1-P_l-P_r \), then the expected number of times it will not fall is \ (\ frac1{1-P_l-P_r} \) Note that in this number: the last one does not fall, some of the previous ones are left and some are right. Because the probability to the left is \ (P_l \), the number of times to the left is \ (\ frac1{1-P_l-P_r}\times P_l \). Because every time it reverses to the left, the left section must be rearranged. It takes \ (E_1 \) to place the left section once, so the left section should be placed \ (\ frac{P_l}{1-P_l-P_r}\times E_1 \) times in total. Similarly, a total of \ (\ frac{P_r}{1-P_l-P_r}\times E_2 \) times should be placed in the right section. In addition, before placing \ (x \), the left and right sections need to be placed once, with a total of \ (E_1+E_2 \). Finally, the state transition equation is So we easily wrote the following lump int n; double pl, pr;//Probability of falling left and right double f[N][N];//dp array (memory search) double dfs(int l, int r) {//Left and right end points of interval double &E = f[l][r];//It is convenient and fast to use references if (E > eps) return E;//Memorization if (l > r) return E = 0.0;//boundary if (l == r) return E = 1.0/(1.0-pl-pr);//Just one E = inf;//Let's start with a maximum for (int i=l; i<=r; ++i) {//Enumerate the last placement i double E1 = dfs(l, i-1);//Placement times of left section double E2 = dfs(i+1, r);//Right interval E = std::min( E, E1 + E2 + 1.0/(1.0-pl-pr) + pl*E1/(1.0-pl-pr) + pr*E2/(1.0-pl-pr)//The equation just now );//The optimal solution is obtained by min } return E; } Then the glorious T flew to zero Algorithm 2 The above algorithm is \ (O(n^3) \), of course not. We consider dimensionality reduction Because each card is as like as two peas, and the same is true for all the same lengths of intervals. Their E\ is the same. If the state is reset to the interval length, only the intervals with different lengths need to be transferred, and the state transfer equation remains unchanged Obtain positive solution #include <cstdio> #include <iostream> using namespace std; #define N 1030 const double eps = 1e-8; const double inf = 1e20; double pl, pr;//Probability of falling left and right double f[N]; double dfs(int n) {//Interval with length n double &E = f[n], E1, E2;//Or reference if (E > -0.5) return E;//Let f[0]=0.0 and f[1]=1.0/(1.0-pl-pr) transfer naturally E = inf; for (int i=1; i<=n; ++i) { E1 = dfs(i-1), E2 = dfs(n-i);//Left and right answers E = min( E, E1 + E2 + f[1] + pl*E1*f[1] + pr*E2*f[1] ); } return E; } signed main() { ios::sync_with_stdio(false); cin.tie(nullptr);//Turn off stream synchronization while (true) { int n; cin >> n; if (!n) break; cin >> pl >> pr; for (int i=2; i<=n; ++i) f[i] = -1.0;//Pay attention to details f[1] = 1.0 / (1.0 - pl - pr);//Meanwhile, f[0]=0.0 printf("%.2f\n", dfs(n)); } return 0; }
https://programmer.group/solution-uva10529-dumbbones-dominoes.html
CC-MAIN-2021-49
refinedweb
838
64.85
On 22/01/2008, Jörg Schaible <Joerg.Schaible@elsag-solutions.com> wrote: > Mark Proctor wrote: > > Torsten Curdt wrote: > >> > >> On 21.01.2008, at 10:08, Tom Schindl wrote: > >> > >>> Hi Torsten, > >>> > >>> I understand this but we are seeing many J2EE-Servers adopting OSGi > >>> and many applications (I admit most of them in Eclipse-world) also. > >>> It seems strange to me in those envs to use this "artificial" > >>> package to overcome jar-hell (which is the only reason for the > >>> java5-package right?) they are not having > >>> because of OSGi. > >> > >> Hm.... not sure why its such a big deal to have e.g. > >> o.a.commons.lang2 or similar. If you use an IDE that manages imports > >> you will barely notice anyway. > > personally I've always wondered why having a version attached to the > > namespace hasn't taken off more to deal with api breaking > > releases. if > > we had org.antlr1 org.antlr2 org.antlr3 life would be much > > easier. Sure > > you wouldn't get auto drop in jar and release, but I'm > > guessing tooling > > could make up for that in those cases. > > Ironically Java could already support this, there's a reason why a manifest should specify a Specification-Version. It would have been so simple to use this information also to separate classes in a class loader. But the Gods of Java refused to make anything out of it ;-) > Surely that would not work for java classes without a manifest - e.g. classes which are loaded as individual class files rather than from jars. Not all Java processes use jars. > - Jörg > > --------------------------------------------------------------------- > To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org > For additional commands, e-mail: dev-help@commons.apache.org > > --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org For additional commands, e-mail: dev-help@commons.apache.org
http://mail-archives.apache.org/mod_mbox/commons-dev/200801.mbox/%3C25aac9fc0801220409k5cd8e392gc2bccd4fd4e68ea0@mail.gmail.com%3E
CC-MAIN-2014-23
refinedweb
300
59.3
So my situation is that I have a file, myfile.py (written in python) that I started developing on my own machine. I used vim to do this, but at that time I did not know that Vim could let you define the length of your tabs, and also to show how many times the line had been tabbed by using a marker. So before I SCP'ed myfile.py to a remote unix server and began working on it there, all my tabs were about 6 spaces in length each, and there were no markings defining how many indentations were present in a particular line. But since 6 spaces is a lot, there really was no need... But when I began developing on the remote server, I noticed that their vim configurations had the tab so that it would be about 4 spaces in length each, and so that there would be vertical markings to show that a tab is present. However, it did not automatically change the lines of code that were already there in myfile.py to fit their tab-settings, and the settings ended up only applying to any new lines of code I wrote using vim on their server. Being the newb that I was, I didn't bother to fix anything in order to make the file look consistent. Now I have some weird, interleaved-mixture where some lines have tabs that are 4 in length, others with 6, and the ones with shorter length have a vertical marking for each tab and the longer ones have nothing. How do I efficiently fix this mess to revert back to one standard? (hopefully the one that has the shorter tab length with the vertical markings). I'm looking for anyone who has even remote experience regarding this predicament - whether this causes me to learn more vim on an advanced level, or start with some roundabout way first, I do not care. I just prefer anything over manually fixing each line - you know how wrong that could make your code when you're dealing with Python! [EDIT]-> Also thought I should note that each line does not have the same number of tabs... some lines have 0, 1, 4, 5, etc etc... This question came from our site for professional programmers interested in conceptual questions about software development. :retab! Indentation in vim can be a little hairy. You can configure vim to either insert a tab character or insert some number of spaces when you press the tab key using the expandtab option. You can also configure how wide tabs (how many spaces they take up) are with the tabstop option. Finally you can also configure vim to display tabs with a special character using the listchars option (▸ is a common choice). However that character won't be shown where spaces are used in place of tabs. That is why some of your tabs have a special character and some do not, some of them are actual tab characters, and others are just a lot of spaces acting like a tab. It sounds like the remote unix server was configured with expandtab on and tabstop set to 6, so when you enter a tab, it outputs 6 spaces, while you're development server seems to be configured with noexpandtab and tabstop set to 4. expandtab tabstop noexpandtab To fix this you can use vim's find/replace functionality to replace all occurrences of six spaces with a single tab using the command :%s/ \{6}/\t/g. To understand that command you should read up on vim's search and replace capabilities here. To keep this from happening in the future you should make your own .vimrc file that sets things up the way you like it and either place it in your home directory so it is automatically loaded or :source it whenever you edit any files. :%s/ \{6}/\t/g .vimrc :source Note: There are also other tab options such as shiftwidth and softtabstop but they aren't as relevant to your problem. However you should read about them so you can configure vim to act exactly how you want. :set ts=6 et #!/usr/bin/env python def fun(): ······print "hello, world" :set noet :ret! :set noet :ret! #!/usr/bin/env python def fun(): »·····print "hello, world" :set ts=4 #!/usr/bin/env python def fun(): »···print "hello, world" :set et :ret! :set et #!/usr/bin/env python def fun(): ····print "hello, world" How I would to it. Set up the tabbing in your .vimrc on both machines as you prefer the most. Number of spaces a tab counts for set tabstop=4 Number of spaces used for (auto)indent set shiftwidth=4 If you want to expand tabs to spaces set expandtab or not set noexpandtab Every time you open a file that has a messed up indentation. Go to the start of the page gg and autoindent to the end of the page =G By posting your answer, you agree to the privacy policy and terms of service. asked 3 years ago viewed 1338 times active
http://superuser.com/questions/369747/tabs-in-vim-when-the-alignment-is-all-wrong-how-do-you-fix/369748
CC-MAIN-2015-11
refinedweb
854
68.5
Jul 26 2020 12:52 PM I am running the Windows Insider Preview, build 20175.1000. I have found some new, as-yet-undocumented MSIX manifest schemas, and I was wondering if anyone might have any insight into what they might be useful for. (To follow along, install Resource Hacker and open the file C:\Windows\System32\AppxPackaging.dll; the schemas are found under the "500" entry in the list on the left.) Since these features are still in early beta, I thought it would be better to post here instead of filing a GitHub issue on the MSIX schema reference on docs.microsoft.com. These elements seem to duplicate the functionality of the existing windows.cloudFiles extension in desktop3. I'm not certain why we need two. Why, exactly, are there MSIX entries to create new items in the Control Panel shell namespace? After all, the legacy Control Panel is deprecated in favor of the UWP-based Settings app. While the Settings app is currently not extensible by third parties, there is little reason (that I know of) that the Control Panel entries in this schema would be incompatible with the Settings app, since all they can do is provide a link to an external application. Nonetheless, Microsoft has given no evidence that they will be migrating the ability to add new items to the legacy Control Panel into the Settings app. This appears to be a superset of the com:, com2:, and com3: schema namespaces. Is there a reason why we cannot simply extend those schemas, instead of having to duplicate everything? This schema defines an extension for creating custom shortcuts, an apparent duplicate of rescap3:DesktopAppMigration, an extension for registering MAPI handlers, and finally, an extension for creating Windows 7-style application registrations. This last one is weird, because (a) the UI that consumes this kind of application registration has been removed (when you click on it in Control Panel, it simply opens the Settings app, which uses a different UI that does not surface the same information as its Windows 7 equivalent), and (b) MSIX already does this registration for you. This schema contains a system error reporting extension (which is self-describing), and what appears to be a subset of the file association support present in several of the existing schemas. Again, why do we need two, especially since this one has fewer features than what is present in the already documented schemas? Thanks for looking into this for me! Jul 28 2020 06:05 AM Jul 28 2020 06:05 AM I, too, find the schemas frustrating and wish to add my thoughts. Although I don't expect that Microsoft would respond to requests regarding the undocumented pieces, we can still vent about the state of the schemas. I am so far avoiding any programing that requires me to generate or parse the manifest, because it is impossible to create a validator for the available schemas. The practice of adding additional schema files each release, in combination with what I think of as different schema families (what you want for iot or mobile is a different subset of similar things), has led to elements and parameters defined without the logic about where these things belong. Microsoft code itself must be littered with assumptions about what are legal combinations (or placements of elements in the XML) that outsiders can only guess at because quite too often it is not defined in the schema sets. As an example, last year I looked into ApplicationAlias. It was a well defined element in itself, but at the time I was unable to find any element with documentation/schema that accepted it as a sub-element. We are left to searching for examples where Microsoft actually is seen using it to understand a possible usage structure. If Microsoft wants a robust partner environment I believe that they are going to have to clean this up at some point. Aug 04 2020 12:17 PM Aug 04 2020 12:17 PM Hey folks, Thanks for the detailed feedback on the manifest schemas. We understand your concerns about the state of documentation around manifest schemas and the impact this has on our partners. We are working out what additional details we can provide and hope to create a more detailed post explaining the overall rational and design of the manifest schemas. Thanks! James
https://techcommunity.microsoft.com/t5/msix-packaging-and-tools/purpose-of-new-msix-manifest-schema-additions/td-p/1546785
CC-MAIN-2021-49
refinedweb
732
56.69
Input/Output with Mahotas¶ Mahotas does not have any builtin support for input/output. However, it wraps a few other libraries that do. The result is that you can do: import mahotas as mh image = mh.imread('file.png') mh.imsave('copy.png', image) It can use the following backends (it tries them in the following order): - It prefers mahotas-imread, if it is available. Imread is a native C++ library which reads images into Numpy arrays. It supports PNG, JPEG, TIFF, WEBP, BMP, and a few TIFF-based microscopy formats (LSM and STK). - It also looks for freeimage. Freeimage can read and write many formats. Unfortunately, it is harder to install and it is not as well-maintained as imread. - Finally, it tries to load pillow. Thus, to use the imread or imsave functions,.
http://mahotas.readthedocs.io/en/latest/io.html
CC-MAIN-2017-47
refinedweb
135
68.47
import "sigs.k8s.io/controller-runtime/pkg/manager" Package manager is required to create Controllers and provides shared dependencies such as clients, caches, schemes, etc. Controllers must be started by calling Manager.Start. doc.go internal.go manager.go testutil.go type LeaderElectionRunnable interface { // NeedLeaderElection returns true if the Runnable needs to be run in the leader election mode. // e.g. controllers need to be run in leader election mode, while webhook server doesn't. NeedLeaderElection() bool } LeaderElectionRunnable knows if a Runnable needs to be run in the leader election mode. type Manager interface { // Add will set requested dependencies on the component, and cause the component to be // started when Start is called. Add will inject any dependencies for which the argument // implements the inject interface - e.g. inject.Client. // Depending on if a Runnable implements LeaderElectionRunnable interface, a Runnable can be run in either // non-leaderelection mode (always running) or leader election mode (managed by leader election if enabled). Add(Runnable) error // SetFields will set any dependencies on an object for which the object has implemented the inject // interface - e.g. inject.Client. SetFields(interface{}) error // AddHealthzCheck allows you to add Healthz checker AddHealthzCheck(name string, check healthz.Checker) error // AddReadyzCheck allows you to add Readyz checker AddReadyzCheck(name string, check healthz.Checker) error // Start starts all registered Controllers and blocks until the Stop channel is closed. // Returns an error if there is an error starting any controller. Start(<-chan struct{}) error // GetConfig returns an initialized Config GetConfig() *rest.Config // GetScheme returns an initialized Scheme GetScheme() *runtime.Scheme // GetClient returns a client configured with the Config. This client may // not be a fully "direct" client -- it may read from a cache, for // instance. See Options.NewClient for more information on how the default // implementation works. GetClient() client.Client // GetFieldIndexer returns a client.FieldIndexer configured with the client GetFieldIndexer() client.FieldIndexer // GetCache returns a cache.Cache GetCache() cache.Cache // GetEventRecorderFor returns a new EventRecorder for the provided name GetEventRecorderFor(name string) record.EventRecorder // GetRESTMapper returns a RESTMapper GetRESTMapper() meta.RESTMapper // GetAPIReader returns a reader that will be configured to use the API server. // This should be used sparingly and only when the client does not fit your // use case. GetAPIReader() client.Reader // GetWebhookServer returns a webhook.Server GetWebhookServer() *webhook.Server } Manager initializes shared dependencies such as Caches and Clients, and provides them to Runnables. A Manager is required to create Controllers. This example adds a Runnable for the Manager to Start. Code: err := mgr.Add(manager.RunnableFunc(func(<-chan struct{}) error { // Do something return nil })) if err != nil { log.Error(err, "unable add a runnable to the manager") os.Exit(1) } This example starts a Manager that has had Runnables added. Code: err := mgr.Start(signals.SetupSignalHandler()) if err != nil { log.Error(err, "unable start the manager") os.Exit(1) } New returns a new Manager for creating Controllers. This example creates a new Manager that can be used with controller.New to create Controllers. Code: cfg, err := config.GetConfig() if err != nil { log.Error(err, "unable to get kubeconfig") os.Exit(1) } mgr, err := manager.New(cfg, manager.Options{}) if err != nil { log.Error(err, "unable to set up manager") os.Exit(1) } log.Info("created manager", "manager", mgr) This example creates a new Manager that has a cache scoped to a list of namespaces. Code: cfg, err := config.GetConfig() if err != nil { log.Error(err, "unable to get kubeconfig") os.Exit(1) } mgr, err := manager.New(cfg, manager.Options{ NewCache: cache.MultiNamespacedCacheBuilder([]string{"namespace1", "namespace2"}), }) if err != nil { log.Error(err, "unable to set up manager") os.Exit(1) } log.Info("created manager", "manager", mgr) type NewClientFunc func(cache cache.Cache, config *rest.Config, options client.Options) (client.Client, error) NewClientFunc allows a user to define how to create a client type Options // MapperProvider provides the rest mapper used to map go types to Kubernetes APIs MapperProvider func(c *rest.Config) (meta.RESTMapper, error) // SyncPeriod determines the minimum frequency at which watched resources are // reconciled. A lower period will correct entropy more quickly, but reduce // responsiveness to change if there are many watched resources. Change this // value only if you know what you are doing. Defaults to 10 hours if unset. // there will a 10 percent jitter between the SyncPeriod of all controllers // so that all controllers will not send list requests simultaneously. SyncPeriod *time.Duration // // LeaseDuration is the duration that non-leader candidates will // wait to force acquire leadership. This is measured against time of // last observed ack. Default is 15 seconds. LeaseDuration *time.Duration // RenewDeadline is the duration that the acting master will retry // refreshing leadership before giving up. Default is 10 seconds. RenewDeadline *time.Duration // RetryPeriod is the duration the LeaderElector clients should wait // between tries of actions. Default is 2 seconds. RetryPeriod *time.Duration // Namespace if specified restricts the manager's cache to watch objects in // the desired namespace Defaults to all namespaces // // Note: If a namespace is specified, controllers can still Watch for a // cluster-scoped resource (e.g Node). For namespaced resources the cache // will only hold objects from the desired namespace. Namespace string // MetricsBindAddress is the TCP address that the controller should bind to // for serving prometheus metrics. // It can be set to "0" to disable the metrics serving. MetricsBindAddress string // HealthProbeBindAddress is the TCP address that the controller should bind to // for serving health probes HealthProbeBindAddress string // Readiness probe endpoint name, defaults to "readyz" ReadinessEndpointName string // Liveness probe endpoint name, defaults to "healthz" LivenessEndpointName string // Port is the port that the webhook server serves at. // It is used to set webhook.Server.Port. Port int // Host is the hostname that the webhook server binds to. // It is used to set webhook.Server.Host. Host string // CertDir is the directory that contains the server key and certificate. // if not set, webhook server would look up the server key and certificate in // {TempDir}/k8s-webhook-server/serving-certs. The server key and certificate // must be named tls.key and tls.crt, respectively. CertDir string // NewCache is the function that will create the cache to be used // by the manager. If not set this will use the default new cache function. NewCache cache.NewCacheFunc // NewClient will create the client to be used by the manager. // If not set this will create the default DelegatingClient that will // use the cache for reads and the client for writes. NewClient NewClientFunc // EventBroadcaster records Events emitted by the manager and sends them to the Kubernetes API // Use this to customize the event correlator and spam filter EventBroadcaster record.EventBroadcaster // contains filtered or unexported fields } Options are the arguments for creating a new Manager type Runnable interface { // Start starts running the component. The component will stop running // when the channel is closed. Start blocks until the channel is closed or // an error occurs. Start(<-chan struct{}) error } Runnable allows a component to be started. It's very important that Start blocks until it's done running. RunnableFunc implements Runnable using a function. It's very important that the given function block until it's done running. func (r RunnableFunc) Start(s <-chan struct{}) error Start implements Runnable Package manager imports 26 packages (graph) and is imported by 750 packages. Updated 2020-01-14. Refresh now. Tools for package owners.
https://godoc.org/sigs.k8s.io/controller-runtime/pkg/manager
CC-MAIN-2020-05
refinedweb
1,210
51.65
Components and supplies Apps and online services About this project Motivation Have you ever struggled to make it through an afternoon meeting? Can't stop yawning? Sleeping through your 3 hour lectures? No, I'm not selling you an energy supplement. You probably didn't get enough sleep last night, most of us don't. But, your drowsiness might not be completely your fault! Poor ventilation and high levels of carbon dioxide could be causing you to feel sleepy! Using this NDIR CO2 sensor you can accurately measure the CO2 levels and read the values using a simple UART serial interface. Conventional CO2 sensors used to draw a lot of power and took time to warm up the lamp before they were primed to take a reading. Now using an LED and infrared detector you can accurately measure gasses using this 3.3v sensor which pulls less than 1.5ma on average. It uses optical dispersion and some other witchcraft, check out Cozir's datasheet for some more details, it's impressive. Getting Started Right so what do I need to start measuring the 'sleepiness factor' of my office cubicle? - Arduino Due or Zero - AnduinoWiFi shield - Cozir CO2 sensor - A few jumpers to connect and power up the sensor. Connections for this one are simple, the sensor has 10 pins but you'll only need to wire up 4. Using the Due connect 3.3v to 3V3, GND to Ground, Rx(DIO19) to Tx, and Tx(DIO18) to Rx. Be sure you've "crossed" the UART wires and remember you can't use Tx(DIO1) and Rx(DIO0) unless you'd like to forgo using the serial term to monitor your readings. I've used Serial1 for the sensor although you could use any of the three remaining UARTs. ** If you'd like to bypass the Arduino for now and test sending commands directly to the sensor just open up putty or your favorite serial term and connect at 9600 baud, 8bits, no parity, 1 stop bit. You may need to enable sending '/r/n' on each submission of ascii.** ***This sensor is 3.3v TTL so be sure to use a logic shifter if coms originate on a 5v source*** Calibrating the sensor There are a few ways to calibrate the CO2 sensor. One of the best ways is saturating the sensor in a known gas (Nitrogen) which contains no Carbon Dioxide. This will generate a known zero reading. If you don't have any Nitrogen laying around you can also fairly accurately calibrate using fresh air. So grab your sunglasses, we're going on a field trip. When you're outside you're going to want to run the example sketch below and uncomment calibrateFreshAir(); in the Setup() routine. This sends the 'G' command over serial to the sensor requesting... calibration! Now this isn't perfect, since I don't know exactly what the current CO2 concentration conditions really are on the 14th floor here in NYC (possibly a bit higher here than "Earth's" average in Hawaii). But since our thresholds for actually physically sensing differences in the environment are in the 1,000's of ppm I think we're pretty safe to use this 450ppm fresh air reading as our calibration point for measuring our conference rooms and classrooms. char buffer[20] = {0}; int c = 0; void setup() { Serial.begin(9600); while(!Serial){}; Serial1.begin(9600); while(!Serial){}; Serial.println("Begin reading CO2 Levels"); //setOperatingMode(CZR_STREAMING); //setOperatingMode(CZR_POLLING); //calibrateFreshAir(); } void loop() { delay(10000); c = Request("Z"); Serial.print("CO2 : ");Serial.println(c); Serial.println(""); } int Request(char* s) { buffer[0] = '\0'; int idx = 0; Command(s); delay(250); while(Serial1.available()) { buffer[idx++] = Serial1.read(); } buffer[idx] = '\0'; uint16_t rv = 0; switch(buffer[1]) { case 'T' : rv = atoi(&buffer[5]); if (buffer[4] == 1) rv += 1000; break; default : rv = atoi(&buffer[2]); break; } return rv; } void Command(char* s) { Serial1.print(s); Serial1.print("\r\n"); } uint16_t calibrateFreshAir() { return Request("G"); } void setOperatingMode(uint8_t mode) { sprintf(buffer, "K %u", mode); Command(buffer); } Also take quick note of the two #define statements at the top of the sketch. When you first unbox your sensor you may need to configure it using setOperatingMode(). This sketch is designed to work in polling mode. If you've successfully calibrated and are reading CO2 levels to the terminal you're ready to move on to publishing this to the cloud. Let's connect to Adafruit IO and start visualizing the data. Publishing CO2 metrics to the cloud If you haven't connected to Adafruit IO using anduinoWiFi yet, check out this project writeup that will get started. It covers all the details I'm going to gloss over here. Here's the sketch to get you started publishing your CO2 levels every minute. #include <WiFi101.h> #include "Adafruit_MQTT.h" #include "Adafruit_MQTT_Client.h" #include "AnduinoLCD.h" // WiFi parameters #define WLAN_SSID "YOUR_SSID" #define WLAN_PASS "YOUR_PASSWD" // Adafruit IO #define AIO_SERVER "io.adafruit.com" #define AIO_SERVERPORT 1883 #define AIO_USERNAME "YOUR_AIO_USERNAME" #define AIO_KEY "YOUR_AIO_KEY" WiFiClient client; Adafruit_MQTT_Client mqtt(&client, AIO_SERVER, AIO_SERVERPORT, AIO_USERNAME, AIO_KEY); /****************************** Feeds ***************************************/ // Setup feed for co2 Adafruit_MQTT_Publish carbonDioxide = Adafruit_MQTT_Publish(&mqtt, AIO_USERNAME "/feeds/co2"); /*Create an instance of the AnduinoLCD */ AnduinoLCD LCD = AnduinoLCD(ST7735_CS_PIN, ST7735_DC_PIN, 13); static int co2 = 0; static int co2Prev = 0; #define CZR_STREAMING 0x01 #define CZR_POLLING 0x02 char buffer[20] = {0}; void setup() { Serial.begin(115200); delay(3000); Serial1.begin(9600); //Connect to WiFi & Adafruit.IO connectToWiFi(); connectToAdafruit(); //Initialize LCD LCD.begin(); LCDinit(); //CO2 Calibration and initial setup //setOperatingMode(CZR_STREAMING); //setOperatingMode(CZR_POLLING); //calibrateFreshAir(); } void loop() { // ping adafruit io a few times to make sure we remain connected if(! mqtt.ping(3)) { // reconnect to adafruit io if(! mqtt.connected()) connect(); } // Grab the current co2 reading co2 = Request("Z"); //convert int temp to char array char b[20]; String str; str=String(co2); for(int i=0; i<str.length(); i++) { b[i]=str.charAt(i); } b[(str.length())+1]=0; // Publish data if (!carbonDioxide.publish((char*)b)) { Serial.println(F("Failed to publish co2")); } else { Serial.print(F("co2 published: ")); Serial.println(co2); displayCo2(co2, co2Prev); } Serial.print("CO2 : ");Serial.println(co2); Serial.println(""); //prev val stored for LCD co2Prev = co2; //repeat every 1min delay(60000); } //!")); } void displayCo2(int co2, int co2Prev) { //clear the stale value LCD.setTextColor(ST7735_BLACK); LCD.setTextSize(2); LCD.setTextWrap(true); LCD.setCursor(40,60); LCD.setTextSize(3); LCD.print(co2Prev); LCD.setTextSize(1); LCD.print("ppm"); //Print new value LCD.setTextColor(ST7735_WHITE); LCD.setTextSize(2); LCD.setTextWrap(true); LCD.setCursor(40,60); LCD.setTextSize(3); LCD.print(co2); LCD.setTextSize(1); LCD.print("ppm"); }("CO2: "); } uint16_t Request(char* s) { buffer[0] = '\0'; int idx = 0; //send command request 'Z' for CO2 Command(s); delay(250); while(Serial1.available()) { buffer[idx++] = Serial1.read(); } buffer[idx] = '\0'; uint16_t rv = 0; rv = atoi(&buffer[2]); return rv; } void Command(char* s) { Serial1.print(s); Serial1.print("\r\n"); } uint16_t calibrateFreshAir() { return Request("G"); } void setOperatingMode(uint8_t mode) { sprintf(buffer, "K %u", mode); Command(buffer); } That's it! This thing is super sensitive, you can nearly track room occupancy levels based off the concentration of CO2 in even a well vented room. Time to prove to your professor that it isn't 'diff eq' getting you down, it's the poor ventilation! Time to move class to the beach! Code Anduino Author Brian Carbonette - 11 projects - 61 followers Additional contributors Published onMay 3, 2017 Members who respect this project you might like
https://create.arduino.cc/projecthub/bcarbs/measuring-co2-levels-aka-the-sleepiness-multiplier-a4d4bf
CC-MAIN-2021-04
refinedweb
1,241
56.76
I want to add duckboard to the list of values for the wikipage. Is there some kind of voting process, or can I just add it too the list? On the same note, I understand them (boardwalk and duckboard) to be the same thing, but would boardwalk be better? asked 14 Sep '10, 06:12 aharvey 508●2●8●13 accept rate: 22% edited 12 Oct '10, 15:57 TomH ♦♦ 3.2k●8●35●41 Let me first put the role of the wiki in perspective. Anyone is allowed to use any tag (and tag value) they see fit in OpenStreetMap, whether these are documented on the wiki or not. Software that uses OSM data - such as the renderers that make our maps - are free to evaluate or ignore any tags (and tag values), whether these are documented on the wiki or not. The wiki is a good platform for documenting things but it is in no way mandatory. The wiki is widely seen as documenting keys that are in (relatively widespread) use, rather than tags that someone thought might come in handy some time. So if "duckboard" is commonly used then feel free to add it, and perhaps put your reasoning in the change comment. If, on the other hand, it is only you and two others from the mailing list who think that this tag might be nice, then don't add it right away. Instead, use it (and convince others to use it) until it becomes common, and then add it. You may create a proposal (see the Wiki) to discuss your ideas, or you may post on the tagging mailing list to do the same. Such a discussion is the right way to find the answer to questions like "would boardwalk be better than duckboard?". If you want, you can hold a vote after your proposal has matured. If your proposal is accepted then it is a little bit more likely for others to use your new tag value; however, an accepted proposal does not automatically mean the tag will be used (or even supported by renderers and editors), and a rejected proposal does not mean the tag may not be used. Under no circumstances should voting results be used as a justification for any kind of large-scale edit of the database (as in "the voters have rejected surface=boardwalk so I will remove it from all objects"). answered 14 Sep '10, 07:55 Frederik Ramm ♦ 74.4k●86●672●1152 accept rate: 24% Thanks. That makes a few things clearer. After I did some more diging into the propsals I say that my example has already been proposed The first step is to search the wiki thoroughly for existing tags that describes the same concept. The second step is to search for undocumented tags that describe the same concept. It may not always be possible to deduce what those undocumented tags are being used for, especially if the none of the tagged objects are physically near you. So this step is really optional. In this case the new tag seems to be a trivial extension of an existing scheme. But, if you are still unsure, you can ask your question on the tagging mailing list. After that you can start a new page for your tag. Doing so will prevent many (semantic) problems like honomyns, tags based on words that are not specific enough and acronym namespace conflicts. A wiki is a remarkably powerful tool for collaboration. If experienced OSM users see a problem with your tag (e.g. being a duplicate of an established tag), they can easily see how many times it has been used and take appropriate action. For example, they can add a message to your wiki page. answered 08 Oct '10, 18:57 Nic Roets 583●9●12●19 accept rate: 6% edited 08 Oct '10, 22:06 Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown This is the support site for OpenStreetMap. Question tags: tagging ×863 wiki ×63 question asked: 14 Sep '10, 06:12 question was seen: 4,159 times last updated: 12 Oct '10, 15:57 Does disused:railway=* require railway=disused? How do I propose an official tag? What is the process to propose a new tag or feature? How do I tag a zebra crossing? crossing_ref=zebra or crossing=zebra? how do I include tags for my geographical region to my user page on wiki? How do I map a road that has traffic lights only in one direction? How to tag a 'no entrance except to bus and bicycle'? How to tag a Cafe, Restaurant & Bar Does the legal status of a road (public or private ownership) always matter when dealing with access tags? How do I stop rivers from rendering where the riverbanks are mapped? First time here? Check out the FAQ!
https://help.openstreetmap.org/questions/794/protocol-for-documenting-key-values-on-the-wiki?sort=active
CC-MAIN-2020-45
refinedweb
826
70.43
My instructor provided the following code, but it is not working on OS X when run from command line. file_name = 'data/' + raw_input('Enter the name of your file: ') + '.txt' fout = open(file_name, 'w') Traceback (most recent call last): File "write_a_poem_to_file.py", line 12, in <module> fout = open(file_name, 'w') IOError: [Errno 2] No such file or directory: 'data/poem1.txt' As stated by @Morgan Thrapp in the comments, the open() method won't create a folder for you. If the folder /data/ already exists, it should work fine. Otherwise you'll have to check if the folder exists, if not, then create the folder. import os if not os.path.exists(directory): os.makedirs(directory) So.. your code: file_name = 'data/' + raw_input('Enter the name of your file: ') + '.txt' fout = open(file_name, 'w') Became something like this: import os folder = 'data/' if not os.path.exists(folder): os.makedirs(folder) filename = raw_input('Enter the name of your file: ') file_path = folder + filename + '.txt' fout = open(file_path, 'w')
https://codedump.io/share/DsHwBwKH5m3J/1/creating-directories-python
CC-MAIN-2017-43
refinedweb
165
59.4
01 June 2012 05:41 [Source: ICIS news] SINGAPORE (ICIS)--Toluene prices in east ?xml:namespace> Toluene prices in east Some toluene traders took short positions amid bearish sentiment in the crude market. Many July-August toluene futures were sold at CNY8,200-8,300/tonne ex-tank Zhangjiagang in the second half of May, which put downward pressure on toluene prices, a trader said. East Downstream producers stayed on the sidelines because of the ample supply and an uncertain price outlook, a trader said. Most of the cargoes were imported at a cost of $1,235-1,240/tonne CFR (cost & freight) About 60,000-70,000 tonnes of import cargoes will arrive in east Most traders have suffered large losses at CNY400-600/tonne. Slight losses are usually at CNY100-200/tonne, according to market sources. The decline in toluene prices maybe limited in June if crude prices stabilise, another trader said. WTI crude prices were at $86.53/bbl on the close of trade on 31 May. ($1 = CNY
http://www.icis.com/Articles/2012/06/01/9566116/toluene-prices-in-east-china-at-11-month-low-on-falling.html
CC-MAIN-2013-48
refinedweb
172
71.44
. import java.text.NumberFormat; import java.util.Arrays; public class BookStore2 { public static void main(String[] args) { NumberFormat formatter = NumberFormat.getCurrencyInstance(); Book[] bookList = new Book [5]; bookList[0] = new Book("978-0-7653-6264-3", "Wizard's First Rule", "Terry Goodkind", 1994, "Tom Doherty Associates, LLC", 7.99f); bookList[1] = new Book("0-812-54809-4", "Stone of Tears", "Terry Goodkind", 1995, "Tom Doherty Associates, LLC", 7.99f); bookList[2] = new Book("0-812-55147-8", "Blood of the Fold", "Terry Goodkind", 1995, "Tom Doherty Associates, LLC", 7.99f); bookList[3] = new Book("0-439-15411-1", "Dracula", "Bram Stroker", 1897, "Scholastic Inc", 4.99f); bookList[4] = new Book("0-440-94060-5", "I Am The Cheese", "Robert Cormier", 1977, "Dell Laurel-Leaf", 5.50f);.
http://www.javaprogrammingforums.com/whats-wrong-my-code/19678-method-calls-output-looks-little-funny.html
CC-MAIN-2015-06
refinedweb
125
50.53
moen to these ends. mozprocess.processhandler:ProcessHandler is the central exposed API for mozprocess. ProcessHandler utilizes a contained subclass of subprocess.Popen, Process, which does the brunt of the process management. process = ProcessHandler(['command', '-line', 'arguments'], cwd=None, # working directory for cmd; defaults to None env={}, # environment to use for the process; defaults to os.environ ) process.run(timeout=60) # seconds process.wait() ProcessHandler offers several other properties and methods as part of its API: def __init__(self, cmd, args=None, cwd=None, env=None, ignore_children = False, processOutputLine=(), onTimeout=(), onFinish=(), **kwargs): """ cmd = Command to run args = array of arguments (defaults to None) cwd = working directory for cmd (defaults to None) env = environment to use for the process (defaults to os.environ) ignore_children = when True, causes system to ignore child processes, defaults to False (which tracks child processes) processOutputLine = handlers to process the output line onTimeout = handlers for timeout event kwargs = keyword args to pass directly into Popen NOTE: Child processes will be tracked by default. If for any reason we are unable to track child processes and ignore_children is set to False, then we will fall back to only tracking the root process. The fallback will be logged. """ @property def timedOut(self): """True if the process has timed out.""" def run(self, timeout=None, outputTimeout=None): """ Starts the process. If timeout is not None, the process will be allowed to continue for that number of seconds before being killed. If outputTimeout is not None, the process will be allowed to continue for that number of seconds without producing any output before being killed. """ def kill(self): """ Kills the managed process and if you created the process with 'ignore_children=False' (the default) then it will also also kill all child processes spawned by it. If you specified 'ignore_children=True' when creating the process, only the root process will be killed. Note that this does not manage any state, save any output etc, it immediately kills the process. """ def readWithTimeout(self, f, timeout): """ Try to read a line of output from the file object |f|. |f| must be a pipe, like the |stdout| member of a subprocess.Popen object created with stdout=PIPE. If no output is received within |timeout| seconds, return a blank line. Returns a tuple (line, did_timeout), where |did_timeout| is True if the read timed out, and False otherwise. Calls a private member because this is a different function based on the OS """ def processOutputLine(self, line): """Called for each line of output that a process sends to stdout/stderr.""" for handler in self.processOutputLineHandlers: handler(line) def onTimeout(self): """Called when a process times out.""" for handler in self.onTimeoutHandlers: handler() def onFinish(self): """Called when a process finishes without a timeout.""" for handler in self.onFinishHandlers: handler() def wait(self, timeout=None): """ Waits until all output has been read and the process is terminated. If timeout is not None, will return after timeout seconds. This timeout only causes the wait function to return and does not kill the process. """ See for the python implementation. ProcessHandler extends ProcessHandlerMixin which by default prints the output, logs to a file (if specified), and stores the output (if specified, by default True). ProcessHandlerMixin, by default, does none of these things and has no handlers for onTimeout, processOutput, or onFinish. ProcessHandler may be subclassed to handle process timeouts (by overriding the onTimeout() method), process completion (by overriding onFinish()), and to process the command output (by overriding processOutputLine()). In the most common case, a process_handler is created, then run followed by wait are called: proc_handler = ProcessHandler([cmd, args]) proc_handler.run(outputTimeout=60) # will time out after 60 seconds without output proc_handler.wait() Often, the main thread will do other things: proc_handler = ProcessHandler([cmd, args]) proc_handler.run(timeout=60) # will time out after 60 seconds regardless of output do_other_work() if proc_handler.proc.poll() is None: proc_handler.wait() By default output is printed to stdout, but anything is possible: # this example writes output to both stderr and a file called 'output.log' def some_func(line): print >> sys.stderr, line with open('output.log', 'a') as log: log.write('%s\n' % line) proc_handler = ProcessHandler([cmd, args], processOutputLine=some_func) proc_handler.run() proc_handler.wait() subprocess.Popen.kill subprocess.Popen.kill
https://chromium.googlesource.com/chromium/deps/mozprocess/+/0188148023e698dd7960a968b0515c35bbad844e/
CC-MAIN-2022-33
refinedweb
702
57.06
01 June 2012 04:20 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> But the index has remained above the 50% threshold, indicating that the Chinese industries have remained on an expansion mode, according to the China Federation of Logistics and Purchasing (CFLP). The new order sub-index declined to 49.8% in May, down 4.7 percentage points from April, the data showed. The sharp decline in this sub-index indicates a likely reduction in operating rates at manufacturing facilities in the near term, said Zhang Liqun, an analyst at CFLP. The Chinese government is fine-tuning its macroeconomic policies to ensure stable growth. Production sub-index under the PMI dropped by 4.3 percentage points to 52.9% in May, while the export index declined by 1.8 percentage points to 50.4%. The data also showed a 2.4-percentage point slippage in import index to 48.1%. The sub-index for purchasing price in May fell by 10 percentage points from April to 44.8%, CFL
http://www.icis.com/Articles/2012/06/01/9566111/china-may-pmi-falls-to-50.4-as-economy-slows-down.html
CC-MAIN-2014-41
refinedweb
169
68.97
You are free to check some expression to evaluates some block of code as per your requirement. Java provides decision making statements to perform this task. Java supports the following two decision making statements : These statements allows you to control the flow of program's execution based upon the conditions known only during the run-time. The if statement comprises of Boolean expression, followed by one or more statements. You will learn about the if statement in separate chapter. The switch statement allows a variable to be tested for equality against the list of values. Each value is called a case, and the variable which being switched on is checked for the each case. You will learn about switch statement in separate chapter. Here is an example program, helps in understanding how the decision making statement works in Java: /* Java Decision Making - Example Program */ public class JavaProgram { public static void main(String args[]) { int num1=50, num2=60; if(num1>num2) { System.out.println("num1 is greater than num2"); } else { System.out.println("num1 is not greater than num2"); } } } Here is the output produced by the above Java program: Tools Calculator Quick Links
http://codescracker.com/java/java-decision-making.htm
CC-MAIN-2016-50
refinedweb
193
63.29
Support pillow and imaging Bug Description Python has two PIL provides, python-imaging and python-pillow. Fedora is moving to python-pillow in the next release. imaging supports being loaded by doing: import Image or from PIL import Image pillow only supports the latter, so we're patching all the calls to use 'from PIL import image' Please apply the below changes! (They're against 3.4.2, but the code hasn't changed in any real way.) Index: gwibber- ======= --- gwibber- +++ gwibber- @@ -1,6 +1,7 @@ #!/usr/bin/env python -import os, hashlib, urllib2, Image +import os, hashlib, urllib2 +from PIL import Image DEFAULT_AVATAR = 'http:// Index: gwibber- ======= --- gwibber- +++ gwibber- @@ -8,7 +8,7 @@ import os, sys from hashlib import sha1 from os import makedirs, remove, environ from os.path import join, isdir, realpath, exists -import Image +from PIL import Image import mx.DateTime from gwibber.microblog import network from gwibber.
https://bugs.launchpad.net/gwibber/+bug/1118574
CC-MAIN-2016-44
refinedweb
150
57.98
Hello everyone, I have a simple question for you: In My project I have a button that change ON / OFF realy, so my button control a digital pin that link my relay. After, I add a timer to do the same…now why should I use virtual pin to use timer? By widget button I don’t use skecth and virtual pin, I use only digital pin and works, why isn’t the same? In my project I use button to turn on and off a light (I don’t use sketch and virtual pin, blynk provide to work all). If I use also a timer why doesn’t work? Thank you Andrea Timer Start and stop on simple Relay Hello everyone, The time widget does allow this (at least on iOS anyway) The time input widget doesn’t. Which widget did you actually try? I have IOs and my widget is a simple timer. Please see my photo. I have two button with two timer. The button I use only to stop relay if I want. let me know Thank you The Timer widget in iOS does allow either digital or virtual pins to be selected. Personally, I never use digital pins, virtual pins are much more powerful. But, what exactly is your question? Pete. Actually, my timer works only for start, stop doesn’t work. At start timer, my relay becomes ON and relative button becomes ON. So far is good, but at the end of timer nothing happens! Why? Thank you I think you need to effectively start again with this tread. Start by providing the basic info about what hardware and Blynk library version you are using, along with a summary of exactly what it is that you’re trying to achieve and why this inst working. Share the code that you’re using - correctly formatted with triple backticks at the beginning and end. Triple backticks look like this: ``` Also share details of what you have set-up in your app - what widgets you have, what pins they are connected to, and what their purpose is. Pete. And also the settings of the button and timer widget. I have just tried this (albeit with a virtual pin) but the timer triggered both on and off for my button. (I guess like you want) My project works to control the energy of my solar panel. I use NodeMcu, current sensor INA219 and relay 4ch. My 2 button set on D5 and D6 simply. Before timer, I use normally, start and stop without problem. Actually with 2 timer, only works only start! Do you need other? Thank you SORRY… #include <Blynk.h> #include <Wire.h> #include <Adafruit_INA219.h> #define BLYNK_PRINT Serial #include <ESP8266WiFi.h> #include <BlynkSimpleEsp8266.h> char auth[] = "xxxxxxx"; Adafruit_INA219 ina219; // Your WiFi credentials. // Set password to "" for open networks. char ssid[] = "xxxxxxxxxxx"; char pass[] = "xxxxxx"; //char ssid[] = "xxxxxx"; //char pass[] = "xxxxxx?"; float shuntvoltage = 0; float busvoltage = 0; float current_mA = 0; float loadvoltage = 0; float power_mW = 0; float energy = 0; unsigned long previousMillis = 0; unsigned long interval = 100; unsigned long currentMillis = 0; #define greenLED D7 #define redLED D8 BlynkTimer timer; // This function sends Arduino's up time every second to Virtual Pin (5). // In the app, Widget's reading frequency should be set to PUSH. This means // that you define how often to send data to Blynk App. void setup(void) { // Debug console Serial.begin(9600); Blynk.begin(auth, ssid, pass); pinMode(greenLED, OUTPUT); pinMode(redLED, OUTPUT); // Setup a function to be called every second timer.setInterval(1000L, myTimerEvent); Serial.begin(115200); while (!Serial) { // will pause Zero, Leonardo, etc until serial console opens delay(1); } uint32_t currentFrequency; ina219.begin(); // To use a slightly lower 32V, 1A range (higher precision on amps): //ina219.setCalibration_32V_1A(); // Or to use a lower 16V, 400mA range (higher precision on volts and amps): //ina219.setCalibration_16V_400mA(); // Serial.println("Measuring voltage and current with INA219 ..."); } void loop(void){ if(Blynk.connected()){ // not to be confused with BLYNK_CONNECTED() digitalWrite(greenLED, HIGH); digitalWrite(redLED, LOW); //Serial.print("CONNESSO"); currentMillis = millis(); if (currentMillis - previousMillis >= interval){ previousMillis = currentMillis; ina219values(); // stampaValori(); Blynk.run(); timer.run(); // Initiates BlynkTimer } }else{ digitalWrite(greenLED, LOW); digitalWrite(redLED, HIGH); // Serial.print("NON CONNESSO"); Blynk.begin(auth, ssid, pass); } delay(1000); } void ina219values() { shuntvoltage = ina219.getShuntVoltage_mV(); busvoltage = ina219.getBusVoltage_V(); current_mA = ina219.getCurrent_mA(); if (current_mA<0){ current_mA = current_mA*(-1); } power_mW = ina219.getPower_mW(); loadvoltage = busvoltage + (shuntvoltage / 1000); energy = energy + loadvoltage * current_mA / 3600; } void stampaValori(){ Serial.print("Bus Voltage: "); Serial.print(busvoltage); Serial.println(" V"); Serial.print("Shunt Voltage: "); Serial.print(shuntvoltage); Serial.println(" mV"); Serial.print("Load Voltage: "); Serial.print(loadvoltage); Serial.println(" V"); Serial.print("Current: "); Serial.print(current_mA); Serial.println(" mA"); Serial.print("Power: "); Serial.print(power_mW); Serial.println(" mW"); Serial.print("Energia: "); Serial.print(energy); Serial.println(" mWh"); Serial.println(""); } void myTimerEvent(){ // You can send any value at any time. // Please don't send more that 10 values per second. Blynk.virtualWrite(V4, current_mA / 1000); Blynk.virtualWrite(V5, power_mW / 1000); Blynk.virtualWrite(V6, energy / 1000); } Despite me saying this: you still posted your code without the triple backticks, so your unformatted code has been removed. Please edit your last post, using the pencil icon at the bottom of the post, and add-in your code - complete with backticks. Pete. Sorry, now I rewrite my post And how are your timer widgets configured? Pete. So in your app you appear to have the following: One timer widget *(called Faro or Faretto, depending on which screenshot we are looking at) that is connected to pin D4 You have another timer widget (called Madonna) connected to an unknown pin You have a switch widget (called Faretto) connected to D5 or D6 You have a switch widget (called Madonna) connected to D5 or D6 Can you please clarify the missing information for these widget pins, and explain how the timer on/off switch widgets are able to override the timers when they are connected to different digital pins to the timer widgets? Pete. Faretto and Timer Faretto on D4 Madonna and Timer Madonna on D5 In This photo you CAn see setting of Timer madonna Thank you I don’t know if this time I was clear… Button D4 and Timer D4 control the same relay (Faretto light) Button D5 and Timer D5 control the same relay (Madonna light) The buttons I would use only if I want to stop light when I want (in the case one day I had to accumulate little solar energy). Timers I would use to start light automatically. Thank you It’s been painfully difficult extracting the correct information from you about the setup of 4 simple widgets! If these widgets are configured in the way that you’ve said 8n your latest post (which conflicts with what you said earlier) then they should probably work correctly - but, as I don’t use digital pins, or the timer widget, my personal experience in this area is limited. However, as I said earlier, virtual pins are much more appropriate for this type of application and that’s the way that I’d go if I were you. Pete. sorry, but what’s so difficult? 1° Realy on D4 actived by Timer1 (also started and stopped by Button1) 2° Realy on D5 actived by Timer2 (also started and stopped by Button2) D4 and D5 are digital pin Can timer active and stop digital pin?? In my case, stop doesn’t work, works only button widget. simpler than that? Thank you I’d say that any time you have multiple widgets trying to control the same pins then you’re asking for problems. You may also have multiple things trying to control the same pins, because you’re using mixed referencing for the pin numbers, and it’s also not clear how your current sensor is wired. But, the bottom line is that your current setup isn’t working. Pete.
https://community.blynk.cc/t/timer-start-and-stop-on-simple-relay/40013
CC-MAIN-2019-39
refinedweb
1,314
57.27
each language supports regular expressions to varying degrees. In C language, the three functions with input function do not support regular expressions strongly, but we still need to know. let’s first look at their prototype: #include <stdio. H> int scanf (const char * format,...); int fscanf(FILE *stream, const char *format, ...); int sscanf(const char *str, const char *format, ...); can accept variable parameters. Sscanf is similar to scanf. Standard input (stdin) can be used as the input source. The key part is the parameter format. It can be one or more {%[*] [width] [{h | L | i64 | l}] type | '' | ' ' | '' | Non% sign}. parameter explanation: 1, * can also be used in the format, (i.e.% * D and% * s) an asterisk (*) indicates that the data is skipped and not read in. (that is, the data is not read in the parameter) 2, {a|b|c} indicates one of a, B and C, [D], indicating that there can be d or no D. 3. Width indicates the read width. 4, {h | L | i64 | l}: the size of the parameter. Generally, H represents single byte size, I represents 2-byte size, l represents 4-byte size (double exception), and L64 represents 8-byte size. 5. Type:% s,% D and so on. 6. Special:% * [width] [{h | L | i64 | l}] type means that those satisfying the condition are filtered out and the value will not be written to the target parameter. Supported collection operations:% [A-Z] means to match any character from a to Z, greedy (as many matches as possible)% [ab '] matches one of a, B and', greedy% [^ A] matches any character other than a, greedy Return value these three functions return successfully matched and allocated input items. It means that the format in the format parameter list and the return value can be less than the number of matching items you provide (some will fail to match). If the advance matching fails, 0 is returned. EOF is returned if the end of the file is reached, and EOF is also returned when an error occurs. You can check the error code by outputting errno. if fscanf is used to judge whether the file ends, there will be a security risk. If the matching fails every time, the return value will never be EOF. The functions of scanf family are to read data into the buffer first, and then read it in the flush buffer. Note: scanf family functions will ignore the blank space at the beginning of a line sscanf/scanf regular usage % [] usage: % [] means to read a character set. If [the first character after is "^", it means the opposite. [] The string within can be composed of 1 or more characters. The empty character set (% []) is against the rules and can lead to unpredictable results.% [^] is also against the rules. % [A-Z] reads the string between A-Z and stops if it is not before, such as sscanf (s,"% [A-Z] ", string);// String = Hello % [^ A-Z] read the string not between A-Z. if the character between A-Z is touched, it will stop, such as Char s [] = "Hello Kitty";// Note: commas are not between A-Z sscanf (s, "% [^ A-Z]", string);// String = Hello % * [^ =] preceded by * indicates that variables are not saved. Skip strings that meet the conditions. char s [] = "Notepad = 1.0.0.1001"; Char szfilename [32] = ""; int i = sscanf (s, "% * [^ =]", szfilename);// szfilename = null because int i = sscanf (s, "% * [^ =] = =% s", szfilename);// szfilename = 1.0.0.1001 % 40C read 40 characters the run time library does not automatically append a null terminator to the string, nor does reading 40 characters automatically terminate the scanf() function. Because the library uses buffered input, you must press the ENTER key to terminate the string scan. If you press the ENTER before the scanf() Reads 40 characters, it is displayed normally, and the library continues to prompt for additional input until it reads 40 characters % [^ =] read the string until it meets the '=' sign, and more characters can be followed by '^', such as: character s [] = "Notepad = 1.0.0.1001"; Char szfilename [32] = ""; int i = sscanf (s, "% [^ =]", szfilename);// szfilename = Notepad if the parameter format is:% [^ =:], you can also read Notepad from Notepad: 1.0.0.1001. Examples: char s [] = "Notepad = 1.0.0.1001"; Char szname [32] = ""; char szver [32] = ""; sscanf (s, "% [^ =] = =% s", szname, szver);// szname = Notepad, szver = 1.0.0.1001 summary:% [] has great functions, but it is not commonly used, mainly because: 1. Scanf functions of many systems are vulnerable (typically, TC sometimes makes mistakes when inputting floating-point type). 2. The usage is complex and error prone. 3. It will be very difficult for the compiler to make syntax analysis, which will affect the quality and execution efficiency of the object code. Personally, I think the third point is the most fatal. The more complex functions are, the lower the execution efficiency. We can handle some simple string analysis by ourselves. the usage and differences of scanf(), sscanf(), fscanf() in C language scanf(), sscanf(), fscanf() differences: the first one is from the console (keyboard) Input; the second is input from a string; the third is input from a file; scanf scanf() function reads from stdin (standard input) according to the format specified by format and saves the data to other parameters. Int main() { int a, B, C; printf ("input: A, B, C"); scanf ("% D,% D,% d", & amp; a, & amp; B, & amp; c); printf ("a =% D, B =% D, C =% d", a, B, c); return 0; } sscanf function sscanf() is similar to scanf(), except that the input is read from buffer Similar to scanf, sscanf is used for input, except that the latter takes the screen (stdin) as the input source, and the former takes the fixed string as the input source . Usage: % [] means to read a character set. If [the first character after is "^", it means the opposite meaning. The string in [] can be composed of 1 or more characters. Empty character set (% []) is against the regulations and can lead to unpredictable results.% [^] is also against the regulations. 1. Common usage. char buf [512]; sscanf ("123456", "% s", buf);// here buf is the array name, which means to store 123456 in buf as% s! printf ("% s", buf); the result is: 123456 2. Take the string of specified length. As in the following example, take a string with a maximum length of 4 bytes. sscanf("123456 ", "%4s", buf); printf("%s", buf); the result is: 1234 3. Get the string up to the specified character. As in the following example, take the string until a space is encountered. sscanf("123456 abcdedf", "%[^ ]", buf); printf("%s", buf); the result is: 123456 4. Take the string containing only the specified character set. As in the following example, take a string containing only 1 to 9 and lowercase letters. sscanf("123456abcdedfBCDEF", "%[1-9a-z]", buf); printf("%s", buf); the result is: 123456abcdedf when entering: sscanf ("123456abcdedfbcdef", "% [1-9a-z]", buf); printf("%s",buf); the result is: 123456 5. Get the string up to the specified character set. As in the following example, take the string until uppercase letters are encountered. sscanf("123456abcdedfBCDEF", "%[^A-Z]", buf); printf("%s", buf); the result is: 123456abcdedf 6. Give a string IIOS/ [email protected] , get the string between/and @, filter out "IIOS /", and then send a string of content other than '@' to buf sscanf ("IIOS")/ [email protected] ", "%*[^/]/%[^@]", buf); printf("%s", buf); the result is: 12ddwdff 7. Given a string "Hello, world", only world is reserved. (Note: "," followed by a space,% s stops in case of a space, and adding * ignores the first read string) sscanf ("Hello, world", "% * s% s", buf); printf("%s", buf); the result is: World % * s means that the first matching% s is filtered out, that is, "Hello," is filtered out . If there is no space, the result is null.
http://www.itworkman.com/239989.html
CC-MAIN-2022-21
refinedweb
1,290
71.65
This project is mainly GCD and validating the error from user input. I got the GCD working correctly, but not the errors. Here's guidelines to validate the errors. .) Now here's the problem. Everything compiles with no problems. When I input zero for X, it moves on and ask for Y. Now, it's not suppose to do that. It's suppose to output an error message and asking me for a different input. Also, a similar problem with decimal numbers like 123.123. It just automatically ends the program with decimal numbers. Here is a sample output: Enter integer X: 0 Please enter an integer greater than 0 123.456 Enter integer Y: Please enter an integer greater than 0 abc Please enter an integer greater than 0 44 The GCD of 123 and 44 is 1 Here's what I have. #include <stdio.h> int gcd(int x, int y); int gcd(int x, int y) { if (y == 0) return x; else return gcd(y, x % y); } int get_input() { int x, y; while(1) { if(scanf("%d", &x) != EOF) { if (x > 0) return x; } if(scanf("%d", &x) != EOF) { if (y > 0) return y; } else { printf("Input error\n"); printf("Please enter an integer greater than 0\n"); while(getchar() != '\n'); // Clear the input buffer } } } int main() { int x, y, common; printf("This program computes the greatest common divisor\n"); printf("of positive integers X and Y, entered by the user.\n"); printf("Inputs must be integers greater than zero.\n\n"); printf("Enter integer X: "); x = get_input(); printf("Enter integer Y: "); y = get_input(); common = gcd(x,y); printf("The GCD of %d and %d is %d\n", x, y, common); return 0; } Any help getting me in the right direction is greatly appreciated. Thanks!
https://www.daniweb.com/programming/software-development/threads/152222/validating-an-error-from-user-input-using-functions
CC-MAIN-2017-34
refinedweb
298
65.01
I found this program running a search for software or solutions that will mirror certain directories from file server to file server across WAN. It's located here: http:/ It sounds ideal for what I need; I would like to replicate certain directories in our file servers to a remote server about 60 miles away, and I want it to be as real-time as possible since people at both locations often edit files. Has anyone used it? IF so, what are your thoughts on it? Anything else you would suggest for this project? 8 Replies Aug 13, 2010 at 1:10 UTC So I'm guessing DFS-R is not an option? If not what exactly are you replicating? I use it to replicate shares on our MS Server boxes. Aug 13, 2010 at 1:21 UTC +1 on the DFS When I was reading that it was screaming DFS or flavor of it. Aug 13, 2010 at 1:47 UTC Yes, I dismissed DFS until Server 2003 R2 gave us the legit replication needed. When it comes to replicating Windows shares or files I look to DFS-R first. Looking at the system requirements for the LinkPro solution it specifies "Windows NT/2000/2003 servers" so my suggestion is that if you have at least Server 2003 R2 installed at both locations you can accomplish this very easily without purchasing anything. Here are some quick links to get you started if you aren't familiar: http:/ http:/ I referenced both of these back when I started considering DFS namespaces and DFS-R. Aug 13, 2010 at 3:30 UTC Well this is freakin' awesome. I never noticed this feature before - still kind of a newbie in setting these things up and obviously am not completely aware of everything Windows Server 2003 R2 (currently using, slowly migrating to Win 2008 Enterprise) is capable of. I ran a quick google search but I guess I didn't dig deep enough or use the right terms to find this. Thanks for the reference guys. In terms of bandwidth, once the initial replication is done, will it only upload/download changes? So if the share has 500 files and only 1 file is changed today, it will only upload that change to the share and leave everything else alone. That was another big concern of mine as we aren't on the fastest connection on either end. Aug 13, 2010 at 3:55 UTC Yes, once you have done the initial replication it will only replicate changes and you can "throttle" the bandwidth used during certain times of the day. Yes, DFS-R was designed for branch offices with low bandwidth in mind. If you have a ton of data you can copy it to an external HD and physically take it to the other site for "initial" replication. This will avoid having to send everything over a low bandwidth connection the first time. After that configure your replication settings and only changes will be replicated. Aug 13, 2010 at 4:20 UTC Nice, I just setup a test share to see how it works and it seems to do the trick. Thanks again. Aug 13, 2010 at 4:24 UTC Let me know if you have any questions on it, I am more than happy to assist! And feel free to give out the BA and HP where necessary! ;) Aug 13, 2010 at 4:56 UTC I really like the bandwidth feature cause I let it have a lot of bandwidth on nights and weekends. I have found this to be a very useful tool for our branch offices. TM
https://community.spiceworks.com/topic/107647-linkpro-ipreplicator-has-anyone-tried-it
CC-MAIN-2017-22
refinedweb
609
76.96
BBC micro:bit IS31FL3731 LED Matrix Driver Introduction The electronics company, Adafruit, have a range of 16x9 charlieplexed LED matrices. There is also a driver board that you solder to the LED board. Together, you're paying around £13 for a really nifty i2c controlled charlieplexed matrix. There is a slightly high pitched noise when the display has lots of lights at high PWM. That, according to Adafruit, is normal. For the 144 neatly arranged LEDs, you're not going to complain about a little whining, are you? Circuit Your connections here are the basic i2c connections. The matrix is shown in the diagram, but the pins you are connecting to are on the driver board. The following picture shows the display, on its driver board and connected to a micro:bit. Programming This code is adapted from the Adafruit libraries with this being based more on the approach used in the Arduino library for the main functions I got around to trying. from microbit import * class Matrix: ADDRESS = 0x74 REG_CONFIG = 0x00 REG_CONFIG_PICTUREMODE = 0x00 REG_CONFIG_AUTOPLAYMODE = 0x08 REG_CONFIG_AUDIOPLAYMODE = 0x18 CONF_PICTUREMODE = 0x00 CONF_AUTOFRAMEMODE = 0x04 CONF_AUDIOMODE = 0x08 REG_PICTUREFRAME = 0x01 REG_SHUTDOWN = 0x0A REG_AUDIOSYNC = 0x06 COMMANDREGISTER = 0xFD BANK_FUNCTIONREG = 0x0B FRAME = 0 def __init__(self): # off an on again self.write_reg8(self.BANK_FUNCTIONREG, self.REG_SHUTDOWN,0x0) sleep(10) self.write_reg8(self.BANK_FUNCTIONREG, self.REG_SHUTDOWN,0x1) # select picture mode self.write_reg8(self.BANK_FUNCTIONREG, self.REG_CONFIG, self.REG_CONFIG_PICTUREMODE) self.write_reg8(self.BANK_FUNCTIONREG, self.REG_PICTUREFRAME, self.FRAME) self.fill(0) for f in range(8): for i in range(18): self.write_reg8(f,i,0xff) # turn off audio sync self.write_reg8(self.BANK_FUNCTIONREG,self.REG_AUDIOSYNC, 0x0) def fill(self, value): self.select_bank(self.FRAME) for i in range(6): d = bytearray([0x24 + i * 24]) + bytearray(([value]*24)) i2c.write(self.ADDRESS, d, repeat=False) def select_bank(self, bank): self.write_reg(self.COMMANDREGISTER, bank) def write_reg(self,reg,value): i2c.write(self.ADDRESS, bytes([reg,value]), repeat=False) def write_reg8(self,bank, reg, value): self.select_bank(bank) self.write_reg(reg, value) def set_led_pwm(self, lednum, frame, value): self.write_reg8(frame, 0x24 + lednum, value) # 0 at 0,0 and 143 at 15,8 def set_led_xy(self, x, y, frame, value): self.write_reg8(frame, 0x24 + x + y * 16, value) a = Matrix() while True: a.fill(255) sleep(1000) a.fill(0) sleep(1000) for i in range(0,256,5): a.fill(i) sleep(20) sleep(1000) a.fill(0) sleep(50) for y in range(9): for x in range(16): a.set_led_xy(x,y,0,255) sleep(50) a.fill(0) sleep(1000) a.fill(255) sleep(1000) for led in range(143,-1,-1): a.set_led_pwm(led,0,0) sleep(50) sleep(1000) The while loop demonstrates the basics of switching all or individual LEDs on or off as well as varying their brightness. Challenges - Add some functions for setting whole columns or rows of the grid and use these to create some effects. - Make a program that displays a maze on the charlieplexed display. Make it possible to navigate a blinking dot around the maze. - Design a font to use for the matrix. Start by displaying invidivual characters before working on scrolling strings. - This display is more than large enough for displaying the time and date on a binary display for RTC readings. - The driver IC has a lot more functionality than has been covered on this page. It has some built-in effects. The datasheet for the chip and Adafruit's Python library will get you on the right track.
http://multiwingspan.co.uk/micro.php?page=matdrive
CC-MAIN-2018-22
refinedweb
583
59.7
Unity3D: JavaScript vs. C# – Part 5 Posted by Dimitri | Dec 23rd, 2010 |. I will explain how to cast a ray in Unity3D with a JavaScript example: //Creates a ray object private var ray : Ray; //creates a RaycastHit, to query informarion about the objects that are colliding with the ray private var hit : RaycastHit = new RaycastHit(); //Get this GameObject's transform private var capsTrans:Transform; function Awake() { //get this Transform capsTrans = this.GetComponent(Transform); } // Update is called once per frame function Update () { //recreate the ray every frame ray = new Ray(capsTrans.position, Vector3.left); //Casts a ray to see if something have hit it if(Physics.Raycast(ray.origin,ray.direction, hit, 10))//cast the ray 10 units in distance { //Collision has happened, print the distance and name of the object into the console Debug.Log(hit.collider.name); Debug.Log(hit.distance); } else { //the ray isn't colliding with anything Debug.Log("none") } } This is how it works: first, we need to create a Ray object and recreate it every frame (lines 02 and 20). The Ray class stores the ray properties, such as direction the ray is being cast and origin. Then, to obtain information about ray’s collision with other GameObjects, we need a RaycastHit class object, which will return the Collider and Transform from the GameObject that collided with the ray, distance which the collision occurred and some other details. Last but not least, we need to call the Raycast() static method from the Physics class (line 23). This is the method that will actually cast the ray. Note that it takes the origin of the ray, a direction to cast the ray, a RaycastHit object and distance as parameters. What this method does is cast the ray to the given distance and store the collision results passed inside the RaycastHit object. In the beginning, it may sound a little bit confusing, but after programming it for the first time it will make more sense (additional information here). In C#, the same above example is the following code: using UnityEngine; using System.Collections; public class Raycast : MonoBehaviour { //Creates a ray object private Ray ray; //creates a RaycastHit, to query informarion about the objects that are colliding with the ray private RaycastHit hit = new RaycastHit(); //Get this GameObject's transform private Transform capsTrans; void Awake() { //get this Transform capsTrans = this.GetComponent<Transform>(); } // Update is called once per frame void Update () { //recreate the ray every frame ray = new Ray(capsTrans.position, Vector3.left); //Casts a ray to see if something have hit it if(Physics.Raycast(ray.origin, ray.direction, out hit, 10))//cast the ray 10 units in distance { //Collision has happened, print the distance and name of the object into the console Debug.Log(hit.collider.name); Debug.Log(hit.distance); } else { //the ray isn't colliding with anything Debug.Log("none") } } } When casting rays, the only difference between these two programming languages is at line 28 in the above code. See how we had to pass the hit object with the out keyword? That happens because, in C#, the Raycast() method says it needs the RaycastHit parameter passed as reference and that’s exactly what the out keyword does. So instead of passing the object (as we done in JavaScript), we are passing a reference to it, the place where it is so the Raycast() method can find it (more information here). It may not look like much, however one can spend some time trying to figure that out, specially when migrating from JavaScript to C#. And that’s basically it! These are the main differences between those two programming languages. There must be a lot more out there! Final Thoughts JavaScript or C# ? Which one to choose? It depends heavily on a lot of factors, such as previous programming skills, previous game programming experiences, affinity with certain programming languages, just to name of few. Nevertheless, people that are beginning to grasp the basic concepts of programming, or just started programming games recently or are new to Unity3D, should stick with JavaScript, because they will have less to worry about. These people will eventually migrate to C#, due to it’s internal classes and structures which can really help to solve complex programming problems. For the folks who have some programming baggage with C#, C++ or Java and some experience with game programming in these languages: go directly to C#. In the end, this won’t matter after all. Eventually, you will end up learning both programming languages when writing code for Unity3D, meaning that this won’t be an issue. Naturally, you will end up creating scripts in these two languages, and then converting the code to the one the rest of your project is using. One thing is for certain: try not to mix use these two programming languages at the same time. Doing everything with one scripting language will avoid a lot of headaches. One last thing: I haven’t forgotten the project with the examples as promised in the first post of the series. Grab it here: - JSvsCSharp.zip Read instructions at the README.txt file. Thanks for your post, it’s very halpfull. I’m a newbie in Unity3d but not in C#. Note for C# listing: when you call a method with OUT parameter, you should not create the object for that parameter, it is created by calling method. Check this out.
http://www.41post.com/1974/programming/unity3d-javascript-vs-csharp-5
CC-MAIN-2018-26
refinedweb
902
61.67
Details - Type: Bug - Status: Open (View Workflow) - Priority: Minor - Resolution: Unresolved - Component/s: workflow-job-plugin - Labels:None - Environment:Jenkins 2.222.3 workflow-job-plugin 2.39 - Similar Issues: Description In the latest Jenkins version and workflow-job-plugin, it seems that the jenkins console log is frozen. I've tested both Firefox 76 and Chromium 81.0.4044.129. From trying to debug this issue, it seems that it is related to the workflow-job-plugin. It seems to be stuck processing nodes from the console: I've downgraded the plugin to version 2.36 - it seems that is the last version that works. When upgrading to 2.37, the console log freezes the tab again. This is the test job I ran to generate the console output: def genText(lines){ (1..lines).each{ println "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur interdum fringilla interdum" } return true }parallel "branch-1" : { genText(20000) }, "branch-2": { genText(20000)}
https://issues.jenkins.io/browse/JENKINS-62241
CC-MAIN-2021-39
refinedweb
160
51.85
I need to get the last part of a list that I have fetched with readlines() ['Some Name__________2.0 2.0 1.3\n', 'Some Name__________1.0 9.0 1.0\n', # and so on....] 2.0 2.0 1.3 split("_") def openFile(): fileFolder = open('TEXTFILE', 'r') readFile = fileFolder.readlines() for line in readFile: line = line.split("_") personNames = line[0] print personNames print openFile() line[2] line[3] Based on your last question, you haven't gotten rid of those empty strings in this solution yet. So line[2] and line[3] didn't work because they're probably empty strings that are '_'s originally: readFile = ['Some name____2.0 2.1 1.3','Some other name_____2.2 3.4 1.1'] Here's how I would do it: def openFile(): readFile = ['Some name____2.0 2.1 1.3\n','Some other name_____2.2 3.4 1.1\n'] data=[] for line in readFile: line = (line.rstrip()).split("_") #EDIT: Strip the newline character in this line data.append(line [-1].split(' ')) print(data) openFile()
https://codedump.io/share/7s20lTw33IIS/1/how-to-get-the-last-part-of-a-list-containing-floats-strings-and-special-characters-in-python-when-splitting
CC-MAIN-2017-09
refinedweb
178
78.35
NavigationView Questions Hey All, I am thrilled about the new ui module and have already made some great little apps for myself. The one thing I lack, though, is an app that has more than one View. So, my question is: how do you use NavigationView? That is all. I admit that the documentation for NavigationViewis lacking, so here's a simple example: import ui def button_tapped(sender): v = ui.View() v.background_color = 'green' v.name = 'Pushed View' sender.navigation_view.push_view(v) root_view = ui.View() root_view.background_color = 'white' root_view.name = 'Root View' button = ui.Button(title='Tap me') button.action = button_tapped root_view.add_subview(button) nav_view = ui.NavigationView(root_view) nav_view.present('sheet') Are there other undocumented types of present, other than sheet? I believe that sheet is documented. Under documentation -> pythonista modules -> ui module -> ui.view.present, it says "View.present(style='default', animated=True, popover_location=None, hide_title_bar=False, title_bar_color=None, title_color=None, orientations=None) Present a view on screen. By default, a view is presented with a title bar that also contains a “close” button. If hide_title_bar is set to True, you can close the view with a 2-finger swipe-down gesture, or by calling the View.close() method. Valid values for the style parameter are: 'full_screen': The presented view covers the entire screen. 'sheet': The view covers a roughly quadratic portion in the middle of the screen. This style is only available on iPad. 'popover': The view is presented in a popover. The popover_location parameter can be used to specify its position; by default, it is shown in the top-right corner. This style is only available on iPad. 'panel': The view is presented in a sliding panel, along with other accessories (e.g. the console). 'sidebar': The view is presented in a sidebar, next to the main editor view. The sidebar width will be determined by the current width of the view, but it cannot be wider than 200 points. The optional popover_location parameter can be a 2-tuple (x, y) that specifies the point (in screen coordinates) from which the popover is shown (where the arrow points). The orientations parameter is a sequence of strings that determines in which device orientations the view will auto-rotate. It is only used for fullscreen presentation. Valid values are 'portrait', 'portrait-upside-down', 'landscape', 'landscape-left', 'landscape-right'. By default, all orientations are supported." I still can't even make a simple text input with a button to save the input... I have been trying to use the ui module to create data entry apps... Here is a utility function that gathers all user-entered values into a dict: # utility function proposed for the ui.View class def get_entered_values(self): the_dict = {} for subview in self.subviews: if subview.name: if isinstance(subview, (ui.TextField, ui.TextView)): the_dict[subview.name] = subview.text elif isinstance(subview, (ui.Slider, ui.Switch)): the_dict[subview.name] = subview.value elif isinstance(subview, ui.DatePicker): if subview.mode == ui.DATE_PICKER_MODE_COUNTDOWN: the_dict[subview.name] = subview.countdown_duration else: the_dict[subview.name] = subview.date return the_dict Have you gotten the basic text input working though? @omz: a quick question about your code. When you instantiate nav_viewyou give it root_viewas a parameter. Does that automagically set the root_viewas a subview? what is going on there? @ccc: how would you use that in one of your projects? That looks very interesting. Thank you! class My_Very_Special_World_View(ui.View): # add the definition of the get_entered_values() function here def will_close(self): # This will be called when a presented view is about to be dismissed. # You might want to save data here. print('will_close() called with {} subviews'.format(len(self.subviews))) print(self.get_entered_values()) Hi All, I'm back with another question about Navigation Views. In the ui builder I have a main view with a Label and a Navigation View. Then, inside the navigation view I have a variety of buttons and tables. Now, when I go to the script side and query the navigation view for a button, I get nothing v = ui.load_view() nv = v['nav-view'] print nv['button1'] ## 'None' How do I get the sub views of my navigation view that I set up in the ui editor? Or do I have to add the sub views manually? Thank you, B. Edit: Not quite right for NavigationViews... You can always build yourself a roadmap by walking the subview tree: for i, sv in enumerate(v.subviews): print(i, sv.name) for i, sv in enumerate(v['nav-view'].subviews): print(i, sv.name) # Once you know where things are you can: button1 = v['nav-view']['button1'] # subview of a subview @ccc I had tried doing that, but in hopes that yours would work I ran it and got this: (0, u'label1') (1, u'nav-view') It seems like if you add subviews to a navigation view in the ui editor, it doesn't go through to the script editor side. I checked the pyuifile in a text editor and saw that the button was in fact a nodeof the navigation view in the pyuifile. I may just create a seperate UI file and import it in. At least that will get going in the right direction... NavigationViews are a bit special... Adding views to a NavigationViewin the UI editor won't actually add them as subviews, but rather create an empty view, add the subviews to that, and then basically call the NavigationView's constructor with that view, so it becomes the root of the navigation tree... The whole NavigationViewclass is a bit under-developed to be honest, and there's not really a good way to access views that you add in the UI editor directly. If you need that, you might want to construct the navigation view programmatically for now. - EvilPorpoise Navigation views have a back button in the upper left corner that appears after a second subview is pushed. Can anyone tell me how to detect a user tapping that "back" button within a program? I created three files: A pyui file and a corresponding py file (MyView.pyui and MyView.py) plus a subview pyui file (MySubView.pyui) I then was able to create a button in the main view (where the navigation view is located) which, when pressed, pushes the subview into to navigation view. However doing it that way, makes this button stay, no matter which root/subview is visible inside the navigation view. How can I create a right_button_item inside MyView.py= I tried v = ui.load_view v['navigationview1'].right_button_items = ui.ButtonItem[...] But that does nothing at all v = ui.load_view v.right_button_items = ui.ButtonItem[...] Adds it to the top level view. What do I miss? @vcr80 If possible, please copy and paste code directly from your script and don't type it out from memory. The code snippets that you posted would cause a few TypeErrors when you'd try to run them (disregarding the ...s - their meaning is clear enough IMHO). If the code you post here is different from what's in your script, then it can be difficult to help, because there's no way to tell what is a typo and what is a mistake in your real code. Anyway, the issue here seems to be that you're setting right_button_itemson the NavigationView. If you want to put the buttons in the same toolbar where the "Back" button is, then you need to put them in the right_button_itemsof the view that's in the NavigationView. However last I checked this is not possible if you create the NavigationViewthrough the UI builder. The issue is that the initial view in the NavigationViewis created "anonymously" by load_view, meaning that there is no way to access its right_button_items. This means that you'd need to create the NavigationViewin Python from an existing view that is accessible in some way so you can give it right_button_items. @dgelessus you're right. sorry! Next time, I'll copy the code! I changed my code based on your answer: I created a third pyui file for the root view and pushed the root view via code to the navigation view rather than in the UI Builder and I set the button_right_items in the code that loads the root view. That worked! Here's exactly what I did in case someone is stuck with the same very basic problem: I created a script file with a UI file. They share the same filename (say NavView.py and NavView.pyui). In The pyui file, all I did was set the Custom Class View to ui.NavView. In its corresponding py-file, i created class NavView(ui.View). Also, I created two stand alone UI files for designing the root view and the sub view. This is the class: # coding: utf-8 import ui class NavView(ui.View): def __init__(self): # Load the default view that's placed inside the Navigation View. # This root view has its own pyui file (here: RootView.pyui). # That makes it easier to design the root view. root = ui.load_view('RootView.pyui') # Set the titlebar text of the Navigation View when the root view is loaded root.name = 'Root View' # Specify the visible buttons in the navigation view titlebar when the root view is visible. root.right_button_items = [ui.ButtonItem(action=self.openSubView, image=ui.Image.named('ionicons-close-24'))] # Create the Navigation View with the root view preloaded self.v = ui.NavigationView(root) self.v.present('sheet') def openSubView(self,sender): # Load the sub view thats loaded when the defined button in the Navigation View is pressed while the root view is loaded sub = ui.load_view('SubView.pyui') # This text will be displayed as the Navigation View's titlebar text sub.name = 'Sub View' # Display the sub view instead of the root view self.v.push_view(sub) # Call the class NavView() If you have any suggestion on how to improve the code, I'd love to hear it! The above code works great as long as I don't actually want to do anything with the code. I'm not getting it. Did I miss any kind of tutorial or documentation? Say I have a button in one of the sub views. How do I make the button (with action set to buttonFunction) do anything? def buttonFunction(sender)below the class doesn't work. def buttonFunction(self, sender)inside the class doesn't work either. And it doesn't matter wether every view (not just the Navigation View Container pyui) or just the top level pyui that has the Navigation View in it has its custom view class set to ui.NavView or not. I'm really confused since I can't find anything like a introduction to Pythonista UI Designer...
https://forum.omz-software.com/topic/1880/navigationview-questions
CC-MAIN-2021-43
refinedweb
1,789
66.94
On Fri, Jul 31, 2009 at 09:16:02PM +0000, Jacob Rus wrote: > > * The operation is crazy: It defines a MimeTypes class which > actually stores the type mappings, but this class is designed to > be a singleton. The way that such a design is enforced is > through the use of the module-global 'init' function, which > makes an instance of the class, and then maps all of the > functions in the module global namespace to instance methods. > But confusingly, all such functions are also defined > independently of the init function, with definitions such as: > > def guess_type(url, strict=True): > if not inited: > init() > return guess_type(url, strict) I can't speak for any of your other complaints, but I know that this weird init stuff is fixed in trunk. For the other stuff, you seem to have some very good points. I'm sure a patch would be welcome. -- Andrew McNabb PGP Fingerprint: 8A17 B57C 6879 1863 DE55 8012 AB4D 6098 8826 6868
https://mail.python.org/pipermail/python-dev/2009-August/090930.html
CC-MAIN-2016-22
refinedweb
163
62.21
A long time ago I decided to grab the chips from a broken Atari 2600 (Jr.) and see if I could build an Atari from scratch, on a solder-less “breadboard”. My first experience (post here) was to drive the CPU with an Arduino, which showed the silly chip advancing through what it believes to be memory, but is actually just a single “no operation” ( NOP) hard-wired instruction: It took some time (between finding the right connector, 3D-printing a part, figuring out the wiring and fixing the Arduino software), but I finally moved on to the next step: plugging a real Atari cart and seeing some actual code running! An Atari cartridge contains a ROM (Read-Only Memory) chip, meaning we’ll only read data from it. The 6507 CPU can request any single byte on a given memory position on the cart by setting that memory address, in binary form, on a given set of its pins (the address bus), and the cart responds with the byte on that position on another set of pins (the data bus). In fact, these buses are used to both read and write bytes to all other parts of the system, but for now we only care about the cart. My first step was to get the cart pinout, which you can find in several places, but people often forget to mention the orientation of the pins, whether we are seeing the cart or the slot, etc., so I went with the original Atari schematics, which shows the connector as seen by someone looking directly at the console: From looking at it, the connector consists pretty much in address ( A0- A12) and data ( D1- D8) lines (plus a +5V and two GND pins), so connecting those to the matching CPU and voltage pins in our board should do the job, no extra electronics needed this time. In non-Atari lingo, the socket is a 24-pin “edge” connector - which is just like a computer peripheral card “slot”, only smaller. It isn’t a trivial size, but with the right name you can find it online (or use the link above). Unfortunately you can’t just plug a cart into the connector, because carts (or, at least, the Atari-manufactured ones) have a sliding plastic protector that only opens when the cart is inserted in the matching plastic guide - and that one isn’t manufactured anymore. I was lucky not to be the only one with this problem. In particular, people hacking Atari Flashback mini-consoles to add a cart slot also required one, so they created a model that I could download and 3D-print (of course, there are other options you may consider): The fit wasn’t perfect, but with some epoxy and a bit of drilling, I managed to fix the connector in the guide. I connected some female-to-male jump wires (hint: use longer ones), inserting a toothpick to keep them firm, then labeling according to the schematics: Starting with the breadboard from the first post, I removed the hard-wired NOP instruction, and connected the address/data lines to the matching pins on the 6507, and the +5V (socket pin 23) and ground (socket pins 12 & 24) to the power lines. One thing to notice: the Atari schematics refer to data pins as D1- D8, whereas 6507 names them D0- D7 (starting from 0 like the address pins, and also like bits are usually assigned). But at least they are aligned on the chip, so it wasn’t (much) confusing. Another thing to pay attention: for some reason, A10 and A11 are flipped on the connector - the sequence, looking left-to-right from outside the console, is A8, A9, A11, A10 and A12. Remember to flip them as you connect the wires. Speaking of pins, the previous method of monitoring the address lines worked fine when addresses were just growing sequentially, but monitoring an actual program this way was too difficult, so I switched to wiring the Arduino to the data lines instead. That will show the actual ROM bytes as they were requested by the CPU for execution (as long as we tweak the monitoring program, which I had to do anyway, see below). Here is the updated drawing, with the Arduino connected to the data lines, and the cart connected to data and address. I included the power connections as well, so everything needed is there. I recommend opening the .fzz file on Fritzing, which has the pin names on the cart connector (it doesn’t resemble the connector a lot, I know; but it’s the first time I customized a part in the software). The test program generates a (slow - 10Hz) clock pulse to keep the processor running. At each pulse, it prints the hex value from the data bus in the Arduino IDE serial monitor (set the speed to 115200). It is much smaller than the original code, and several issues (such as use of serial I/O pins and wonky binary conversion) were fixed. // Turns an Arduino into a 10Hz clock generator (on pin A5) // and a monitor for an 8-bit data bus (pins 2-9) void setup() { for(int pin = 2; pin <= 9; pin++) { pinMode(pin, INPUT); } pinMode(A5, OUTPUT); Serial.begin(115200); } void loop() { // Clock pulse digitalWrite(A5, HIGH); delay(50); digitalWrite(A5, LOW); delay(50); // Print current data bus (pins 2-9) int data_value = 0; int power_of_two = 1; for(int bit = 0; bit <= 7; bit++) { data_value += digitalRead(bit + 2) * power_of_two; power_of_two *= 2; } if (data_value < 0x10) { Serial.print("0"); } Serial.println(data_value, HEX); } To test it, I’ve used a cart with 2048 2600 since, as the author, I’m familiar with the code. The Stella screenshot below shows the initialization code, and we’ll be looking for the bytes (opcodes) on the right side: To be precise: once we press and release the “reset” push button, we expect the 6507 to read the address of that code ( F914) from its standard location from the cartridge, then start reading the opcodes above ( 78, D8, A2, 00, …) in sequence, until the BNE instruction at F91D loops back to reading again from a few lines above ( CA, 9A, 48, …), and repeat that a bunch of times. That will be enough to show us whether this mess of wires is working - and indeed it is! Check this snippet of the serial monitor output (comments and disassembly after #), comparing to the values above: ... 00 # Some gibberish here, until 6507 resets 00 14 # 6507 reads the contents of RESET vector: the lowest byte (14)... F9 # ...then the highest (F9) of F914, which is where our code starts 78 # SEI # read from address F914 D8 # CLD # read from address F915 D8 A2 # LDX #$ # read from address F916 A2 00 # 00 8A # TXA # read from address F918 A8 # TAY # read from address F919 A8 CA # DEX # read from address F91A CA 9A # TXS # read from address F91B 9A 48 # PHA # read from address F91C 48 D0 # maybe a premature read of next instruction? 00 # the value that would be sent to the stack - if we had RAM :-) D0 # BNE FB # F91A # address calculated as "5 bytes before"; FB here means -5 A9 CA # DEX # again from address F91A 9A # TXS # again from address F91B 9A # ... 48 48 D0 00 D0 FB A9 CA 9A ... We print the value on the data bus once per clock cycle - since instructions take a different number of clock cycles to run, we see the uneven repetitions. But overall, we are running the code in the cart - mission accomplished! 🎉 Hope I don’t take as long as I did this time to continue with this experiment. I wonder if I can add the TIA at some capacity without going with a full speed clock (which will be hard to monitor without an oscilloscope, so I’m deferring as much as I can). I’ll see as I tinker. Stay tuned!]]> As a Dance Dance Revolution (DDR) enthusiast on its heyday, I spent a lot of time adapting dance pads to improve comfort and durability, until I got myself an Ignition pad. Its thick rubber interior, superior sensors and RedOctane (of Guitar Hero fame) quality resulted in no mis-/over-/continued registering of arrows, less knee strain and happier downstairs neighbours. I sold that one years ago, but having some floor space and time now, I decided to buy a “new” one on eBay. Not having a Playstation these days, I planned to use Stepmania (the open-source DDR clone), but my mat was missing the USB adaptor. A Playstation-to-USB one gets recognized, but arrows do not register correctly. The adaptor I needed would plug in the XBox connector (classic XBox controllers are quite close to USB in nature, as we’ll see below). They are near-impossible to find, but it seems the breakaway cable that came with the controller can be converted into such an adaptor. The operation is trivial: I was going to solder them together and tape (like the video), but it seemed too flimsy for me, so I soldered the wires into a protoboard, then fixed the set on a small box with my trusty Durepoxy. Ugly, I know, but sturdy. Once the box was closed, had some cleanup and electrical tape finished, it looked much better: As for the software, I used this macOS XBox Controller Driver. To test it (and Mac controllers in general), I recommend Controllers Lite. With that set, I downloaded Stepmania, addded some songs from the usual places, and spent a great afternoon playing! 🕺 A positive surprise was that no remapping was needed: Controllers Lite shows the arrows registering both as axis and button presses, and StepMania recognizes the arrows as it should, the A/B buttons as OK/back, and even the cancellation shortcuts with select+start. P.S.: If you can’t find the original XBox breakaway cable, this page says an XBox 360 one (which is already USB on the other end) will do, as long as you sand the connector a bit to make it fit on the mat/controller. I’m curious to try that. Photos: Raquel Camargo]]> One convenient feature of Chromecast is that it turns on your TV automatically when you connect to it - as long as your TV has HDMI-CEC. Mine doesn’t, but it is already remote-controlled via Raspberry Pi, and thanks to Home Assistant, I can easily detect when the Chromecast is in use, so in theory I could just blast a command to the IR when it switches away from “off”. There is just one problem: Home Assistant doesn’t know whether the TV is on or off. If it is already on when I start casting, sending the command will turn it off - the opposite of what I want. Also, I would like to turn the TV off when not using the Chromecast (something it doesn’t seem to do, even with HDMI-CEC). I am not the only one: people have already asked around, and one of the ideas was to use the USB port (that a lot of sets have for playing media or firmware upgrades). A quick multimeter test on mine showed that it is only powered when the TV is on, so it’s just a question of monitoring it and forwarding the info to Home Assistant. The simplest idea for monitoring the state would be to connect the power output of the TV to a GPIO pin on the Raspberry Pi. However, those pins expect 3.3V, and USB operates at 5V, so a direct connection would fry the RPi. I knew I’d need what is known as a “voltage divider” - a setup that (in its simplest form) uses two resistors to extract a lower voltage from a higher one. The good news: someone had already done the homework for me, noticing the Rasperry Pi already provides one of the resistors and calculating the value of the other. So it was as simple as: You can reuse any cable with a USB connector - they are usually color-coded, red being the 5v, and the silver, non-isolated wire is GND. Raspberry Pi pinout is here, but here is my wiring for this setup: To test it, we can invoke python2 and type some Python code: import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BOARD) GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) while True: print GPIO.input(11) time.sleep(1) This prints 1 when the TV is off and 0 when it is off. Here is a test I did plugging and unplugging the USB to a power adaptor: On the actual TV, it takes some time to pick up the “off” state because the TV slowly reduces the output instead of cutting it straight (I checked with a multimeter, it takes quite a few seconds to go from ~5V to ~0). Anyway, it shows that the hardware works, so we can move on to exposing it in Home Assistant. It will appear on the panel as a binary_sensor just by adding these lines to the binary_sensor section of configuration.yaml (creating one if you don’t have it): binary_sensor: - platform: rpi_gpio ports: 17: PIR TV # rpi_gpio uses BCM notation => physical pin 11 = GPIO17 pull_mode: DOWN And it almost works 🥺. Even though the sensor shows up on the interface (alongside the Chromecast, on the screenshot), and switches to on when I turn the TV on, it does not switch to off when I turn the TV off. Never. It happens that (unsurprisingly), Home Assistant code is more efficient than mine, using threaded callbacks instead of checking the state every second (details). So I changed my test code to match Home Assistant’s: import RPi.GPIO as GPIO GPIO.setmode(GPIO.BOARD) GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) def cb(port): print GPIO.input(port) GPIO.add_event_detect(11, GPIO.BOTH, callback=cb, bouncetime=1000) Instead of continuously printing, it will just output the current state when it changes, and works as expected fine when plugging/unplugging from the power adaptor. When connected to the TV, turning it of produces a “1”, but turning it off also produces a “1”. It isn’t a bounce issue (I tried changing bouncetime to no avail). My other guess is that the state returned by GPIO.input isn’t updated when the cb function is fired by the callback, likely due to the slow discharge. To confirm that, I include a little pause (200ms) on the function before I read the state, and, lo and behold, that fixes the problem. The code above consistently prints “0” when I turn the TV off: import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BOARD) GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) def cb(port): time.sleep(0.2) # Pause for 200ms print GPIO.input(port) GPIO.add_event_detect(11, GPIO.BOTH, callback=cb, bouncetime=1000) I could change the Home Assistant code on the Pi to do that (maybe accepting an optional delay parameter in the same way that it accepts a bounce time), but I had a hard time running their tests, so it will be a while before I can submit a contribution to the project (which may or may not be accepted), so for now I went with ha different approach: To be honest, I haven’t been using the Raspberry Pi GPIO for a while precisely because of this type of issue: running I/O on a non-realtime (or at least not very predictable OS) leads to inconsistent reads. Instead, I’ve switched all the I/O on my home automation to a separate board (a NodeMcu Lua ESP8266, which behaves like an Arduino, but is more compact and has built-in Wi-Fi). The board runs OpenMQTTGateway, a software that makes it easy to forward hardware events to the Raspberry Pi (here is how I set it up to work with Home Assistant) and brings us the best of two worlds: the stability of the microcontroller and the software flexibility of the Pi. For this setup, we don’t have the Rapsberry-pi-provided pull-down, so we’ll need two resistors to provide the voltage divider (i.e., bring the TV USB 5v down to 3.3v that the board can monitor). There is a formula that you can use to find a pair of resistors, but I was lazy and just used this calculator, throwing 5V as the voltage input and playing with values until I got a pair that I had lying around (R1 = 75Ω and R2 = 150Ω) and gave an approximate 3.3v output. Here is how I wired them (you must choose a pin you are not using for some other I/O): Opening OpenMQTTGateway’s source code in the Arduino IDE, I enabled the monitoring by removing the trailing // from this line in User_config.h: #define ZsensorGPIOInput "GPIOInput" //ESP8266, Arduino, ESP32 and in config_GPIOInput.h, I configured the pin I’m using by adding it to the first #define on the block below PIN_DEFINITIONS (the correct number depends on your board). In the one depicted above, D5 means GPIO14, so we’d go with: #if defined(ESP8266) || defined(ESP32) #define GPIOInput_PIN 14 #else #define GPIOInput_PIN 7 #endif Of course there are other configurations you may want to change to ensure the software connects to your Wi-Fi network, and that the Raspberry Pi can subscribe to the events published by OpenMQTTGateway (see the docs and my previous post). Once everything is set up, it is possible to ssh into the Raspberry Pi and monitor the queue with: mosquitto_sub -t \# -v As the TV is turned on and off, the following events appear: home/OpenMQTTGateway/GPIOInputtoMQTT {"gpio":"HIGH"} home/OpenMQTTGateway/GPIOInputtoMQTT {"gpio":"LOW"} That allowed me to add a binary_sensor to Home Assistant’s configuration.yaml. Like I did with the sensors in the aforementioned post, I used the mqtt platform, telling it to watch for the messages above: binary_sensor: - platform: mqtt name: Living Room TV Power state_topic: "home/OpenMQTTGateway/GPIOInputtoMQTT" payload_on: '{"gpio":"HIGH"}' payload_off: '{"gpio":"LOW"}' device_class: power That makes the switch appear, and this time it reacts to on and off! The final goal is to to monitor my Chromecast ( media_player.living_room_tv)’s state. When it changes from off to anything else, I want it to send a power toggle command to my TV (which I defined as the switch.tv when I set up IR) - but only if the sensor we just installed says the TV is off. In Home Assistant language, that translates to these lines in automations.yaml: - alias: tv_on_when_start_casting trigger: platform: state entity_id: media_player.living_room_tv from: 'off' condition: condition: state entity_id: binary_sensor.living_room_tv_power state: 'off' action: - service: switch.toggle entity_id: switch.tv Conversely, if I want it to turn off the TV when I’m done with the Chromecast (and again, only if I haven’t turned it off already): - alias: tv_off_when_stop_casting trigger: platform: state entity_id: media_player.living_room_tv to: 'off' condition: condition: state entity_id: binary_sensor.living_room_tv_power state: 'on' action: - service: switch.toggle entity_id: switch.tv A few quirks still remain. For example, if I switch sources without giving the setup a few seconds to catch up, the TV will turn off, but not on again. Worse: if I switch to another HDMI source without disconnecting, the Chromecast will become idle after a while, and turn the TV off at the worst possible moment. But those are likely fixable by tweaking the automations, and in general I just start casting and everything works!]]> The ZX Spectrum Next was a Kickstarter-backed initiative aiming to recreate the iconic ZX Spectrum using FPGA and lots of ingenuity. I am a bit too Marie-Kondo-ed for physical retrocomputing these days, and, on top of that, have been skeptical of such projects (for good reasons). However, this one had names like Victor Trucco (one of the most respected Brazilian retrocomputing hackers) and Rick Dickinson (industrial designer behind several Sinclair computer cases, who sadly passed away before it was finished) behind it, so in May 2017 I gave it a shot and backed the campaign in exchange for a unit. Expected to ship January 2018, it was delayed for more than two years, but for good reasons: the people behind the project would not accept anything but the best quality, continuously pressuring manufacturers to go on-spec. And it was worht the wait - the computer is sturdy and gorgeous: The ZX Spectrum was one of the most influential computers from the 80s. It matched the (relatively) low price of its Sinclair predecessors with capabilities like color graphics, sound and enough RAM made it capable of all sorts of tasks - in particular games. In Brazil (where, at the time, it was legal to clone any foreign computer) we had the TK90X, a clone of the ZX Spectrum 48K, which I was lucky enough to have access to during my formative years. Here is one (from my retrocollector days), with a few software titles in cassete tapes, and a homemad sound chip expansion module: (I did eventually own a ZX Spectrum +2 as a retrocomputing enthusiast, but that’s another story entirely. Back to the ZX Spectrum Next!) Computer boxes at that time were neitehr the unimaginative packaging of typical PCs, nor the sterile whiteness of Apple ones. They used to showcase what the computer could do, and the Next goes with that idea, but with a modern look. I loved it. The manual is another highlight: like manuals of the era, it covers everything from handling the hardware to teaching you BASIC - in this case, a souped-up version that unleashes the new hardware features, yet feels like the classic. I am not a huge fan of unboxing videos, but the packaging of this computer deserved some special attention, so here it is: Between the material and the portal, there is a lot of material covering the Next, so I decided to just post a couple videos made right when I unboxed it. The first one shows me turning it on and (after the one-time configuration screens) typing the classic “Hello World” program. On the second one, I recreate a prototype “game” that draws a glyph on the screen and moves it in the four directions when directional keys are pressed. It’s a bit long, and the result won’t beat Fortnite in popularity so soon, but it shows how fun it is to just play freely in BASIC. Was happy doing it back then, am happy doing it now! Video and stills by Raquel Camargo TK90x photo by Carlos Eduardo Nogueira]]> Dynamic DNS providers like DynDNS or Duck DNS are great to let you access software like Home Assistant running on your properly secured computer or Raspberry Pi from anywhere. One problem that I was having with them: the custom URL did not work inside my network, just outside. That happens because my router does not support NAT loopback, blocking any requests from the internal network to the external IP (which is what my custom domain points to). On a computer, there is an easy fix. Let’s assume your custom domain is foobar.duckdns.org, and the internal IP (the one that you configured the router to forward a given port to) is 1.2.3.4. Adding a line like this to /etc/hosts (and commenting it out with # if the computer ever leaves the house) does it: foobar.duckdns.org 1.2.3.4 For mobile devices, it isn’t that easy: neither Android, nor iOS expose /etc/hosts. And those devices enter and leave the house all the time, making it impractical anyway. The workaround: since I already have the Raspberry Pi lying around, I installed dnsmasq (a lightweight DNS server) on it: sudo apt-get install dnsmasq and opened the firewall for its port, so my mobile devices can use that DNS when they are inside the network: sudo ufw allow dns Several online tutorials suggest config changes (such as enabling DHCP on dnsmasq - a no-no for my otherwise working network). I just kept the default config and added this line to /etc/dnsmasq.conf and restarting the service: address=/foobar.duckdns.org/1.2.3.4" With that, the DNS server will respond with the internal IP for the custom domain: $ dig @1.2.3.4 foobar.duckdns.org ; <<>> DiG 9.10.6 <<>> @1.2.3.4 foobar.duckdns.org ... ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10291 ;; flags: qr aa rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ... ;; ANSWER SECTION: foobar.duckdns.org. 0 IN A 1.2.3.4 ... and everything else will go to the normal DNS server for normal resolution: $ dig @1.2.3.4 google.com ; <<>> DiG 9.10.6 <<>> @1.2.3.4 google.com ... ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55137 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ... ;; ANSWER SECTION: google.com. 171 IN A 172.217.165.14 ... The final step was to edit the Wi-Fi connection on the devices, so they would ask dnsmasq for names when inside the network. iOS was the easiest: you just click Configure DNS, switch to Manual, remove the auto-configured value(s) and add 1.2.3.4. Android requires changing the whole IP settings to Static (which also means you’ll need an IP assigned to your mobile phone on your router) and enter the information manually. But that is a minor nuisance (that I only had to go through once). After that, the devices work inside and outside the house (which is great for Home Assistant, in particular if you have security things you want to check from away). The downside: if the Pi is down, the devices won’t have internet (which is why I kept the computer on the ‘/etc/hosts’ solution), but you can always switch back (or to data) if that happens. Overall, I’m happy now.]]> Water incidents in a condo can be catastrophic, and surely things like shutting your main water valve when you go out for long periods and having the proper coverage in your insurance are important. But for added peace of mind, leak detectors aren’t a bad idea. When I saw these $3 leak detectors on eBay, I decided to give them a shot. Not only for low price, but also because they used 433Mhz RF - the same tech I use to voice-control my lights from a Raspberry Pi. Once they arrived, I ran RFSniffer and indeed, when they get wet, the Raspberry Pi prints a different value for each sensor - so it should be easy to wire up an alert system… right? Well, it wasn’t. I found it odd that Home Assistant doesn’t have a 433MHz input integration (despite having a few for output). The reason is that “sniffing” from Raspbian can be clunky (you need a daemon running) and unreliable (due to its non-realtime nature). technicalpickles suggested I should check out MQTT, so I did. It is a “publish/subscribe message transport” - something I’m familiar with (having worked with a few pub-sub systems, including the venerable IBM MQSeries from which the “MQ” in “MQTT” comes), but honestly, it felt like over-engineering at first. Eventually, I realized it is more of a divide-and-conquer approach, with these parts: Splitting things like that (and using MQTT as the glue) means I don’t have to write any new software: I got a new Arduino Uno for the project (my other Arduino clones/models lacked the memory requirements for OpenMQTTGateway). Since it needs to send the events to the network, I added an Ethernet Shield, and for RF I used my newest Long Range 433Mhz RF Receiver module. The long range and built-in antenna made this my receiver of choice over the more popular RF modules - and it’s just as cheap as those. By bending the data output pin on the RF receiver a little bit, the receiver can be inserted directly on the shield, just matching the VCC and GND pins with 5V and GND on the board (respectively). The data pin can be then connected to Arduino digital input 3 with a small breadboard jumper wire. The final result is a bit taller than I wished, but hey, no soldering required: Assuming you are using Raspbian Buster or later, just log on the Pi and type: sudo apt-get install mosquitto mosquitto-clients and you are good. Seriously, that’s it. Technically you just need the mosquitto package, but the other one allows you to test your installation by running: mosquitto_sub -t \# -v which prints all messages published to the broker. You can publish a message by opening a second terminal window and running a command like this: mosquitto_pub -t "some/test/topic" -m "hi this is the message" The first terminal will show some/test/topic hi this is the message, indicating hi this is the message was published under the topic some/test/topic. This project’s wiki contains several configuration guides, including one for my setup, that is, an Arduino reading RF signals and publishing to a MQTT broker. Here are the changes I made to User_config.h (after downloading the “CODE-“ release and moving the lib folder as explained on the wiki): DEFINE THE MODULES YOU WANT BELOW, uncomment (that is, remove the //from): #define ZgatewayRF "RF" //ESP8266, Arduino, ESP32 Leave all other ##define Z... lines commented (with //). In the line char mqtt_server[40] = "...", put the IP address of the Raspberry Pi (between the quotes). Replace the zeros in the line const byte ip[] = { 0, 0, 0, 0 }; //ip adress with an IP address for the Ethernet Shield that is compatible with the network (even though the comments say the software supports DHCP, it wasn’t working for me). IMPORTANT: Uncomment the line below so each sensor publishes events to a specific MQTT topic (instead of a single topic for all of them, which results in a barrage of No matching payload found for entity: ... with state_topic: ...'). #define valueAsASubject true Uploading a sketch with those changes will make the Arduino connect to your broker (Serial Monitor on the IDE will help you debugging; fiddle with baud speed until you see text instead of garbage). If you still have mosquitto_sub -t \# -v open, you should see something like: home/OpenMQTTGateway/LWT online home/OpenMQTTGateway/version 0.9.1 and whenever a sensor gets wet: home/OpenMQTTGateway/433toMQTT/VALUE {"value":VALUE,"protocol":...,"length":...,"delay":...} where VALUE is (hopefully) unique for each of your sensor. In fact, you’ll see those events for all other things transmitting in the 433MHz frequency in your vicinity. I get a couple every minute in my apartment. First thing is to make Home Assistant aware of your new broker. You can do it on the UI (clicking on +, selecting “MQTT” and setting localhost as the broker address), or by adding to configuration.yaml: mqtt: broker: localhost That will make Home Assistant subscribe to the broker, but you need to expose the events. There are two ways: The trigger was tempting for my goal (getting notifications on my computer/phone when a potential leak is detected), but putting the sensors in the UI allows for richer integrations with other elements in the home. It also allows configuring fine details - for example, defining that a sensor is “on” when you get the message, but only gets “off” after X seconds without a message, so I went with it. Just add these values to configuration.yaml, one - session for each sensor (replacing “11111111”, “22222222”, etc. with the VALUEs from mosquitto_sub or RFSniffer): binary_sensor: - platform: mqtt name: Washroom Leak Sensor payload_on: "11111111" value_template: "{{ value_json.value }}" off_delay: 10 device_class: moisture state_topic: "home/OpenMQTTGateway/433toMQTT/11111111" - platform: mqtt name: Kitchen Sink Leak Sensor payload_on: "22222222" value_template: "{{ value_json.value }}" off_delay: 10 device_class: moisture state_topic: "home/OpenMQTTGateway/433toMQTT/22222222" - platform: mqtt name: Some Other Leak Sensor ... Once you restart, the sensors should be available in your Home Assistant main dashboard. I manually config mine, so I built a nice little card with them: The page updates dynamically, so you should see them flip as you wet the sensors, then go back after 10s (or how long you set the off_delay above): No one looks at a dashboard all the time (well, I don’t 😁), so we need a way to send notifications to my mobile phone. The Telegram integration is a great way to do it. Just open BotFather on the app, send it a /newbot command to create a bot for you, and get its TOKEN. Send any message to the newly-created bot, then visit (replace TOKEN accordingly) to get the ID of your personal user for that bot. Then add these lines to configuration.yaml, replacing TOKEN and ID with yours. telegram_bot: - platform: polling api_key: TOKEN allowed_chat_ids: - ID notify: - name: my_telegram platform: telegram chat_id: ID Another Home Assistant restart, and you should be able to send messages to your Telegram app by clicking on “Services” and calling notify.my_telegram with something like { "message": "hi" }. The result: The grand finale: sending the notification when a sensor gets on (wet) or off (dry). Here is where Home Assistant shines - something like this in automations.yaml does the trick: - alias: kitchen_sink_leak_sensor_state_change_alert trigger: platform: state entity_id: binary_sensor.kitchen_sink_leak_sensor action: service: notify.my_telegram data: message: "Kitchen Sink Leak Sensor is {{ states('binary_sensor.kitchen_sink_leak_sensor') }}" Had to add one for each sensor, but it was worth it. Here is my phone, telling me I should check the pipes under my sink: How cool is that? Not at all? Well… I find it cool. 🤓 I’m quite happy with the results so far, but here are some things that could be improved: The Nintendo Switch is surprisingly sturdy, but this is a common problem: joy-cons that still click (and oh, how I love that click) and snap to the console, but slide off when they shouldn’t (e.g., right in the middle of an online game match). The cause is a plastic latch that erodes slightly with use (or maybe after it gets jammed on the backplate when you mis-connect it). There is a cheap replacement for that latch that you can order here. The replacement latch is made of metal (which should have been Nintendo’s original choice), and several videos (like this one) show how to replace it. No soldering is required, just some careful screw removal and disassembly. With the usual caveats (you do it on your own risk, it voids your warranty, etc.), here is how it went for me. Nintendo things often use tri-wing screws (in addition to the more common Phillips ones), so the kit comes with one of each screwdriver. It would be handy - if they weren’t horrible. Seriously, do yourself a favour and get a decent tri-wing. Once you open the joy-con, be careful so you don’t damage the ribbon cables connecting the two halves. Remove the screws that connect the black board to the back half, and put the later aside. After you get to the black board, lift the metal blade that holds the latch in its place. Once the blade is removed, just replace the plastic latch with a metal one from the kit, fitting it over the old one’s spring. When removing the old latch, be careful so the spring doesn’t fly off. Don’t worry if the spring or any buttons fall out of place - just put them back as you reassemble the controller. It takes a little time, but it’s worth it: now my joy-con stays firm on its place, only sliding off when I press the button on the back.]]> A while ago I built a couple inexpensive hacks that added voice-command to my tv and then to my lights using a Raspberry Pi, Google Home Mini, infrared and RF radio. Since then, I added other things, which prompted me to move the hacks into the popular Home Assistant home hub software. With so much of my routine depending on that setup, backup became a concern. I’d make an occasional copy of the SD card with dd, but that isn’t a good long-term solution. Ideally, I wanted to rebuild my setup easily, should the card get corrupt, slow or just wrecked by my ongoing hacking. Enter Ansible. Sysadmins use it to write “playbooks” that represent the changes they would manually apply to a server. If done right, such playbooks can be applied to an existing server (fixing any broken configs), or a brand new one (to recreate its services). The Raspberry Pi is just a (tiny) server - meaning hobbyists can use Ansible as well! I’m not an Ansible expert (there are better places to learn about it), but my Ansible configs and these notes may be useful for anyone interested in automating Raspberry Pi setups (for home automation or anything else). Raspberry Pi setup is typically done by downloading Raspbian and writing it to a (micro)SD card. I usually download the latest “Lite” version, so I can install just what I need and keep it snappy. With that as a starting point, I created two Ansible playbooks: The main playbook can be ran as many times as needed - it will only configure things that aren’t already set up (Ansible peeps call that an “idempotent playbook”, I’ve heard). Every server needs a couple passwords and keys, and since my playbooks are public, I encrypt those secrets using Ansible Vault. That works nicely for everything… except Home Assistant. In theory, you can provide Home Assistant secrets on a separate file and just encrypt it, provision manually, etc. I have tried that, but every time I built the system from zero, I realized something was stored outside the standard config files (e.g., logins), or even scattered in binary datafiles (dynamic device information, some configs made on the UI, etc.). After lots of frustration, I went with a different plan: I set up an encrypted daily backup of the Home Assistant configuration to a network drive (just a thumb drive on my router), and made the playbook restore the latest backup when a config-less install is detected. The main downside is that my automations aren’t easily shared. But I can always write a post if I ever come with something useful (so far, they are all pretty boring 😅). What a horrible programmer I must be, because, you know, reuse is good™️… right? 😁 I tried to use Ansible Galaxy. Really. But the roles I found were often not generic enough (like almost doing what I wanted) and don’t always support Raspbian. Galaxy also lacks a robust package management system (which may not even be feasible, given the free-form, “script-y” nature of Ansible that I like so much), so I went solo. Of course I took inspiration in a lot of third party roles and playbooks (on Galaxy and around the web), and highly recommend doing so. Good question! Hass.io is a prebuilt SD card image that manages a minimal OS with Home Assistant baked in, automatically updated with Docker. I personally found it a bit too slow (at least on earlier-gen Raspberry Pi models), and I feel more comfortable with a Debian-based system that I can poke with a stick. But hey, all the cool kids are using Docker 😜. Seriously though, if it works for you, awesome - you’ll save yourself a lot of trouble. The automatic Home Assistant updates are appealing, but with my solution, I can just axe the application directory (or the whole SD card, for the matter) and run the playbook, and the latest version will be there, with my configs unchanged. Oh yes I do! With gusto. The main point of having a custom-made solution (other than cost and security) is tinkering. Ansible makes me confident that I can rebuild the whole thing quickly if I screw up, but yes, that requires me to keep the Ansible file up-to-date. That’s actually easier than it sounds: once I’m happy with my changes, I type history and figure which steps (installed packages, changed config files) are really needed, and add those to the playbook. Run it a few times, undo some changes, check that it does nothing when changes are already there… and that’s it. If the change was super complex and/or I’m afraid I forgot something, I can always run the playbooks against a fresh card, pop it in and kick the tires - it’s a great opportunity to get a fresh, snappy OS install for my tiny computer!]]> A while ago I got this beautiful Atari 2600 all-black, 4-switch model - often nicknamed “Darth Vader”, for obvious reasons: The console generates a TV signal, which the TV has to tune in just like a normal over-the-air channel. It was quite convenient at the time (and quality was good enough for the TV sets we had), but modern TVs show degradation - not to mention some can’t even pick up the faint signal - I had to hook mine to a VCR that would decode the signal into A/V. That quirky setup led me to make the popular A/V conversion (“mod”) - and, while at it, replace the power adaptor (with one that I can actually keep on the wall without fear of burning down the house) and capacitors (something that should be done to any vintage electronics that you want to keep humming). There are different types of of mods, varying in how they mix (or split) the video and audio signal and what sort of output they generate. I opted to get an A/V output from the signals mixed by the Atari board (but before they get modulated into a TV signal), with the video one amplified by a single transistor and a couple resistors. I based my mod on this version, which throws in a third 75Ω resistor that adjusts the impedance. Following the schematics there, I aligned the components like this (transistor is a 2N3904, flat side up; resistors are, from left to right, 75Ω, 3K3Ω and 2K2Ω): There are ready-made circuit boards, but I just cut a piece of protoboard. Hint: don’t solder the cables like I did - instead, follow the “strain relief” tip here for better securing. The layout reduced the number of connections, so I could just throw an extra gob of solder over the terminals before cutting to make the jumper connections. Maybe I could have used a little less solder, but heh, it worked. You need to pick up Video In and +5V/GND from the Atari board. Mine was a Rev 16, which has those three right on the connector of the RF box. That box had to be removed anyway, and doing so opened some space for the protoboard. Audio comes straight from the Atari board into the inner part of the audio jack. I strongly recommend checking this mod assembly guide to figure out where to pick it up in your particular model/board revision. The guide also helps figuring out which components to remove in order to reduce interference. I was able to test before removing anything from the board (just disconnected the RF box, something easy to revert if it did’t work). I just cut one resistor (R209) and one transistor (Q202). This is a good moment to replace the capacitors. Again, each model has its own set, but this thread has it figured out. I could not find a bulky C243 (guess the technology for eletrolytics improved), but stretching the legs on the modern one allowed me to solder it. You can find a lot of Atari 2600 power brick replacements online. But most of them have short cables, so I reused the original’s loooong cord on a high quality 2A power supply) with the proper adapter. Just ensure that it supplies 9V and a minimum of 0.5A with polarity as labeled on the original (⊖-outside, ⊕-inside). The most unexpected improvement was in audio quality. Even without a second audio jack, I get much better sound now that what I had with RF. Image has almost no artifacts now (it seems the occasional faint vertical line/shadow is a fact of life unless you go with more sophisticated mods. Personally, I’m pretty happy with the results I got: My friends were also super happy: One of them got enough points in Frostbite to earn an Activision Patch 😁. Too bad we are a few decades late to send a picture to Actvision, but here are the proof and her honorary patch. ]]> My Keurig B40 coffee maker was super convenient, but throwing those pods in the garbage wasn’t great for the environment (yes, you can compost the contents, but despite manufacturer claims/efforts, you can’t recycle the pods in Toronto. They also limit your coffee choice, so a couple years ago I switched to reusable pods and wrote a blog post about it. Since then, I learned a couple things about grinding and avoiding leaks that are worth sharing. Throwaway sealed pods had one thing going for them: freshness. Ground coffee quality decays noticeably as you go through the package. The solution: buy beans and grind them yourself. There is a daunting amount of material on grinding online, and lots of decisions to be made (I had no idea what “burr grinding” was). In the end, I went with the Krups F203, a blade-based one on the cheaper range. Reviews like this one convinced me it’s the best value for the money: it’s easy to operate and clean, and grinds in about 10 seconds with an acceptable noise level. You need to get the timing right: too much and the machine can’t push the water; not enough and you don’t get the best flavour. I usually grind enough for 4-6 servings, which can be stored in a small container and last a couple days of working-from-home. An Evak airless container keeps the beans fresh, and the result is good coffee, without much hassle. I mentioned on the original post the importance of following the instructions, in order to prevent water from flowing from the top of the pod and spill in random directions. Even following those, leaks started to get more frequent - to the point I was considering a coffee maker replacement. The Keurig system works by having needles perforating the pod from top and bottom. The top needle doubles as the water injection system, and a rubber piece around it exerts pressure on the top of the pod - which creates an efficient seal in traditional pods, but reusables may require some extra pressure, and I suppose they may have deformed the rubber a bit. While looking for a rubber piece replacement, I found this “EZ Fix - Stop Reusable K Cup Coffee Leaks ad, and decided to try it. It is essentially a pair of tiny rubber discs (the Cafe Cup reusable pods only required one) that lower the machine’s rubber piece to tighten the sealing. And it works - I had no leaks since I installed it. Granted, you could get such a rubber disc from a local plumbing/hardware shop (1mm height / 3mm hole diameter / 1 cm external diameter should do) or even try an o-ring, but for the price and convenience, I recommend that one. Installation was quick, and (so far 🤞) I had no more leaks.]]> My friend drives a BMW 3 Series. Those cars come with a built-in entertainment system that allows you to connect your smartphone and make calls, which worked as expected. But some capabilities (which they call Enhanced Bluetooth) are only enabled by means of a “service fee” - even though you’ve already paid for the system. And when “the man” abuses its power, hacking ensues… The car’s on-board diagnostics (OBD) port can be used to configure all sorts of settings - as long as you have the proper equipment. Enter BimmerCode, an app that enables a smartphone to read/change those settings through an OBD adapter - a dongle that plugs into the port and connects to the phone via Bluetooth. Fair warning: there must be a dozen ways in which you can render your car unusable by changing those settings (names suggest they affect things like the engine and fuel injection). Don’t do anything unless you are confident and can afford to have the car towed and fixed by those same dealers that wanted to charge you for enabling the software in the first place… Once you pair the smartphone with the adapter and run the app, the fun comes in two flavours: a “basic” mode with high-level options, and an “expert” one that exposes all the low-level settings. We had to use the later, activating and applying two different config blocks (following this helpful forum post): HEADUNIT 3003_Telefon_Telematik_Online - CDMM_Bluetooth_Audio - CDMM_BT_DATABASE - AUDIO_PLAYER_ON_OFF - BT_MODUL_ON_OFF - A2DP_PROFILE COMIBOX 3004_Bluetooth_Paramter - A2DP_AVRCP_EIN_AUS We had to “restart” the car, Windows-style (turning the ignition mode off and on again). Once we did, all the functions worked: browsing call lists, playing music (and controlling it from the car, even on a non-standard Android player app), Telegram calls and some other things. Best of all, the car still worked - a perfect wrap for an unusual New Year’s hack! 😊]]> This was a rough year for me, so I haven’t done much regarding hobby projects. But my BFFs had a Switch Pro Controller intermittently failing in one direction. I saw this before on a Wii U, so I offered to repair when I had the chance. Unlike the Wii U (which had convenient plug-in stick units), the Pro Controller ones are soldered, so basic (de-)soldering skills are needed. On the plus side, the electronics aren’t nearly as crammed - plus you aren’t risking an irreplaceable part of the system, so as long as you used a soldering iron before, it can be done. The replacement stick will take some time to arrive. Use it to watch someone taking the controller apart, so you know what to expect when it’s your turn. When the time comes: remove the handles (one screw each) and the back transparent plate (four screws) screws, getting access to the battery compartment. Then, remove the battery and the three screws (two on the top, one below), jiggle a bit and you’ll separate the top part from the bottom. You are halfway towards your goal, but watch out: the two parts are connected by a tiny flat cable (which you don’t want to damage). Disconnect it from the bottom (so it stays out of your way during the soldering work). Important: don’t use any tools, just your finger and don’t force it. See this video and/or search for “how to open/close a ZIF connector”. Doing it right, you’ll see this (click pics to enlarge): Notice there are also cable connecting the board to the rumble motors. You’d better leave those alone, and just remove the screw on the white piece and the three screws holding the board in place. It will still be locked to the plastic because the USB connector (in the front) will be inset. Again, a bit of jiggling should pop it out. Turn it towards you, like this: The soldering points for the left and right stick are easy to spot (and remove). These days, my favorite tool for that is a cheap de-soldering iron with built-in squeeze, but use whatever works for you. Just be careful not to mix left and right if you turn the board around. Solder the new stick, then apply the reverse steps. Don’t forget to put the flat cable back. Again: no tools, no force, just gently insert and let the flap go down. Use the opportunity to see the secret message (spoiler available by clicking the picture below). The white screws go on the back plate, all other screws are the black ones (didn’t check the handles, since the screws stayed in place). Be gentle, and you should have the controller working just like new, for just a few bucks and a bit of time (which you’d be wasting on a game anyway if the controller wasn’t busted 😛).]]> : A few years later, I’ve rebuilt this in order to figure out the next steps. And, of course, there was one last error! I was having a hard time uploading the monitor for the second time, and noticed the last couple bits were a bit unstable. It turns out you should not use Arduino pins 0 and 1 if you are dumping serial output. I am not sure how I managed to make this work before (maybe I was using a compatible board that did not reserve those pins for serial I/O), but I’ve changed the circuit to instead use pins 2-14, where “14” is actually analog pin A0 (which can be used as digital), and pin A5 for the clock. I’ve also rewrote the monitor program, cleaning it up a quite a bit and printing actual ROM addresses: // Turns an Arduino into a 10Hz clock generator (on pin A5) // and a monitor for a 13-bit address bus (on pins 2-14, where 14=A0) void setup() { pinMode(A5, OUTPUT); Serial.begin(115200); } void loop() { // Clock pulse digitalWrite(A5, LOW); delay(50); digitalWrite(A5, HIGH); delay(50); // Print current address assuming Atari ROM (higher bits all 1s) word address_value = 0b1110000000000000; for(int bit = 0; bit <= 12; bit++) { address_value += digitalRead(bit + 2) * pow(2, bit); } Serial.print(address_value, BIN); Serial.print(" = 0x"); Serial.println(address_value, HEX); } On the board, just skip Arduino pins 0 and 1, wiring pin 2 to CPU A0, pin 3 to CPU A1, …, pin 13 to CPU A11, then pin A0 to CPU A12. Then wire pin A5 to CPU clock. Or just follow the fixed Fritzing (.fzz) drawing: It seems to work all right now. I still get a couple odd results (notably, 0xFFFB and 0xFFFC instead of 0xFFFC and 0xFFFD` read when I press the RESET button, and the last ROM address being skipped), but they may be either 650x oddities, or imperfections from this monitor. Still, that puts me back on track to continue building up towards the Atari.! UPDATE: Since I wrote this, I learned a couple things aboug grinding and avoiding leaks, so check out the new post. ]]>: ]]>]]>
http://feeds.feedburner.com/chesterbr
CC-MAIN-2021-17
refinedweb
9,168
65.56
In this server side implementation, it will list its own IP address when program start. And run in background thread, start a ServerSocket and wait at serverSocket.accept(). Once any request received, it return a message to client side. package com.example.androidserversocket; import java.io.IOException; import java.io.OutputStream; import java.io.PrintStream; import java.net.InetAddress; import java.net.NetworkInterface; import java.net.ServerSocket; import java.net.Socket; import java.net.SocketException; import java.util.Enumeration; import android.os.Bundle; import android.app.Activity; import android.widget.TextView; public class MainActivity extends Activity { TextView info, infoip, msg; String message = ""; ServerSocket serverSocket; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); info = (TextView) findViewById(R.id.info); infoip = (TextView) findViewById(R.id.infoip); msg = (TextView) findViewById(R.id.msg); infoip.setText(getIpAddress()); Thread socketServerThread = new Thread(new SocketServerThread()); socketServerThread.start(); } @Override protected void onDestroy() { super.onDestroy(); if (serverSocket != null) { try { serverSocket.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } private class SocketServerThread extends Thread { static final int SocketServerPORT = 8080; int count = 0; @Override public void run() { try { serverSocket = new ServerSocket(SocketServerPORT); MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { info.setText("I'm waiting here: " + serverSocket.getLocalPort()); } }); while (true) { Socket socket = serverSocket.accept(); count++; message += "#" + count + " from " + socket.getInetAddress() + ":" + socket.getPort() + "\n"; MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { msg.setText(message); } }); SocketServerReplyThread socketServerReplyThread = new SocketServerReplyThread( socket, count); socketServerReplyThread.run(); } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } private class SocketServerReplyThread extends Thread { private Socket hostThreadSocket; int cnt; SocketServerReplyThread(Socket socket, int c) { hostThreadSocket = socket; cnt = c; } @Override public void run() { OutputStream outputStream; String msgReply = "Hello from Android, you are #" + cnt; try { outputStream = hostThreadSocket.getOutputStream(); PrintStream printStream = new PrintStream(outputStream); printStream.print(msgReply); printStream.close(); message += "replayed: " + msgReply + "\n"; MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { msg.setText(message); } }); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); message += "Something wrong! " + e.toString() + "\n"; } MainActivity.this.runOnUiThread(new Runnable() { @Override public void run() { msg.setText(message); } }); } } private String getIpAddress() { String ip = ""; try { Enumeration<NetworkInterface> enumNetworkInterfaces = NetworkInterface .getNetworkInterfaces(); while (enumNetworkInterfaces.hasMoreElements()) { NetworkInterface networkInterface = enumNetworkInterfaces .nextElement(); Enumeration<InetAddress> enumInetAddress = networkInterface .getInetAddresses(); while (enumInetAddress.hasMoreElements()) { InetAddress inetAddress = enumInetAddress.nextElement(); if (inetAddress.isSiteLocalAddress()) { ip += "SiteLocalAddress: " + inetAddress.getHostAddress() + "\n"; } } } } catch (SocketException e) { // TODO Auto-generated catch block e.printStackTrace(); ip += "Something Wrong! " + e.toString() + "\n"; } return ip; } } <LinearLayout xmlns: <TextView android: <TextView android: <TextView android: <ScrollView android: <TextView android: </ScrollView> </LinearLayout> Remark: uses-permission of "android.permission.INTERNET" is needed. *** Updated example: Bi-directional communication between Client and Server, using ServerSocket, Socket, DataInputStream and DataOutputStream. 42 comments: Thanks, great examples! Works perfectly when devices are in a local network. I tried with one device connected using 3G and the other using Wifi, but it didn't work. Any idea why? I think you have to set port forwarding on your router. In the Server example, you start the socketServerReplyThread by calling run() instead of calling start(). Isn't it a misuse of a thread? how to connect and test? please help.... Hello Dominique, Thx for yuor concern. My point is all network processing should run in background thread. In my implementation, socketServerReplyThread.run() from SocketServerThread, that means both SocketServerThread and SocketServerReplyThread run in a common background thread. I have no idea about any advantage/disadvantage to run in ONE common background thread, or to run in TWO separate background. Any further advice is welcome:) hello Sourav Suman, Please view the video in last post Android Server/Client example - client side using Socket. In my test, both client and server run in a common WiFi network, such that no need to concern port forwarding in router. In my test, Server crashs, so i created a server in my computer using Perl, i tried to connected using client from my Phone and client return: IOException.java.net.SocketException: No route to host You know that is this ? Thanks! hello Tales Bragança, Can you make sure your server is reachable? for example, both the server and client are in the same network. Your App have permission of "android.permission.INTERNET"... Hi, I have a query . if the client wants to say hello to server. or send any msg how to get it in server. is it possible to add a inputstream reader the server thread. I just to read it but the socket connection is closed BufferedReader in1 = new BufferedReader(new InputStreamReader(hostThreadSocket.getInputStream(); if (in.ready()) { String s = in1.readLine(); System.out.println("manga"); message+= "replayed: " + msgReply + " "+"server request: "+s; } else{ message+= "replayed: " + msgReply + "\n"+"server request"; } I tried to connect two devices through wifi-direct using this. but i am getting an ConnectException (Network is unreachable) can you help me with that? Hello this is Pushpal Roy. I liked your example.. I tried it too. I created a hotspot in one phone, and connect by wifi to another phone. But its not working. Please help. hello Subramanian P V, Please check the upadted example: Bi-directional communication between Client and Server, using ServerSocket, Socket, DataInputStream and DataOutputStream hello senurz, I don't know how to connect with WiFi direct. hello Pushpal Roy, I tried to use on WiFi spot, it work. Please check the second video on the above link. Thank you so much. :) I saw the video of bidirectional communication. And also implemented it successfully. :) But, I'm working on a project in were I need to broadcast a message from the server (for say)to multiple clients connected in the same network. Or any of of the clients to other clients and the server. Actually I'm building a Wi-Fi group chat. Can you please help me in this? hello Pushpal Roy, please check: Implement simple Android Chat Application, server side and client side. hope can help:) Saw your broadcasting tutorial... Its just awesome!! Thank you so much! I am not able to download the code. Please suggest!! I'm trying to implement smartphone to PC socket. I tried my best to write but still not work. Can you give me some help?? My example is"Server=Smartphone(Android), Client=PC(java)" And can send or response some messages Thanks. HI, I get the following error: Attempt to invoke virtual method 'void android.widget.TextView.setText(java.lang.CharSequence)' on a null object reference for this line: infoip.setText(getIpAddress()); hello 黃小右, please read updated examples Bi-directional communication between Client and Server, using ServerSocket, Socket, DataInputStream and DataOutputStream and Java/JavaFX Client link to Android Server. Thanks its working fine for WiFi .Do you have any tutorial for same concept for blue tooth. hello fazil sheriff, I'm just doing it: Bluetooth communication between Android devices. Honestly, it is not a good working example, but may be easy to understand. Hello, I need to communicate between android code and JavaScript code(webView) of same application. My implementation is as follows, 1. Created serverSocket connection in android as described in your example code. Start this server in onCreate of Activity. 2. Created websocket connection in JavaScript code. Result: I am able see some messages being read in serverSocket read() method. Data is related websocket creation details (http header, connection upgraded to socket, user agent, etc) But I try to send some message to websocket from serverSocket(android code). It is not working. Note: ServerSocket is listening to port 8080. Websocket is cerated using URL - "wb:127.0.0.1:8080" Please let me know if my approach is wrong. Is it possible to communicate between serverSocket and webSocket? Mallikarjun Sir where is Client said code? how to send message one client to other using server in between multiple clients on android Hello Nafees, client side using Socket is HERE Great tutorial. I am trying to connect 2 android devices over a tethering wi-fi network. But client side is showing connection refusesd. I've added the network permissions in the manifest file as well. Anything I am missing ? Sir I need android codings to send the messages to the server and I have to read it from my app through the server.Please help me sir. i have pasted your code but it says "Something wrong! java.net.SocketException can you help me with it? This post is likeable, and your blog is very interesting, congratulations :-) i am a newbie on android so i just copy and paste it to my project but my prolblem now is how to run this chat application?. .anyone can help me? .thank you hello tags, Just connect both mobile to the same WiFi router, suppose it work. Refer to the video. Hello Andr.oid Eric, I wanted to ask about the port forwarding. Do you know a good tutorial I can follow to understand how to implement that? Does that mean my server must remain connected to the same router though? If you don't mind, may you please take a look at my question on StackOverFlow and try to help me? Any advice or guidance would be greatly appreciated. Thanks! Best Regards, M. Jalil M. Jalil, Just found a good tutorial: It mainly depends on your router setting, if your server behind router of HOME network. IF your server is in MOBILE network, it's another case: most probably the ISP will block your ports. Hello Eric congratulations for your blog. When I run the server side of your project, it loads but with a message that says "something wrong ......" In your code: Enumeration enumNetworkInterfaces = NetworkInterface .getNetworkInterfaces(); gets that message. would youn tell me, please, the origen of that wrong doing? Thanks in advance sir how to save to save the datas from the server socket to database. Hi I have 2 QUESTIONS:- 1. I am able to run SERVER from phone and CLIENT from Emulator... but i am not able to receive any communication when i run SERVER on emulator and CLIENT on phone... why? shouldnt it work both ways? 2. When i Disconnect my phone from WIFI... then the emulator is not able to connect when i am on my regular 3G network? I cant seem to understand... It would be nice if you could shed some light on this logic thanx, YO! hello YO! Sorry, I don't know how the emulator config! hello andr.oid Eric, Your tutorial and blog is awesome as it helped me a lot in creating a server client communication. But it works when the two devices are connected to same network. Do you have any tutorial to connect over different routers? (through IP) Regards, Subramanya Hello Hands on Science - SAI, To connect over different routers; I think it very depends on router setting, specially port forwarding. It's very hard to explain for me. It's a very good tutorial Port forwarding on your router, a suggested tutorial. hello sir , i am having hostinger,s server by using this server i want to implement the android chat application. in chat application the client must use the server as hostinger server .sir plz help me that how can i do this? hi, I would like to know one thing(client and server are 2 different projects/apps) line1 outputStream = hostThreadSocket.getOutputStream(); line2 PrintStream printStream = new PrintStream(outputStream); line3 printStream.print(msgReply); line4 printStream.close(); if line3 printstream.print(msgReply) which is server side code is sending some useful message to client. Now if i want to use that message in my client program, how can i ? How can i save that message in client project in a variable or anything else? please help. hello /\ B H !, Refer to the client side Android Server/Client example - client side using Socket: where response in MyClientTask is the message received. Nice tutorial...Loved it
http://android-er.blogspot.com/2014/02/android-sercerclient-example-server.html?showComment=1419934591395
CC-MAIN-2017-13
refinedweb
1,960
52.97
This is your resource to discuss support topics with your peers, and learn from each other. 12-01-2012 08:38 PM yhea i share the dev B with my brother my app is already in the app store. Looking to start two more one being another webview based app with ads. kinda need the sim to work until i get back the dev. 12-01-2012 11:50 PM has anyone posted a bug in the issuetracker for this yet? If so, please make it public so that we can upvote it and make sure proper channel are aware of this issue! I did a very simple app that reproduces the issue (not 100% but >50%): import bb.cascades 1.0 Page { WebView { id: webView url: "dummy.html" } } as you can see, it's the most basic app with only a webview if no one created an issue yet, i will do so on Monday! thanks 12-02-2012 10:52 AM Well, I've wasted my time, it seems, for two reasons. For one thing, although the app works perfectly well, and I knew it would work only when installed with a debug token (since it has to be an unsigned app), I didn't take into account the fact that others would have to edit the MANIFEST.MF to replace my author and author id with their own in order to sideload on their own devices. That's not particularly difficult actually, but given that the audience for this was mainly people who aren't very familiar with SSH and would find the "pidin + slay" option difficult, making them edit a .bar file like that may be asking too much. The other reason is, as it's now been pointed out to me, that Momentics can deal with these app crashes perfectly well. In Momentics, go to the QNX System Information Perspective. Make sure you're connected to your target device or simulator, and look at the System Summary tab. You should see your "stuck" process listed there. Right-click on it and select "Deliver Signal...". The default is SIGTERM, and that's the one we want. Click OK and a moment later your app should have been "slayed" and the icon will return to the normal state, letting you re-launch the app. For anyone who would still like to install it, it's a very nice app. :-) You can find it at At this point, however, I don't intend to make a build for the simulator unless enough people ask for it. Screenshots, to finish this up: 12-02-2012 11:00 AM Thanks for the help guys. This should fix it. Fingers crossed illncheck today 12-02-2012 12:11 PM - edited 12-02-2012 12:33 PM Peter trying to do what you described above, but where is the System Summary tab? EDIT: Nevermind, I found it. I had it closed, so I had to reset my System Info Perspective. 12-07-2012 10:35 PM Thanks Peter, you saved the day with your Zombie Slayer app. 12-09-2012 08:25 PM This isn't limited to WebViews only. I've seen it happen to the browser app too after doing some funky stuff with WebGL... 12-10-2012 11:40 AM this issue happens here, too, since Beta 4 SDK and Simulator. Not sure as it happened before, as I started using WebView right now. I have a WebView without content at beginning and just after a user input, it starts to load the content. If I do not press the search button, everything works fine, when closing, but if it loads sth from the web, I am not able to close the app properly... thanks for the tip for hard exit, maybe there should be a task manager as a BB10 app, with that you can close processes (like KillMe for Symbian)... as looking to the All Aboard Port-A-Thon, would it be the best way to simply submit the app as it is now? Or did you found a work around for this issue? 12-10-2012 11:47 AM schumi1331 wrote: as looking to the All Aboard Port-A-Thon, would it be the best way to simply submit the app as it is now? Or did you found a work around for this issue?... 12-10-2012 02:17 PM... haha, maybe that's also a good idea
https://supportforums.blackberry.com/t5/Native-Development/Beta-4-WebView-app-freeze-post-exit/m-p/2013857/highlight/true
CC-MAIN-2016-30
refinedweb
743
80.21
Generate TensorFlow Tensor Full Of Random Numbers In A Given Range Generate TensorFlow Tensor full of random numbers in a given range by using TensorFlow's random_uniform operation < > Code: Transcript: First, we import TensorFlow as tf. import tensorflow as tf Then we print out the TensorFlow version we are using. print(tf.__version__) We are using TensorFlow 1.5.0. In this video, we’re going to generate two example TensorFlow tensors full of random numbers in a given range by using tf.random_uniform operation. For the first example, we’ll generate random float32 numbers. tf_ru_float_ex = tf.random_uniform([2, 4, 6], minval=0, maxval=1, dtype=tf.float32) We create a 2x4x6, with a data type float32, the min value is 0, the max value is 1, and we’re using tf.random_uniform, and we’re going to assign it to the Python variable tf_ru_float_ex. Note that though we specify a min value and max value, the TensorFlow defaults are zero for the min value and one for the max value if we’re using the float32. The other thing to take into account is that when we do the random uniform, the min value is included in the range while the max value is not included in the range. Let’s print out the tf_ru_float_ex Python variable to see what we have. print(tf_ru_float_ex) We see that it’s a TensorFlow tensor, we see that the shape is 2x4x6, and we see the data type is float32. Because we haven’t run it in a TensorFlow session yet, it doesn’t have any values attached to it. For the second example, we’ll generate random int32 numbers. tf_ru_int_ex = tf.random_uniform([2, 4, 6], minval=0, maxval=100, dtype=tf.int32) So we’re going to use the tf.random_uniform, the shape we want is 2x4x6, the min value is going to be 0, the max value is going to be 100, and this time the data type is tf.int32, and we’re going to assign it to the Python variable tf_ru_int_ex. Let’s print this variable we just created. print(tf_ru_int_ex) We see that it’s a TensorFlow tensor, the shape is 2x4x6, and the data type is int32. Like the previous example, because we haven’t run it in a TensorFlow session, it doesn’t have any values yet. All right, now that we’ve created our TensorFlow tensors, it’s time to run the computational graph. Let’s launch the graph in a session sess = tf.Session() And let’s initialize all the global variables in the graph. sess.run(tf.global_variables_initializer()) Next, we print out our two tensors to see how tf.random_uniform operation did. Let’s now print out our first TensorFlow tensor. print(sess.run(tf_ru_float_ex)) Great! We see that it is a two by one, two, three, four by one, two, three, four, five, six. That’s what we expect. We see that all the numbers are floating point numbers, and we see that the tensor’s full of random numbers pulled from the random uniform distribution with a minimum value of zero and a max value of one. So none of the numbers are below zero and none of the numbers are above the number one. Let’s now print our second TensorFlow tensor example. print(sess.run(tf_ru_int_ex)) We see, again, that it is a 2x4x6 tensor. We see that the numbers are all integer numbers, and we see that the tensor’s full of random numbers pulled from the random uniform distribution, with a minimum value of zero, we see a zero here, and a max value of 100. So none of the numbers are below zero; none of the numbers are above 100. Perfect! It worked. We were able to generate a TensorFlow tensor full or random numbers in a given range by using the tf.random_uniform operation.
https://aiworkbox.com/lessons/generate-tensorflow-tensor-full-of-random-numbers-in-a-given-range
CC-MAIN-2019-51
refinedweb
650
63.29
GETRUSAGE(3) GETRUSAGE(3) getrusage - get information about resource utilization #include <sys/time.h> #include <sys/resource.h> #define RUSAGE_SELF 0 /* calling process */ #define RUSAGE_CHILDREN -1 /* terminated child processes */ int getrusage(int who, struct rusage *rusage); Getrusage returns information describing the resources utilized by the current process, or all its terminated child processes. This routine is provided for compatibility with 4.3BSD._minflt the number of page faults serviced without any I/O activity; here I/O activity is avoided by "reclaiming" a page frame from the list of pages awaiting reallocation. Page 1 GETRUSAGE(3) GETRUSAGE(3) ru_majflt the number of page faults serviced that required I/O activity. messages sent over sockets. ru_msgrcv the number of messages received from sockets. remaining fields are not maintained by the IRIX kernel and are set to zero by this routine. The numbers ru_inblock and ru_oublock account only for real I/O; data supplied by the caching mechanism is charged only to the first process to read or write the data. The ru_msgsnd and ru_msgrcv fields keep count of IPC messages sent and received via the socket (2) interface only. The ru_maxrss field count includes shared pages. The possible errors for getrusage are: [EINVAL] The who parameter is not a valid value. [EFAULT] The address specified by the rusage parameter is not in a valid part of the process address space. wait(2), see timers(5), gettimeofday(3) for details on the time resolution. Page 2 GETRUSAGE(3) GETRUSAGE(3) There is no way to obtain information about a child process that has not yet terminated. PPPPaaaaggggeeee 3333
https://nixdoc.net/man-pages/IRIX/man3/getrusage.3.html
CC-MAIN-2021-10
refinedweb
266
55.84
This Flask tutorial covers the idea of templates and using Bootstrap for your styling / CSS needs. First, we cover templates. The idea of templates is two-fold. First, you use templates so you can have one location for code that corresponds to your navbar, for example. If you had the navbar code on every single page, think about what you'd have to do if you wanted to change your navbar. Yikes! Thus, templating is used for this. Also, with the marriage of Python and HTML in mind, we use the Jinja templating of Flask to pass variables from Flask to HTML. If you would like an explanation of the following HTML code, see the above video.> </html> <p>Hi there how ya doin!?</p> Notice the line: "{{ url_for('static', filename='css/bootstrap.min.css') }}" The curly braces denote a variable, which is made possible by Jinja templating. This variable, is whatever the value of the function, url_for, is. The parameters we pass through our url_for function are: Location directory (static), and then the filename (css/bootstrap.min.css). This is how we can dynamically reference the files, no matter where the user is located on our website. Variables can be the result of functions built into Flask (url_for is a Flask function), but they can also be variables that you have passed from your Python script, through Flask and Jinja templating, through to your HTML page. Logic via Jinja templating looks like: {% if something %} do something {% endif %} Next, to continue on, we're going to need a few things. We're referencing a favicon, so we let's grab that here. Now we also need Bootstrap. Once there, click on download Bootstrap, and then again on Download Bootrap. You want the regular version, not the source code or the Sass. Once downloaded, extract the files and you should have css, fonts, and js directories. Take those three directories, and move them to within your "static" directory. Finally, we need to modify our init.py file, so: File: __init__.py, server location: /var/www/PythonProgramming/PythonProgramming/__init__.py from flask import Flask, render_template app = Flask(__name__) @app.route('/') def homepage(): return render_template("main.html") if __name__ == "__main__": app.run() Here, the major changes are that we import render_template, and then we return render_template, with "main.html" as the parameter. As usual, every time we modify the Python files, we need to run the following via SSH: service apache2 restart Now you should be able to load your website successfully, seeing any text you put in your HTML file, as well as noticing that the Bootstrap CSS file is affecting your styling. If you are having trouble anywhere, leave a comment on the YouTube video. Otherwise, you're ready to move on to the next tutorial.
https://pythonprogramming.net/bootstrap-jinja-templates-flask/
CC-MAIN-2019-26
refinedweb
465
75
The ANTLR example we’ve seen in the last two posts (part 1 and part 2) produced a simple calculator that accepted integers and the four arithmetic operators and calculated the answer. This sort of parser is fine if you need to parse the input once only. However, suppose we wanted to write an application that allowed the user to enter a mathematical function f(x) as an algebraic expression and then draw a graph of that function between two values of x. In that case, we’d need to generate a number of values of f(x) for various values of x between the endpoints. One way of doing this is to parse the input string repeatedly, each time passing in the required value of x. However, this is rather inefficient, and ANTLR offers a better alternative: generating an abstract syntax tree or AST, and then using the AST to evaluate the input expression. So what is an AST? To understand this, we need to understand what a parser does when it parses an input string. For example, with our simple calculator, we can give the input string 3+4*5. Since multiplication has a higher precedence than addition, 4*5 is done first. The parser treats the input as a tree like this: To evaluate the tree, it is traversed or ‘walked’ in a depth first manner, which means that we start at the root ‘+’ and go as deep into the tree as we can along each child, then evaluate the operations as we come back out of the tree. Thus going down the left branch we encounter 3, which is saved until the result of the right branch is found. Going down the right branch as far as possible we encounter * then 4. Backing up from 4 to * we then go down the right branch and find 5. Both children of * are now determined so we can apply the * operator to get 20. This determines the right branch of the +, so we can now apply that operator and get 3+20=23. In the more general situation mentioned above, where we need to evaluate the input several times, perhaps with different values at some of the nodes, it would clearly be less work if we didn’t have to generate the tree afresh each time before we traverse it. ANTLR allows us to generate an AST from a grammar file and then define a tree grammar which uses the AST as input, rather than the original string. Essentially what we need to do is write a grammar in the usual way and then write a second grammar that operates on the tree nodes rather than the original grammar rules. This might sound like more work (OK, it is), but usually writing the tree grammar is easier than writing the original grammar, and we are rewarded with a more efficient program. To illustrate how this is done, we’ll expand our calculator grammar to an algebraic function evaluator. To this end, we want the numbers it uses to be doubles instead of ints. We also want to support a few more operations, such as raising to a power (using the ^ operator) and the unary minus for negating an expression. The user can enter an algebraic expression using x as the variable, and then specify the range of x values over which f(x) is to be calculated. Here’s the complete grammar file which implements this, and which generates the AST. We’ll explain the new syntax required for AST generation afterwards. grammar Polynomial; options { language=CSharp3; TokenLabelType=CommonToken; output=AST; ASTLabelType=CommonTree; } tokens { UNARYMINUS; } @lexer::namespace{Polynomial} @parser::namespace{Polynomial} // START:expr public startProg : expr { System.Console.WriteLine($expr.tree.ToStringTree());}; expr : multExpr ( '+'^ multExpr | '-'^ multExpr )* ; multExpr : powerExpr ('*'^ powerExpr | '/'^ powerExpr )* ; powerExpr : unaryExpr ('^'^ unaryExpr)? ; unaryExpr : '-' atom -> ^(UNARYMINUS atom) | atom ; atom : DOUBLE | ID | '('! expr ')'! ; // END:expr // START:tokens ID : 'x' ; DOUBLE : '-'? '0'..'9'+ ('.' '0'..'9'*)?; // Use uppercase Skip() for C# WS : (' '|'\t'|'\r'|'\n')+ {Skip();} ; // END:tokens (If you’re wondering about the grammar name, I originally designed the grammar for calculating polynomials, but it grew in the telling.) Note that we’ve added 3 lines to the options section on lines 5 to 7. Actually, the options are those generated by default by the Visual Studio plugin, so you can just leave them as they are if you’re starting a new grammar file. On line 10 we have a ‘tokens’ section, in which we define a single token, UNARYMINUS. Since the – sign is used for two purposes (subtraction and negation), the parser gets confused at the tree stage, so I’ve used a token as a label for the unary minus. We’ll see how this works when we define the tree grammar in a minute. The remainder of the grammar should be fairly self-explanatory if you’ve read the earlier posts, except for the bits that generate the AST. This bit does require some careful thought as to what nodes you want to be in the AST and how they should be structured. We’ve inserted a startProg rule at line 18. This isn’t really needed here, but when you’re starting out with ASTs it’s useful to see that actual tree that is produced, so that’s what this rule does in addition to calling the expr rule which is where the actual parsing takes place. Now look at the ‘expr’ rule on line 21. We’ve defined it as a multExpr on its own, or as two multExprs joined by + or -. The solo multExpr on line 22 is unadorned since we want this node to be placed in the AST as it is. The + rule on line 23 however has 3 parts to it (the left and right multExpr operands and the + operator). Comparing with the diagram above, we see that we want this expression to be placed in the tree with the + as the root and the two multExprs as its children. This is indicated on line 23 by placing a ^ after the term that is to be the root of the node, in this case, the + sign. The same technique is used in creating the other AST nodes in the expr, multExpr and powerExpr rules. The unaryExpr on line 39 is a bit different. In order to distinguish this use of ‘-’ from the subtraction rule in expr, we want the node in the AST to use the UNARYMINUS token as the root node rather than a bare ‘-’ symbol. To do this we’ve used a rewrite rule. This is defined by using the arrow -> followed by the form we want the node in the AST to have for this rule. Nodes in the AST always have the form (root child1 child2…), that is, the first node is the root and the other nodes are its children. Thus here UNARYMINUS is the root and the atom is its single child. Finally, look at the atom rule on line 44. An atom consists of a DOUBLE, which matches a double floating point number, or an ID, which here we’ve restricted to the single variable name ‘x’, or an expr in parentheses. The parentheses serve only to fence off an expression from other terms on either side, and once the expr has been identified, the parenthses are no longer needed. We therefore don’t need them in the AST. To exclude a term from the AST, place a ! after it, as we’ve done here. Now we can look at the tree grammar. To create a tree grammar file, you can use Visual Studio’s Add New Item dialog and select ANTLR Tree Grammar. Here’s the complete file: tree grammar PolynomialTree; options { language=CSharp3; ASTLabelType=CommonTree; tokenVocab=Polynomial; } @namespace{Polynomial} @header { using System; } // START:node public start[double x] returns [double value] : a = node[x] {$value = a;}; node [double x] returns [double value] : ^('*' a = node[x] b = node[x]) {$value = a * b;} | ^('/' a = node[x] b = node[x]) {$value = a / b;} | ^('+' a = node[x] b = node[x]) {$value = a + b;} | ^('-' a = node[x] b = node[x]) {$value = a - b;} | ^('^' a = node[x] b = node[x]) {$value = Math.Pow(a, b);} | ^(UNARYMINUS a = node[x]) {$value = -a;} | ID {$value = x;} | DOUBLE {$value = Double.Parse($DOUBLE.text);} ; // END:node Notice this is defined as a ‘tree grammar’ (not just a ‘grammar’) on line 1. In the options on line 6 we’ve specified a ‘tokenVocab’. When ANTLR processes the original grammar file it produces a file containing the tokens used in that file, and to ensure that the tree grammar uses the same tokens, we load in that file. The tokens file has the same name as the grammar with the suffix ‘.tokens’. If you want to see it, it’s located under your project folder in obj\x86\Debug. The start rule on line 15 is the entry point into the tree. Note that rule names in the tree grammar don’t have to match those in the original grammar, since you’re effectively defining a new grammar with tree nodes as input instead of a string. You might want to make the names the same, but I’ve kept them different here to show that they are in fact separate entities. Since we want to walk the tree with various values for x, we need to define the start rule so that it accepts a parameter. This is done by enclosing the parameter in square brackets (NOT parentheses, as you’d do in a normal C# method call). The start rule also returns the result of the calculation as ‘value’. Its only action (on line 16) is to call a ‘node’ and pass x along to that node. Again, note that square brackets are used to call a rule with a parameter. The meat of the tree grammar is in the ‘node’ rule on line 18. We list all the node types that can occur and define an action for each. We must begin each compound node (one that contains more than one term) with a ^ and enclose the node in parentheses; apart from that, it’s much the same as a rule in the original grammar. On line 24, we make use of the UNARYMINUS token to recognize a unary minus operator. On line 25, the value of the parameter x is assigned whenever an ID is encountered in the tree, and on line 26 we parse a double floating point number. We don’t have any references to the various rules like expr, multExpr and so on that were in the original grammar, since the original grammar took care of all that and built an AST where the nodes had a much more uniform structure. The precedence of the various operators is built into the AST (remember that lower down nodes are processed first in a depth-first traversal), so we don’t need to specify that either. Finally, we need some C# code to use the AST to do some real calculations. Here’s the program: using Antlr.Runtime; using Antlr.Runtime.Tree; using System; using System.IO; using System.Text; namespace Polynomial { class Program { static void Main(string[] args) { Console.Write("Enter expression: "); string expression = Console.ReadLine(); Console.Write("Enter low value of x: "); double xLow = Double.Parse(Console.ReadLine()); Console.Write("Enter high value of x: "); double xHigh = Double.Parse(Console.ReadLine()); Stream exprStream = new MemoryStream(ASCIIEncoding.Default.GetBytes(expression)); // Use the parser to build the AST ANTLRInputStream input = new ANTLRInputStream(exprStream); PolynomialLexer lexer = new PolynomialLexer(input); CommonTokenStream tokens = new CommonTokenStream(lexer); PolynomialParser parser = new PolynomialParser(tokens); var result = parser.startProg(); // Use the tree to do the evaluation CommonTree tree = (CommonTree)result.Tree; CommonTreeNodeStream nodes = new CommonTreeNodeStream(tree); PolynomialTree walker = new PolynomialTree(nodes); double xStep = (xHigh - xLow) / 10.0; for (double x = xLow; x <= xHigh; x += xStep) { nodes.Reset(); double value = walker.start(x); Console.WriteLine("f(" + x + ") = " + value); } } } } On lines 13 to 18 we read in the data from the user. The parser needs to read its input from an ANTLRInputStream, and that requires a C# Stream object, so we need to convert the string containing the algebraic expression into a Stream, which is done on line 19. Next we need to call the parser based on the original grammar to build the AST. This is done on lines 22 to 26. These steps are pretty much the same as in the original use of the parser from the previous post. The difference is that the ‘result’ returned from the parser on line 26 is an AST rather than the result of a calculation. (Its actual data type is pretty horrendous, so we’ve used a ‘var’ to declare it.) This call to startProg() will print out the AST. Once we’ve got the AST, we build the tree parser on lines 28 to 31. The CommonTreeNodeStream on line 30 acts as an input stream for the tree parser, and the PolynomialTree object ‘walker’ on line 31 is the parser that will do the evaluation. The loop on line 33 calls the walker for each value of x and prints out the result. Note that we need reset the ‘nodes’ stream before each call since after the walker processes this stream, the marker in the stream is at the end of the input. Here’s a typical run of the program: Enter expression: x^2 - (-3.14 + (4.5 - 9.03/x))*(5.43 - x) Enter low value of x: 10 Enter high value of x: 30 (- (^ x 2) (* (+ -3.14 (- 4.5 (/ 9.03 x))) (- 5.43 x))) f(10) = 102.08849 f(12) = 147.991275 f(14) = 202.12755 f(16) = 264.40975625 f(18) = 334.78925 f(20) = 413.236845 f(22) = 499.733968181818 f(24) = 594.2682375 f(26) = 696.831080769231 f(28) = 807.416375 f(30) = 926.01963 There’s no error checking, so if the user makes a mistake entering the expression or enters a value of x that causes a math error (like division by zero), the program will just crash, but error handling is a whole new ball game (and is probably harder than writing the original grammar), so we’ll leave it here for now.
http://programming-pages.com/2012/07/
CC-MAIN-2013-20
refinedweb
2,379
61.97
Hello I'm new to c++ and to all forms of programming and am trying to get the source code for a program that prints out the hex value of any given byte in a file. I've gone through; this site, google, yahoo, cplusplus.com, C++ for dummies 5th Edition and, unbelievably, more. The answer just seems to elude me. No one says how to do this. I can only get the size of a file or read a txt file, but not the hex, or any other value of any given byte in a file. Thats the problem. Can ANYONE show me the source of a simple program that does this? I'm going to show you the closest I could get to this a majority may lose your lunch when you see it but you'll understand what a state I'm in with which I can only get the character of in a txt file and not the value of a byte of anyfile. I know I'm supposed to make the char unsigned but that stops me from compiling at all and give me errors. [tag]Thanks for reading.Thanks for reading.Code:#include <iostream> #include <fstream> using namespace std; int x, y; int main () { char * buffer; Start: cout << "Enter the number of the byte you are looking for: "; cin >> x; if (x<1) { cout << "We begin counting at 0. Please try again.\n"; goto Start;} ifstream is; is.open ("Yo.bin", ios::in|ios::binary); is.seekg (x, ios::beg); is.read (buffer,1); is.close(); cout << "\nThe hex value of byte number"<< x <<" is "; cout.write (buffer,1); cout << "\n\nDo you want to return to the start of this program?\n"; cout << "Press 1 for YES or any other key for NO: "; cin >> y; if (y == 1) {goto Start;} if (y == !1) {goto End;} End: cout << "Goodbye!\n"; delete[] buffer; return 0;} [/tag]
http://cboard.cprogramming.com/cplusplus-programming/102797-please-help-me-code-read-value-byte-file.html
CC-MAIN-2014-23
refinedweb
321
83.96
High Quality Stapler High Quality Office Desktop Standard 20 Sheets Paper Manual Stationery Metal Stapler US $0.78-$0.95 / Piece 3600 Pieces (Min. Order) Top Sponsored Listing Promotional gifts 2018 candy color manual fashion school students stationery 12 Sheets office mini stapler US $0.25-$0.40 / Set 3000.0 Sets (Min. Order) Mini Stapler 10# Metal durable fashion color cute stapler shool escola documents stationery supply office utility accessories US $0.49-$0.55 / Piece 50 Pieces (Min. Order) Geometric stapler dazzle color fashion mini student stapler office supplies US $0.85-$1.25 / Piece 50 Pieces (Min. Order) Fashion Decorative Pink Rhinestone Stapler CD-Q015 US $1.18-$2.88 / Piece 1000 Pieces (Min. Order) fashion mini stapler US $0.01 / Piece 5000 Pieces (Min. Order) 2020 Top Fashion Cute Cartoon Office Mini Stapler for School Student Modern Basketball Design Stationary US $0.26-$0.28 / Piece 1000 Pieces (Min. Order) Promotional gifts 2018 candy color manual fashion school students stationery office mini stapler US $0.45-$0.50 / Piece 500 Pieces (Min. Order) 2020 newest fashional mini size ivory and light blue fashion stapler with a staple remover US $0.77-$0.87 / Piece 3000.0 Pieces (Min. Order) new business ideas for 2021 Sweden hot sales luxury stationery organizer office fashion book binding mini cartoon stapler US $5.95-$7.33 / Piece 10 Pieces (Min. Order) Top fashion office basic style hard ABS plastic mini bear shaped environmental manual function stapleless stapler US $0.50-$0.95 / Piece 3000 Pieces (Min. Order) Korea creative fashion wooden Mini stapler cute cartoon animal color US $0.37-$0.47 / Piece 500 Pieces (Min. Order) Customization cute plastic mini fashion No.10 stapler US $0.31-$0.36 / Piece 7200.0 Pieces (Min. Order) Fashion School Mini Plastic Funny Cute Stapler US $0.10-$0.86 / Piece 1000.0 Pieces (Min. Order) 2018 Top Fashion Cute Cartoon Office Mini Stapler for School Student Modern Design Stationary US $0.20-$1.00 / Piece 500 Pieces (Min. Order) Fashion Cute Office Mini Stapler for School Student / standard stapler US $0.05-$0.50 / Piece 100 Pieces (Min. Order) Rose Gold/ Gold Stapler with Staples | Office Desk Accessory | School Supplies US $2.98-$4.76 / Piece 100 Pieces (Min. Order) free shipping for small transparent green plastic fashion stapler US $0.63-$0.64 / Piece 1 Piece (Min. Order) Colorful Book Binding Kawaii Nice Stationery Bind Office Metal Stapler US $0.80-$1.20 / Piece 20.0 Pieces (Min. Order) Manufacturer Sublimation High Quality office desktop stationery metal stapler US $1.99-$2.17 / Piece 100 Pieces (Min. Order) Husien fashion stapler rose gold stainless steel office new acrylic stapler US $3.00-$5.00 / Piece 50 Pieces (Min. Order) Fashionable Factory Made A3/A4/A5 Paper Heavy Duty Power Saving stapler US $2.00-$2.60 / Piece 1 Piece (Min. Order) Wholesale hand press office modern mini clear acrylic gold stapler US $3.39 / Piece 500 Pieces (Min. Order) 2021 High quality office desktop standard 20 sheets paper manual stationery metal stapler US $0.08-$0.12 / Piece 600 Pieces (Min. Order) LULAND LL21006 High Quality Hot sale Color fashion office Plastic Promotional Stapler US $0.84 / Piece 10 Pieces (Min. Order) Stapler Pin Office Rose Gold Office Staplers Fashion Transparent Acrylic Rose Gold Stapler School Office Supplies Stationery US $2.20-$2.45 / Piece 100 Pieces (Min. Order) Promotional 2020 multi color manual fashion school gifts students stationery 15 Sheets office mini stapler US $0.27-$0.29 / Piece 1000 Pieces (Min. Order) import stationery from china factory directly sale luxury cartoon fashion colorful mini hand metal stapler for office US $1.02-$1.20 / Piece 20 Pieces (Min. Order) Strong quality yellow child mini stapler mini children stapler fashional colorful modern stapler US $0.36-$0.39 / Piece 5000.0 Pieces (Min. Order) 2018 Top fashion cartoon office supplies toothpaste shaped durable plastic cheap colorful mini stapler cute stapler US $0.55-$1.00 / Piece 3000 Pieces (Min. Order) Wholesale mini clear acrylic gold stapler US $3.09-$4.86 / Piece 100 Pieces (Min. Order) Professional School Kawaii Custom Stationery Nice Office Mini Machine Stapler US $0.50-$1.00 / Piece 20.0 Pieces (Min. Order) Fashionable design custom logo mini size promotional plastic stapler with soft grip US $0.31-$0.34 / Piece 1000 Pieces (Min. Order) Hot Sale Office Fancy Plastic Metal Stationery Mini Kawaii School Nice Stapler US $0.80-$1.20 / Piece 20.0 Pieces (Min. Order) School supplies fashion style cheap price good quality 26/6 colorful eco-friendly plastic cute book binding stapler US $0.30-$0.65 / Piece 3000 Pieces (Min. Order) Custom made glass clear acrylic stapler case US $2.65-$4.79 / Piece 100 Pieces (Min. Order) - About product and suppliers: Alibaba.com offers 55 fashion mini staplers products. About 74% of these are stapler. A wide variety of fashion mini staplers options are available to you, such as plastic, metal. You can also choose from mini, normal. As well as from manual. And whether fashion mini staplers is standard stapler, staple-free stapler, or plier stapler. There are 55 fashion mini staplers suppliers, mainly located in Asia. The top supplying country or region is China, which supply 100% of fashion mini staplers respectively. Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show fashion mini staplers or other products of your own company? Display your Products FREE now! Related Category
https://www.alibaba.com/countrysearch/CN/fashion-mini-staplers.html
CC-MAIN-2021-21
refinedweb
926
62.75
In this getting started guide on MicroPython with ESP32 and ESP8266, we will learn how to use program and flash firmware to ESP32 and ESP8266 development boards. Firstly, we will see the difference between ESP8266, ESP32, and Arduino to see why ESP32/ESP8266 should be your first choice for embedded application development using MicroPython. Secondly, we will see how to download and install uPyCraft IDE that we will use to write firmware and flash programs to ESP boards. At the end of the tutorial, you will be able to write and flash your first program to ESP boards using uPyCraft IDE. ESP32/ESP8266 for MicroPython Esp32 and Esp8266 are Wi-Fi modules which are extremely cost convenient. They are used in multiple projects including IoT and Automation. Unlike other microcontrollers like Arduino, these modules have wireless networking included which makes it possible to use and monitor devices through Wi-Fi or Bluetooth making it a great cost-effective and useful tool. Esp32 and Esp8266 both come with GPIOs. Thus, many protocols including PWM, I2C, and ADC, etc. are supported through it. Comparison between Esp32 and Esp8266 with Arduino Mega MicroPython Introduction MicroPython is the reimplementation of the software Python3 which is specially designed for microcontrollers and embedded systems. It is very similar in use to that of Python. For example, if someone knows how to write simple programs in Python then it is very easy to work in microPython as the programming language is the same. The only major difference is that it does not come with a full standard library, but it can be easily used to access lower-level hardware because it has those modules present in it. Why do We use MicroPython ? The reason for Micro-python being used so readily in the embedded fields is because it is a simple and easy to learn software development language even for beginners than C and C++. Another important is that the Python programming language is easier to learn and all beginners find it very easy when comes to the learning curve for Python as compared to C and C++. Because MicroPython is similar to python. On top of that, we can use it to program embedded boards. Therefore, the main objective of MicroPython developers is to make embedded programming as simple as possible so that hobbyists, researchers, teachers, educators, and beginners can easily learn and dive into the embedded field. Last but not the least important reason to use MicroPython is that it contains REPL (Read-Evaluate-Print Loop) which let us run the code on ESP boards without going through program compilation process. As compared to C and other embedded proramming languages, the LED blinking code for ESP32 and ESP8266 can be written in few lines as shown: import time from machine import Pin led=Pin(2,Pin.OUT) #create LED object from pin2, Set Pin2 to output while True: led.value(1) #Set led turn on time.sleep(0.5) led.value(0) #Set led turn off time.sleep(0.5) There are several devices which can be run through this software but for now, we will focus on esp32 and esp8266. As these two boards are similar they are programmed in the same way in micro-python. MicroPython Supported Board There are many MicroPython supported board available in market. We have mentioned name of some of them below: - ESP32 - ESP8266 - Adafruit Circuit Playground Express - PyBoard - Micro: Bit - Teensy 3 Serial Boards - WiPy – Pycom - All types ESP32/ESP8266 You can look in these guides to find more information on supported boards for MicroPython Installing uPyCraft IDE Now we will learn how to use ESP32 and ESP8266 in Micro-python firmware. In order to do that we need to first install an IDE. There are many integrated development environments available that can be used to write programs for ESP32 and ESP8266. We will be using uPyCraft IDE as it can be run in any major operating system and is always extremely interactive and easy. We will be installing uPyCraft IDE in Windows. Installing Python3 Before we start the installation of the uPyCraft IDE IDE, make sure you have the latest version of Python 3.7 or the latest installed on your windows-based. If you don’t have the python 3 package installed on your Windows, follow these steps to install one: - After you click the Download Python 3.9.2 button an .exe file will start downloading. After the download is completed click on it and the following appears. - Press the run button. The following screen appears. Make sure to tick Add Python 3.9 to a path and then click Install Now. - After the installation is complete the following screen appears showing that the download was successful. Download and Install uPyCraft IDE As we mentioned earlier, we will use uPyCraft IDE to program ESP32 and ESP8266 development boards. The reason we are using uPyCraft IDE is that it is simple and easy to use IDE among other micropython based IDE’s available in the market. Now we will move onto the next step. Now follow these steps to download and install uPyCraft IDE: - In order to download the uPyCraft IDE, go to this link here and download the .exe file for windows as shown below. You can also download for Linux and Mac if you are using Linux or Mac operaing system. 2. After that click on the uPyCraft_VX.exe which you download in the previous step: 3. After installing the IDE, click on its icon and the following screen will appear. Till now we have downloaded and installed uPyCraft IDE. We will use this IDE to write firmware and flash firmware to ESP32 and ESP8266. uPyCraft IDE Introduction Now lets explore different windows and components of uPyCraft IDE. First open uPyCraft IDE, you will see a window like this: uPyCraft IDE is a integrated development environment which is used to program development boards in MicroPython language. It makes the steps for firmware development, code debugging and flashing code to ESP board very easier under a single package. Now we will learn about the different sections which are found on the IDE. The picture below shows the four main sections found in the IDE. - Tools - Editor - Folder and files - Micro-python shell terminal. 1. Tools On the furthest right side, you can see the different icons which can perform multiple tasks. It is very convenient to use and each function can be achieved with ease. The picture below shows each icon labeled. - New file: By clicking on this icon a new file will be created in the editor. - Open file: This icon opens up previously saved files. - Save file: By clicking on this icon the file will be saved automatically. - Download and run: This button uploads the program code written in the editor onto your module and runs it. - Stop: By clicking on this icon the execution of the program code stops. - Connect/Disconnect: Tools>Serial is used to connect your module through the serial. - Undo: By clicking on this icon the last change done to the program code is erased and it goes back to its previous state. - Redo: This icon restores the program code which was erased by the undo button. - Syntax Check: This is a very helpful feature that helps us check any syntax mistakes in the program code in the editor. - Delete: Clears the messages in the Shell Micro-python terminal. 2. Editor In the editor section, we write our program code which is then executed and run onto the esp32/esp8266 module serially. The extension of files is .py. We can open and create multiple files for a single project here in the editor window. 3. Folder & Files This section is found at the extreme left hand side of the screen. Four different folders can be seen namely: Device: In this folder, you can view all the files which are saved on your module. When you click the device folder all the files already stored would be visible. boot.py file is the default file that runs when the program is started and the main.py file contains your main program code sd: In this folder you can view files which are already stored in your sd card. This option is available for those modules which support it. uPy_lib: This is the in-built uPyCraft library. You can view the individual files by clicking on this. Workspace: This is a very handy folder which helps you organize your files and save them in the particular directory you choose. In order to set your path double click on the workspace folder and choose your path where you want to save all the program code. In order to update your current directory go to File>Reflush Directory as shown in the picture below. After that go Tools >InitConfig and select the workspace directory where you want to store project files. 4. Micro-Python Shell Terminal All messages are displayed in this section including errors that are found at the bottom of the screen. If you don’t want to upload new files you can directly write the command in the terminal so that it can be performed quickly. Flashing MicroPython Firmware on ESP32/ESP8266 After installing uPyCraft IDE and getting its overview, before we run a sample MicroPython program on ESP32 and ESP866, we should know that how to flash the MicroPython program to ESP development boards using uPyCraft IDE. To Flash ESP32 and ESP8266 boards with uPyCraft IDE, we need to download MicroPython firmware for ESP32 and ESP8266. Follow these steps to download MicroPython firmware for ESP boards. Downloading MicroPython firmware By default, the Esp32 and Esp8622 are not flashed with micro-python. It is very easy to flash Esp32 and Esp8622 with the help of uPyCraft IDE. To download the latest version of Micro-Python firmware for the ESP32, go to the website Micro-python and in its download page here () and select the latest firmware for Esp32. The format will be of the type Esp32.bin Similalry, you can download ESP8266 MicroPython firmware from this link. Selecting the Serial Port In order to select the serial port which will be connected open the uPyCraft IDE and go to Tools>Serial and select the Com port for your Esp32. To select the module go to Tools>Board and select Esp32. Here you can see that two serial ports are showing in my case. Because I have connected both ESP32 and ESP8266 boards with my computer. Select Board To select boards, go to Tools>Boards and select the board which you are using. For example, if you are using ESP32, select ESP32 and if you are using ESP8266 then select ESP8266. Flashing/Uploading MicroPython Firmware As we have already selected our board and the Com port, now we will flash the firmware to ESP32 or ESP8266. In order to do that go to Tools>Burn Firmware and click on it. After we click BurnFirmware the following screen will appear. Choose the following options. Updating Firmware - In the board section, choose ESP32 or ESP8266 whichever board you are using. - For erase_flash choose the yes option. - Select the correct Com port according to your module. For firmware choose the Users option and place the Esp32.bin file which we downloaded before as shown below: Now we have selected all the correct options. It is the time to flash micropython firmware to ESP32 or ESP8266. For example, if you are using ESP32. Press the hold-down “BOOT/FLASH” button on the Esp32 module. It can be seen clearly in the following picture. When we will hold down the BOOT button at the same time click OK in the burn firmware window. After the burn and download process started, you can release the boot button. The burning process will be completed in a few seconds and the firmware would be flashed onto Esp32. Erase False Error If you don’t see progress in the “EraseFlash” bar and you will get this message “erase false” after some time. That means either ESP32 or ESP8266 boards are not connected with your computer correctly or you have not pressed the “BOOT/FLASH” button correctly. In order to avoid this error, do the above steps again and this time make sure to press the “BOOT/FLASH” button on your ESP boards so that it can go to flashing mode. A similar process is done if we want to flash Micro-Python with Esp8266. Just download the firmware from Micro-Python download for Esp8266 and follow the same steps as Esp32. Instead of choosing Esp32 choose Esp8266 instead. Now we are ready to program our modules in MIcro-python with the help of uPyCraft IDE. Writing Your First MicroPyhton Script So far in this getting started tutorial on micropython for esp32 and esp8266 learned to install software that is required to process and execute cause on esp32 and esp8266. now let’s write a simple script on esp32 and execute it on the esp32 development board. We will write a simple LED blinking script for the esp32 and esp8266 Python programming language and will upload it to the ESP board and see how it works. Serial Connection with ESP32 and ESP8266 uPyCraft IDE Now let’s see how to establish a communication between uPyCraft IDE and ESP32 and ESP8266. After flashing micropython firmware to ESP32 and ESP8266, we can communicate with these ESP boards using this IDE. - Now connect your boards to your computer using a USB cable. - After that go to tools-> boards and select the board which you are using either ESP32/ESP8266. 3. Now press the connect button from the right toolbar. It will make the serial connection between IDE and ESP32/ESP8266. 4. As soon as you click the connect button, the >>> will show up in the Shell window of uPyCratf IDE and it shows that a successful connection has been developed with your board. Now to see its working, you can type this command on the shell window and it will print “Hello world” on console in response. >>> print('Hello World') Hello World >>> Note: If you don’t see the output of print() command on the shell console of uPyCraft IDE that means your ESP board has not made a serial connection with your computer and you cannot flash micropython code to your ESP boards. You should check the connection and try again. Creating the main.py file on your board In this section, we will see how to create new MicroPython file and download and run on ESP32 and ESP8266 boards. Follow these steps: 1. From the left sidebar, click the “New file” button and it will create a new file and it will up in editor window as a untitled. 2. Now copy this MicroPython code into your editor window. from machine import Pin from time import sleep led = Pin(2, Pin.OUT) while True: led.value(not led.value()) sleep(0.5) 3. After that click on the “Save file” button to save the file on your computer. You can select any location in your computer where you want to save this file. 4. When you click on the “Save file” button, A new window as shown below, will open. Given your file a name such as main.py and save it in your computer. Note: You can give any name to this file. But the extension of file should always be .py. 5. After saving the file, you will find the following in your uPyCraft IDE (the boot.py file under). 6. Now let’s download the script to ESP32 or ESP8266. To download script, click on the “Download and Run button” as shown below: 7. Once you click the “Download and Run button” and code downloaded successfully to your board, you will see a message saying “download ok” in the Shell window. At the end, we need to test the script which we just downloaded to ESP32 or ESP8266 board. In order to run the MicroPython script on ESP boards, just press the stop button. Now press the ENABLE or RESET Button of ESP32 or ESP8266.You should see that on Board LEDs of ESP32 or ESP8266 will start blinking with a delay of half second. This is all about getting started with Micropython for esp32 and esp8266 using uPyCraft IDE. In the coming tutorial, we will further explore ESP32 and ESP8266 peripherals using MicroPython. Further Reading:
https://microcontrollerslab.com/getting-started-with-micropython-on-esp32-and-esp8266-upycraft-ide/
CC-MAIN-2021-39
refinedweb
2,747
72.05
Introduction to MobX 4 for React/Redux Developers shawn swyx wang 🇸🇬 Mar 17 Updated on Mar 19, 2018 MobX uses the "magic" of observables to manage state and side effects. This not only has a learning curve but is a different programming paradigm altogether, and there is not a lot of up-to-date training material on how to use React with Mobx, while there is far, far more content on using React with Redux. In this intro we will progressively build up a simple app that pings a mock API to see how MobX works with React, and then make a MobX + React Kanban board to show off the power of MobX! How we will proceed: - Example A. Build a basic app that lets you type an text Input that is reflected in a Display. We show the basics of establishing observables and observercomponents. - Example B. We split up the Input and Display into siblings to simulate a more complex app. We also introduce async state updating by pinging a mock API. To do this we use the mobx-react Providerto put MobX state into React context to demonstrate easy sibling-to-sibling or sibling-to-parent communication similar to react-redux. - Example C: We add a secondary Display to our app. Demonstrates the usefulness of computedvariables (a Mobx concept). - Example D: We scale our app up to do an arbitrary number of Displays. Demonstrates using arrays and maps for our MobX state. - Example E: Tune up and Cleanup! We add the MobX dev tools, put our whole app in useStrictmode and explain the formal use of MobX actions and transactions for better app performance. This tutorial will use the recently released MobX 4 and MobX-React 5. A lot of people associate MobX with decorators, which are only a stage 2 proposal. That (rightfully) causes hesitation for some people, but MobX 4 introduces non-decorator based syntax so we don't have that excuse anymore! However; for tutorial writers this is a problem, because you have to decide to either teach one or the other or both. To resolve this, every example here will use the non decorator syntax as the primary version, but will have a clone that uses decorators to show the equivalent implementation (e.g. Example A vs Decorators A). Note to Reader: There is not an attempt at recommending MobX over Redux or vice versa. This is solely aimed at factually introducing core MobX concepts for people like myself who were only familiar with Redux. I will attempt to draw some conclusions but reasonable people will disagree. Additionally, Michel Weststrate has stated repeatedly that both libraries address completely different requirements and values. EXAMPLE A1: React + MobX Here is our very basic app using React + MobX: import { decorate, observable } from "mobx"; import { observer } from "mobx-react"; const App = observer( class App extends React.Component { text = ""; // observable state render() { // reaction return ( <div> Display: {this.text} <br /> <input type="text" onChange={e => { this.text = e.target.value; // action }} /> </div> ); } } ); decorate(App, { text: observable }); (Example A1, Decorators A1) You can see here that observer connects the observable text property of App so that it rerenders whenever you update text. While this is nice, it really isn't any different from using state and setState. If you have React you don't need MobX just to do this. EXAMPLE A2: So what? Let's try separating the concerns of state and view model: // this deals with state const appState = observable({ text: "" // observable state }); appState.onChange = function(e) { // action appState.text = e.target.value; }; // this deals with view const App = observer( class App extends React.Component { render() { // reaction const { text, onChange } = this.props.store; return ( <div> Display: {text} <br /> <input type="text" onChange={onChange} /> </div> ); } } ); // you only connect state and view later on... // ... <App store={appState} /> (Example A2, Decorators A2) Here the store: - is explicitly passed in as a prop (we will use the Providerpattern later) - brings its own action handlers along with it (no separate reducers to import) EXAMPLE A3: But that's not OO Look at this part of the above code. const appState = observable({ text: "" // observable state }); appState.onChange = function(e) { // action appState.text = e.target.value; }; Yeah, I dont like that. The method isn't encapsulated within the observable. Can we make it more object oriented? // import { decorate } from 'mobx' class State { text = ""; // observable state onChange = e => (this.text = e.target.value); // action }; decorate(State, { text: observable }); const appState = new State() (Example A3, Decorators A3) ahh. much better (especially the Decorators example where you don't need to use decorate)! EXAMPLE B1: But I hate prop drilling! Just like react-redux lets you put your store in a Provider, mobx-react also has a Provider that works in the same way. We will refactor our Display and our Input components into sibling apps: import { inject, observer, Provider } from "mobx-react"; class State { text = ""; // observable state onChange = e => (this.text = e.target.value); // action } decorate(State, { text: observable }); const appState = new State(); const Display = inject(["store"])( observer(({ store }) => <div>Display: {store.text}</div>) ); const Input = inject(["store"])( observer( class Input extends React.Component { render() { // reaction return <input type="text" onChange={this.props.store.onChange} />; } } ) ); // look ma, no props const App = () => ( <React.Fragment> <Display /> <Input /> </React.Fragment> ); // connecting state with context with a Provider later on... // ... <Provider store={appState}> <App /> </Provider> (Example B1, Decorators B1) Note that if I were to add a -second- store, I could simply define another observable, and pass it in to Provider as another prop, which I can then call from any child. No more redux style combineReducers! Using a Provider also helps avoid creating global store instances, something that is strongly advised against in MobX React Best Practices. MobX 4 Note: If you just try to use the old MobX observer(['store']) shorthand, which was always synonymous with observer + inject(['store']), you will get a very nice deprecation warning to not do that anymore. I found this inject/observer syntax a bit fiddly, so this is a nice little utility function you can define to type less: const connect = str => Comp => inject([str])(observer(Comp)); Hey! that's like our good friend connect from react-redux! The API is a little different, but you can define whatever you want 🤷🏼♂️. EXAMPLE B2: Ok but what about async Well for async API fetching we have a few choices. We can go for: mobx-thunk mobx-observable mobx-saga - and about 300 other options. They're all special snowflakes and we can't wait to see what you decide on! pause for rage quit... Ok if you couldnt tell, I was kidding. Using observables means you can "just" mutate the observables and your downstream states will react accordingly. You might have observed that I have been annotating the code examples above with // reaction, // action, and // observable state, and they mean what they normally mean in English. We'll come back to this. Back to code! Assume we now have an async API called fetchAllCaps. This is a Promise that basically capitalizes any text you pass to it, after a 1 second wait. So this simulates a basic request-response flow for any async action you want to take. Let's insert it into our example so far! class State { text = ""; // observable state onChange = e => { // action this.text = e.target.value; fetchAllCaps(e.target.value).then(val => (this.text = val)); }; } decorate(State, { text: observable }); const appState = new State(); (Example B2, Decorators B2) Well that was... easy? Note that here we are using the public class fields stage 2 feature for that onChange property, while not using decorators, which are also stage 2. I decided to do this because public class fields are so widespread in React (for example, it comes with create-react-app) that you likely already have it set up or can figure out how to set it up in Babel if you need to). CONCEPT BREAK! Time to recap! We've come this far without discussing core MobX concepts, so here they are: - Observable state - Actions - Derivations (Reactions and Computed values) In our examples above we've already used observable states as well as defined actions that modify those states, and we have used mobx-react's @observer to help bind our React components to react to changes in state. So that's 3 out of 4. Shall we check out Computed values? EXAMPLE C: Computed Values Computed values are essentially reactions without side effects. Because Observables are lazy by default, MobX is able to defer calculations as needed. They simply update whenever the observable state updates. Another way of phrasing it, computed values are derived from observable state. Let's add a computed value that just reverses whatever is in text: class State { text = ""; get reverseText() { return this.text .split("") .reverse() .join(""); } onChange = e => { // action this.text = e.target.value; fetchAllCaps(e.target.value).then(val => (this.text = val)); }; } decorate(State, { text: observable, reverseText: computed }); const appState = new State(); // lower down... const Display2 = inject(["store"])( observer(({ store }) => <div>Display: {store.reverseText}</div>) ); (Example C1, Decorators C1) Cool! It "just works" (TM) ! A fair question to have when looking at this is: why bother?? I can always put synchronous business logic in my React render function, why have computed values at the appState level at all? That is a fair criticism in this small example, but imagine if you rely on the same computed values in multiple places in your app. You'd have to copy the same business logic all over the place, or extract it to a file and then import it everywhere. Computed values are a great way to model derivations of state by locating them nearer to the state rather than nearer to the view. It's a minor nuance but can make a difference at scale. By the way, vue.js also has computed variables, while Angular just uses them implicitly. EXAMPLE D1: Observable Arrays MobX can make basically anything observable. Let me quote the docs: - If value, all its current properties will be made observable. See Observable Object - If value is an object with a prototype, a JavaScript primitive or function, a Boxed Observable will be returned. MobX will not make objects with a prototype automatically observable; as that is the responsibility of its constructor function. Use extendObservable in the constructor, or @observable in its class definition instead. In the examples above we have so far been making Boxed Observables and Observable Objects, but what if we wanted to make an array of observables? Observable Arrays are array-like objects, not actual arrays. This can bite people in the behind, particularly when passing data to other libraries. To convert to a normal JS array, call observable.toJS() or observable.slice(). But most of the time you can just treat Arrays as arrays. Here's a very simple Todo app using an observable array: class State { text = ["get milk"]; // observable array onSubmit = e => this.text.push(e); // action } decorate(State, { text: observable }); const appState = new State(); const Display = inject(["store"])( observer(({ store }) => ( <ul>Todo: {store.text.map(text => <li key={text}>{text}</li>)}</ul> )) ); const Input = observer( ["store"], class Input extends React.Component { render() { // reaction return ( <form onSubmit={e => { e.preventDefault(); this.props.store.onSubmit(this.input.value); this.input.value = ""; }} > <input type="text" ref={x => (this.input = x)} /> </form> ); } } ); const App = () => ( <React.Fragment> <Display /> <Input /> </React.Fragment> ); (Example D1, Decorators D1) note that "just push" just works! Example D2: Observable Maps What's the difference between Observable Objects (what we used in Examples A, B, and C) and Observable Maps? Well, its the same difference between Plain Old Javascript Objects and ES6 Maps. I will quote the MobX doc in explaining when to use Maps over Objects: Observable maps are very useful if you don't want to react just to the change of a specific entry, but also to the addition or removal of entries. So if we want to have a bunch of Todo lists, where we can add new todo lists, this is the right abstraction. So if we take that App from Example D1, rename it to TodoList and put it in todolist.js with some other superficial tweaks, then on index.js, we can do this: // index.js const connect = str => Comp => inject([str])(observer(Comp)); // helper function const listOfLists = observable.map({ Todo1: new TodoListClass(), Todo2: new TodoListClass() // observable map rerenders when you add new members }); const addNewList = e => listOfLists.set(e, new TodoListClass()); const App = connect("lists")( class App extends React.Component { render() { const { lists } = this.props; return ( <div className="App"> <span /> <h1>MobX Kanban</h1> <span /> {Array.from(lists).map((k, i) => ( <div key={i}> {/*Provider within a Provider = Providerception */} <Provider todolist={k}> <TodoList /> </Provider> </div> ))} <div> <h3>Add New List</h3> <form onSubmit={e => { e.preventDefault(); addNewList(this.input.value); this.input.value = ""; }} > <input type="text" ref={x => (this.input = x)} /> </form> </div> </div> ); } } ); (Example D2, Decorators D2) And hey presto! We have a Kanban board (an expandable list of lists)! This was enabled by the dynamically expanding ability of that listOfLists which is an Observable Map. To be honest, you could probably also use Arrays to achieve this but if you have a use case that is better suited for demonstrating Observable Maps, please let me know in the comments below. Example E1: MobX Dev Tools Redux dev tools are (rightfully) an important part of Redux's value, so let's check out MobX React dev tools! import DevTools from 'mobx-react-devtools'; // npm install --save-dev mobx-react-devtools // somewhere within your app... <DevTools /> (Example E1, Decorators E1) You can see the three icons pop up: - Visualize rerenders - Audit the dependency tree - Log everything to console (use Browser console not Codepen console) You can't do time travel but this is a pretty good set of tools to audit any unexpected state changes going on in your app. Stay tuned... There is a blocking bug with mobx-dev-tools and mobx 4: and I will finish this out when the bug is fixed. However in the mean time you can check out how to explicitly define actions so that MobX can batch your state changes into transactions, which is a big performance saver: Notice how we were able to do all our demos without using the actions - MobX has a (poorly) documented strict mode (formerly useStrict, now configure({enforceActions: true});) - see the MobX 4 docs. But we need the dev tools to really show the benefits for our example app. Acknowledgements This introduction borrows a lot of code and structure from Michel Weststrate's egghead.io course, but updates the 2 year old course for the current Mobx 4 API. I would also like to thank my employer for allowing me to learn in public. The examples here were done with the help of Javid Askerov, Nader Dabit, and Michel. Other Tutorials and Further Reading Other recent guides Docs - MobX docs - common pitfalls and best practices - MobX changelog - be very careful on v3 vs v4 changes - official MobX+React 10 minute guide Older Related libraries to explore - MobX state tree and associated blogpost Contribute What other current (<1yr) resources should I include in this guide? Have I made any mistakes? Let me know below! Choosing a Programming Language A discussion on programming languages and the rationale for choosing them
https://dev.to/swyx/introduction-to-mobx-4-for-reactredux-developers-3k07
CC-MAIN-2018-30
refinedweb
2,562
55.74
Overview¶ Verde provides classes and functions for processing spatial data, like bathymetry, GPS, temperature, gravity, or anything else that is measured along a surface. The main focus is on methods for gridding such data (interpolating on a regular grid). You’ll also find other analysis methods that are often used in combination with gridding, like trend removal and blocked operations. Conventions¶ Before we get started, here are a few of the conventions we use across Verde: - Coordinates can be Cartesian or Geographic. We generally make no assumptions about which one you’re using. - All functions and classes expect coordinates in the order: West-East and South-North. This applies to the actual coordinate values, bounding regions, grid spacing, etc. Exceptions to this rule are the dimsand shapearguments. - We don’t use names like “x” and “y” to avoid ambiguity. Cartesian coordinates are “easting” and “northing” and Geographic coordinates are “longitude” and “latitude”. - The term “region” means the bounding box of the data. It is ordered west, east, south, north. The library¶ Most classes and functions are available through the verde top level package. The only exceptions are the functions related to loading sample data, which are in verde.datasets. Throughout the documentation we’ll use vd as the alias for verde. import verde as vd The gridder interface¶ All gridding and trend estimation classes in Verde share the same interface (they all inherit from verde.base.BaseGridder). Since most gridders in Verde are linear models, we based our gridder interface on the scikit-learn estimator interface: they all implement a fit method that estimates the model parameters based on data and a predict method that calculates new data based on the estimated parameters. Unlike scikit-learn, our data model is not a feature matrix and a target vector (e.g., est.fit(X, y)) but a tuple of coordinate arrays and a data vector (e.g., grd.fit((easting, northing), data)). This makes more sense for spatial data and is common to all classes and functions in Verde. As an example, let’s generate some synthetic data using verde.datasets.CheckerBoard: data = vd.datasets.CheckerBoard().scatter(size=500, random_state=0) print(data.head()) Out: northing easting scalars 0 -3448.095870 2744.067520 -417.745960 1 -3134.825681 3575.946832 -10.460197 2 -2375.147789 3013.816880 914.277006 3 -1247.024885 2724.415915 -534.571829 4 -3332.462671 2118.273997 407.865799 The data are random points taken from a checkerboard function and returned to us in a pandas.DataFrame: import matplotlib.pyplot as plt plt.figure() plt.scatter(data.easting, data.northing, c=data.scalars, cmap="RdBu_r") plt.colorbar() plt.show() Now we can use the bi-harmonic spline method [Sandwell1987] to fit this data. First, we create a new verde.Spline: Out: Spline(damping=None, engine='auto', force_coords=None, mindist=1e-05) Before we can use the spline, we need to fit it to our synthetic data. After that, we can use the spline to predict values anywhere: spline.fit((data.easting, data.northing), data.scalars) # Generate coordinates for a regular grid with 100 m grid spacing (assuming coordinates # are in meters). grid_coords = vd.grid_coordinates(region=(0, 5000, -5000, 0), spacing=100) gridded_scalars = spline.predict(grid_coords) plt.figure() plt.pcolormesh(grid_coords[0], grid_coords[1], gridded_scalars, cmap="RdBu_r") plt.colorbar() plt.show() We can compare our predictions with the true values for the checkerboard function using the score method to calculate the R² coefficient of determination. true_values = vd.datasets.CheckerBoard().predict(grid_coords) print(spline.score(grid_coords, true_values)) Out: 0.9950450871662451 Generating grids and profiles¶ A more convenient way of generating grids is through the grid method. It will automatically generate coordinates and output an xarray.Dataset. grid = spline.grid(spacing=30) print(grid) Out: <xarray.Dataset> Dimensions: (easting: 167, northing: 168) Coordinates: * easting (easting) float64 23.48 53.42 83.37 113.3 143.3 173.2 203.1 ... * northing (northing) float64 -4.997e+03 -4.967e+03 -4.937e+03 -4.908e+03 ... Data variables: scalars (northing, easting) float64 495.6 521.3 549.0 578.7 610.0 ... Attributes: metadata: Generated by Spline(damping=None, engine='auto',\n force_co... grid uses default names for the coordinates (“easting” and “northing”) and data variables (“scalars”). You can overwrite these names by setting the dims and data_names arguments. grid = spline.grid(spacing=30, dims=["latitude", "longitude"], data_names=["gravity"]) print(grid) plt.figure() grid.gravity.plot.pcolormesh() plt.show() Out: <xarray.Dataset> Dimensions: (latitude: 168, longitude: 167) Coordinates: * longitude (longitude) float64 23.48 53.42 83.37 113.3 143.3 173.2 203.1 ... * latitude (latitude) float64 -4.997e+03 -4.967e+03 -4.937e+03 ... Data variables: gravity (latitude, longitude) float64 495.6 521.3 549.0 578.7 610.0 ... Attributes: metadata: Generated by Spline(damping=None, engine='auto',\n force_co... Gridders can also be used to interpolate data on a straight line between two points using the profile method. The profile data are returned as a pandas.DataFrame. prof = spline.profile(point1=(0, 0), point2=(5000, -5000), size=200) print(prof.head()) plt.figure() plt.plot(prof.distance, prof.scalars, "-") plt.show() Out: northing easting distance scalars 0 0.000000 0.000000 0.000000 66.785376 1 -25.125628 25.125628 35.533004 92.895113 2 -50.251256 50.251256 71.066008 124.644012 3 -75.376884 75.376884 106.599012 163.870392 4 -100.502513 100.502513 142.132016 209.836541 Wrap up¶ This covers the basics of using Verde. Most use cases and examples in the documentation will involve some variation of the following workflow: - Load data (coordinates and data values) - Create a gridder - Fit the gridder to the data - Predict new values (using predictor grid) Total running time of the script: ( 0 minutes 5.929 seconds) Gallery generated by Sphinx-Gallery
http://www.fatiando.org/verde/v1.0.0/tutorials/overview.html
CC-MAIN-2018-51
refinedweb
967
51.44
GabrielSousa Senior Member - - Feb 15, 2011 - - 4,378 - - 1,952 Code: #include <std_disclaimer.h> /* *. */ Select build corresponding to your device's codename ( OnePlus 9 Pro = lemonadep ) NikGapps - Browse /Releases at SourceForge.net A Custom Google Apps Package that Suits Everyone Needs! - Backup your personal data [IMPORTANT] - Be on latest firmware OOS 12 (C.48_1480) - Download AwakenOS rom & recovery & vendor_boot & dtbo for your variant - Boot to Bootloader - Flash the recovery Image - Flash the vendor_boot Image - Flash the dtbo Image - Reboot to Recovery Mode - “Apply Update” and select “Apply from ADB” - Adb sideload AwakenOS.zip - "Advanced" and select "Reboot to recovery" - “Apply Update” and select “Apply from ADB” - Adb sideload NikGapps.zip This build was only possible thanks to: @chandu dyavanapelli If you like my work buy me a gin & tonic cocktail If you like my work buy me a gin & tonic cocktail Donate to Gabriel Sousa Help support Gabriel Sousa by donating or sharing with your friends. XDA:DevDB Information AwakenOS v2.8 - Android12L, ROM for the OnePlus 9 Pro Builder GabrielSousa ROM OS Version: Android 12L ROM Kernel: Linux 5.4.x Created 2022-05-23 Last Updated 2022-06-13 Last edited: - -
https://forum.xda-developers.com/t/rom-12l-unofficial-lemonadep-13-06-2022-awakenos-2-8.4449409/#post-86940413
CC-MAIN-2022-27
refinedweb
193
61.97
From: Lewis Hyatt (lhyatt_at_[hidden]) Date: 2007-03-22 22:30:38 Sam Schetterer <samthecppman <at> gmail.com> writes: > > Hello. I have recently uploaded the updated sorting.zip file to vault, and > it contains the full header file for multikey quicksort. For those of you > who download the library and those of you who have downloaded the example > library, some feedback, good or bad, is appreciated. Thanks. Hi Sam- First of all, please try to exhibit more patience. Most of the people here are only working during their spare time. Secondly, you seem to be using a very bizarre interface to this mailing list. You don't quote properly, and you keep starting new threads to respond to old threads. This makes it harder for people to find your messages, assuming they are willing to put in the effort to find them in the first place. If you can't figure out how to get your mailer to work properly, please just use the gmane interface here:. IMHO, it's the best way to do it anyway. Anyway, on to the code. I don't know much about sorting. I tried compiling your radix sort example code last week. It failed to compile because of misuse of the typename keyword, either too often or not often enough. I fixed that, and was able to compile the code, but your example failed. (It tested an error condition, and reported an error). I tried writing my own test code to do some timings, but your code crashed. What compiler are you using? I am surprised the example radix code worked at all on any compiler. I haven't tried compiling the multi-key quicksort code here, but I did read the beginning of it and found a number of problems with the fundamental design. I tried to indicate a few things in the code annotations below, but I didn't make it all the way through. All things considered, I think it is great that you are so excited about C++, and that you want to apply what you know to sorting algorithms, but you still have *a lot* to learn before you will be able to write a library suitable for inclusion into Boost. Here are my suggestions: -Slow down. You don't need to post code to the vault every day. Instead, focus on learning how to improve the algorithms and the fundamental design. -Premature optimization is bad, and it seems to be your number one concern. You need to write a working library *first*, and then optimize it. Adding things like different calling conventions and i686-specific assembly instructions is necessary _way down the road_, if at all--certainly not now, when your code doesn't even work or compile on all systems, and you yourself don't know whether it is endianness-independent or not. -I think you have done a good job of identifying specific goals for the library at this point. The focus on radix sorts, multikey sorts, etc, could eventually produce something very valuable. But you need to write a coherent set of timings and test cases to demonstrate conclusively that your code is easy to use and is better than std::sort. Quoting theoretical arguments from some book is not enough; you need concrete test cases to establish this. -I think you need to learn more about how to write good generic code. One possible suggestion: read through your STL implementation of <algorithm>, <numeric>, and <vector>. Make sure you understand what each and every line does, and why it is there. If you don't have a good STL implementation to look at, then look at gcc, which is very comprehensible. Then, read through the complete source of some top-notch boost generic libraries to understand the techniques used there. I would recommend at least ptr_container and shared_ptr, which are incredibly useful and well-designed generic libraries. There are many others in Boost as well, of course. Understanding these concepts is infinitely more important than adding inline assembly language and unnecessarily tossing in the register keyword here and there. -Read all of the articles here: I hope these suggestions are useful to you and encourage you to keep working on this C++ library. Code Annotations prefixed by ==> : ------------------------------------------- //include file for multikey quicksort #ifndef BOOST_MULTIKEY_QUICKSORT_HPP #define BOOST_MULTIKEY_QUICKSORT_HPP #include "detail/insertion.hpp" #include <deque> #ifndef BOOST_NO_FASTCALL #define BOOST_NO_FASTCALL #define callingconv __fastcall #else #define callingconv #endif ==> First of all, __fastcall is a nonstandard extension to C++ and most likely does not belong in a general library. In any case, it should not be enabled by default. Also, your logic here does not make sense. You should not #define BOOST_NO_FASTCALL right before implementing fastcall! namespace boost { namespace detail { template<typename T, typename Holder> class qobjclass //I use a class because only class templates can be //partially specialized { public: inline static const T& qobj(typename const Holder& a, int d) { return a[d]; } }; ==> The use of typename in this context is incorrect. You only need to use typename to clarify that a dependent name must be a type. In this context, Holder has to be a type. it looks like you added typename everywhere because you don't fundamentally understand what it is for. I haven't noted all of the mistakes below. ==> You should not use int to index an array; std::size_t is the correct type to use to index an array, lacking any other information. ==> There is no need to type "inline" when you define a method inside a class definition, as it is already implied. When you do add it unnecessarily, it mainly just suggests that you intend your code to be "fast", but you don't understand what that means. ==> If a class has only public members, just make it a struct to signify your intent. template<typename T> class qobjclass<T, typename std::deque<T>::iterator> //I use a class //because only class templates can be partially specialized { public: inline static const T& qobj( typename const std::deque<T>::iterator& a, int d) { return *(a + d); } }; ==> What is the point of this? This is fully equivalent to the general template already. And why is std::deque appearing at all in a *generic* sorting library? Even if deque is somehow so special, why do you only support std::deque<T, std::allocator<T> > ? What about other allocators? ==> You seem to be under the impression that there is a difference between a[d] and *(a+d). There isn't. Again, the issue here is that you are more concerned with ill-motivated optimizations than with writing a good algorithm in the first place. ==> Iterators should be passed by value. ==> It looks like instead of your previous class, what you are trying to say is that Holder should be a Random Access Iterator with value_type T. You should phrase this as follows: template<typename RandomAccessIterator> class qobjclass { typedef std::iterator_traits<RandomAccessIterator> traits; public: static typename traits::value_type const& qobj( RandomAccessIterator i, typename traits::difference_type d) { return i[d]; } }; ==> With this formulation it is clear that your qobjclass is fully redundant, since all it does it provide a synonym for operator[] for a generic random access iterator. You need to re-organize the code below to be templated on a random access iterator type. Whether it is a pointer or a std::deque::iterator or something else is not your concern. //I need all of these special functions because I cannot do //partial specializations for function templates, but I need //to simulate it template<typename T> inline T& _get(typename T* a, int d) { return a[d]; } ==> Again typename is incorrect here, and this function is redundant, since this code would compile for any random access iterator, not just a pointer. Also, don't use identifiers beginning with an underscore, as the rules for when these are reserved and when they are not reserved are too hard to remember. The whole point of putting something in namespace detail is that you don't have to mangle the name by hand. template<typename T> inline T& _get(typename std::deque<T>::iterator& a, int d) { return *(a + d); } ==> This specialization is already equivalent to the generic version, which is already unnecessary. #define qobj(a, b) qobjclass<KeyHolds, Key>::qobj((a), (b)) #define get(a, b) _get<Key>((a), (b)) ==> These macros do not save enough typing to be worth the evilness of using them. Also, if you do use them, you should use the boost naming conventions and you should #undef them at the end of the file. But don't use them. template<typename KeyHolds, typename Key> void callingcov mkquicksort( typename Key* a, int l, int r, int bit,typename const KeyHolds& term) => You don't seem to understand how to make this code generic at all. The whole point of the templates is to allow any random access iterator, not just a pointer. { if(r <= l) return; register int i = l - 1, j = r, d = bit; typename const KeyHolds v = qobj(get(a, j), d);//speed increase with registers ==> I'm pretty certain it's been a good many decades since the register keyword held any meeting at all. With today's modern compilers, it's probably just as likely to confuse the optimizer and make things worse, but in all likelihood it's fully equivalent to whitespace. Do you have any evidence to back this up whatsoever? What makes you think you can tell better than the compiler what should be in a register? ==> At this point, I stopped reading, as I don't know enough to evaluate your algorithms, but I do know enough to know that this code needs to be redesigned. It's not just a matter of a few details here and there; it's a matter of a dramatic increase in your understanding of how to write a generic library in C++. -Lewis Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/03/118568.php
CC-MAIN-2020-45
refinedweb
1,675
61.77