text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
In the previous tutorial, we discussed the basics of the I2C protocol. In most of the embedded devices, either UART otherwise I2C is used for console messages. In this tutorial, we will discuss serial communication in Raspberry Pi using the I2C protocol. I2C in Raspberry Pi For serial communication over the I2C protocol, the Broadcom processor of Raspberry Pi has Broadcom Serial Controller (BSC). This standard-mode master BSC controller is NXP Semiconductor’s I2C compliant and supports a data transfer rate of 400 kbps. The BSC controller supports both 7-bit as well as 10-bit addressing. This I2C interface is accessible at pins GPIO2 (Board Pin No. 3) and GPIO3 (Board Pin No. 5). GPIO2 is Serial Data (SDA) line, and GPIO3 is a Serial Clock (SCL) line of the I2C1. These I2C pins are internally pulled up to 3.3V via 1.8 kohms resistors. That is why these pins cannot be used for general-purpose I/O where pull-up is not required. There is one more I2C peripheral BSC2 in Raspberry Pi identified as I2C0. The BSC2 master is dedicated to the HDMI interface and cannot be accessed by users. This I2C interface is present at board pins 27 (ID_SD) and 28 (ID_SC). I2C0 remains reserved for reading EEPROM of Raspberry Pi’s add-on boards called Hardware on The Top (HAT) boards. I2C0 can only talk to HAT EEPROM at address 0x50 during boot time. It is possible to access I2C0 only if both the camera interface and HDMI port are unused. To use I2C0, add the following lines to boot/config.txt. dtparam=i2c_vc=on. It needs to be enabled from the Raspberry Pi configuration. The Raspberry Pi’s BSC controllers support multi-master, multi-slave I2C. Therefore, I2C1 is sufficient to connect several I2C slaves (maximum 112 slaves) and any number of master devices. Enabling I2C1 from Raspberry Pi GUI On Raspbian, navigate to Pi Start Menu -> Preferences -> Raspberry Pi Configuration. In the pop-up window, click on the ‘Interfaces’ tab and select the ‘Enable’ radio button for I2C. You can also enable or disable other interfaces as required. To take changes effect, restart Raspberry Pi. After rebooting, GPIO3 and GPIO5 can be used to connect Raspberry Pi as I2C master with an I2C bus or to any I2C slave. Enabling I2C1 from Terminal The I2C support for Raspberry Pi’s ARM core and Linux Kernel can also be enabled from the Terminal (Bash Shell on Raspberry Pi). Open Terminal and run the following command: sudo raspi-config In the Raspberry Pi Software Configuration Tool, navigate to ‘Interfacing Options’. In older Raspberry Pi models, navigate to ‘Advanced Options’ and then ‘I2C’. In the pop-up window, enable the Arm I2C interface and select ‘Yes’ to load the I2C Kernel Module. Now reboot Raspberry Pi by entering the following command: sudo reboot After rebooting, GPIO3 and GPIO5 can be used to connect Raspberry Pi as I2C master with an I2C bus or to any I2C slave. Testing I2C port After enabling I2C user-port and rebooting Raspberry Pi, we can test if the port is available as a Linux device or not. In the Terminal window, run the following command: ls /dev/ Or ls /dev/*i2c* I2C1 must appear as one of the Linux devices available as shown in the image below. Note that in the older versions of Raspberry Pi, the I2C user port is identified as I2C0 instead of I2C1. In all 256M Raspberry Pi models, the I2C user port is 0, and in rest, all it is 1. Scanning I2C slaves on Raspberry Pi After enabling the I2C user port, the connected I2C slaves can be detected using i2c-tools. First of all, install the i2c-tools by running the following command in the Raspberry Pi Terminal: sudo apt-get install -y i2c-tools Now run the following command to scan connected I2C slaves: sudo i2cdetect -y 1 As already mentioned that in older versions of Raspberry Pi, I2C user port is 0, in older versions change the port number to 0 as follows: sudo i2cdetect -y 0 The i2c-detect is a tool that scans the I2C user port and returns the I2C addresses of the connected slave devices. The tool returns a table of addresses of connected I2C slave devices as shown in the image below: Accessing I2C devices using SMBus library On Raspberry Pi, the I2C bus can be accessed in a Python script using the SMBus library. In a Python script, the SMBus library can be imported using the following statement: import smbus After importing SMBus library, an object of SMBus class must be created using the SMBus() method. The SMBus() method takes the I2C port number as a parameter and must be used in an assignment statement to create an SMBus object. It has the following syntax: <Object_name> = smbus.SMBus(I2C_Port_Number) The following is a valid example of creating an SMBus object: i2c-bus = smbus.SMBus(1) Note that in older Raspberry Pi versions, I2C user port is 0, and in all Raspberry Pi versions above 256M RPi versions, it is 1. To use the latest SMBus2 library, it can be installed using pip by running the following command: pip install smbus2 In a Python script, the SMBus2 library can be imported using the following statement: from smbus2 import SMBus, i2c_msg An object of SMBus class can be created using smbus2.SMBus() method as follows: i2c-bus = smbus2.SMBus(1) The smBus2 library has two classes – SMBus and i2c_msg. The SMBus class supports the following methods: smbus.SMBus()/smbus2.SMBus() – To create an SMBus object in Python script. open(bus) – To open a given i2c bus. close() – To close I2C connection. The serial data from an I2C slave can be read in bytes, words or block of bytes. In some I2C slave devices, master need to access serial data from specific registers. The following methods are available in SMBus2 library for reading serial I2C data from slave devices: read_byte(i2c_addr,force=None) – To read a single byte from a device. read_byte_data(i2c_addr,register,force=None) – To read a single byte from a designated register. read_block_data(i2c_addr,register,force=None) – To read a block of up to 32-bytes from a given register. read_i2c_block_data(i2c_addr,register,length,force=None) – To read a block of byte data from a given register. read_word_data(i2c_addr,register,force=None) – To read a single word (2 bytes) from a given register. Similarly, data can be written to I2C slaves in bytes, words or block of bytes. In some I2C slave devices, data must be written to specific registers. The following methods are available in SMBus2 library for writing serial I2C data from slave devices: write_byte(i2c_addr,value,force=None) – To write a single byte to a device. write_byte_data(i2c_addr,register,value,force=None) – To write a byte to a given register. write_block_data(i2c_addr,register,data,force=None) – To write a block of byte data to a given register. write_i2c_block_data(i2c_addr,register,data,force=None) – To write a block of byte data to a given register. write_word_data(i2c_addr,register,value,force=None) – To write a byte to a given register. write_quick(i2c_addr,force=None) – To perform quick transaction. Throws IOError if unsuccessful. The following methods are available for managing SMBus processes and to combine I2C bus read/write operations: process_call(i2c_addr,register,value,force=None) – To execute a SMBus Process Call, sending a 16-bit value and receiving a 16-bit response block_process_call(i2c_addr,register,data,force=None) – To send a variable-size data block and receiving another variable-size response. i2c_rdwr(*i2c_msgs) – To combine a series of i2c read and write operations in a single transaction. In the next tutorial, we will discuss interfacing the ADXL345 accelerometer sensor with Raspberry Pi via I2C port. Filed Under: Microcontroller Projects, Raspberry pi
https://www.engineersgarage.com/articles-raspberry-pi-i2c-bus-pins-smbus-smbus2-python/
CC-MAIN-2021-39
refinedweb
1,309
54.52
The received data is not complete When programming with uart.readline(), there is a problem that the received data is not complete. How do I solve the problem? - neil jepsen last edited by @Andy-Smith that line reads a GPS response which are coming in at 10 lines or more each second (depending on the gps) and the function falls out the end, only printing the ones that I am interested in decoding. @jcaron Actually it doesn't, the data I receive in this one would be longer, something like ST<{"cmd_code": "set_value", "type": "label", "widget": "label2", "value":1.23, "format":"%.2f"}>ET. @neil-jepsen Thanks for your reply, I would like to know how this uart.readline() determines the end of reception. - neil jepsen last edited by livius Hi Andy, coincidently I have been playing with reading a GPS and trying to get to the bottom of how the various uart instructions work, which is not entirely clear to me from the docs. I have the following code running on a fipy' import time import utime import socket import machine dir(machine) import gc from time import sleep from machine import UART from utime import sleep_ms, ticks_ms, ticks_diff,ticks_add from machine import Pin from machine import RTC import pycom import math pycom.wifi_on_boot(False) pycom.smart_config_on_boot(False) valid = "" latitude = 0 longitude = 0 GMT_time= 0 satellites_used=0 gps=UART(1,baudrate=9600,pins =('P8','P13'),rx_buffer_size = 4096) print('uart done') def read_gps(): try: t = utime.ticks_us() global gps,latitude,longitude,GMT_time,satellites_used if gps.any(): print(gps.any()) index = 0 sleep(0.1) print('chars after 0.1 =',gps.any()) nmea_str= str(gps.readline()) # reads from buffer not gps. looks like b'$ nema string ' and is a byte array print('gps said',nmea_str,'len=',len(nmea_str)) if nmea_str.find('$GNGGA') >-1: index = nmea_str.strip().split(',') # strip spaces, split on ',' split takes 17ms valid = index[6] gps_time = index[1] #time is battery backed up if valid =='1': latitude = index[2] longitude=index[4] GMT_time = index[1] satellites_used = index[7] # else: print('invalid response',nmea_str,'validity = ',valid) # # if nmea_str.find('$GPGSV') >-1: index = nmea_str.strip().split(',') print() print() print ('last good lat',latitude,'at time',GMT_time, 'from',satellites_used,'satellites') print ('last good long',longitude) print('satellites in view=',index[3], 'at time',GMT_time) print() print() sleep(5) print('chars=',gps.any()) gps.read() #read all - flush buffer print('chars after flush =',gps.any()) except Exception as e: print("## failed to read GPS",e) while True: r = read_gps() The code runs OK. Things i have learned are: uart.any() returns the number of bytes in the uart buffer. If the buffer is too small and overflows, data is lost but the processor doesn't crash and as far as I know other code/var/ are not overwritten uart.readline() reads a line of code if there is a complete line in the buffer at that point in time. Data is not removed from the buffer with uart.readline(), unlike uart.read() uart.read() will read verything in the uart buffer and clear the buffer. In the above code, whilst the first line of gps nmea data is being read and processed, GPS data is still coming in and being stored in the buffer. @Andy-Smith have you set timeout_chats in the UART constructor?
https://forum.pycom.io/topic/7216/the-received-data-is-not-complete
CC-MAIN-2022-05
refinedweb
550
64.81
No sorry there is no way to change the default edit view. One of the main reasons for on-page being the default edit view is that it's much more user friendly if a user can see the page that they've selected before they start editing. You can make properties required if it's something important that might be missed. This way the user will be forced to fill this in when creating a page. Thank you very much Ben! As I see in the Alloy Template, the container pages will be ridirected to form mode automatically without require editor to click the change mode button. However, I cannot see any Required attribute in the model. Is there anyway so I can implement such the same function in my MVC template? Best, Hai You can do this by defining an UI Descriptor for your content type: using EPiServer.Shell; [UIDescriptorRegistration] public class ContainerPageUIDescriptor : UIDescriptor<ContainerPage> { public ContainerPageUIDescriptor() { DefaultView = CmsViewNames.AllPropertiesView; } } Yeah it should be possible to change the execute method to the following: _execute: function () { topic.publish("/epi/shell/context/request", { uri: this.get("selectionData").uri }, { viewName: "formedit" }); } Maria's code is a fork of my add-on,, please read the disclaimer about the approach taken. @ben open from treenode did work as you suggest (aug 13, 2015), Ive published it on github: I can not figure out how to display it on the contextmenus of blocks and media (both in lists and on-page-edit-block-menu), any suggestion? Hi guys, Is there any way to set form mode as default editing interface? Some of the properties cannot edit in the preview mode so wer need the form mode as default editing interface. Thank you in advance!
https://world.episerver.com/forum/legacy-forums/Episerver-7-CMS/Thread-Container/2013/2/Set-Form-mode-as-default-editing-interface/
CC-MAIN-2020-16
refinedweb
290
63.9
Segment makes it easy to send your data to GoSquared (and lots of other destinations). Once you've tracked your data through our open source libraries we'll translate and route your data to GoSquared in the format they understand. Learn more about how to use GoSquared with Segment. Getting Started When you toggle on GoSquared in Segment, this is what happens: - Our CDN is updated within 5-10 minutes. Then our snippet will start asynchronously loading GoSquared’s Tracker onto your page. This means you should remove any manual integration of GoSquared. - Your GoSquared Now dashboard will instantly start showing the number of visitors online, and if you’re using identify, users will start appearing in People Analytics. GoSquared supports mobile, webpage and server-side tracking. Website Tracking When you enter your GoSquared site token into Segment, website tracking will automatically start. Mobile and Server-Side Tracking To track data via Segment’s mobile and server-side sources, you will need to enter a GoSquared API Key, which can be created in your GoSquared account. The API Key must have “Write Tracking” access. All functionality is supported by mobile and server-side tracking. Page When you call page, we call GoSquared’s track to track a pageview. By default the Segment JavaScript snippet includes a call to page so you don’t need to add it manually. Page calls will be tracked from any Segment library, but GoSquared’s real-time analytics will be most accurate using front-end website tracking. Identify When you call identify, we call GoSquared’s identify. Once identified with a userId, that person (along with historical browsing information from before they were identified) will be visible and queryable in GoSquared People Analytics. GoSquared expects a slightly different set of traits from us, so we start by transforming the traits to match their format. GoSquared recognises certain traits as “special” and requires all other traits to be sent under a namespace of custom. The Segment code handles all of this, sending recognised special properties and custom properties in the correct places. Track When you call track, we call GoSquared’s event with the same arguments. Screen GoSquared supports the screen method by converting it into an event, with an event name of "Screen: " + name. Group GoSquared converts the group method into an identify call, to set the company details for a user. Only one company/group is supported per user. Ecommerce GoSquared supports our Ecommerce tracking API, so the Order Completed event will be tracked as a GoSquared Transaction.. Anonymize IP Enable if you need to anonymize the IP address of visitors to your website. API Key (Server-side) Generate your server-side API key here: Site Token You can find your Site Token by viewing the GoSquared Integration guide. It should look something like GSN-123456-A. Cookie Domain Use this if you wish to share GoSquared’s tracking cookies across subdomains, .example.com will enable shared tracking across all example’s subdomains. By default, cookies are set on the current domain (including subdomain) only. Track Hash Enable if you’d like page hashes to be tracked alongside the page URL. By default, example.com/about#us will be tracked as example.com/about. Track Local Enable to track data on local pages/sites (using the file:// protocol, or on localhost). This helps prevent local development from polluting your stats. Track Parameters Disable to ignore URL querystring parameters from the page URL, for example /home?my=query&string=true will be tracked as /home if this is set to disabled. Use Cookies Disable this if you don’t want to use cookies If you have any questions or see anywhere we can improve our documentation, please let us know or kick off a conversation in the Segment Community!
https://segment.com/docs/destinations/gosquared/
CC-MAIN-2018-22
refinedweb
634
64.41
This blog is part 2 of a series that covers relevant Azure fundamentals - concepts/terminology you need to know, in the context of Hadoop. Some of the content is a copy of Azure documentation (full credit to the Azure documentation team). I have compiled relevant information into a single post, along with my commentary, to create a one stop shop for those new to Azure and thinking Hadoop. In part 1 of the series, I covered Azure networking. Here's what's covered in this post: Section 05: Azure storage Section 06: Azure blob storage Section 07: Azure disk storage options Section 08: Azure managed disks Here are links to the rest of the blog series: Just enough Azure for Hadoop - Part 1 | Focuses on networking, other basics Just enough Azure for Hadoop - Part 3 | Focuses on compute Just enough Azure for Hadoop - Part 4 | Focuses on select Azure Data Services (PaaS) 5. Azure Storage - overview Azure has many offerings from a storage perspective. This section will touch on ones relevant from the Hadoop perspective. Azure storage - offerings: Azure Storage offers choices of non-relational data storage including Blob Storage, Table Storage, Queue Storage, and Files. Only those relevant to the Hadoop context are covered in this section. 6. Azure Blob Storage Is Azure's object store PaaS service and includes three types - block blobs, append blobs, page blobs. From the perspective of Hadoop, Azure Blob Storage is HDFS compatible, and you can leverage it as a secondary HDFS - to offload cool data to a cheaper storage tier, when compared to disks. 6.1. Blob Service Components: Storage Account: Is an entity within which your provisioned Azure storage sits. All access to storage services takes place through the storage account. The storage account is the highest level of the namespace for accessing each of the fundamental services. It is also the basis for authentication. A storage account has limits - IOPS limits, throughput limits, and a subscription has number of storage accounts limit. This details scalability limits. This storage account can be a General-purpose storage account or a Blob storage account which is specialized for storing objects/blobs.. 6.2. Types of blob 1. Block blob: Block blobs are ideal for storing text or binary files, such as documents and media files. Block blobs are the only relevant type of blob in the context of Hadoop. Block blobs are HDFS storage compatible and a block blob storage account can be attached to Hadoop cluster as auxiliary HDFS - see section below called "WASB" 2. Append blobs: Append blobs are similar to block blobs in that they are made up of blocks, but they are optimized for append operations, so they are useful for logging scenarios. 3. Page blogs: Page blobs are disk abstractions - can be up to 1 TB in size, and are more efficient for frequent read/write operations. Azure Virtual Machines use page blobs as OS and data disks. Page blobs are relevant in the context of Hadoop when you use unmanaged disks - covered further down in this blog. 6.3. Azure Storage redundancy options: LRS: Locally redundant storage LRS offers intra-datacenter redundancy - keeps 3 replicas of your data within a single facility within a single region for durability GRS: Geo-redundant storage This is the default option for redundancy when a storage account is created. With GRS your data is replicated across two regions, and 3 replicas of your data are kept in each of the two regions. RA-GRS: Read-access geo-redundant storage This is the recommended option for production services relying on Azure Storage. For a GRS storage account, you have the ability to leverage higher availability by reading the storage account’s data in the secondary region. ZRS: Zone-redundant storage ZRS fits between LRS and GRS in terms of durability and price. ZRS stores 3 replicas of your data across 2 to 3 facilities. It is designed to keep all 3 replicas within in a single region, but may span across two regions. ZRS currently only supports block blobs. ZRS allows customers to store blobs at a higher durability than a single facility can provide with LRS 6.4. Azure Blob Storage Feature Summary - Is a massively scalable object store - structured and unstructured data - Is highly available - three copies are maintained with customer charged for only one copy - Offers redundancy options detailed in section above - Offers strong consistency - Offers encryption at rest with Microsoft managed keys or customer managed keys - Offers tiers based on data temperature - hot cool, archive - with cost versus performance trade-off. 6.5. WASB and HDFS: Storage accounts - Block blobs can be leveraged as auxiliary HDFS - and are supported by all Hadoop distributions as secondary HDFS. Microsoft contributed towards the Hadoop-Azure module of the Apache Hadoop project. Feature summary from Apache Hadoop docs: - Read and write data stored in an Azure Blob Storage account like it were from attached disks. - Present a hierarchical file system view by implementing the standard Hadoop FileSystem interface. - Supports configuration of multiple Azure Blob Storage accounts. - Supports both block blobs (suitable for most use cases, such as Spark, MapReduce) and page blobs (suitable for continuous write use cases, such as an HBase write-ahead log). - Reference file system paths using URLs using the wasb scheme. - Also reference file system paths using URLs with the wasbs scheme for SSL encrypted access. - Tested on both Linux and Windows. - Tested at scale The experience is seamless other that the fact that you have to provide the WASBS URI of your blob storage to read/write - see section 6.7. You can use HDFS FS shell commands and read/write just like you would from primary HDFS. 6.6. Attaching a storage account to your Hadoop cluster: You need to create a storage account, you will need to make an entry into core-site.xml - the storage account name, and associated access key, save and restart HDFS and dependent services. You can encrypt the key for added security. 6.7. Interacting with your storage account from HDFS shell/Spark etc - WASB URI: Always use wasbs scheme - wasbs utilizes SSL encrypted HTTPS access. In general, you have provide the fully qualified blob storage URI as detailed in the example below. % hdfs dfs -mkdir wasbs://<yourcontainer>@<yourstorageaccountname>.blob.core.windows.net/dummyDir % hdfs dfs -put dummyFile wasbs://<yourcontainer>@<yourstorageaccountname>.blob.core.windows.net/dummyDir /dummyFile % hdfs dfs -cat wasbs://<yourcontainer>@<yourstorageaccountname>.blob.core.windows.net/dummyDir/dummyFile "Do what you love and love what you do" 6.8 Encryption of storage accounts attached to your Hadoop cluster: At rest: Azure offers storage service encryption (AES 256 bit) - server side, transparent encryption with Microsoft managed keys or customer provided keys secured in Azure Key Vault. In transit: When you use wasbs scheme (https), data transfer is over SSL. 6.9. Blocking wasb scheme (http) in storage accounts attached to your Hadoop cluster: You can configure your storage account to allow only wasbs scheme. 6.10. Blocking external access to storage accounts attached to your Hadoop cluster: Configure VNet service endpoint to your storage account and specify subnets to allow access to and configure to not allow access from elsewhere. 6.11. Distcp to WASB You can distcp from your on-premise cluster to your Azure storage account % hadoop distcp hdfs://<yourHostName>:9001/user/<yourUser>/<yourDirectory> wasbs://<yourStorageContainer>@<YourStorageAccount>.blob.core.windows.net/<yourDestinationDirectory>/ 6.12. Architectural considerations/best practices with using blob storage as auxiliary HDFS: This section is specific to block blobs. Securing: - Encrypt at rest with Storage Service Encryption - Encrypt during transit - Make wasbs (/https) the only mode supported for data transfer - Encrypt the storage credentials in core-site.xml; Periodically regenerate keys for added security - Block external access with VNet service endpoint - Ensure the right people have access to your storage account using Azure service level RBAC Configuring some of these from the Azure portal are covered in Azure Cloud Solution Architect, Jason Boeshart's blog High-availability: Out of the box Disaster recovery: Plan for replication to DR datacenter, just like you would with primary HDFS Sizing: With blob storage, size for the actual amount of data you want to store (without replicas). E.g. Lets say you have 5 TB, with disks you would size for 5 TB * 1 only Tiers: Choose between hot/cool/archive based on your requirements Microsoft does not charge for the replicas, and you pay per use. Consistency: Is strong Constraints and workarounds: Storage accounts come with IOPS limits - be cognizant of the same, and shard data across multiple storage accounts to avoid throttling and outages from the same 500 TB limit of storage account limit can be increased with support ticket 200 storage account per subscription limit can be increased with support ticket Performance: Understand performance versus cost tradeoff with using Azure blob storage as your secondary HDFS Egress charges: Understand that you will incur egress charges for data leaving your storage account. 6.13. Apache Ranger plug-in for WASB Microsoft developed a Ranger plug-in for fine-grained RBAC for WASB. Will cover this in detail in subsequent blogs. 7. Azure Disks Azure offers two flavors of disks you can attach to your Hadoop virtual machines - unmanaged and managed disks, in two performance tiers - standard and premium. Microsoft recommends usage of managed disks - it is supported by all Hadoop distributions. The next blog, covers Azure compute and touches on important concepts like premium managed disk throughout and VM max disk throughput and optimal number of disks to attach to guarantee performance. 7.1. Unmanaged disks: At a very high level, with unmanaged disks, you have to create a storage account, and have to deal with the constraints/limits of storage account like IOPS, it offers lower fault tolerance and a higher management overhead than its managed counterpart. I wont be covering any more on unmanaged disks - prefer managed disks over unmanaged as detailed in this video. Documentation 7.2. Managed disks: Az. 7.3. Premium Disks: They are high performance Solid State Drive (SSD)-based storage designed to support I/O intensive workloads with significantly high throughput and low latency. There are several SKUs of premium disks and you choose the option which best meets your required storage size, IOPs, and throughput. They are supported by DS-series, DSv2-series, FS-series, and GS-series virtual machine sizes, at the time of writing this blog. There are no transaction costs for premium disks. Premium managed disks is recommended for guaranteed performance in higher environments. For masters, always use premium disks, irrespective of environment. 7.4. Standard Disks: Standard Disks use Hard Disk Drive (HDD)-based storage media (are backed by regular spinning disks). They are best suited for dev/test and other infrequent access workloads that are less sensitive to performance variability. There are several SKUs of standard disks and you choose the option which best meets your required storage size, IOPs, and throughput. There are no transaction costs for standard disks. Standard managed can be used in lower environments for non-master nodes. 8. Azure Managed Disks This is a fantastic video from Azure storage team and clearly details benefits of Azure managed disks over unmanaged. The diagrams below are from the video. 8.1. IOPS and throughput Azure managed disk SKUs come with IOPS and throughput limits to be cognizant of. The compute section will elaborate on this again. Here is an example of what to expect.. Value proposition and features.. 8.2. Enhanced availability over unmanaged disks Azure managed disks offer enhanced storage availability. Lets say if you have an availability set with multiple VMs, with managed disks, managed disks will be placed in separate storage units per VM for higher availability. In the above diagram, on the left, say you used unmanaged disks, and put all your masters in an availability set, you could actually lose all masters because the storage could be a SPOF. With the managed disk option, notice how Azure will ensure the VM attached disks are isolated into separate storage units for better fault tolerance. 8.3. Simplifies creation of multiple VMs with same custom image With unmanaged disks, if you have a custom image you want to propagate across multiple VMs, you have to copy the same across multiple storage accounts, as there is a cap on number of VMs per storage account - effectively an administrative overhead. With managed disks, you can create a managed disk image, then capture a VM you need to replicate - this will capture all the VM metadata, it gets saved to the same resource group by default, you can then use it to provision multiple VMs with the same VM image. Some examples are - Cloudera Centos 7.3 image in the marketplace, a base VM image of a Hadoop slave node etc Creating a Linux VM image - documentation In the context of Hadoop, you would, create base images of each node type and then clone them programmatically. E.g. an image of a master node, an image of a worker node. 8.4. Create managed disk snapshots independent of the disks themselves You can create a snapshot of managed disks and persist the same, and can even delete the managed disks after snapshotting. This is helpful for backup/DR scenarios or for testing scenarios (e.g. financial month-end/quarter-end/year-end point-in-time snapshots of data) 8.5. Simplified upgrade/downgrade between standard and premium managed disks The picture says it all - stop the VM, update storage type, reboot. Where would this be useful: Lets say - you build a cluster for a PoC/dev/test environment, and once done, want to upgrade that to production, and while in the lower environments, for cost optimization, you went for standard disks, you want the best performance in production with premium managed disks; Another example - when your engineers are doing development, maybe use standard disks and when its time to tune performance, switch to premium and then back for cost optimization. Documentation An example here is, periodic snapshots of your Hadoop services metastore disks. 8.6. Converting from unmanaged to managed disks No worries if you have already provisioned unmanaged disks, we have a conversion process that will switch your disks from unmanaged to managed with a few commands. 8.7. Simplifies scaling VMs. Managed Disks will allow you to create up to 10,000 VM disks in a subscription, which will enable you to create thousands of VMs in a single subscription. 8.8. Disk-level RBAC. You can grant access to only the operations a person needs to perform. 8.9.. 8.10. Managed Disk Snapshot A Managed Snapshot is a read-only full copy of a managed disk which is stored as a standard managed disk by default. With snapshots, you can back up your managed disks at any point in time. These snapshots exist independent of the source disk and can be used to create new Managed Disks. They are billed based on the used size. For example, if you create a snapshot of a managed disk with provisioned capacity of 64 GB and actual used data size of 10 GB, snapshot will be billed only for the used data size of 10 GB. 8.11. Managed disk image. This ties into #2 above. 8.12. Image versus Snapshot With Managed Disks, you can capture a VM image of a generalized VM that has been deallocated. This image will include all of the disks attached to the VM. You can use this image to create a new VM, and it will include all of the disks. A snapshot is a copy of a single disk at the point in time it is taken. If a VM has only one disk, you can create a VM with either tyhe image or the snapshot. If you have multiple disks attached to a VM, and they are striped - the snapshot feature does not support this scenario yet, you have to create an image to replicate the VM. 8.13. Encryption at rest The two kinds of encryption are available - Storage Service Encryption (SSE) - at the storage service level, and Azure Disk Encryption (ADE) at the disk level (OS and data disks). Both persist keys to Azure Key Vault. SSE is enabled by default for all Managed Disks, Snapshots and Images in all the regions where managed disks is available. WRT to ADE, for Linux, the disks are encrypted using the DM-Crypt technology. From the perspective of Hadoop, HDFS encryption is at the application layer, sensitive information can spill to disk. You may want to use Azure Disk Encryption in addition to HDFS encryption. 8.14. High-availability Three replicas are maintained, and are designed for 99.999% availability. . 8.15. Pricing and billing Considerations: Disk tier - premium versus standard Disk SKU/size Number of transactions - applicable for standard disks only Egress Snaphots In summary Always leverage managed disks for Hadoop - primary HDFS, use premium disks for masters and workers, for cost optimization with performance tradeoff, you can use standard disks for workers. For further cost optimization, leverage Azure Blob Storage where possible. In my next blog, I cover Azure compute and resurface managed disks in the context of availability, images/snapshots, optimal number of premium managed disks to attach to a VM for guaranteed performance. Blog series Just enough Azure for Hadoop - Part 1 | Focuses on networking, other basics Just enough Azure for Hadoop - Part 2 | Focuses on storage Just enough Azure for Hadoop - Part 3 | Focuses on compute Just enough Azure for Hadoop - Part 4 | Focuses on select Azure Data Services (PaaS) Thanks to fellow Azure Data Solution Architect, Ryan Murphy for his review and feedback.
https://blogs.msdn.microsoft.com/cloud_solution_architect/2017/10/31/just-enough-azure-for-hadoop-part-2/
CC-MAIN-2018-47
refinedweb
2,957
50.06
PhpStorm 2020.3 EAP #4: Custom PHP 8 Attributes PhpStorm 2020.3 will come with several PHP 8 attributes available out-of-the-box: #[ArrayShape], #[ExpectedValues], #[NoReturn], #[Pure], #[Deprecated], #[Immutable]. Read on to learn more about the attributes, and please share your feedback about the design. Download PhpStorm 2020.3 EAP You’ve probably already heard about the attributes in PHP 8. But just in case you haven’t, they are the new format for structured metadata that replaced PHPDoc and will now be part of the language. What attributes are in PHP 8? Apart from the syntax definition and validation when calling ReflectionAttribute::newInstance(), PHP 8 does not provide any attributes out-of-the-box. For attributes that you define, you have to implement their behavior yourself. What attributes will be available in PhpStorm 2020.3? Several attributes will be available in PhpStorm 2020.3 under \JetBrains\PhpStorm\ namespace. #[ExpectedValues] and #[NoReturn] are more advanced descendants of .phpstorm.meta.php functions. And #[ArrayShape] is a highly anticipated evolution of PHPDoc’s array description. There also will be #[Deprecated], #[Pure], and #[Immutable]. The design of the attributes below is still a work in progress, and your feedback is very welcome. #[Deprecated] This attribute is similar to @deprecated PHPDoc tag and is used to mark methods, functions, classes, or class constants and it indicates that they will be removed in future versions as they have become obsolete. The main advantage of this new attribute is that you can specify replacement for functions and methods. That will help users of the deprecated functionality migrate. If you specify the reason argument for the attribute, then it will be shown to a user in the inspection tooltip. #[Deprecated(reason: '', replacement: '')] Let’s take a look at a real-world example. In Symfony 5.2 the \Symfony\Component\DependencyInjection\Alias::setPrivate() will be deprecated. With #[Deprecated] attribute we can make migration easier. #[Deprecated( reason: 'since Symfony 5.2, use setPublic() instead', replacement: '%class%->setPublic(!%parameter0%)' )] #[ArrayShape] One of the most requested features for PhpStorm was support for more specific array PHPDoc annotations. This was partially implemented with Psalm support. But the other part – specifying the possible keys and what value type they correspond to – was still missing. This functionality could be useful when working with simple data structures or object-like arrays when defining a real class may feel excessive. Starting from PhpStorm 2020.3, it will be possible to define the structure of such arrays with an #[ArrayShape]. The syntax is as follows: #[ArrayShape([ // ‘key’ => ’type’, ‘key1’ => ‘int’, ‘key2’ => ‘string’, ‘key3’ => ‘Foo’, ‘key3’ => App\PHP 8\Foo::class, ])] function functionName(...): array As you can see, the ‘type’ can be specified as a scalar in a string or as a class reference in the form of an FQN string or a ::class constant . You can extract an array that defines a shape into a constant and then reuse it inside the attributes where it applies: const MY_ARRAY_SHAPE = []; #[ArrayShape(MY_ARRAY_SHAPE)] What about legacy projects that can’t upgrade to PHP 8? Fortunately, the syntax of one-line attributes is backward compatible. This means that if you add the #[ArrayShape] attribute in a separate line in your PHP 7.* project, the PHP interpreter will parse it as just a line comment and you won’t get a parse error. However, multiline attributes are not safe for versions of PHP prior to 8. Unlike the PHP interpreter, PhpStorm will analyze attributes anyway! So even if your project runs on PHP 7.4 or lower, you still benefit from adding #[ArrayShape] attributes. Note, you’ll have code completion when working with earlier PHP versions in PhpStorm, but inspections will run only with language level 8 and above. #[Immutable] Immutable objects are the ones that can not be changed after they are initialized or created. The benefits of using them are the following: - The program state is more predictable. - Debugging is easier. It was possible to somewhat emulate immutable objects using getters and setters or magic methods. Starting from PhpStorm 2020.3, you can simply mark objects or properties with the #[Immutable] attribute. PhpStorm will check the usages of objects and properties and highlight change attempts. You can adjust the write scope restriction to a constructor only, or simulate private and protected scopes. To do that, pass one of the constants CONSTRUCTOR_WRITE_SCOPE, PRIVATE_WRITE_SCOPE, PROTECTED_WRITE_SCOPE to the #[Immutable] attribute constructor. The #[Immutable] attribute will work even with PHP 7.4 and lower! #[Pure] You can mark functions that do not produce any side effects as pure. Such functions can be safely removed if the result from executing them is not used in the code after. PhpStorm will detect redundant calls of the pure functions. If the function is marked as pure, but you try to change something outside it, i.e. it produces a side effect, then PhpStorm will warn you and highlight the unsafe code. #[ExpectedValues] With this attribute, you can specify which values a function accepts as parameters and which it can return. This is similar to what the expectedArguments() function could do in .phpstorm.meta.php, except that the meta version is more like a completion adversary. The attribute, by contrast, assumes that there are no other possible values for the argument or return value. For example, let’s take the count function: count ( array|Countable $array_or_countable [, int $mode = COUNT_NORMAL ] ) : int The second argument it takes is an integer, but in reality, it is not an integer. Rather it is one of the constants COUNT_NORMAL or COUNT_RECURSIVE, which correspond to the 0 and 1. You can add an attribute #[ExpectedValues] to the second parameter. And this is how the code completion will change in this case. No meta With expectedArguments() in .phpstorm.meta.php With #[ExpectedValues] attribute How to specify possible values or bitmasks. Expected values are passed to the attribute constructor and can be any of the following: - Numbers: #[ExpectedValues(values: [1,2,3])] - String literals: #[ExpectedValues(values: [‘red’, ‘black’, ‘green’])] - Constant references: #[ExpectedValues(values: [COUNT_NORMAL, COUNT_RECURSIVE])] - Class constant references: #[ExpectedValues(values: [Code::OK, Code::ERROR])] And there are a few ways to specify expected arguments: #[ExpectedValues(values: [1,2,3])]means that only one of the values is expected. #[ExpectedValues(flags: [1, 2, 3])]means that a bitmask of the specified values is expected, e.g. 1 | 3. #[ExpectedValues(valuesFromClass: MyClass::class)]means that any of the constants from the class ` MyClass` is expected. #[ExpectedValues(flagsFromClass: ExpectedValues::class)]means that a bitmask of the constants from the class `MyClass` is expected. #[ExpectedValues] examples Let’s take a look at the response() helper in Laravel. It takes the HTTP status code as the second argument. This leaves us missing two key features: - Code completion for possible status codes - Validation in the editor Let’s fix this by adding the attribute #[ExpectedValues(valuesFromClass: Response::class)] #[NoReturn] Some functions in a codebase may cause the execution of a script to stop. First, this is not always obvious from a function name, for example, trigger_error() can stop execution depending on the second argument. And second, PhpStorm cannot always detect such functions, because deep analysis can cause performance problems. This is why it makes sense to mark such functions as exit points to get a more accurate control flow analysis by adding the #[NoReturn] attribute. Also, PhpStorm will offer to propagate the attribute down across the hierarchy with a quick-fix to get even more well-defined analysis. Show me the code! The definitions of these attributes are available in the github.com/JetBrains/phpstorm-stubs. We are going to annotate some internal functions like parse_url() with #[ArrayShape] in the stubs. And also migrate @property-read to #[Immutable]. What other attributes are in the works? There are ideas for more attributes, such as the Contract attribute. We are interested to know which ones you would find useful for your work. Feel free to share any comments or suggestions with us. Final notes PhpStorm won’t look for attributes deeper than one level. So we expect users to propagate attributes with a quick-fix. Currently, these attributes are distributed with github.com/JetBrains/phpstorm-stubs. It means they are available in the IDE out-of-the-box. But we may reconsider how the distribution is done in the future. Download PhpStorm 2020.3 EAP The full list of changes in this EAP build mention them in the comments to this post. Your JetBrains PhpStorm team The Drive to Develop
https://blog.jetbrains.com/phpstorm/2020/10/phpstorm-2020-3-eap-4/
CC-MAIN-2020-50
refinedweb
1,407
57.57
I cant figure out how to fix the "public class Triangle implements Measurable" i dont know what i am doing wrong to get this error.. i am not fluent in java but i never had this error before. i tried changing the name but in the process i get more errors. if any one can help Thank you very much. public class Triangle { /** Sean Bing */ double side1 = 0; double side2 = 0; double side3 = 0; double s = (side1 + side2 + side3) / 2; double area = Math.sqrt (s * (s - side1) * (s - side2) * (s - side3)); public interface Measurable { public double getPerimeter(); public double getArea(); } public class Triangle implements Measurable // error at Triangle- can't have // the same name as parent file { private double mySide1; private double mySide2; private double mySide3; public Triangle(double Side1, double Side2, double Side3) { mySide1 = side1; mySide2 = side2; mySide3 = side3; } public double getPerimeter() { return s = (side1 + side2 + side3) / 2; } public double getArea() { return area = Math.sqrt(s * (s - side1) * (s - side2) * (s - side3)); } public class RightTriangle extends Triangle { public class RightTriangle { } } } }
https://www.daniweb.com/programming/software-development/threads/390941/triangle-implement-measurable-error
CC-MAIN-2017-26
refinedweb
171
55.88
Definition An instance parser of the data type gml_graph is a parser for graph in GML format [46]. It is possible to extend the parser by user defined rules. This parser is used by the read gml of class graph. The following is a small example graph (a triangle) in GML format. # This ']'. An input in GML format is a list of GML objects. Each object consists of a key word and a value. A value may have one out of four possible types, an integer (type gml int), a double (type gml double), a string (type gml string), or a list of GML objects (type gml list). Since a value can be a list of objects, we get a tree structure on the input. We can describe a class C of objects being in the same list and having the same key word by the so-called path. The path is the list of key words leading to an object in the class C. In principle, every data structure can be expressed in GML format. This parser specializes on graphs. A graph is represented by an object with key word graph and type gml list. The nodes of the graph are objects with path graph.node and type gml list. Each node has a unique identifier, which is represented by an object of type gml int with path graph.node.id. An edge is an object of type gml list with the path graph.edge. Each edge has a source and a target. These are objects of type gml int with path graph.edge.source and graph.edge.target, respectively. The integer values of source and target refer to node identifiers. There are some global graph attributes, too. An object of type gml int with path graph.directed determines whether the graph is undirected (value 0) or directed (every other integer). The type of node parameters and edge parameters in parameterized graph (see manual page GRAPH) can be given by objects of type gml string with path graph.nodeType and graph.edgeType, respectively. Parameters of nodes and edges are represented by objects of type gml string with path graph.node.parameter and graph.edge.parameter, respectively. No list has to be in a specific order, e.g., you can freely mix node and edge objects in the graph list. If there are several objects in a class where just one object is required like graph.node.id, only the last such object is taken into account. Objects in classes with no predefined rules are simply ignored. This means that an application A might add specific objects to a graph description in GML format and this description is still readable for another application B which simply does not care about the objects which are specific for A. This parser supports reading user defined objects by providing a mechanism for dealing with those objects by means of callback functions. You can specify a rule for, e.g., objects with path graph.node.weight and type gml double like in the following code fragment. ... bool get_node_weight(const gml_object* gobj, graph* G, node v) { double w = gobj->get_double(); do something with w, the graph and the corresponding node v return true; or false if the operation failed } ... main() { char* filename; ... graph G; gml_graph parser(G); parser.append("graph"); parser.append("node"); parser.append("weight"); parser.add_node_rule_for_cur_path(get_node_weight,gml_double); // or short parser.add_node_rule(get_node_weight,gml_double,"weight"); bool parsing_ok = parser.parse(filename); ... } You can add rules for the graph, for nodes, and for edges. The difference between them is the type. The type of node rules is as in the example above bool (*gml_node_rule)(const gml_object*, graph*, node), the type for edge rules is bool (*gml_edge_rule)(const gml_object*, graph*, edge), and the type for graph rules is bool (*gml_graph_rule)(const gml_object*, graph*). A GML object is represented by an instance of class gml_object. You can get its value by using double gml_object::get_double(), int gml_object::get_int() or char* gml_object::get_string(). If one of your rules returns false during parsing, then parsing fails and the graph is cleared. #include < LEDA/graph/gml_graph.h > Creation Operations 3.1 Parsing 3.2 Path Manipulation 3.3 User Defined Rules Implementation The data type gml_graph is realized using lists and maps. It inherits from gml_parser which uses gml_object, gml_objecttree, and gml_pattern. gml_pattern uses dictionaries.
http://www.algorithmic-solutions.info/leda_manual/gml_graph.html
CC-MAIN-2017-13
refinedweb
725
67.55
----Arjun Window typesFM's Driver prg and Passing Data 5 Examples on displaying text, variables from driver prog, logo, Address, StandardText 1.define2.address 3.top..endtop 4.bottom...endbottom 5.protect...endprotect 6.standard text/pageno 7.NEW-PAGE 8.IF...ENDIF.. 9.CASE...ENDCASE SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 4 10.PERFORM(EXT SUBROUTINE) OverviewSAP SCRIPT It is an SAP tool which is used to generate printable business documents like invoice , sale order, delivery note,Employee forms etc., The advanced version of script is smart form . SAP scripts are client dependent i.e. If a script is developed in client 100, it is not visible in another client 120 or 130. Smart forms are client independent 6.Charecterformat7.Layout SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 11 HEADERIt contains the header information of the SAP Script . Administrative data It contains the data related to package name , clientno,username and languages Pages:SAP script is a group of pages. Each page contains a layout. The layout is used to design the page. Page is a group of windows. Windows:A window contains some information to display on script. The entire page information is divided in the form of windows. There are four types of windows:Main SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 16 A Window which does not expand i.e., width and height is fixed .Graphicaltext will be displayed at that particular position.Tabs are represented by ,, (2 commas). Character format :A format which is used by a group of characters inside a paragraph is called a character format . Layout: It is a place where we design the page with windows. SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 21 Driver Program Driver Program:: CALL FUNCTION WRITE_FORM EXPORTING ELEMENT = MAIN WINDOW = MAIN. CALL FUNCTION CLOSE_FORM. SAVE ,ACTIVATE AND TEST IT. SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 25 EXAMPLES Example on sample script to display TITLE and some info on MAIN window Example on sample script to display variable from driver prog. LOGO/GRAPHICS IN SCRIPTS Address EndAddress :It is the command used to print the Address of customer or vendor /: Addressnumber &Addressno&/ : Endaddres SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 31 Example on displaying ADDRESS Create a window by name ADDRESS Double click on it and write the below code /: Address /: Addressnumber 122 / : Endaddres STANDARD TEXT STANDARD TEXT IN SCRIPTS : It is a text which is reusable by multiple scripts or Smartforms is called standard texts . Once stanadard text is created,follow the below steps Goto Main windowDouble click Click on INSERT->TEXT->STANDARD A popup is raised GIve the text name as ZARJUNPress enter The below code will be generated /:Include ZARJUN object TEXT id ST Save,activate,Test Symbols are place holders for storing a value and printing them In SAP SCRIPT. There are mainly 3 types of symbols in SAP Script . Programsymbols : Driver program symbols System symbols : symbols defined by system Standard symbols : symbols which are declared at a standard place i.e. table(TTDTG) so that they can be reusable by multiple scripts are called standard symbols CONTROL COMMANDS CONTROL COMMANDSThe commands which are used to format or change the outputOf SAP SCRIPT are called control commands DEFINE /: DEFINE &LV_NAME& = RELIANCE. * &LV_NAME& Address EndAddress :It is used to print the Address of customer or vendor syntax : /: Address /: Addressnumber &Addressno& / : Endaddres Top Endtop :It is used to display the constant page heading in the main window across all the pages . syntax : /: Top * here pageheading / : Endtop SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 40 Bottom . Endbottom :It is used to display constant footer in main window across all pages . syntax : /: Bottom* here footer information / : Endbottom Protect . Endprotect :syntax : /: protect New-page :it is used to start a new page to display some information . * this is some data ON PAGE1 /: new-page * this is some data ON PAGE2 IF.ENDIFIt is a conditional statement which is same as ABAP ./: DEFINE &LV_NAME& = RELIANCE. / : IF &LV_NAME& = RELIANCE. * RELIANCE GLOBAL SERVICES /: endif similarly we use IFELSEENDIF. IFELSEIF.ELSEIF.ELSEIF..ELSE.ENDIF CASE..ENDCASE SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 43 Purchase Order Go to NACE tcodeSelect the Appliction EX--- EF for purchase order V1 for salesV2 for Billing SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 46 Convert Original language Give the Original Language as EN. Press Enter. SAP AG 2001, Smart Forms - the Form Printing Solution,Claudia Binder / Jens Stumpe 50 Double click on LOGO window & insert the image by clicking on INSERT--GRAPHICS Test it TestingGoto ME22N Give purchase order no : 4500012164 Enter Click on print preview Our ZMEDRUCK will be dispalyed. using <v1>Changing <v2> Endperform. Syntax for form definition : Form <formname> tables intab structure ITCSY Outtab structure ITCSY. Endform. Where, ITCSY is a structure for storing name and value of exported variable from perform statement. In the subroutine implementation, we write the custom logic i.e all our select statements Business requirement :Modify the standard script MEDRUCK to print PO document type in the layout . Go to se71. Give the form name as zmedruckClick on Change. Create a window by name document type Double click on it and write the below code. /: perform GET_BSART in program ZGET_BSART. /: using &EKKO-EBELN& /: changing &V_BSART& Doc. Type : &V_BSART&. Create a program by name ZGET_BSART in se38 of type subroutine pool and Write the below code.
https://ru.scribd.com/presentation/175027515/SCRIPT-PP
CC-MAIN-2020-29
refinedweb
943
54.52
Techniques to make a web app load fast, even on a feature phone How we used code splitting, code inlining, and server-side rendering in PROXX. At Google I/O 2019 Mariko, Jake, and I shipped PROXX, a modern Minesweeper-clone for the web. Something that sets PROXX apart is the focus on accessibility (you can play it with a screenreader!) and the ability to run as well on a feature phone as on a high-end desktop device. Feature phones are constrained in multiple ways: - Weak CPUs - Weak or non-existent GPUs - Small screens without touch input - Very limited amounts of memory But they run a modern browser and are very affordable. For this reason, feature phones are making a resurgence in emerging markets. Their price point allows a whole new audience, who previously couldn't afford it, to come online and make use of the modern web. For 2019 it is projected that around 400 million feature phones will be sold in India alone, so users on feature phones might become a significant portion of your audience. In addition to that, connection speeds akin to 2G are the norm in emerging markets. How did we manage to make PROXX work well under feature phone conditions? Performance is important, and that includes both loading performance and runtime performance. It has been shown that good performance correlates with increased user retention, improved conversions and—most importantly—increased inclusivity. Jeremy Wagner has much more data and insight on why performance matters. This is part 1 of a two-part series. Part 1 focuses on loading performance, and part 2 will focus on runtime performance. Capturing the status quo Testing your loading performance on a real device is critical. If you don't have a real device at hand, I recommend WebPageTest (WPT), specifically the "simple" setup. WPT runs a battery of loading tests on a real device with an emulated 3G connection. 3G is a good speed to measure. While you might be used to 4G, LTE or soon even 5G, the reality of mobile internet looks quite different. Maybe you're on a train, at a conference, at a concert, or on a flight. What you'll be experiencing there is most likely closer to 3G, and sometimes even worse. That being said, we're going to focus on 2G in this article because PROXX is explicitly targeting feature phones and emerging markets in its target audience. Once WebPageTest has run its test, you get a waterfall (similar to what you see in DevTools) as well as a filmstrip at the top. The film strip shows what your user sees while your app is loading. On 2G, the loading experience of the unoptimized version of PROXX is pretty bad: When loaded over 3G, the user sees 4 seconds of white nothingness. Over 2G the user sees absolutely nothing for over 8 seconds. If you read why performance matters you know that we have now lost a good portion of our potential users due to impatience. The user needs to download all of the 62 KB of JavaScript for anything to appear on screen. The silver lining in this scenario is that the second anything appears on screen it is also interactive. Or is it? The First Meaningful Paint in the unoptimized version of PROXX is technically interactive but useless to the user. After about 62 KB of gzip'd JS has been downloaded and the DOM has been generated, the user gets to see our app. The app is technically interactive. Looking at the visual, however, shows a different reality. The web fonts are still loading in the background and until they are ready the user can see no text. While this state qualifies as a First Meaningful Paint (FMP), it surely does not qualify as properly interactive, as the user can't tell what any of the inputs are about. It takes another second on 3G and 3 seconds on 2G until the app is ready to go. All in all, the app takes 6 seconds on 3G and 11 seconds on 2G to become interactive. Waterfall analysis Now that we know what the user sees, we need to figure out the why. For this we can look at the waterfall and analyze why resources are loading too late. In our 2G trace for PROXX we can see two major red flags: - There are multiple, multi-colored thin lines. - JavaScript files form a chain. For example, the second resource only starts loading once the first resource is finished, and the third resource only starts when the second resource is finished. Reducing connection count Each thin line ( dns, connect, ssl) stands for the creation of a new HTTP connection. Setting up a new connection is costly as it takes around 1s on 3G and roughly 2.5s on 2G. In our waterfall we see a new connection for: - Request #1: Our index.html - Request #5: The font styles from fonts.googleapis.com - Request #8: Google Analytics - Request #9: A font file from fonts.gstatic.com - Request #14: The Web App Manifest The new connection for index.html is unavoidable. The browser has to create a connection to our server to get the contents. The new connection for Google Analytics could be avoided by inlining something like Minimal Analytics, but Google Analytics is not blocking our app from rendering or becoming interactive, so we don't really care about how fast it loads. Ideally, Google Analytics should be loaded in idle time, when everything else has already loaded. That way it won't take up bandwidth or processing power during the initial load. The new connection for the web app manifest is prescribed by the fetch spec, as the manifest has to be loaded over a non-credentialed connection. Again, the web app manifest doesn't block our app from rendering or becoming interactive, so we don't need to care that much. The two fonts and their styles, however, are a problem as they block rendering and also interactivity. If we look at the CSS that is delivered by fonts.googleapis.com, it's just two @font-face rules, one for each font. The font styles are so small in fact, that we decided to inline it into our HTML, removing one unnecessary connection. To avoid the cost of the connection setup for the font files, we can copy them to our own server. Note: Copying CSS or font files to your own server is okay when using Google Fonts. Other font providers might have different rules. Please check with your font provider's terms of service! Parallelizing loads Looking at the waterfall, we can see that once the first JavaScript file is done loading, new files start loading immediately. This is typical for module dependencies. Our main module probably has static imports, so the JavaScript cannot run until those imports are loaded. The important thing to realize here is that these kinds of dependencies are known at build time. We can make use of <link rel="preload"> tags to make sure all dependencies start loading the second we receive our HTML. Results Let's take a look at what our changes have achieved. It's important to not change any other variables in our test setup that could skew the results, so we will be using WebPageTest's simple setup for the rest of this article and look at the filmstrip: These changes reduced our TTI from 11 to 8.5, which is roughly the 2.5s of connection setup time we aimed to remove. Well done us. Prerendering While we just reduced our TTI, we haven't really affected the eternally long white screen the user has to endure for 8.5 seconds. Arguably the biggest improvements for FMP can be achieved by sending styled markup in your index.html. Common techniques to achieve this are prerendering and server-side rendering, which are closely related and are explained in Rendering on the Web. Both techniques run the web app in Node and serialize the resulting DOM to HTML. Server-side rendering does this per request on the, well, server side, while prerendering does this at build time and stores the output as your new index.html. Since PROXX is a JAMStack app and has no server side, we decided to implement prerendering. There are many ways to implement a prerenderer. In PROXX we chose to use Puppeteer, which starts Chrome without any UI and allows you to remote control that instance with a Node API. We use this to inject our markup and our JavaScript and then read back the DOM as a string of HTML. Because we are using CSS Modules, we get CSS inlining of the styles that we need for free. const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.setContent(rawIndexHTML); await page.evaluate(codeToRun); const renderedHTML = await page.content(); browser.close(); await writeFile("index.html", renderedHTML); With this in place, we can expect an improvement for our FMP. We still need to load and execute the same amount of JavaScript as before, so we shouldn't expect TTI to change much. If anything, our index.html has gotten bigger and might push back our TTI a bit. There's only one way to find out: running WebPageTest. Our First Meaningful Paint has moved from 8.5 seconds to 4.9 seconds, a massive improvement. Our TTI still happens at around 8.5 seconds so it has been largely unaffected by this change. What we did here is a perceptual change. Some might even call it a sleight of hand. By rendering an intermediate visual of the game, we are changing the perceived loading performance for the better. Inlining Another metric that both DevTools and WebPageTest give us is Time To First Byte (TTFB). This is the time it takes from the first byte of the request being sent to the first byte of the response being received. This time is also often called a Round Trip Time (RTT), although technically there is a difference between these two numbers: RTT does not include the processing time of the request on the server side. DevTools and WebPageTest visualize TTFB with a light color within the request/response block. Looking at our waterfall, we can see that the all of requests spend the majority of their time waiting for the first byte of the response to arrive. This problem was what HTTP/2 Push was originally conceived for. The app developer knows that certain resources are needed and can push them down the wire. By the time the client realizes that it needs to fetch additional resources, they are already in the browser's caches. HTTP/2 Push turned out to be too hard to get right and is considered discouraged. This problem space will be revisited during the standardization of HTTP/3. For now, the easiest solution is to inline all the critical resources at the expense of caching efficiency. Our critical CSS is already inlined thanks to CSS Modules and our Puppeteer-based prerenderer. For JavaScript we need to inline our critical modules and their dependencies. This task has varying difficulty, based on the bundler that you're using. Note: In this step we also subset our font files to contain only the glyphs that we need for our landing page. I am not going to go into detail on this step as it is not easily abstracted and sometimes not even practical. We still load the full font files lazily, but they are not needed for the initial render. This shaved 1 second off our TTI. We have now reached the point where our index.html contains everything that is needed for the initial render and becoming interactive. The HTML can render while it is still downloading, creating our FMP. The moment the HTML is done parsing and executing, the app is interactive. Aggressive code splitting Yes, our index.html contains everything that is needed to become interactive. But on closer inspection it turns out it also contains everything else. Our index.html is around 43 KB. Let's put that in relation to what the user can interact with at the start: We have a form to configure the game containing a couple of components, a start button and probably some code to persist and load user settings. That's pretty much it. 43 KB seems like a lot. To understand where our bundle size is coming from we can use a source map explorer or a similar tool to break down what the bundle consists of. As predicted, our bundle contains the game logic, the rendering engine, the win screen, the lose screen and a bunch of utilities. Only a small subset of these modules are needed for the landing page. Moving everything that is not strictly required for interactivity into a lazily-loaded module will decrease TTI significantly. What we need to do is code split. Code splitting breaks apart your monolithic bundle into smaller parts that can be lazy-loaded on-demand. Popular bundlers like Webpack, Rollup, and Parcel support code splitting by using dynamic import(). The bundler will analyze your code and inline all modules that are imported statically. Everything that you import dynamically will be put into its own file and will only be fetched from the network once the import() call gets executed. Of course hitting the network has a cost and should only be done if you have the time to spare. The mantra here is to statically import the modules that are critically needed at load time and dynamically load everything else. But you shouldn't wait to the very last moment to lazy-load modules that are definitely going to be used. Phil Walton's Idle Until Urgent is a great pattern for a healthy middle ground between lazy loading and eager loading. In PROXX we created a lazy.js file that statically imports everything that we don't need. In our main file, we can then dynamically import lazy.js. However, some of our Preact components ended up in lazy.js, which turned out to be a bit of a complication as Preact can't handle lazily-loaded components out of the box. For this reason we wrote a little deferred component wrapper that allows us to render a placeholder until the actual component has loaded. export default function deferred(componentPromise) { return class Deferred extends Component { constructor(props) { super(props); this.state = { LoadedComponent: undefined }; componentPromise.then(component => { this.setState({ LoadedComponent: component }); }); } render({ loaded, loading }, { LoadedComponent }) { if (LoadedComponent) { return loaded(LoadedComponent); } return loading(); } }; } With this in place, we can use a Promise of a component in our render() functions. For example, the <Nebula> component, which renders the animated background image, will be replaced by an empty <div> while the component is loading. Once the component is loaded and ready to use, the <div> will be replaced with the actual component. const NebulaDeferred = deferred( import("/components/nebula").then(m => m.default) ); return ( // ... <NebulaDeferred loaded={Nebula => <Nebula />} /> ); With all of this in place, we reduced our index.html to a mere 20 KB, less than half of the original size. What effect does this have on FMP and TTI? WebPageTest will tell! Our FMP and TTI are only 100ms apart, as it is only a matter of parsing and executing the inlined JavaScript. After just 5.4s on 2G, the app is completely interactive. All the other, less essential modules are loaded in the background. More Sleight of Hand If you look at our list of critical modules above, you'll see that the rendering engine is not part of the critical modules. Of course, the game cannot start until we have our rendering engine to render the game. We could disable the "Start" button until our rendering engine is ready to start the game, but in our experience the user usually takes long enough to configure their game settings that this isn't necessary. Most of the time the rendering engine and the other remaining modules are done loading by the time the user presses "Start". In the rare case that the user is quicker than their network connection, we show a simple loading screen that waits for the remaining modules to finish. Conclusion Measuring is important. To avoid spending time on problems that are not real, we recommend to always measure first before implementing optimizations. Additionally, measurements should be done on real devices on a 3G connection or on WebPageTest if no real device is at hand. The filmstrip can give insight into how loading your app feels for the user. The waterfall can tell you what resources are responsible for potentially long loading times. Here's a checklist of things you can do to improve loading performance: - Deliver as many assets as possible over one connection. - Preload or even inline resources that are required for the first render and interactivity. - Prerender your app to improve perceived loading performance. - Make use of aggressive code splitting to reduce the amount of code needed for interactivity. Stay tuned for part 2 where we discuss how to optimize runtime performance on hyper-constrained devices.
https://web.dev/load-faster-like-proxx/
CC-MAIN-2020-29
refinedweb
2,871
64.51
I have a Coin class that compares weight and value and I need help developing a hashcode and Junit tests to test them, here is my class: public class Coin implements Comparable<Coin>{ public static final int CENT=1, NICKEL=5, DIME=10, QUARTER=25; private int value; private double weight; public double getWeight(){ return weight; } public void setWeight(double weight){ this.weight = weight; } public int getValue(){return value;} public Coin(int value, double weight){ this.weight = weight; this.value = value; } @Override public String toString(){ return value == CENT ? "cent" : value == NICKEL ? "nickel" : value == DIME ? "dime" : value == QUARTER ? "quarter" : "unknown"; } @Override public boolean equals(Object that){ if(that == null) return false; if(getValue() != ((Coin) that).getValue()) return false; Coin t = (Coin) that; return( value == t.value && t.equals(t.weight)); } @Override public int hashCode(){ int h1 = new Double (value).hashCode(); int h2 = new Double (weight).hashCode(); final int HASH_MULTIPLIER = 29; int h = HASH_MULTIPLIER * h1 + h2; return h; } @Override public int compareTo(Coin arg0) { // TODO Auto-generated method stub return 0; } } As you can see I need help. My hash code is very bad and I do not even know how to approach the compareTo Eclipse put in. I also need help developing JUnit tests for this code. Can anyone guide me on how to create the tests for this?
https://www.daniweb.com/programming/software-development/threads/355189/equals-hashcode-help
CC-MAIN-2018-34
refinedweb
218
58.28
include standalone ObjC function I'm trying to implement volume level metering using AVAudioEngineas described here:. However, I'm not sure how to access the standalone function vDSP_meamgv()(as opposed to a class/instance method). I assume I would first need to call load_framework('Accelerate'), but after that I'm trying to figure out the equivalent of vDSP_meamgv = ObjCFunction('vDSP_meamgv'). Accelerate libraries are tricky. These are all c functions, so you have to use c.vDSP_blah.argtypes=[...] Etc Meaning you have to dig up all of the function prototypes, etc. However you can just use the equivalent numpy methods, which are probably very similar in speed, since they are also vectorized and probably use the same underlying BLAS code. There are some efficient ways to cast the buffer you get as a numpy array, without copying. Then to get average power you could use np.sqrt(np.mean(np.square(np_data))) To get rms, and np.max(np.abs(np.data)) to get peak. Sorry I meant to post some code on this.. By the way, the answer at the bottom of that stack overflow is what I've been playing around with... But the mixer is screwing up the inputNode, since the format are incompatible. No worries man, appreciate the help. Any chance you could just post that casting snippet for now? That sound tricky for me. As to the second point.. any reason not to just add the processing code to the tap block that updates the RecognitionRequest? (instead of adding a mixer) Sorry, on my phone, away from my iPad... But yes, you get access to the buffer in the handler, and can compute metet directly there before passing on to the recognizer. The one issue is that iOS doesn't seem to respect the buffer size -- instead giving us 16535 samples - about .375 sec -- so you only get new data a few times per second. There is in theory a way to request fewer samples (thus faster call rate And lower latency), using the lower level audiounit, but I can't seem to get that working... Ok, I'll try to figure out the right casting call in the meantime. Yeah that is annoying; however from that stackoverflow post I've verified that calling buffer.setFrameLength(1024)succeeds in speeding up the sampling rate significantly after the first long (0.375s) sample.. haven't checked yet to see if I can update that before the first sample, but shouldn't matter too much for my purposes. def handler(_cmd,obj1_ptr,obj2_ptr): # param1 = AVAudioPCMBuffer # The buffer parameter is a buffer of audio captured # from the output of an AVAudioNode. # param2 = AVAudioTime # The when parameter is the time the buffer was captured if obj1_ptr: obj1 = ObjCInstance(obj1_ptr) #print('length:',obj1.frameLength(),'sample',ObjCInstance(obj2_ptr).sampleTime()) #print('format:',obj1.format()) data=obj1.floatChannelData().contents data_np=np.ctypeslib.as_array(obj=data,shape=(obj1.frameLength(),)) #if you want to use it outside of the handler, use .copy() power=n.sqrt(np.mean(np.square(data_np))) You would then, in the handler, set an attribute on your view with the power, which will get used next frame. (Or better yet, don't use update in the view, instead trigger the draw using the handler, this ensuring you only draw when updated info is available. If you want 60Hz frame rate, you'd want the frameLength to be 735 samples. MONEY. This works great for accessing the sound data!! Sadly, upon further testing, setting the frame length to 1024 makes the speech recognition results very poor. Not sure why.. any ideas? Do you think the speech recognizer is expecting the original frame length somehow? For instance I say "Hello" and it outputs "LOL" sometimes, so maybe the input is getting clipped. My frame length on my phone is actually 4410 by default which is ok, but I guess this is a platform specific number. I have not tried the frameLength trick, but I wonder if the copy is having trouble keeping up, resulting in dropouts. You could write those samples to a .wav file, then listen to it using the quicklook, to see if the quality is suffering. If you comment out the numpy stuff, does the lower frame still cause poor results? If not, there are some techniques we can use to speed that processing. Other possibilities would be to reduce sample rate (8000, 11050, or 22100), which should ease the processor burden. this may be obvious, but be sure to set the frameLength prior to passing it to the recognizer, otherwise it will be getting duplicate data. what happens, i think, is that the buffer contains all of the samples, including the initial 0.375 or whatever sec. if you change frame length to 1024, you are telling the engine how many samples you consumed -- it wants to keep that buffer the same size, and not ever skip, so it calls you sooner next time, where everything shifted left, and new samples appended at the end. The least latency would be those end samples. This takes the latency down from .375 for me to maybe 20-30 msec. def handler(_cmd,buffer_ptr, samptime_ptr): if buffer_ptr: buffer = ObjCInstance(buffer_ptr) # a way to get the sample time in sec of start of buffer, comparable to time.perf_counter. you can differnce these to see latency to start of buffer. hostTimeSec=AVAudioTime.secondsForHostTime_(ObjCInstance(samptime_ptr).hostTime()) #you can also check for skips, by looking at sampleTime(), which should be always incrementing by whatever you set the framelength to... if more than that, then your other processing is taking too long #this just sets up pointers that numpy can read... no actual read yet data=buffer.floatChannelData().contents data_np=np.ctypeslib.as_array(obj=data,shape=(buffer.frameLength(),)) #Take the LAST N samples for use in visualization... i.e the most recent, and least latency update_path(data_np[-1024:]) #this tells the engine how many samples we consumed ... next time, we will get samples [1024:] along with 1024 new samples buffer.setFrameLength_(1024) # be sure to append the buffer AFTER setting the frameLength, otherwise you will keep feeding it repeated portions of the data requestBuffer.append(buffer) Hey @JonB sorry for the slow response, this did help me get over a hump though. I think my frameCapacity is less than yours which is apparently the upper limit for frameLength.. setting sample size to 2048 worked well. I'm planning to post a first crack at a live speech recognition module soon.
https://forum.omz-software.com/topic/5514/include-standalone-objc-function
CC-MAIN-2019-35
refinedweb
1,083
64.71
PhoneGap 2.x Mobile Application Development Hotshot — Save 50% Create exciting apps for mobile devices using PhoneGap with this book and ebook. In this article by Kerri Shotts, author of PhoneGap 2.x Mobile Application Development Hotshot, we'll be creating two JavaScript files in the www/models directory named quizQuestion.js and quizQuestions.js. (For more resources related to this topic, see here.) Getting on with it Before we define our model, let's define a namespace where it will live. This is an important habit to establish since it relieves us of having to worry about whether or not we'll collide with another function, object, or variable of the same name. While there are various methods used to create a namespace, we're going to do it simply using the following code snippet: // quizQuestion.js var QQ = QQ || {}; Now that our namespace is defined, we can create our question object as follows: QQ.Question = function ( theQuestion ) { var self = this; Note the use of self: this will allow us to refer to the object using self rather than using this. (Javascript's this is a bit nuts, so it's always better to refer to a variable that we know will always refer to the object.) Next, we'll set up the properties based on the diagram we created from step two using the following code snippet: self.question = theQuestion; self.answers = Array(); self.correctAnswer = -1; We've set the self.correctAnswer value to -1 to indicate that, at the moment, any answer provided by the player is considered correct. This means you can ask questions where all of the answers are right. Our next step is to define the methods or interactions the object will have. Let's start with determining if an answer is correct. In the following code, we will take an incoming answer and compare it to the self.correctAnswer value. If it matches, or if the self.correctAnswer value is -1, we'll indicate that the answer is correct: self.testAnswer = function( theAnswerGiven ) { if ((theAnswerGiven == self.correctAnswer) || (self.correctAnswer == -1)) { return true; } else { return false; } } We're going to need a way to access a specific answer, so we'll define the answerAtIndex function as follows: self.answerAtIndex = function ( theIndex ) { return self.answers[ theIndex ]; } To be a well-defined model, we should always have a way of determining the number of items in the model as shown in the following code snippet: self.answerCount = function () { return self.answers.length; } Next, we need to define a method that allows an answer to be added to our object. Note that with the help of the return value, we return ourselves to permitting daisy-chaining in our code: self.addAnswer = function( theAnswer ) { self.answers.push ( theAnswer ); return self; } In theory we could display the answers to a question in the order they were given to the object. In practice, that would turn out to be a pretty boring game: the answers would always be in the same order, and chances would be pretty good that the first answer would be the correct answer. So let's give ourselves a randomized list using the following code snippet: self.getRandomizedAnswers = function () { var randomizedArray = Array(); var theRandomNumber; var theNumberExists; // go through each item in the answers array for (var i=0; i<self.answers.length; i++) { // always do this at least once do { // generate a random number less than the // count of answers theRandomNumber = Math.floor ( Math.random() * self.answers.length ); theNumberExists = false; // check to see if it is already in the array for (var j=0; j<randomizedArray.length; j++) { if (randomizedArray[j] == theRandomNumber) { theNumberExists = true; } } // If it exists, we repeat the loop. } while ( theNumberExists ); // We have a random number that is unique in the // array; add it to it. randomizedArray.push ( theRandomNumber ); } return randomizedArray; } The randomized list is just an array of numbers that indexes into the answers[] array. To get the actual answer, we'll have to use the answerAtIndex() method. Our model still needs a way to set the correct answer. Again, notice the return value in the following code snippet permitting us to daisy-chain later on: self.setCorrectAnswer = function ( theIndex ) { self.correctAnswer = theIndex; return self; } Now that we've properly set the correct answer, what if we need to ask the object what the correct answer is? For this let's define a getCorrectAnswer function using the following code snippet: self.getCorrectAnswer = function () { return self.correctAnswer; } Of course, our object also needs to return the question given to it whenever it was created; this can be done using the following code snippet: self.getQuestion = function() { return self.question; } } That's it for the question object. Next we'll create the container that will hold all of our questions using the following code line: QQ.questions = Array(); We could go the regular object-oriented approach and make the container an object as well, but in this game we have only one list of questions, so it's easier to do it this way. Next, we need to have the ability to add a question to the container, this can be done using the following code snippet: QQ.addQuestion = function (theQuestion) { QQ.questions.push ( theQuestion ); } Like any good data model, we need to know how many questions we have; we can know this using the following code snippet: QQ.count = function () { return QQ.questions.length; } Finally, we need to be able to get a random question out of the list so that we can show it to the player; this can be done using the following code snippet: QQ.getRandomQuestion = function () { var theQuestion = Math.floor (Math.random() * QQ.count()); return QQ.questions[theQuestion]; } Our data model is officially complete. Let's define some questions using the following code snippet: // quizQuestions.js // // QUESTION 1 // QQ.addQuestion ( new QQ.Question ( "WHAT_IS_THE_COLOR_OF_THE_SUN?" ) .addAnswer( "YELLOW" ) .addAnswer( "WHITE" ) .addAnswer( "GREEN" ) .setCorrectAnswer ( 0 ) ); Notice how we attach the addAnswer and setCorrectAnswer methods to the new question object. This is what is meant by daisy-chaining: it helps us write just a little bit less code. You may be wondering why we're using upper-case text for the questions and answers. This is due to how we'll localize the text, which is next: PKLOC.addTranslation ( "en", "WHAT_IS_THE_COLOR_OF_THE_SUN?", "What is the color of the Sun?" ); PKLOC.addTranslation ( "en", "YELLOW", "Yellow" ); PKLOC.addTranslation ( "en", "WHITE", "White" ); PKLOC.addTranslation ( "en", "GREEN", "Green" ); PKLOC.addTranslation ( "es", "WHAT_IS_THE_COLOR_OF_THE_SUN?", "¿Cuál es el color del Sol?" ); PKLOC.addTranslation ( "es", "YELLOW", "Amarillo" ); PKLOC.addTranslation ( "es", "WHITE", "Blanco" ); PKLOC.addTranslation ( "es", "GREEN", "Verde" ); The questions and answers themselves serve as keys to the actual translation. This serves two purposes: it makes the keys obvious in our code, so we know that the text will be replaced later on, and should we forget to include a translation for one of the keys, it'll show up in uppercase letters. PKLOC as used in the earlier code snippet is the namespace we're using for our localization library. It's defined in www/framework/localization.js. The addTranslation method is a method that adds a translation to a specific locale. The first parameter is the locale for which we're defining the translation, the second parameter is the key, and the third parameter is the translated text. The PKLOC.addTranslation function looks like the following code snippet: PKLOC.addTranslation = function (locale, key, value) { if (PKLOC.localizedText[locale]) { PKLOC.localizedText[locale][key] = value; } else { PKLOC.localizedText[locale] = {}; PKLOC.localizedText[locale][key] = value; } } The addTranslation method first checks to see if an array is defined under the PKLOC.localizedText array for the desired locale. If it is there, it just adds the key/value pair. If it isn't, it creates the array first and then adds the key/value pair. You may be wondering how the PKLOC.localizedText array gets defined in the first place. The answer is that it is defined when the script is loaded, a little higher in the file: PKLOC.localizedText = {}; Continue adding questions in this fashion until you've created all the questions you want. The quizQuestions.js file contains ten questions. You could, of course, add as many as you want. What did we do? In this task, we created our data model and created some data for the model. We also showed how translations are added to each locale. What else do I need to know? Before we move on to the next task, let's cover a little more of the localization library we'll be using. Our localization efforts are split into two parts: translation and data formatting . For the translation effort , we're using our own simple translation framework, literally just an array of keys and values based on locale. Whenever code asks for the translation for a key, we'll look it up in the array and return whatever translation we find, if any. But first, we need to determine the actual locale of the player, using the following code snippet: // www/framework/localization.js PKLOC.currentUserLocale = ""; PKLOC.getUserLocale = function() { Determining the locale isn't hard, but neither is it as easy as you would initially think. There is a property (navigator.language) under WebKit browsers that is technically supposed to return the locale, but it has a bug under Android, so we have to use the userAgent. For WP7, we have to use one of three properties to determine the value. Because that takes some work, we'll check to see if we've defined it before; if we have, we'll return that value instead: if (PKLOC.currentUserLocale) { return PKLOC.currentUserLocale; } Next, we determine the current device we're on by using the device object provided by Cordova. We'll check for it first, and if it doesn't exist, we'll assume we can access it using one of the four properties attached to the navigator object using the following code snippet: var currentPlatform = "unknown"; if (typeof device != 'undefined') { currentPlatform = device.platform; } We'll also provide a suitable default locale if we can't determine the user's locale at all as seen in the following code snippet: var userLocale = "en-US"; Next, we handle parsing the user agent if we're on an Android platform. The following code is heavily inspired by an answer given online at. if (currentPlatform == "Android") { var userAgent = navigator.userAgent; var tempLocale = userAgent.match(/Android.*([a-zA-Z]{2}-[a-zA-Z] {2})/); if (tempLocale) { userLocale = tempLocale[1]; } } If we're on any other platform, we'll use the navigator object to retrieve the locale as follows: else { userLocale = navigator.language || navigator.browserLanguage || navigator.systemLanguage || navigator.userLanguage; } Once we have the locale, we return it as follows: PKLOC.currentUserLocale = userLocale; return PKLOC.currentUserLocale; } This method is called over and over by all of our translation codes, which means it needs to be efficient. This is why we've defined the PKLOC.currentUserLocale property. Once it is set, the preceding code won't try to calculate it out again. This also introduces another benefit: we can easily test our translation code by overwriting this property. While it is always important to test that the code properly localizes when the device is set to a specific language and region, it often takes considerable time to switch between these settings. Having the ability to set the specific locale helps us save time in the initial testing by bypassing the time it takes to switch device settings. It also permits us to focus on a specific locale, especially when testing. Translation of text is accomplished by a convenience function named __T() . The convenience functions are going to be our only functions outside of any specific namespace simply because we are aiming for easy-to-type and easy-to-remember names that aren't arduous to add to our code. This is especially important since they'll wrap every string, number, date, or percentage in our code. The __T() function depends on two functions: substituteVariables and lookupTranslation. The first function is de fined as follows: PKLOC.substituteVariables = function ( theString, theParms ) { var currentValue = theString; // handle replacement variables if (theParms) { for (var i=1; i<=theParms.length; i++) { currentValue = currentValue.replace("%" + i, theParms[i-1]); } } return currentValue; } All this function does is handle the substitution variables. This means we can define a translation with %1 in the text and we will be able to replace %1 with some value passed into the function. The next function, lookupTranslation, is defined as follows: PKLOC.lookupTranslation = function ( key, theLocale ) { var userLocale = theLocale || PKLOC.getUserLocale(); if ( PKLOC.localizedText[userLocale] ) { if ( PKLOC.localizedText[userLocale][key.toUpperCase()] ) { return PKLOC.localizedText[userLocale][key.toUpperCase()]; } } return null; } Essentially, we're checking to see if a specific translation exists for the given key and locale. If it does, we'll return the translation, but if it doesn't, we'll return null. Note that the key is always converted to uppercase, so case doesn't matter when looking up a translation. Our __T() function looks as follows: function __T(key, parms, locale) { var userLocale = locale || PKLOC.getUserLocale(); var currentValue = ""; First, we determine if the translation requested can be found in the locale, whatever that may be. Note that it can be passed in, therefore overriding the current locale. This can be done using the following code snippet: if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { Locales are often of the form xx-YY, where xx is a two-character language code and YY is a two-character character code. My locale is defined as en-US. Another player's might be defined as es-ES. If you recall, we defined our translations only for the language. This presents a problem: the preceding code will not return any translation unless we defined the translation for the language and the country. Sometimes it is critical to define a translation specific to a language and a country. While various regions may speak the same language from a technical perspective, idioms often differ. If you use an idiom in your translation, you'll need to localize them to the specific region that uses them, or you could generate potential confusion. Therefore, we chop off the country code, and try again as follows: userLocale = userLocale.substr(0,2); if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { But we've only defined translations for English (en) and Spanish(es)! What if the player's locale is fr-FR (French)? The preceding code will fail, because we've not defined any translation for the fr language (French). Therefore, we'll check for a suitable default, which we've defined to be en-US, American English: userLocale = "en-US"; if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { Of course, we are now in the same boat as before: there are no translations defined for en-US in our game. So we need to fall back to en as follows: userLocale = "en"; if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { But what happens if we can't find a translation at all? We could be mean and throw a nasty error, and perhaps you might want to do exactly that, but in our example, we're just returning the incoming key. If the convention of capitalizing the key is always followed, we'll still be able to see that something hasn't been translated. currentValue = key; } } } } Finally, we pass the currentValue parameter to the substituteVariables property in order to process any substitutions that we might need as follows: return PKLOC.substituteVariables( currentValue, parms ); } Summary In this article we saw the file quizQuestion.js which was the actual model: it specified how the data should be formatted and how we can interact with it. We also saw the quizQuestions.js file, which contained our actual question data. Resources for Article : Further resources on this subject: - Configuring the ChildBrowser plugin [Article] - Adding Geographic Capabilities via the GeoPlaces Theme [Article] - iPhone: Issues Related to Calls, SMS, and Contacts [Article] About the Author : Kerri Shotts creating, designing, and maintaining custom applications (both desktop and mobile), websites, graphics and logos, and more for her clients. You can find her blog posts at her website () and she is active on the Google Group for PhoneGap. When she isn't working, she enjoys photography, music, and fish-keeping. She is the author of two prior books published by Packt : PhoneGap 2.x Mobile Application Development Hotshot and Instant PhoneGap Social App Development. Post new comment
http://www.packtpub.com/article/implementing-data-model
CC-MAIN-2014-10
refinedweb
2,732
57.16
Consistently, one of the more popular stocks people enter into their stock options watchlist at Stock Options Channel is CenturyLink, Inc. (Symbol: CTL). So this week we highlight one interesting put contract, and one interesting call contract, from the August expiration for CTL. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $26 strike, which has a bid at the time of this writing of 60 cents. Collecting that bid as the premium represents a 2.3% return against the $26 commitment, or a 14% annualized rate of return (at Stock Options Channel we call this the YieldBoost ). Selling a put does not give an investor access to CTL's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. So unless CenturyLink, Inc. sees its shares decline 6.6% and the contract is exercised (resulting in a cost basis of $25.40 per share before broker commissions, subtracting the 60 cents from $26), the only upside to the put seller is from collecting that premium for the 14% annualized rate of return. Worth considering, is that the annualized 14% figure actually exceeds the 7.8% annualized dividend paid by CenturyLink, Inc. by 6.2%, based on the current share price of $27.83. And yet, if an investor was to buy the stock at the going market price in order to collect the dividend, there is greater downside because the stock would have to lose 6.58% to reach the $26 7.8% annualized dividend yield. Turning to the other side of the option chain, we highlight one call contract of particular interest for the August expiration, for shareholders of CenturyLink, Inc. (Symbol: CTL) looking to boost their income beyond the stock's 7.8% annualized dividend yield. Selling the covered call at the $29 strike and collecting the premium based on the 70 cents bid, annualizes to an additional 15.3% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost ), for a total of 23.1% annualized rate in the scenario where the stock is not called away. Any upside above $29 would be lost if the stock rises there and is called away, but CTL shares would have to advance 4.2% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 6.7% return from this trading level, in addition to any dividends collected before the stock was called. The chart below shows the trailing twelve month trading history for CenturyLink, Inc., highlighting in green where the $26 strike is located relative to that history, and highlighting the $29 CenturyLink, Inc. (considering the last 252 trading day CTL historical stock prices using closing values, as well as today's price of $27.83) to be 32%. Top YieldBoost CTL.
https://www.nasdaq.com/articles/interesting-august-stock-options-ctl-2016-06-20
CC-MAIN-2021-21
refinedweb
495
63.8
Laughing Blog Tutorial Part 1-The Project Structurepublished on: | by In categories: Tutorial Series , Django My journey as a programmer has taught me one valuable lesson: To effectively learn how to write and understand programming concepts, never underestimate the value of building real-life applications. I remember staying up late trying to grasp the concepts of programming. But to make the concepts stick, I embarked on a journey to create achiengcindy.com and share my journey with you all. The aim of this tutorial series, The laughing blog, is to create a fully functional blog in Django. Overview We will implement functionalities such as : - Registration and Authentication - Newsletter - Sending Emails - Social media share - Blog.list of our blogs and a detailed view of each blog post The code can be found on github. If you follow through, you too can create your own blog or even earn from it. Prerequisites - Basic Git Knowledge - A github or bitbucket account.If you dont have,create one for free on github or bitbucket - Django basics.If you are new to django check my previous tutorial on Django Environment in Linux . - Text Editor of choice.I will be using Sublime Text.You can download it here. Setting up the laughing-blog project In this tutorial, you will learn how to create Django project structure, learn git and some very useful python libraries such as whitenoise and python decouple. I will use pip and virtualenv to create the project structure however, you can use pipenv let's get started Create a folder and name it tutorial where our project will be stored. mkdir tutorial && cd tutorial Nice, next we have to create a Virtual Environment for our project virtualenv env -p python3 specify the python version .We will use python3 To use our virtual environment we must activate it source env/bin/activate After activating the virtual environment. Install Django using the command below: $ pip3 install django We are going to create the project named laughing_blog by using the command below: $ django-admin startproject laughing_blog If you change Directory to laughing_blog, you should have a structure like this: laughing_blog --laughing_blog ----__init__.py ----settings.py ----urls.py ----wsgi.py --manage.py Change the outer laughing_blog to src ( it is just a container that holds our project). To make sure that django is successfully installed run server using the command and if all went well,you should see this page:and if all went well,you should see this page: python manage.py runserver Now we are all set to start writing codes. However, there are some configurations and libraries I want to introduce. Python-decouple Python Decouple will help us separate sensitive settings from the project.Storing passwords and other sensitive information such as secret key in settings.py is not a great idea and that is why we will use python-decouple. Install it using the following command: pip3 install python-decouple After successfully installing Python-decouple, create a .env text file on your project's root directory. Using Python Decouple .env is the file where all the sensitive information will be stored. So far, we need to store our secret key and debug status. It should look like this: SECRET_KEY = your key DEBUG = True settings.py Import config object and place it below import os. from decouple import config This is a snippet from my settings.py. import os from decouple import config Replace the secret key and debug with the following: SECRET_KEY = config('SECRET_KEY'') DEBUG = config('DEBUG', cast=bool) Git It is important to push your changes to a remote repository. Initialize git using: git init // Initializes git Next, we want git to ignore some files with secret information such as the settings, database, etc.Now,create a gitignore file and add the .env, *.pyc ,db.sqlite3 and any other text file in it.To create gitignore use the command: touch .gitignore Next, let's add all our changes, commit and push git add * //adds all changes git commit -m "initial commit" git remote add origin<your username>/laughing_blog.git git push -u origin master Templates Html is a static language used to display data on the browser. Django is dynamic and offers a way to display HTML dynamically by using the powerful in-built template tags. Django templating enable us to separate the presentation of a document from its data. We could simply embed HTML in python code, but it is not a good idea because: - In large projects, it is common to have front-end developers handling HTML and back-end developers handling python. If HTML is hard-coded in python code, it would be difficult for both developers to edit the same file at the same time without interference. - In a single application, you may need to write many lines of HTML codes and troubleshooting the code can be messy if HTML is hard-coded in the python code. Setting up Django Templates We want to create the templates directory in the projects' root directory. We can achieve this by modifying the settings.TEMPLATE-DIRS by adding this: "DIRS": [os.path.join(BASE_DIR, 'templates')], The DIRS defines a list of directories where Django should look for template source files. let's create the templates directory and then create a file called base.html to include the project's main HTML structure mkdir templates cd templates && touch base.html snippet for base.html {% load staticfiles %} <!Doctype html> <html> <head> <title> Laughing blog</title> </head> <body> {% block content %}{% endblock %} </body> </html> How to Serve Static Files in Django Web applications will need additional files like CSS, scripts, and Images for the application and user-uploaded content such as profiles pictures. These files can be categorized as: - Static files: Resource used by the application such as scripts, images - Media files: These are the content uploaded by the user, say user profile picture. We will talk about this later. Configuring Static Files Managing static files in django can be complicated especially if you are not familiar with Django.In settings.py make sure django.contrib.staticfiles in INSTALLED_APPS.In, Most cases,it is already defined. INSTALLED_APPS = [ ... 'django.contrib.staticfiles', ... ] STATIC_URL In settings.py you will find this line of code STATIC_URL = '/static/' This is where Django serves static files for a particular app in your project. Django allows you to have several static folders in a project. For this project, we will create just one static folder in the root project directory so let's configure staticfiles-dirs STATICFILES_DIRS Let's say you have a project and most apps share static assets like styling or images, or in addition to the static files tied to a particular app, you require additional static assets then define STATICFILES_DIRS. The STATICFILES_DIRS tuple tells Django where to look for static files that are not tied to a particular app. STATICFILES_DIRS = [ os.path.join(BASE_DIR, "static"), ] In this case, we just told Django to also look for static files in a folder called static in our project's root folder, not just in our apps. Then create the static directory in the project's root folder $ mkdir static Serving Static Files in Development When django.contrib.staticfiles is installed,running runserver command automatically serve the static files otherwise serve them manually by: Project's urls.py from django.conf import settings from django.conf.urls.static import static from django.views.generic import TemplateView urlpatterns = [ ... ] if settings.DEBUG: urlpatterns +=static(settings.MEDIA_URL,document_root=settings.MEDIA_ROOT) STATIC_ROOT This is the storage folder for every static files after running the collectstatic command.It collects all the static files in one place. Let us tell django to collect all our static files in a folder called staticfiles STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') This is very important in production and whitenoise handles this very well. Whitenoise Managing static files in production is even more complicated,at-least it was for me! To manage our static files with less hustle,we will install a 3rd party library whitenoise To install whitenoise, run: pip3 install whitenoise To use whitenoise in Django.We edit settings.py by adding it to MIDDLEWARE_CLASSESbelow the django SecurityMiddleware MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', ... ] To enable compression to add the following : STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage' Conclusion We created Virtual Environment, Installed Django, installed Python-decouple and created our template and static files directory. We also create gitignore file and added the .env text file we created. Meet you in the next tutorial!
https://achiengcindy.com/blog/2018/04/01/laughing-blog-tutorial-part-1-project-structure/
CC-MAIN-2019-30
refinedweb
1,415
56.25
I've had the exact same issues. Tryed every possible fix but dint work out for me. I really liked the interface of MyNodes.NET but the lack of support just forced me to choose another controller. Will check regularly for updates but wont expect much. Hope it will be updated once again Wodian @Wodian Best posts made by Wodian Latest posts made by Wodian - RE: ESP8266 Gateway with sensor attached - RE: Adding local sensor to NodeMCU gateway @Tmaster Still havent found a solution for this. At the moment im busy with Home Assistant and using the esp8266 for mqtt communication so I dropped mysensors for now. - - RE: Set/Reset states possible? posted in MyNodes.NET - RE: Adding local sensor to NodeMCU gateway Have been a while and trying to add code to my gateway without succes. Ive been trying to add a light sensor to my gateway but it isnt recognized by Mynodes controller. In the serial monitor I cannot see anything logging about a sensor. This is what Ive been using so "*************" #define MY_ESP8266_PASSWORD "*******" //,0,25 // 2 // Error led pin #define MY_DEFAULT_RX_LED_PIN 2 // Receive led pin #define MY_DEFAULT_TX_LED_PIN 2 // the PCB, on board LED #if defined(MY_USE_UDP) #include <WiFiUDP.h> #else #include <ESP8266WiFi.h> #endif #include <MySensors.h> #define CHILD_ID_LIGHT 0 #define LIGHT_SENSOR_ANALOG_PIN 0 unsigned long SLEEP_TIME = 30000; // Sleep time between reads (in milliseconds) MyMessage msg(CHILD_ID_LIGHT, V_LIGHT_LEVEL); int lastLightLevel; void presentation() { // Send the sketch version information to the gateway and Controller sendSketchInfo("Light Sensor", "1.0"); // Register all sensors to gateway (they will be created as child devices) present(CHILD_ID_LIGHT, S_LIGHT_LEVEL); } void loop() { int16_t lightLevel = (1023-analogRead(LIGHT_SENSOR_ANALOG_PIN))/10.23; Serial.println(lightLevel); if (lightLevel != lastLightLevel) { send(msg.set(lightLevel)); lastLightLevel = lightLevel; } sleep(SLEEP_TIME); } Have been working all night to get this thing up and running. Any thoughts? - RE: Set/Reset states possible? posted in MyNodes.NET - RE: Set/Reset states possible? posted in MyNodes.NET - RE: Set/Reset states possible? @derwish Using a set state (input =1 output=1) then saves the state (input=0 output =1). I cannot find any related to a set and reset node so I think ill need to make something myself. - Set/Reset states possible? Hello, Is it possible to build a step by step program using this controller? I've tryed to create custom nodes but I cannot find the Libs/Nodes/Custom folder.. Any help is appriciated. Regards, Wodian
https://forum.mysensors.org/user/wodian
CC-MAIN-2022-21
refinedweb
401
59.09
EJB 3.1 Proposed Final Draft Now Available By Ken Saks on Mar 11, 2009 I'm pleased to announce that the EJB 3.1 Proposed Final Draft is now available. We're well on our way towards finalizing what I'm sure will be a vastly improved version of Enterprise JavaBeans. The spec includes many clarifications to the requirements from the previous drafts, as well as some small feature improvements. Here are a few of the notable changes : Improved portable Local Session Bean lookups The EJB 3.1 Public Draft introduced portable global JNDI names for session beans. While the global syntax works great for Remote lookups, it is not ideal for dynamic lookups of Local session beans. We've gotten a lot of feedback that it's too cumbersome to have to declare an ejb-local-ref or @EJB annotation just to retrieve a session bean defined within the same module. The problem with the portable global JNDI name syntax is that it requires knowing the module name and potentially the application name in order to make the lookup. To minimize these dependencies, we're adding JNDI syntaxes based on two new portable naming scopes called java:module and java:app. java:module lookups are scoped to the module in which the lookup occurs. They allow session beans to be retrieved based only on the ejb-name. java:app lookups are scoped to the application in which the lookup occurs. They allow session beans to be retrieved based only on the module name and the ejb-name. For example, given the following Stateless session bean : 1: @Stateless 2: public class HelloBean { 3: public String hello() { return "hello, world\\n"; } 4: } Any code running within the same module as HelloBean can portably retrieve its reference as follows : 1: InitialContext ic = new InitialContext(); 2: HelloBean hb = (HelloBean) ic.lookup("java:module/HelloBean"); Likewise, if HelloBean is packaged in hello.jar, any code running in a module within the same .ear can portably retrieve its reference using the string "java:app/hello/HelloBean". See Section 4.4 for more details. Timezone support for calendar-based timers One of the limitations of calendar-based timers as defined in the Public Draft is that their schedules can only be specified relative to the time zone in which the application is deployed. That means if you want a calendar-based timeout to occur at a specific time in a given time zone, there's no easy way to do it. The Proposed Final Draft addresses this by allowing an optional time zone ID to be associated with a calendar-based timer. In that case, all timeouts occur relative to the specified time zone. Time zones are supported no matter how the calendar-based timer is defined : programmatically, via annotation, or in ejb-jar.xml. For example, the following code defines an automatic calendar-based timeout that occurs at 10 a.m. U.S. Eastern Time every day, independent of where the application itself is running : 1: @Schedule(hour="10", timezone="America/New_York") 2: public void timeout() { ... } See Section 18.2 for more details. Spec-defined stateful session bean timeouts Developers are always asking why there isn't a spec-defined way to specify the stateful session bean timeout value. Stateful session beans have always had the notion of a timeout, yet it has been left to the vendors to define the configuration for this value. The Proposed Final Draft defines a portable way to specify it. 1: @Stateful 2: @StatefulTimeout(value=10, unit=TimeUnit.MINUTES) 3: public class CartBean { ... } . See Section 4.3.12 for more details. Please check out the spec and leave feedback here or at jsr-318-comments@jcp.org. Note that given where we are in the process the top priority is fixing spec bugs and problems with the APIs. Of course, we're always interested in hearing feature requests for future EJB versions :-) hg in --quiet --template '{files}\\n' | sort | uniq Or is that getting you the same list? -kto Posted by oyun on January 16, 2010 at 10:00 AM EST #. Posted by tütünex on February 11, 2011 at 07:35 PM EST #
https://blogs.oracle.com/kensaks/entry/ejb_3_1_proposed_final
CC-MAIN-2016-22
refinedweb
695
64.2
> match-v3.3.src.rar > block.h /* block.h */ /* Copyright 2001 Vladimir Kolmogorov (vnk@cs.cornell.edu), Yuri Boykov (yuri@csd.uwo */ /* Template classes Block and DBlock Implement adding and deleting items of the same type in blocks. If there there are many items then using Block or DBlock is more efficient than using 'new' and 'delete' both in terms of memory and time since (1) On some systems there is some minimum amount of memory that 'new' can allocate (e.g., 64), so if items are small that a lot of memory is wasted. (2) 'new' and 'delete' are designed for items of varying size. If all items has the same size, then an algorithm for adding and deleting can be made more efficient. (3) All Block and DBlock functions are inline, so there are no extra function calls. Differences between Block and DBlock: (1) DBlock allows both adding and deleting items, whereas Block allows only adding items. (2) Block has an additional operation of scanning items added so far (in the order in which they were added). (3) Block allows to allocate several consecutive items at a time, whereas DBlock can add only a single item. Note that no constructors or destructors are called for items. Example usage for items of type 'MyType': /////////////////////////////////////////////////// #include "block.h" #define BLOCK_SIZE 1024 typedef struct { int a, b; } MyType; MyType *ptr, *array[10000]; ... Block *block = new Block (BLOCK_SIZE); // adding items for (int i=0; i New(); ptr -> a = ptr -> b = rand(); } // reading items for (ptr=block->ScanFirst(); ptr; ptr=block->ScanNext()) { printf("%d %d\n", ptr->a, ptr->b); } delete block; ... DBlock *dblock = new DBlock (BLOCK_SIZE); // adding items for (int i=0; i New(); } // deleting items for (int i=0; i Delete(array[i]); } // adding items for (int i=0; i New(); } delete dblock; /////////////////////////////////////////////////// Note that DBlock deletes items by marking them as empty (i.e., by adding them to the list of free items), so that this memory could be used for subsequently added items. Thus, at each moment the memory allocated is determined by the maximum number of items allocated simultaneously at earlier moments. All memory is deallocated only when the destructor is called. */ #ifndef __BLOCK_H__ #define __BLOCK_H__ #include /***********************************************************************/ /***********************************************************************/ /***********************************************************************/ template class Block { public: /* Constructor. Arguments are the block size and (optionally) the pointer to the function which will be called if allocation failed; the message passed to this function is "Not enough memory!" */ Block(int size, void (*err_function)(char *) = NULL) { first = last = NULL; block_size = size; error_function = err_function; } /* Destructor. Deallocates all items added so far */ ~Block() { while (first) { block *next = first -> next; delete first; first = next; } } /* Allocates 'num' consecutive items; returns pointer to the first item. 'num' cannot be greater than the block size since items must fit in one block */ Type *New(int num = 1) { Type *t; if (!last || last->current + num > last->last) { if (last && last->next) last = last -> next; else { block *next = (block *) new char [sizeof(block) + (block_size-1)*sizeof(Type)]; if (!next) { if (error_function) (*error_function)("Not enough memory!"); exit(1); } if (last) last -> next = next; else first = next; last = next; last -> current = & ( last -> data[0] ); last -> last = last -> current + block_size; last -> next = NULL; } } t = last -> current; last -> current += num; return t; } /* Returns the first item (or NULL, if no items were added) */ Type *ScanFirst() { scan_current_block = first; if (!scan_current_block) return NULL; scan_current_data = & ( scan_current_block -> data[0] ); return scan_current_data ++; } /* Returns the next item (or NULL, if all items have been read) Can be called only if previous ScanFirst() or ScanNext() call returned not NULL. */ Type *ScanNext() { if (scan_current_data >= scan_current_block -> current) { scan_current_block = scan_current_block -> next; if (!scan_current_block) return NULL; scan_current_data = & ( scan_current_block -> data[0] ); } return scan_current_data ++; } /* Marks all elements as empty */ void Reset() { block *b; if (!first) return; for (b=first; ; b=b->next) { b -> current = & ( b -> data[0] ); if (b == last) break; } last = first; } /***********************************************************************/ private: typedef struct block_st { Type *current, *last; struct block_st *next; Type data[1]; } block; int block_size; block *first; block *last; block *scan_current_block; Type *scan_current_data; void (*error_function)(char *); }; /***********************************************************************/ /***********************************************************************/ /***********************************************************************/ template class DBlock { public: /* Constructor. Arguments are the block size and (optionally) the pointer to the function which will be called if allocation failed; the message passed to this function is "Not enough memory!" */ DBlock(int size, void (*err_function)(char *) = NULL) { first = NULL; first_free = NULL; block_size = size; error_function = err_function; } /* Destructor. Deallocates all items added so far */ ~DBlock() { while (first) { block *next = first -> next; delete first; first = next; } } /* Allocates one item */ Type *New() { block_item *item; if (!first_free) { block *next = first; first = (block *) new char [sizeof(block) + (block_size-1)*sizeof(block_item)]; if (!first) { if (error_function) (*error_function)("Not enough memory!"); exit(1); } first_free = & (first -> data[0] ); for (item=first_free; item next_free = item + 1; item -> next_free = NULL; first -> next = next; } item = first_free; first_free = item -> next_free; return (Type *) item; } /* Deletes an item allocated previously */ void Delete(Type *t) { ((block_item *) t) -> next_free = first_free; first_free = (block_item *) t; } /***********************************************************************/ private: typedef union block_item_st { Type t; block_item_st *next_free; } block_item; typedef struct block_st { struct block_st *next; block_item data[1]; } block; int block_size; block *first; block_item *first_free; void (*error_function)(char *); }; #endif
http://read.pudn.com/downloads78/sourcecode/graph/texture_mapping/297638/match-v3.3.src/maxflow/adjacency_list/block.h__.htm
crawl-002
refinedweb
828
58.92
Linker Tools Warning LNK4248 unresolved typeref token (token) for 'type'; image may not run A type doesn’t have a definition in MSIL metadata. LNK4248 can occur when there is only a forward declaration for a type in an MSIL module (compiled with /clr), where the type is referenced in the MSIL module, and where the MSIL module is linked with a native module that has a definition for the type. In this situation, the linker will provide the native type definition in the MSIL metadata, and this may provide for the correct behavior. However, if a forward type declaration is a CLR type, then the linker's native type definition may not be correct For more information, see /clr (Common Language Runtime Compilation). To correct this error - Provide the type definition in the MSIL module. Examples The following sample generates LNK4248. Define struct A to resolve. // LNK4248.cpp // compile with: /clr /W1 // LNK4248 expected struct A; void Test(A*){} int main() { Test(0); } The following sample has a forward definition of a type. // LNK4248_2.cpp // compile with: /clr /c class A; // provide a definition for A here to resolve A * newA(); int valueA(A * a); int main() { A * a = newA(); return valueA(a); } The following sample generates LNK4248. // LNK4248_3.cpp // compile with: /c // post-build command: link LNK4248_2.obj LNK4248_3.obj class A { public: int b; }; A* newA() { return new A; } int valueA(A * a) { return (int)a; } Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/cpp/error-messages/tool-errors/linker-tools-warning-lnk4248?redirectedfrom=MSDN&view=msvc-170
CC-MAIN-2022-27
refinedweb
245
58.62
A snippet: package com.deitel.jhtp7.ch14; public class AccountRecord { ..... Doubt: 1. What does the above package statement imply. As in how(path) this AccountRecord.java will be stored? As in what folder hierarchy should I create? 2. In Command prompt I have set my Current Directory as : E:\RDL\Dropbox\Coding\Core Java\File So will the file AccountRecord.java be inside the above folder structure? 3. E:\RDL\Dropbox\Coding\Core Java> set path=%path%;C:\Program Files\Java\jdk1.6.0_24\bin This is how I set path to the jdk, is that of any significance for package as in do I have to do anything with this to store the .java and .class file and compile and execute process? 4. Also what should be the compile and execute command based on above package? Please explain in easy terms. Thanks in advance.
https://www.daniweb.com/programming/software-development/threads/407065/package-basic-understanding
CC-MAIN-2017-26
refinedweb
145
71.1
July 2017 Jul 2017-SP4 bugfix release (11.27.13) Build Environment - Added the .lib and .h files needed for building extensions to the Windows installer. Bug Fixes - 3470: Support setClob without length restrictions - 6468: JDBC 2.27 fails with year < 1000 - 6482: Query failures on order by on union - 6483: Monetdb crashes, on query - 6487: UNION of NULLs with several tables fails - 6488: Semijoin returns duplicate values from a column with unique values - 6489: Sqlitelogictest - Wrong result set of complex conditional query - 6490: Sqlitelogictest - Select query with an IN clause parse error - 6491: SELECT IN returns NULL instead of false when not found - 6492: Persistent hashes stored and then ignored. Storage info not in sync with actual indices. - 6493: Sqlitelogictest - Aggregation query on empty table with DISTINCT clause - 6494: Sqlitelogictest - Algebra operators priority in select query - 6495: Sqlitelogictest - Omitting AS in a result set column alias name - 6496: Sqlitelogictest - Select interval comparisons between floating-points and NULL - 6497: Sqlitelogictest - Select not between query producing wrong results - 6498: large virtual memory spike on BLOB column COUNT - 6499: Crash when trying to replace a function defined in sys from a different schema - 6502: Query with multiple limit clauses does not return anything - 6508: Segmentation fault in mserver5 on Python2 UDF with TIMESTAMP column input that has NULL values (conversion.c:438, PyNullMask_FromBAT) - 6510: Sqlitelogictest: Wrong output in aggregation query - 6512: Monetdb crashes on query with limit after sort with case - 6514: Sqlitelogictest: Range query between NULL values not possible - 6515: Insert null second interval value results in 0 - 6516: Sqlitelogictest unknown bat append operation - 6517: Sqlitelogictest overflow in conversion during MAL plan execution - 6518: Sqlitelogictest: count aggregation with not in operator - 6519: Sqlitelogictest: algebra join between lng and int BATs undefined - 6520: UPDATE with correlated subquery causes assertion (or segfault) - 6522: Sqlitelogictest: IN operator return a single column - 6523: Sqlitelogictest: Case statement subquery missing - 6524: Sqlitelogictest: Crash in aggregation query with IN operator - 6527: Crash using order by alias in subquery Jul 2017-SP3 bugfix release (11.27.11) MonetDB Common - Reimplemented summing of a column of floating point (flt and dbl) values. The old code could give wildly inaccurate results when adding up lots and lots of values due to lack of precision. Try SELECT sum(c) FROM t; where t is 100,000,000 rows, c is of type REAL and all values are equal to 1.1. (The old code returned 33554432 instead of 1.1e8.) Bug Fixes - 3898: Deadlock on insertion - 6429: ROUND produces wrong data type - 6436: Query sequence with 2x ifthenelse() and next nullif() causes mserver5 Segmentation fault - 6437: System schemas "profiler" and "json" shouldn't be allowed to be dropped. - 6439: Invalid references to sys.columns.id from sys.statistics.column_id - 6442: SEGFAULT with COPY INTO BEST EFFORT and skipping input columns - 6443: complex(?) query forgets(?) column name - 6444: Using 'with' keyword with table returning function crashes monetdb - 6445: Sqlitelogictest crash in MySQL query - 6446: sql_parser.y bug? - 6448: 'insert into' with multiple rows containing subqueries crashes - 6449: Assertion error in rel_dce_refs (sqlsmith) - 6450: Assertion error in exp_bin (sqlsmith) - 6451: Assertion error in sql_ref_dec (sqlsmith) - 6453: Assertion error in rel_rename_exps (sqlsmith) - 6454: SQL lexical error - 6455: Assertion error in rel_apply_rewrite (sqlsmith) - 6456: NULL becomes 0 in outer join - 6459: Assertion error in exp_bin (sqlsmith) - 6462: large virtual memory spike on BLOB column select - 6465: appending to variables sized atom bats other than str bats with force flag may result in corrupted heap - 6467: date_to_str formatter is wrong - 6470: mitosis gets in the way of simple select - 6471: calls to sys.generate_series should auto-convert arguments - 6472: Assertion failure in rel_rename (Sqlsmith) - 6477: assertion eror rel_push_project_up (sqlsmith) - 6478: Crash with nested order by/ limit offset - 6479: Mserver receives an assertion error on a procedure call - 6480: Segfault in mvc_find_subexp (sqlsmith). Important: we found a bug in the Jul2017 and Jul2017-SP1 releases (only) that can corrupt your database. We recommend to not upgrade to these two releases, but to skip directly to the Jul2017-SP2 release. See this email message. Jul 2017-SP1 bugfix release (11.27.5) Debian and Ubuntu uses may want to reimport the MonetDB GPG public key as per the instructions on the download page. See bug 6383 for more information. Build Environment - The Debian and Ubuntu installers have been fixed: there was a file missing in the Jul2017 release. - Added a new RPM called MonetDB-selinux which provides the SELinux policy required to run MonetDB under systemd, especially on Fedora 26. - The Windows installers (*.msi files) are now created using the WiX Toolset. - The Windows binaries are now built using Visual Studio 2015. Because of this, you may need to install the Visual C++ Redistributable for Visual Studio 2015 before being able to run MonetDB. Merovingian - monetdbd was leaking open file descriptors to the mserver5 process it started. This has been fixed. MonetDB Common - Many functions in GDK are now annotated with the GCC attribute __warn_unused_result__ meaning that the compiler will issue a warning if the result of the function (usually an indication of an error) is not used. Bug Fixes - 6325: Merge table unusable in other connections - 6328: Transactional/multi-connection issues with merge tables - 6336: VALUES multiple inserts error - 6339: Mserver5 crashes on nested SELECT - 6340: sample operator takes effect after the execution of the query, expected before - 6341: MERGE TABLE issue: Cannot register - 6342: MERGE TABLE issue: hang - 6344: Spurious errors and assertions (SQLsmith) - 6375: MAL profiler truncates JSON objects larger than 8192 characters Jul 2017 feature release (11.27.1) On Fedora 26 there is a known issue with this release. When you use MonetDB under systemd (if you have enabled the system service with systemctl enable monetdbd.service), and you have SELinux (Security-Enhanced Linux) enabled, the database server processes will not start when you try to connect. MonetDB5 Server - The "sub" prefix of many functions, both at the MAL and the C level, has been removed. - Changed the interfaces of the AUTH* functions: pass values, not pointers to values. - Removed calc.setoid(). - group.subgroup is now called group.group if it is not refining a group. Both group.group and group.subgroup now also have variants with a candidate list. - The allocation schemes for MAL blocks and Variables has been turned into block-based. This reduces the number of malloc()/free() calls. . - Added a new server-side protocol implementation. The new protocol is backwards compatible with the old protocol. Clients can choose whether they want to use the old or the new protocol during the initial handshake with the server. The new protocol is a binary column-based protocol that is significantly faster than the old protocol when transferring large result sets. In addition, the new protocol supports compression using Snappy or LZ4. - Moved the sphinx extension module to its own repository. See. - Removed GSL module: it's now a separate (extension) package. See. - The PCRE library is now optional for systems that support POSIX regular expressions. - Added 5 new sys schema tables: function_languages, function_types, key_types, index_types and privilege_codes. They are pre-loaded with static content and contain descriptive names for the various integer type and code values. See also sql/scripts/51_sys_schema_extension.sql Merovingian - Added handling of a dbextra property per database at the daemon level. The user can set the dbextra property for a database using the command: $ monetdb set dbextra=<path> <database> and the daemon will make sure to start the new server using the correct --dbextra parameter. Client Package - The mclient and msqldump programs lost compatibility with old mserver5 versions (pre 2014) which didn't have a "system" column in the sys.schemas table. - The mclient and msqldump programs lost compatibility with ancient mserver5 versions (pre 2011) which didn't have the sys.systemfunctions table. - Removed the "array" and "quick" functions from the mapi library. To be precise, the removed functions are: mapi_execute_array, mapi_fetch_field_array, mapi_prepare_array, mapi_query_array, mapi_quick_query, mapi_quick_query_array, and mapi_quick_response. - Added a more elaborate \help command for SQL expressions. MonetDB Common - Improved error checking in the logger code (dealing with the write-ahead log); changed return types a several functions from int to gdk_return (i.e., they now return GDK_SUCCEED or GDK_FAIL). The logger no longer calls GDKfatal on error. Instead the caller is responsible for dealing with errors. - BATsort may now create an order index as a by product. - Quantile calculations now use the order index if available (and use BATsort otherwise, producing an order index). - Quantiles calculate a position in the sorted column. If this position is not an integer, we now choose the nearest position, favoring the lower if the distance to the two adjacent positions is equal (round down to nearest integer). - Removed function BATprintf. Use BATprint or BATprintcolumns instead. - Removed BATsave from the list of exported functions. - Replaced BBPincref/BBPdecref with BBPfix/BBPunfix for physical reference count and BBPretain/BBPrelease for logical reference count maintenance. - Removed automatic conversion of 32-bit OIDs to 64 bits on 64-bit architectures. - Removed functions OIDbase() and OIDnew(). - Removed talign field from BAT descriptor. - BATappend now takes an optional (NULL if not used) candidate list for the to-be-appended BAT. - New function BATkeyed(BAT *b) that determines (possibly using a hash table) whether all values in b are distinct. SQL - Made the operator precedence of % equal to those of * and /. All three are evaluated from left to right. - Removed table sys.connections. It was a remnant of an experimental change that had already been removed in 2012. - Protect against runaway profiler events If you hit a barrier block during profiling, the JSON event log may quickly become unwieldy. Event production is protected using a high water mark, which ensures that never within the single execution of MAL block the instruction causes excessive event records. Bug Fixes - 3465: Request: add support for CREATE VIEW with ORDER BY clause - 3545: monetdb commands don't work with -h -P -p options (locally and remotely) - 3996: select * from sys.connections always returns 0 rows. Expected to see at least one row for the active connection. - 6187: Nested WITH queries not supported - 6225: Order of evaluation of the modulo operator - 6289: Crashes and hangs with remote tables - 6292: Runaway SQL optimizer in too many nested operators - 6310: Name resolution error (sqlsmith) - 6312: Object not found in LIMIT clause (sqlsmith) - 6313: Null type resolution in disjunction fails (sqlsmith) - 6319: Server crash on LATERAL (sqlsmith) - 6322: Crash on disjunction with LIMIT (sqlsmith) - 6323: Deadlock calling sys.bbp() - 6324: Sqlitelogictest crash in a IN query (8th) - 6327: The daemon does not respect the actual name of the mserver5 executable - 6330: Sqlitelogictest crash on a complex SELECT query - 6331: sys.statistics column "nils" always contains 0. Expected a positive value for columns that have one or more nils/NULLs - 6332: Sqlitelogictest crash related to an undefined MAL function
https://monetdb.org/OldReleaseNotes/Jul2017
CC-MAIN-2021-17
refinedweb
1,801
52.8
Files The most common way to persist data to storage is to use a file. Files allow you to write text or binary content to them and then easily read said content. To manipulate files, you can use the File class, which provides static methods for dealing with files. The File class is defined within the System.IO namespace, so before using the File class, you need to import the System.IO namespace: using System.IO; string Path = @"textfile.txt"; string str1 = "This is a string."; string str2 = "This is another string."; StreamWriter sw = File.AppendText(Path); sw.Write(str1); sw.WriteLine(str2); sw.Close(); To open a text file for reading, use the File class' OpenText() method: StreamReader sr = File.OpenText(Path); string strRead; while ((strRead = sr.ReadLine()) != null) { MessageBox.Show(strRead); } To deal with binary contents, you should use the File class' OpenRead() method to open a file for reading and the OpenWrite() method to open it for writing. Both methods return a FileStream object (also defined in the System.IO namespace). You can then use the BinaryReader class to read binary data from the FileStream object, and the BinaryWriter class to write binary data to a FileStream object. Listing 1 shows how to copy an image (a binary file) byte-by-byte into another file, essentially making a copy of the file. While writing to and reading from files is a straightforward affair, the downside is that there is no sophisticated mechanism to help you manage the content of the file. For example, suppose you want to replace part of the file with some data. You'd need to seek to the exact location of the data before you could replace it. And in most cases, this simple task involves reading from the original file, filtering the necessary data, and then rewriting the data back to the original file. Hence, you should use files for storing simple datacomments, error logs, and so on. For more structured data, using a database is more appropriate. WebMediaBrands Corporate Info Legal Notices, Licensing, Reprints, Permissions, Privacy Policy. Advertise | Newsletters | Shopping | E-mail Offers | Freelance Jobs
http://www.devx.com/wireless/Article/38433
crawl-002
refinedweb
354
65.83
Hi Roger, There are a couple of ways that you can integrate the two (or Axis and any XML serializing mechanism for that matter). One way is discussed in this article by IBM... They are using Castor, but the underlying theme is the same. Alternatively, you may use a "message" style service... This is from an earlier post: --- An alternative to creating a custom serializer is to use the doc/literal style service and use one of the 4 method signatures that Axis allows. You can inspect the incoming XML to determine what kind of message you have received. From there, use the utilities provided by the JAXB api to translate the XML message into JAXB objects, and perform your business logic. Then, create output objects using JAXB and serialize them to XML--return that from your web service method. So your webservice "controller" class could have a method like this: public Document doService(Document body) throws AxisFault { // inspect the document to see what "kind" of message you have received.. // deserialize the message jc = JAXBContext.newInstance(<your namespace>); u = jc.createUnmarshaller(); m = jc.createMarshaller(); requestObj = u.unmarshal(body); // perform your domain logic on this object // serialize and return your response returnDoc = XMLUtils.newDocument(); m.marshal(responseObj, returnDoc); return returnDoc; } --- As for reuse of the XSD within the WSDL, you are on the right track. You can use import statements in your WSDL to reference the XSD namespace. I have not generated wsdl for a doc/lit using JAXB before--but you should be able to by defining some interface class and using your JAXB objects... Then you would have to massage that output to use your XSD instead of the schema that is generated within the WSDL--someone please correct me if there is another best practice for that? Hope this helps a bit, pc On Fri, 14 Jan 2005 12:18:39 +0100, roger.stoffers@vodafone.com <roger.stoffers@vodafone.com> wrote: > Hi, > > In existing applications, I am using existing XML schema's. Also I use JAXB > to facilitate marshalling/unmarshalling in existing applications. Part of > this application functionality I would like to expose through SOAP. JAAS may > be used to create document/literal style messages however I would like to > use AXIS, since it better fits my needs. > > Therefore, I would like to try to include those existing schemas in my WSDL > file but how can I use AXIS document style and also use JAXB for object > serialization/deserialization? > > Who did this before? Some hints would be appreciated, an example would be > excellent? > > Roger Stoffers > Vodafone Netherlands >
http://mail-archives.apache.org/mod_mbox/axis-java-user/200501.mbox/%3Cb80c68f105011405434ad57e28@mail.gmail.com%3E
CC-MAIN-2018-09
refinedweb
429
56.35
Writing a simple Sublime Text plugin. by Slavko Pesic, Web Progammer. One of the neat Sublime Text features is that it provides you with a list of commands which you can extend (or write your own) and assign them to different key binds. In this blog post I will go over configuring key binds and use insert_snippet command to generate some debug statements and then we will write few lines of python to extend insert_snippet to use text from clipboard as well. Lets start with key mapping. If you go to Sublime Text preferences you will find a two item grouping for key binds (User and Default). Your new key binds should always go into User. Default should never be modified. If you wish to change any of the default key binds you can override them in the user config instead. Sublime Text will always load default config first, followed by OS specific config and user config last (each overriding previous definitions if needed). Ok, lets add couple of simple key binds to our config file: // Default.sublime-keymap [ { "keys": ["ctrl+shift+h"], "command": "insert_snippet", "args": { "contents": "console.log('=== HEARTBEAT $TM_FILENAME [$TM_LINE_NUMBER] ===');${0}" }, "context": [{ "key": "selector", "operator": "equal", "operand": "source.js", "match_all": true }] }, { "keys": ["ctrl+shift+d"], "command": "insert_snippet", "args": { "contents": "console.log('=== $SELECTION $TM_FILENAME [$TM_LINE_NUMBER] ===', $SELECTION);${0}" }, "context": [{ "key": "selector", "operator": "equal", "operand": "source.js", "match_all": true }] } ] The config is simple array of JSON objects, each containing a set of rules for particular key bind. I have configured two key binds: ctrl+shift+h and ctrl+shift+d. Both use insert_snippet command and are defined within the context of "source.js" (I will explain in a bit). We are passing "console.log(...)" as an argument to insert_snippet in both cases. This is the string that will be inserted at the cursor position once we use ctrl+shift+h or ctrl+shift+d. $TM_LINE_NUMBER, $TM_FILENAME and $SELECTION are environment variables which will be dynamically replaced by Sublime Text at insert time. The following snippet - ${0} will set the caret at this position once our console.log is generated. The context allows you to write language specific key binds. In my case these will work with javascript files. You can have the same key bind with different implementations specific to the programming language you are working in. We can duplicate these two blocks and replace source.js with source.php to make it work with php and change console.log to print_r or dpm (or a different debug function) and Sublime Text will pick the correct snippet to insert depending on the language we are working in. This is a sample output of the two key binds we defined above: // ctrl+shift+h // 477 is a line number // some_file.js is the current js file we are working in. console.log('=== HEARTBEAT some_file.js [477] ==='); // ctrl+shift+d // 478 is a line number // testvar was string we had selected when we pressed our key combination // some_file.js is the current js file we are working in. console.log('=== testvar some_file.js [478] ===', testvar); So we have two key binds, one inserting a general debug heartbeat and the other one printing contents of a selected variable. That's ok so far, but I really wanted to be able to use ctrl+shift+d to create a var dump statement of a variable (string) that is in the clipboard as a fallback or use selected text as it behaves currently. Unfortunately insert_snippet doesn't have access to the clipboard content and we don't have environment variable that contains clipboard content either. There is a paste method in Sublime Text, but unfortunately we are unable to wrap arbitrary string around the clipboard content and can only paste clipboard content alone. We have exhausted all the available resources and will have to write few lines of python and create our own plugin/command that will extend the functionality of insert_snippet and allow it to use the contents from clipboard if needed. Lets write our first Sublime Text plugin that will handle the functionality we outlined above. We start by going to Tools > New Plugin... which will generate a template for our new plugin. The code stub will look something like this: import sublime, sublime_plugin class ExampleCommand(sublime_plugin.TextCommand): def run(self, edit): self.view.insert(edit, 0, "Hello, World!") I rewrote this template and my plugin looks something like this: # insert_snippet_and_clipboard.py import sublime, sublime_plugin class InsertSnippetAndClipboardCommand(sublime_plugin.TextCommand): def run(self, edit, **args): for region in self.view.sel(): if not region.empty(): replacement = self.view.substr(region) args['contents'] = args['contents'].replace('$SELECTION_OR_CLIPBOARD', replacement) self.view.run_command('insert_snippet', args) else: replacement = sublime.get_clipboard().strip() args['contents'] = args['contents'].replace('$SELECTION_OR_CLIPBOARD', replacement) self.view.run_command('insert_snippet', args) You can now save the file as insert_snippet_and_clipboard.py within packages/user/. You can open a Sublime Text console using Ctrl+` and debug your new plugin during development by calling your command using view.run_command("example"). You can pass optional arguments to your command by passing them to run_command like this: view.run_command("example", args). By following Sublime Text convention and naming our class SomeFunctionNameCommand(sublime_plugin.TextCommand): we are creating a text command named some_function_name. In our example we are creating insert_snippet_and_clipboard command which will provide user with $SELECTION_OR_CLIPBOARD environment variable. This environment variable will be populated at insert time. In this implementation we are prioritizing selected text, if no text is selected we are using the last clipboard snipped, and as a fallback we will replace the variable with an empty string. And finally, lets update our key bind ctrl+shift+d to use insert_snippet_and_clipboard command: // Default.sublime-keymap [ { "keys": ["ctrl+shift+h"], "command": "insert_snippet", "args": { "contents": "console.log('=== HEARTBEAT $TM_FILENAME [$TM_LINE_NUMBER] ===');${0}" }, "context": [{ "key": "selector", "operator": "equal", "operand": "source.js", "match_all": true }] }, { "keys": ["ctrl+shift+d"], "command": "insert_snippet_and_clipboard", "args": { "contents": "console.log('=== $SELECTION_OR_CLIPBOARD $TM_FILENAME [$TM_LINE_NUMBER] ===', $SELECTION_OR_CLIPBOARD);${0}" }, "context": [{ "key": "selector", "operator": "equal", "operand": "source.js", "match_all": true }] } ] And that is it. We should be able to generate some var debug statements right away by either selecting a piece of text (or copying it) and hitting ctrl+shift+d. Add new comment
http://www.metaltoad.com/blog/writing-simple-sublime-text-plugin
CC-MAIN-2014-15
refinedweb
1,027
50.53
January 2017 Volume 32 Number 1 [HoloLens] Introduction to the HoloLens, Part 2: Spatial Mapping By Adam Tuliper | January 2017 In my last article, I talked about the three pillars of input for the HoloLens—gaze, gesture and voice (msdn.com/magazine/mt788624). These constructs allow you to physically interact with the HoloLens and, in turn, the world around you. You’re not constrained to working only with them, however, because you can access information about your surroundings through a feature called spatial mapping, and that’s what I’m going to explore in this article. If I had to choose a single favorite feature on the HoloLens, it would be spatial mapping. Spatial mapping allows you to understand the space around you, either explicitly or implicitly. I can explicitly choose to work with the information taken in, or I can proceed implicitly by allowing natural physical interactions, like dropping a virtual ball onto a physical table, to take place. Recently, with some really neat updates to the HoloToolkit from Asobo Studio, it’s easy to scan for features in your environment, such as a chair, walls and more. What Is a 3D Model? It might be helpful to understand what a 3D model is before looking at what a spatial map of your area represents. 3D models come in a number of file formats, such as .ma or .blender, but often you’ll find them in either of two proprietary Autodesk formats called .FBX (Filmbox) or .OBJ files. .FBX files can contain not only 3D model information, but also animation data, though that isn’t applicable to this discussion. A 3D model is a fairly simple object, commonly tracked via face-vertex meshes, which means tracking faces and vertices. For nearly all modern hardware, triangles are used for faces because triangles are the simplest of polygons. Inside a 3D model you’ll find a list of all vertices in the model (made up of x,y,z values in space); a list of the vertex indices that make up each triangle; normals, which are just descriptive vectors (arrows) coming off each vertex used for lighting calculations so you know how light should interact with your model; and, finally, UV coordinates—essentially X,Y coordinates that tell you how to take a 2D image, called a texture, and wrap it around your model like wrapping paper to make it look like it was designed. Figure 1 shows virtual Adam, a model that the company xxArray created for me because, well, I wanted to put myself into a scene with zombies. This is just a 3D model, but note the legs, which are made of vertices and triangles, and that the pants texture is, in simple terms, wrapped around the 3D model of the legs to look like pants. That’s nearly all the magic behind a 3D model. Figure 1 UV Mapping of 2D Texture to 3D Object .png) What Does Spatial Mapping Look Like? Spatial mapping is easier in some ways because you’re not dealing with the textures of your environment. All you typically care about is having a fairly accurate mesh created from your environment that can be discovered. The environment is scanned so you can interact with it. Figure 2 shows a scenario slightly more like what you’ll actually get, though contrived. The model on the left shows the vertices, triangles and normals. You can’t see the normal directly, of course, but you see its result by how the object is shaded. Figure 2 What’s Needed for Rendering and for the Physics Engine .png) What you’ve seen thus far in both 3D model scenarios is purely for rendering and has absolutely nothing to do (yet) with physics. The green box outline on the right in Figure 2 is the shape of the collider I’ve moved off the cube to show a point; this is the component that defines the region to the physics system. If you want to fully interact with the world on the HoloLens, a game or in any 3D experience, really, you need a collider for the physics system to use. When you turn the HoloLens on and are in the holographic shell, it’s always mapping your environment. The HoloLens does this to understand where to place your windows. If I walk around my house with the HoloLens, it’s always updating its information about my environment. This serves two purposes: First, when I walk into a room I’ve been in previously, the HoloLens should show me the windows I had open. Second, environments are always changing and it needs to detect those changes. Think of the following common scenarios: someone walks in front of me, my kids are running around in the house, our pet bear walks by and creates a large occlusion zone I can’t see through. The point is, the environment is potentially always changing and the HoloLens is looking for these changes. Before delving into the API, let’s see the spatial mapping in practice (and, by the way, I don’t have a real pet bear). To view spatial mapping in action, you can connect to the Windows Device Portal on a HoloLens, which allows remote management and viewing of the device, including a live 30 FPS video stream of what the device sees. The device portal can be run for nearly any Windows 10 device. It can be accessed by going to the device IP, or to 127.0.0.1:10080 for devices plugged in over USB once it’s been enabled on the HoloLens in the Developer Settings. Most Windows 10 devices can be enabled for a device portal as outlined at bit.ly/2f0cnfM. Figure 3 and Figure 4 show the spatial mesh retrieved from the 3D view in the device portal. Figure 3 shows what the HoloLens sees as soon as I turn it on, while Figure 4 displays the view after a brief walk through my living room. Note the chair next to the far wall on the right, as that appears later on (in Figure 9) when I ask the spatial understanding library to find me a sittable surface. Figure 3 HoloLens Spatial Mesh Right After HoloLens Is Turned on in a New Room .png) Figure 4 HoloLens Spatial Mesh After a Quick Walk-Through a Portion of the Room .png) How Spatial Mapping Works Spatial mapping works via a SurfaceObserver object, as you’re observing surface volumes, watching for new, updated and removed surfaces. All the types you need to work with come with Unity out of the box. You don’t need any additional libraries, though the HoloToolkit-Unity repository on GitHub has lots of functionality for the HoloLens, including some amazing surface detection I’ll look at later, so this repository should be considered essential for hitting the ground running. First, you tell the SurfaceObserver that you’re observing a volume: public Vector3 Extents = new Vector3(10, 10, 10); observer = new SurfaceObserver(); // Start from 0,0,0 and fill in a 10 meter cube volume // as you explore more of that volume area observer.SetVolumeAsAxisAlignedBox(Vector3.zero,Extents); The larger the region, the greater the computational cost that can occur. According to the documentation, spatial mapping scans in a 70-degree cone a region between 0.8 and 3.1 meters—about 10 feet out (the docs state these values might change in the future). If an object is further away, it won’t be scanned until the HoloLens gets closer to it. Keeping to 0.8 meters also ensures the user’s hands won’t accidentally be included as part of the spatial mesh of the room. The process to get spatial data into an application is as follows: - Notify the SurfaceObserver to observe a region of size A and shape B. - At a predefined interval (such as every 3 seconds), ask the SurfaceObserver for an update if you aren’t waiting on other results to be processed. (It’s best not to overlap results; let one mesh finish before the next is processed.) - Surface Observer lets you know if there’s an add, update or removal of a surface volume. - If there’s an add or update to your known spatial mesh: - Clean up old surface if one exists for this id. - Reuse (to save memory, if you have a surface that isn’t being used) or allocate a new SurfaceObject with mesh, collider and world anchor components. - Make an async request to bake the mesh data. - If there’s a removal, remove the volume and make it inactive so you can reuse its game object later (this prevents additional allocations and thus fewer garbage collections). To use spatial mapping, SpatialPerception is a required capability in a Universal Windows Platform (UWP) app. Because an end user should be aware that an application can scan the room, this needs to be noted in the capabilities either in the Unity player settings as shown in Figure 5, or added manually in your application’s package.appxmanifest. Figure 5 Adding SpatialPerception in File-Build Settings .png) The spatial meshes are processed in surface volumes that are different from the bounding volume defined for the SurfaceObserver to observe. The key is once the SurfaceObserver_OnSurface delegate is called to note surface volume changes, you request the changes in the next frame. The meshes are then prepared in a process called baking, and a SurfaceObserver_OnDataReady callback is processed when the mesh is ready. Baking is a standard term in the 3D universe that usually refers to calculating something ahead of time. It’s typically used to talk about calculating lighting information and transferring it to a special image called a lightmap in the baking process. Lightmaps help avoid runtime calculations. Baking a mesh can take several frames from the time you ask for it in your Update function (see Figure 6). For performance’s sake, request the mesh only from RequestMeshAsync if you’re actually going to use it, otherwise you’re doing extra processing when you bake it for no reason. Figure 6 The Update Function private void Update() { // Only do processing if you should be observing. // This is a flag that should be turned on or off. if (ObserverState == ObserverStates.Running) { // If you don't have a mesh creation pending but you could // schedule a mesh creation now, do it! if (surfaceWorkOutstanding == false && surfaceWorkQueue.Count > 0) { SurfaceData surfaceData = surfaceWorkQueue.Dequeue(); // If RequestMeshAsync succeeds, then you've scheduled mesh creation. // OnDataReady is left out of this demo code, as it performs // some basic cleanup and sets some material/shadow settings. surfaceWorkOutstanding = observer.RequestMeshAsync(surfaceData, SurfaceObserver_OnDataReady); } // If you don't have any other work to do, and enough time has passed since // previous update request, request updates for the spatial mapping data. else if (surfaceWorkOutstanding == false && (Time.time - updateTime) >= TimeBetweenUpdates) { // You could choose a new origin here if you need to scan // a new area extending out from the original or make Extents bigger. observer.SetVolumeAsAxisAlignedBox(observerOrigin, Extents); observer.Update(SurfaceObserver_OnSurfaceChanged); updateTime = Time.time; } } } private void SurfaceObserver_OnSurfaceChanged( SurfaceId id, SurfaceChange changeType, Bounds bounds, System.DateTime updateTime) { GameObject surface; switch (changeType) { case SurfaceChange.Added: case SurfaceChange.Updated: // Create (or get existing if updating) object on a custom layer. // This creates the new game object to hold a piece // of the spatial mesh. surface = GetSurfaceObject(id.handle, transform); // Queue the request for mesh data to be handled later. QueueSurfaceDataRequest(id, surface); break; case SurfaceChange.Removed: // Remove surface from list. // ... break; } } The Update code is called every frame on any game object deemed responsible for getting the spatial meshes. When surface volume baking is requested via RequestMeshAsync, the request is passed a SurfaceData structure in which you can specify the scanning density (resolution) in triangles per cubic meter to process. When TrianglesPerCubicMeter is greater than 1000, you get fairly smooth results that more closely match the surfaces you’re scanning. On the other hand, the lower the triangle count, the better the performance. A resolution of <100 is very fast, but you lose surface details, so I recommend trying 500 to start and adjusting from there. Figure 7 uses about 500 TrianglesPerCubicMeter. The HoloLens already does some optimizations on the mesh, so you’ll need to performance test your applications and make a determination whether you want to scan and fix up more (use less memory) or just scan at a higher resolution, which is easier but uses more memory. Figure 7 A Virtual Character Detecting and Sitting on a Real-World Item (from the Fragments Application) .png) Creating the spatial mesh isn’t a super high-resolution process by design because higher resolution equals significantly more processing power and usually isn’t necessary to interact with the world around you. You won’t be using spatial mapping to capture a highly detailed small figurine on your countertop—that’s not what it’s designed for. There are plenty of software solutions for that, though, via a technique called photogrammetry, which can be used for creating 3D models from images, such as Microsoft 3D Builder, and many others listed at bit.ly/2fzcH1z and bit.ly/1UjAt1e. The HoloLens doesn’t include anything for scanning and capturing a textured 3D model, but you can find applications to create 3D models on the HoloLens, such as HoloStudio, or you can create them in 3D Builder (or in any 3D modeling software for that matter) and bring them into Unity to use on the HoloLens. You can also now live stream models from Unity to the HoloLens during development with the new Holographic emulation in Unity 5.5. Mesh colliders in Unity are the least-performant colliders, but they’re necessary for surfaces that don’t fit primitive shapes like boxes and spheres. As you add more triangles on the surfaces and add mesh colliders to them, you can impact physics performance. SurfaceData’s last parameter is whether to bake a collider: SurfaceData surfaceData = new SurfaceData(id, surface.GetComponent<MeshFilter>(), surface.GetComponent<WorldAnchor>(), surface.GetComponent<MeshCollider>(), TrianglesPerCubicMeter, bakeCollider); You may never need a collider on your spatial mesh (and thus pass in bakeCollider=false) if you only want to detect features in the user’s space, but not integrate with the physics system. Choose wisely. There are plenty of considerations for the scanning experience when using spatial mapping. Applications may opt not to scan, to scan only part of the environment or to ask users to scan their environment looking for certain-size surfaces like a couch. Design guidelines are listed on the “Spatial Mapping Design” page of the Windows Dev Center (bit.ly/2gDqQQi) and are important to consider, especially because understating scenarios can introduce various imperfections into your mesh, which fall into three general categories discussed on the “Spatial Mapping Design” page—bias, hallucinations and holes. One workflow would be to ask the user to scan everything up front, such as is done at the beginning of every “RoboRaid” session to find the appropriate surfaces for the game to work with. Once you’ve found applicable surfaces to use, the experience starts and uses the meshes that have been provided. Another workflow is to scan up front, then scan continually at a smaller interval to find real-world changes. Working with the Spatial Mesh Once the mesh has been created, you can interact with it in various ways. If you use the HoloToolkit, the spatial mesh has been created with a custom layer attribute. In Unity you can ignore or include layers in various operations. You can shoot an invisible arrow out in a common operation called a raycast, and it will return the colliders that it hit on the optionally specified layer. Often I’ll want to place holograms in my environment, on a table or, even like in “Young Conker” (bit.ly/2f4Ci4F), provide a location for the character to move to by selecting an area in the real world (via the spatial mesh) to which to go. You need to understand where you can intersect with the physical world. The code in Figure 8 performs a raycast out to 30 meters, but will report back only areas hit on the spatial mapping mesh. Other holograms are ignored if they aren’t on this layer. Figure 8 Performing a Raycast // Do a raycast into the world that will only hit the Spatial Mapping mesh. var headPosition = Camera.main.transform.position; var gazeDirection = Camera.main.transform.forward; RaycastHit hitInfo; // Ensure you specify a length as a best practice. Shorter is better as // performance hit goes up roughly linearly with length. if (Physics.Raycast(headPosition, gazeDirection, out hitInfo, 10.0f, SpatialMappingManager.Instance.LayerMask)) { // Move this object to where the raycast hit the Spatial Mapping mesh. this.transform.position = hitInfo.point; // Rotate this object to face the user. Quaternion rotation = Camera.main.transform.localRotation; rotation.x = 0; rotation.z = 0; transform.rotation = rotation; } I don’t have to use the spatial mesh, of course. If I want a hologram to show up and the user to be able to place it wherever he wants (maybe it always follows him) and it will never integrate with the physical environment, I surely don’t need a raycast or even the mesh collider. Now let’s do something fun with the mesh. I want to try to determine where in my living room an area exists that a character could sit down, much like the scene in Figure 7, which is from “Fragments,” an amazing nearly five-hour mystery-solving experience for the HoloLens that has virtual characters sitting in your room at times. Some of the code I’ll walk through is from the HoloToolkit. It came from Asobo Studio, which worked on “Fragments.” Because this is mixed reality, it’s just plain awesome to develop experiences that mix the real world with the virtual world. Figure 9 is the end result from a HoloToolkit-Examples—SpatialUnderstandingExample scene that I’ve run in my living room. Note that it indicates several locations that were identified as sittable areas. Figure 9 The HoloToolkit SpatialUnderstanding Functionality .jpg) The entire code example for this is in the HoloToolkit, but let’s walk through the process. I’ve trimmed down the code into applicable pieces. (I’ve talked about SurfaceObserver already so that will be excluded from this section.) SpatialUnderstandingSourceMesh wraps the SurfaceObserver through a SpatialMappingObserver class to process meshes and will create the appropriate MeshData objects to pass to the SpatialUnderstaing DLL. The main force of this API lies in this DLL in the HoloToolkit. In order to look for shapes in my spatial mesh using the DLL, I must define the custom shape I’m looking for. If I want a sittable surface that’s between 0.2 and 0.6 meters off the floor, made of at least one discrete flat surface, and a total surface area minimum of 0.2 meters, I can create a shape definition that will get passed to the DLL through AddShape (see Figure 10). Figure 10 Creating a Shape Definition ShapeDefinitions.cs // A "Sittable" space definition..20f), }), }; // Tell the DLL about this shape is called Sittable. AddShape("Sittable", shapeComponents); Next, I can detect the regions and then visualize or place game objects there. I’m not limited to asking for a type of shape and getting all of them. If I want, I can structure my query to QueryTopology_FindLargePositionsOnWalls or QueryTopology_FindLargestWall, as shown in Figure 11. Figure 11 Querying for a Shape SpaceVisualizer.cs (abbreviated) const int QueryResultMaxCount = 512; private ShapeResult[] resultsShape = new ShapeResult[QueryResultMaxCount]; public GameObject Beacon; public void FindSittableLocations() { // Pin managed object memory going to native code. IntPtr resultsShapePtr = SpatialUnderstanding.Instance.UnderstandingDLL. PinObject(resultsShape); // Find the half dimensions of "Sittable" objects via the DLL. int shapeCount = SpatialUnderstandingDllShapes.QueryShape_FindShapeHalfDims( "Sittable", resultsShape.Length, resultsShapePtr); // Process found results. for(int i=0;i<shapeCount;i++) { // Create a beacon at each "sittable" location. Instantiate(Beacon, resultsShape[i].position, Quaternion.identity); // Log the half bounds of our sittable area. Console.WriteLine(resultsShape[i].halfDims.sqrMagnitude < 0.01f) ? new Vector3(0.25f, 0.025f, 0.25f) : resultsShape[i].halfDims) } } There’s also a solver in the HoloToolkit that allows you to provide criteria, such as “Create 1.5 meters away from other objects”: List<ObjectPlacementRule> rules = new List<ObjectPlacementRule>() { ObjectPlacementRule.Create_AwayFromOtherObjects(1.5f), }; // Simplified api for demo purpose – see LevelSolver.cs in the HoloToolkit. var queryResults = Solver_PlaceObject(....) After executing the preceding query to place an object, you get back a list of results you can use to determine the location, bounds and directional vectors to find the orientation of the surface: public class ObjectPlacementResult { public Vector3 Position; public Vector3 HalfDims; public Vector3 Forward; public Vector3 Right; public Vector3 Up; }; Wrapping Up Spatial mapping lets you truly integrate with the world around you and engage in mixed-reality experiences. You can guide a user to scan her environment and then give her feedback about what you’ve found, as well as smartly determine her environment for your holograms to interact with her. There’s no other device like the HoloLens for mixing worlds. Check out HoloLens.com and start developing mind-blowing experiences today. Next time around, I’ll talk about shared experiences on the HoloLens. Until then, keep developing! Adam Tuliper is a senior technical evangelist with Microsoft living in sunny SoCal. He’s a Web dev/game dev Pluralsight.com author and all-around tech lover. Find him on Twitter: @AdamTuliper or at adamtuliper.com. Thanks to the following Microsoft technical expert for reviewing this article:Jackson Fields
https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/january/hololens-introduction-to-the-hololens-part-2-spatial-mapping
CC-MAIN-2020-05
refinedweb
3,600
54.02
- Advertisement toucelMember Content Count117 Joined Last visited Community Reputation188 Neutral About toucel - RankMember - or a little more simply: unsigned int getMaxNumberOfDrops(unsigned int floorsCount) { return (unsigned int)ceil( (sqrt( 1.0 + 8.0*floorsCount ) - 1.0) / 2.0 ); } this is simply solving the quadratic formula (and because we are dealing with integers rounding to the highest int) x(x + 1) / 2 >= 100 x*x + x >= 200 ax*x + bx + c >= 0 a = 1 b = 1 c = -200 ( -b +- sqrt( b*b - 4ac ) ) / 2 = ( -1 + sqrt( 1 - 4*-200 ) ) / 2 = (sqrt( 1 + 800 ) - 1) / 2 note we do not need to consider the alternate case given to us by the quadratic formula because we are dealing with positive whole numbers in this case. - Here is kind of a cleaned up version... int getNumberOfDrops(unsigned int floor, unsigned int topFloor) { if (floor < 1 || floor > topFloor) return -1; unsigned int step = (int)ceil( (sqrt( 1.0f + 8.0f*(float)topFloor ) - 1.0f) / 2.0f ); unsigned int curFloor = step; unsigned int drops = 0; while (++drops) { if (curFloor >= floor) { for (unsigned int i = step - 1; i > 0 && step > 1; --i) { drops++; if ( curFloor - i >= floor) break; } return drops; } curFloor += --step; if (curFloor > topFloor) { step = topFloor - (curFloor - step); curFloor = topFloor; } } return drops; } ... getNumberOfDrops(13, 100); //returns 14 ... [Edited by - toucel on December 8, 2006 5:09:08 PM] - That was an interesting problem. Thanks for sharing. Here's some example code demonstrating (what we have found to be) the optimal searching pattern for an arbitrary number of total floors: unsigned int topFloor = 100; unsigned int brakeFloor = 14; unsigned int maxStep = (unsigned int)ceil( (sqrt( 1.0f + 8.0f*(float)topFloor ) - 1.0f) / 2.0f ); unsigned int step = maxStep; unsigned int curFloor = step; unsigned int drops = 0; while (1) { drops++; cout << "1st egg from #" << curFloor; if (curFloor >= brakeFloor) { cout << " *\n"; if (step > 1) { for (unsigned int i = step - 1; i > 0; --i) { drops++; cout << "\t2nd egg from #" << curFloor - i; if ( curFloor - i >= brakeFloor) { cout << " *"; break; } cout << "\n"; } } break; } cout << "\n"; curFloor += --step; if (curFloor > topFloor) { step = topFloor - (curFloor - step); curFloor = topFloor; } } cout << "\n\nTotal number of drops: " << drops << "\n\n"; Obviously this could (and should) be cleaned up considerably, but I tossed it here, in this format, in case anyone wanted to see it working. edit: that should have been "sqrt( 1.0f + 8.0f ..." not "sqrt( 1.0f - 8.0f ..." [Edited by - toucel on December 8, 2006 5:52:36 PM] ray/lineseg vs triangle collision toucel replied to drvannostrand's topic in Math and PhysicsWhen you are updating the position cast a ray from the current position to the projected end position and check against that using techniques found here: if a collision is found, find the point of collision and you end up with a "correct" end position. Lucid dreams toucel replied to Yann L's topic in GDNet LoungeI tend to have lucid dreams fairly regularly, though they normally seem to occur near the time I will be waking anyways. I have gotten fairly good at being able to control the dreams though I too fight to maintain the control. Ususally when I find that I am part of a dream I do not tend to "play god," though I have done this too on occasion ( flying above a city, bouncing off of walls, doing all kinds of crazy aerobatics etc ;) ). Because I am normally within an already running dream I tend to actively "steer" the current story of the dream. I often put myself in interesting situations, even some that are quite scary (in the end I really enjoy most of them though) Sometimes my mind works against me in this lucid state. I will be reforming the direction of the dream and in doing so doubts will enter my mind. These doubts can cloud and overtake the positive elements. For example: I'm in a hallway running away from something, I am lucid but still within the restrictions I have placed upon the dream world. If I were to begin thinking about how scary it would be if the lights all went out, even if I realized that I should not think about that (because I am aware within my lucid state (thru experience, I guess) that these thoughts my interfere with my intent) and begin attempting to push them out, the very act of thinking these thoughts and thinking to not think about them causes a chain reaction and it can be quite difficult to regain full control at that point. Of course at that time, the lights would ususally go out. I have had a very vivid false awakening: My father woke me up for school, as he did every morning. I didnt want to get up initially, but I did. I proceeded to get ready with the normal routine. I changed clothes, brushed, did my hair, etc. This took about 15 minutes. Then, from far away, I heard someone yelling. I was confused because it was so distant but definitely directed towards me. I suddenly awoke in my bed, extremely confused. I wondered how I got back there, why I didnt have my school clothes on, etc. I then heard the yelling again and found that my father had been yelling at me because I had NOT gotten up and was about to miss my bus. It was hard to dismiss what had happened, it had felt so real. It was an amazing experience that I still remember well. Once I did figure out what had happened, I was quite annoyed, given the situation. In my mind I had just spent about 15 minutes getting ready - now that time was gone and I wasnt even prepared for the bus that was almost there. ;) I had another strange dream that is perhaps tangential to the topic here but could be relevant: I was feeling really sick one day, and I was having a hard time sleeping. I eventually was able to but I was right on the edge of being asleep and awake (or at least it felt like that) and I remember looking at myself (after falling asleep but not realizing it) I watched what was essentially a hud overlay over my entire body. It detailed (thru individal sprites) the body's immune systems' response to the cold I had. I watched as thousands of sprites would attack. Once I had realized what was happening I was able to take some control of the system. I noticed key areas that were not receiving enough attention from my immune system, so I redeployed some to compensate (it felt almost like an rts) I did this for what seemed to be quite a long time. I then woke somewhat abruptly in the middle of the night. I remember being astounded by how much better I felt :) One more, while we are at it: I once had a dream where I was in a library. There were many books in bookcases and a lone librarian. I could tell something was not right about the place and I approached the man. I asked him about the place. He basically smiled, set down a book he was holding, and explained that I was not in a library. He said that this was all in my head. He went further and explained that he was a representation of my subconcious. I talked with him further and though I dont remember all that was said I recall him describing how even though I might not be actively/conciously working on the solution to a problem in my head that "he" was often working on it for me. I have been trying, off and on, to induce some of these things and have done extensive research on the subject. Interested parties may wish to look into binural beats, and perhaps saltcube.com's timer or something similar good dreaming, -toucel Nebulae, part III toucel commented on Ysaneya's blog entry in Journal of YsaneyaWould it be possible to see a picture with only the "emission" coloring and another with only the reflection coloring? I am curious about the influence each has on the final render. Also could you detail the workings of the lookups for the speed and angle? Is speed simply = previous - current? Is it normalized in some fashion? Is the angle = acos( normalize( previous - current ) DOT normalize( next - current )? What do the lookup tables look like? I apologize for the abundance of questions; the technical details here really interest me. Really Simple Quadtree Class in C++ toucel replied to mike_ix's topic in General and Gameplay ProgrammingI rewrote what you offered here. Functionally it is almost identical (so I dont think you'd get any real appreciable speed difference). In the end the focus was on more encapsulation and a cleaner coding style. quadtree.h //////////////////////////////////////////////////////////// class cNode { public: vec3 b[4], center; cNode *child[4]; bool leaf; cNode() { for (int i = 0; i < 4; ++i) child = NULL; leaf = false; } }; //////////////////////////////////////////////////////////// class cQuadTree { private: cNode *root; cNode *initNode(vec3 bound[4]); void closeNode(const cNode *pcNode); void renderNode(const cNode *pcNode); unsigned int patchVertexSize; public: cQuadTree() { patchVertexSize = 0; root = NULL; } ~cQuadTree(){ if (root != NULL) delete root; } void init(float size, unsigned int finalVertSize); void render(); }; //////////////////////////////////////////////////////////// and quadtree.cpp //////////////////////////////////////////////////////////// void getBounds(vec3 out[4], vec3 offset, float size, unsigned int i) { vec3 shift; if (i == 1) shift = vec3(size, 0.0f, 0.0f); else if (i == 2) shift = vec3(0.0f, 0.0f, size); else if (i == 3) shift = vec3(size, 0.0f, size); out[0] = offset + shift; out[1] = offset + vec3(size, 0.0f, 0.0f) + shift; out[2] = offset + vec3(0.0f, 0.0f, size) + shift; out[3] = offset + vec3(size, 0.0f, size) + shift; } //////////////////////////////////////////////////////////// cNode *cQuadTree::initNode(vec3 bound[4]) { float size = bound[1].x - bound[0].x; unsigned int i; cNode *newcNode = new cNode; newcNode->center = (bound[3] - bound[0]) * 0.5 + bound[0]; for (i = 0; i < 4; ++i) newcNode->b = bound; if ((int)size == patchVertexSize) { newcNode->leaf = true; //insert leaf data } else { vec3 b[4]; for (int i = 0; i < 4; ++i) { getBounds(b, bound[0], size/2.0f, i); newcNode->child = initcNode(b); } } return newcNode; } //////////////////////////////////////////////////////////// void cQuadTree::closeNode(const cNode *pcNode) { for (int i = 0; i < 4; ++i) if (pcNode->child != NULL) closecNode(pcNode->child); delete pcNode; } //////////////////////////////////////////////////////////// void cQuadTree::renderNode(const cNode *pcNode) { if (pcNode == NULL) return; if (!pcNode->leaf) for (int i = 0; i < 4; ++i) rendercNode(pcNode->child); else { //render leaf data } } //////////////////////////////////////////////////////////// void cQuadTree::render() { rendercNode(root); } //////////////////////////////////////////////////////////// void cQuadTree::init(float size, unsigned int finalVertSize) { vec3 b[4]; patchVertexSize = finalVertSize; getBounds(b, vec3(0.0f, 0.0f, 0.0f), size, 0); root = initcNode(b); } //////////////////////////////////////////////////////////// note that I have taken out the initial requirement of specifying the bounds - this was intentional; as I see it, almost all applications of a quadtree would require a square shape (as opposed to some arbitrary quad) also in this example we could put the quadtree::init code into the constructor instead (and it would probably be a bit cleaner) but I kept it seperated in case there was an issue of "timing" later down the line (ie the data to fill the leafs was not yet loaded etc) depending on the end result you are looking for it may be better to initially construct this (within the constructor) and then add any data ("parsing" it as it comes down among the quadtree nodes and placing it within the correct nodes) Another thing, if you wish to make this general purpose you are going to have to look at the possibility of a piece of leaf data spanning more than one node and evaluate what you want to do with it. It may be possible to split the data amongst the overlap, or to simply add the data to the node that encompasses it whilst maintaining the ability for other data to drop to the other node children - this would effectively allow nodes to contain leaf information as well as children. so, anyway, to create the quadtree we simply do this: cQuadTree qTree; qTree.init(512, 64); ... qTree.render(); If anyone has any questions / comments please share... ode to oblivion toucel replied to zedzeek's topic in GDNet LoungeThrow up some screenshots for those of us at work ;) Kevin J. Anderson [rant warning] toucel replied to ApochPiQ's topic in GDNet LoungeI had the "opportunity" to go see Brian Herbert give a small presentation at a local Barnes and Noble. I was greatly unimpressed, and left feeling quite saddened. His general manner and attitude seemed to suggest that all of his stories were thrown together and they were working towards volume instead of quality. The story arcs were set within much shorter attention spans. I had read a couple of his books, prequels to Dune. I really wanted to like them, but was thoroughly disappointed. The only really interesting part of the talk was when he was talking about his father and what life was like growing up in there household. Near the end, one gentlemen asked about the yellow in the eyes of some creature from some other Dune book of his (obviously I havent read it) and wondered if there was a connection with the Honored Matres (later in the original Dune series) Brian paused, then responded that he had never thought about that and thought it was a really good idea. I was slightly surprised that the author hadnt thought about this - great attention has been devoted to the eyes of characters in the series. Also the way he seemed to adopt the idea and run with it made me really question his "long term" ideas. It's kind of hard for me to explain exactly and I understand that inspiration will come from many places etc, but the demeanor was simply offsetting. He was not condescending or anything, he just didnt seem like he was really at the wheel or at the very least had no clue where he was going. Shoreline extraction from a heightmap toucel replied to nooan's topic in Math and Physicshere are my thoughts, they will not solve all of your specific needs/wants but they may provide some inspiration/thought my idea would be to use image space techniques, use pixel shaders to create a generic black and white map (alpha) of the heightmap. anything below the water level would be black and anything above would be white (you could also encode the specific depth etc in another color channel for other effects) you would then blur this image using another pixel shader you would be left with an image that has the length from the shoreline encoded in alpha, ie full white would be no distance to land and full black would be fully away from the area of waves I was thinking that you could animate the flow using this distance along with a time parameter and a 1 dimensional wave/flow map (you could use a 2 dimensional one as well but I am not sure how you would find the other coordinate to index into it, I suppose it could be simply a repeatable texture that is indexed by a combination of the relative x/z coordinates and perhaps some noise...) also you could use a noise texture to make it more diverse, you could even encode it in another of the channels Rendering vegetation toucel replied to BradDaBug's topic in Graphics and GPU Programmingalso... humus.ca has some source for reference [MDX] Normal mapping clouds, generating a normal map on GPU [SOLVED, shader inside!] toucel replied to remigius's topic in Graphics and GPU ProgrammingIf you want to soften the normal, simply scale the normal. v * 0.75f, etc... you could also look into blurring... guess the game II toucel replied to Marmin's topic in GDNet LoungeQuote:Original post by ViLiO Quote:Original post by ViLiO I'm really quite proud of this one. I think the drawing is a fair representation of the game and I spent ages drawing Uranus [lol] Maximum cookies to anyone who can guess [grin] Nobody has had a go at this one of mine either [sad] I'll give you a hint, it was a nes game [grin] To the Earth... I had it too HOW can THIS cause an error? toucel replied to CHollman82's topic in General and Gameplay Programmingis the ; intentional? ;) [4E4] Post yer Screenshots toucel replied to Mushu's topic in GameDev.net ContestsQuote:Original post by AnonymousPosterChild Quote:Original post by meganfox Entry: Kasei Any real reason your screenshots have to be nearly a meg in siz,e each? It's because they look awesome ... ;) - Advertisement
https://www.gamedev.net/profile/42147-toucel/
CC-MAIN-2018-47
refinedweb
2,780
63.93
OK so I actually have a couple basic problem but the most important problem is the array. I am making a hangman game. I have a driver that was supplied but it is actually confusing me more. 1. Does the driver get the information from the file for me, or do I need to have the code that I put in main to get the information? 2. I tried to output the String word just so that I knew file.next was working to get the information from the file, but I couldn't get it to output from either the displayGameIntro or play methods because it didn't recognize the variable. Any suggestions? 3. This has to do with #2, since I can't seem to get the String word to be recognized, I can't turn it into an array. My plan is to create an array called letter (which I commented out while i was trying to get the other stuff working) that I can use when I am checking to see if the user has entered the correct letter. Is this the wrong idea to be using. Should I not use an array to check if the guessed letter is correct? Here is the driver: /* This program is a word guessing game called hangman. * A person will try and guess a word before the max * number of guesses are used. Every time the user chooses an * incorrect letter another body part is displayed in the gallows. */ import java.util.*; import java.io.FileInputStream; import java.io.FileNotFoundException; public class HangmanDriver { public static final String filename = "hw7data.txt"; // Driver to run the game using the student Hangman class public static void main(String[] args) { Scanner wordsFile = null; // words data file // open the file containing the words try { wordsFile = new Scanner(new FileInputStream(filename)); } catch (FileNotFoundException e) { System.out.println("File not found or not opened."); System.exit(0); } // create object and prepare to play game Hangman game = new Hangman(wordsFile); Scanner keyboard = new Scanner(System.in); // display an introduction on the game to the player game.displayGameIntro(); // continually play new games if the user desires String playAgain; do { game.play(); System.out.print("Do you want to play again? "); playAgain = keyboard.next(); System.out.println(); } while (playAgain.toUpperCase().startsWith("Y")); System.out.println("Thanks for playing!"); } } This is my program: // This program runs a game of hangman // The point is to guess the word based on the number of blank spaces // // //@author: Kristen Watson //@version: 11/23/201 import java.util.Scanner; import java.io.File; public class Hangman { public final static String filename = "hw7data.txt"; public static void main(String[] args) { Scanner inputFile = null; try { inputFile = new Scanner(new File(filename)); } catch (Exception e) { System.out.println("File could not be opened: " + filename); System.exit(0); } } public Hangman(Scanner file) { // Store the file object in an instance variable for later use String word = file.next(); } //Displays the Intro to the game for the player public void displayGameIntro() { System.out.println("You have just started a game of Hangman."); System.out.println("You have seven guesses to figure out the correct word."); System.out.println("The blank spots indicate how long the word is. Each"); System.out.println("correct letter will show up in the correct spot, while"); System.out.println("a wrong guess will lead to another body part being"); System.out.println("displayed. After seven incorrect guesses, you lose."); } public void play () { // Here is some high-level pseudocode of what you want to do: // // Initialize everything, such as arrays //int [] letters = new int [word.length]; //for loop that takes word into array letters //for(int i = 0; i < word.length; i++){ // letters [i] = word[i]; // // Get one word from the data file // Loop until game is over: // Display current picture, letters guessed, wrong guesses, and word so far // Get a valid, not-yet-guessed letter from the player // Determine if the letter is in the word and update variables appropriately // Handle winning or losing } }
http://www.javaprogrammingforums.com/collections-generics/19817-string-array.html
CC-MAIN-2014-42
refinedweb
669
56.05
. These lecture notes borrow. Someday you may be named among the elite Pythonistas of the world..), which is. When we describe a language, we should pay particular attention to the means that the language provides for combining simple ideas to form more complex ideas. Every powerful language has three such mechanisms: In programming, we deal with two kinds of elements: functions and data. (Soon we will discover that they are really not so distinct.) Informally, data is stuff that we want to manipulate, and functions describe the rules for manipulating the data. Thus, any powerful programming language should be able to describe primitive data and primitive functions, as well as have some methods for combining and abstracting both functions and data. Having experimented with the full Python interpreter in the previous section, we now start anew, methodically developing the Python language element by element. Be patient if the examples seem simplistic --- more exciting material is soon to come. We begin with primitive expressions. One kind of primitive expression is a number. More precisely, the expression that you type consists of the numerals that represent the number in base 10. >>> 42 42 Expressions representing numbers may be combined with mathematical operators to form a compound expression, which the interpreter will evaluate: >>> -1 - -1 0 >>> 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 0.9921875 These mathematical expressions use infix notation, where the operator (e.g., +, -, *, or /) appears in between the operands (numbers). Python includes many ways to form compound expressions. Rather than attempt to enumerate them all immediately, we will introduce new expression forms as we go, along with the language features that they support. Strings expressions. A string is a sequence of characters enclosed by matching single or double quotes, such as 'Python' and " is cool!". (The Python interpreter uses single quotes to represent a string, regardless of what kind of quote you use.) >>> 'Python' 'Python' >>> " is cool!" ' is cool!' The enclosing quotes are not actually part of a string; they are merely used for representation. We can see that this is the case by using the + operator to concatenate multiple strings into a larger string: >>> 'Python' + " is cool!" 'Python is cool!' Strings are a general representation for any kind of text, such as words, phrases, URLs, error messages, and so on. Later, we will see many different ways to use and manipulate strings in Python. The most important kind of compound expression is a call expression, which applies a function to some arguments. Recall from algebra that the mathematical notion of a function is a mapping from some input arguments to an output value. For instance, the max function maps its inputs to a single output, which is the largest of the inputs. The way in which Python expresses function application is the same as in conventional mathematics. >>> max(7.5, 9.5) 9.5 This call expression has subexpressions: the operator is an expression that precedes parentheses, which enclose a comma-delimited list of operand expressions. The operator specifies a function. When this call expression is evaluated, we say that the function max is called with arguments 7.5 and 9.5, and returns a value of 9.5. The order of the arguments in a call expression matters. For instance, the function pow raises its first argument to the power of its second argument. >>> pow(100, 2) 10000 >>> pow(2, 100) 1267650600228229401496703205376 Function notation has three principal advantages over the mathematical convention of infix notation. First, functions may take an arbitrary number of arguments: >>> max(1, -2, 3, -4) 3 No ambiguity can arise, because the function name always precedes its arguments. Second, function notation extends in a straightforward way to nested expressions, where the elements are themselves compound expressions. In nested call expressions, unlike compound infix expressions, the structure of the nesting is entirely explicit in the parentheses. >>> max(min(1, -2), min(pow(3, 5), -4)) -2 There is no limit (in principle) to the depth of such nesting and to the overall complexity of the expressions that the Python interpreter can evaluate. However, humans quickly get confused by multi-level nesting. An important role for you as a programmer is to structure expressions so that they remain interpretable by yourself, your programming partners, and other people who may read your expressions in the future. Third, mathematical notation has a great variety of forms: multiplication appears between terms, exponents appear as superscripts, division as a horizontal bar, and a square root as a roof with slanted siding. Some of this notation is very hard to type! However, all of this complexity can be unified via the notation of call expressions. While Python supports common mathematical operators using infix notation (like + and -), any operator can be expressed as a function with a name. Python defines a very large number of functions, including the operator functions mentioned in the preceding section, but does not make all of their names available by default. Instead, it organizes the functions and other quantities that it knows about into modules, which together comprise the Python Library. To use these elements, one imports them. For example, the math module provides a variety of familiar mathematical functions: >>> from math import sqrt >>> sqrt(256) 16.0 and the operator module provides access to functions corresponding to infix operators: >>> from operator import add, sub, mul >>> add(14, 28) 42 >>> sub(100, mul(7, add(8, 4))) 16 An import statement designates a module name (e.g., operator or math), and then lists the named attributes of that module to import (e.g., sqrt). Once a function is imported, it can be called multiple times. There is no difference between using these operator functions (e.g., add) and the operator symbols themselves (e.g., +). Conventionally, most programmers use symbols and infix notation to express simple arithmetic.. A critical aspect of a programming language is the means it provides for using names to refer to computational objects. If a value has been given a name, we say that the name binds to the value. In Python, we can establish new bindings using the assignment statement, which contains a name to the left of = and a value to the right: >>> radius = 10 >>> radius 10 >>> 2 * radius 20 Names are also bound via import statements. >>> from math import pi >>> pi * 71 / 223 1.0002380197528042 The = symbol is called the assignment operator in Python (and many other languages). Assignment is our simplest means of abstraction, for it allows us to use simple names to refer to the results of compound operations, such as the area computed above. In this way, complex programs are constructed by building, step by step, computational objects of increasing complexity. The possibility of binding names to values and later retrieving those values by name means that the interpreter must maintain some sort of memory that keeps track of the names, values, and bindings. This memory is called an environment. Names can also be bound to functions. For instance, the name max is bound to the max function we have been using. Functions, unlike numbers, are tricky to render as text, so Python prints an identifying description instead, when asked to describe a function: >>> max <built-in function max> We can use assignment statements to give new names to existing functions. >>> f = max >>> f <built-in function max> >>> f(2, 3, 4) 4 And successive assignment statements can rebind a name to a new value. >>> f = 2 >>> f 2 In Python, names are often called variable names or variables because they can be bound to different values in the course of executing a program. When a name is bound to a new value through assignment, it is no longer bound to any previous value. One can even bind built-in names to new values. >>> max = 5 >>> max 5 After assigning max to 5, the name max is no longer bound to a function, and so attempting to call max(2, 3, 4) will cause an error. When executing an assignment statement, Python evaluates the expression to the right of = before changing the binding to the name on the left. Therefore, one can refer to a name in right-side expression, even if it is the name to be bound by the assignment statement. >>> x = 2 >>> x = x + 1 >>> x 3 We can also assign multiple values to multiple names in a single statement, where names (on the left of =) and expressions (on the right of =) are separated by commas. >>> area, circumference = pi * radius * radius, 2 * pi * radius >>> area 314.1592653589793 >>> circumference 62.83185307179586 Changing the value of one name does not affect other names. Below, even though the name area was bound to a value defined originally in terms of radius, the value of area has not changed. Updating the value of area requires another assignment statement. >>> radius = 11 >>> area 314.1592653589793 >>> area = pi * radius * radius 380.132711084365 With multiple assignment, all expressions to the left of = are evaluated before any names are bound to those values. As a result of this rule, swapping the values bound to two names can be performed in a single statement. >>> x, y = 3, 4.5 >>> y, x = x, y >>> x 4.5 >>> y 3 One of our goals in this chapter is to isolate issues about thinking procedurally. As a case in point, let us consider that, in evaluating nested call expressions, the interpreter is itself following a procedure. To evaluate a call expression, Python will do the following: Even this simple procedure illustrates some important points about processes in general. The first step dictates that in order to accomplish the evaluation process for a call expression we must first evaluate other expressions. Thus, the evaluation procedure is recursive in nature; that is, it includes, as one of its steps, the need to invoke the rule itself. For example, evaluating >>> mul(add(2, mul(4, 6)), add(3, 5)) 208 requires that this evaluation procedure be applied four times. If we draw each expression that we evaluate, we can visualize the hierarchical structure of this process. This illustration is called an expression tree. In computer science, trees conventionally grow from the top down. The objects at each point in a tree are called nodes; in this case, they are expressions paired with their values. Evaluating its root, the full expression at the top, requires first evaluating the branches that are its subexpressions. The leaf expressions (that is, nodes with no branches stemming from them) represent either functions or numbers. The interior nodes have two parts: the call expression to which our evaluation rule is applied, and the result of that expression. Viewing evaluation in terms of this tree, we can imagine that the values of the operands percolate upward, starting from the terminal nodes and then combining at higher and higher levels. Next, observe that the repeated application of the first step brings us to the point where we need to evaluate, not call expressions, but primitive expressions such as numerals (e.g., 2) and names (e.g., add). We take care of the primitive cases by stipulating that Notice the important role of an environment in determining the meaning of the symbols in expressions. In Python, it is meaningless to speak of the value of an expression such as >>> add(x, 1) without specifying any information about the environment that would provide a meaning for the name x (or even for the name add). Environments provide the context in which evaluation takes place, which plays an important role in our understanding of program execution. This evaluation procedure does not suffice to evaluate all Python code, only call expressions, numerals, and names. For instance, it does not handle assignment statements. Executing >>> x = 3 does not return a value nor evaluate a function on some arguments, since the purpose of assignment is instead to bind a name to a value. In general, statements are not evaluated but executed; they do not produce a value but instead make some change. Each type of expression or statement has its own evaluation or execution procedure. A pedantic note: when we say that "a numeral evaluates to a number," we actually mean that the Python interpreter evaluates a numeral to a number. It is the interpreter which endows meaning to the programming language. Given that the interpreter is a fixed program that always behaves consistently, we can loosely say that numerals (and expressions) themselves evaluate to values in the context of Python programs. Throughout this text, we will distinguish between two types of functions. Pure functions. Functions have some input (their arguments) and return some output (the result of applying them). The built-in function >>> abs(-2) 2 can be depicted as a small machine that takes input and produces output. The function abs is pure. Pure functions have the property that applying them has no effects beyond returning a value. Moreover, a pure function must always return the same value when called twice with the same arguments. Non-pure functions. In addition to returning a value, applying a non-pure function can generate side effects, which make some change to the state of the interpreter or computer. A common side effect is to generate additional output beyond the return value, using the print function. >>> print(1, 2, 3) 1 2 3 While print and abs may appear to be similar in these examples, they work in fundamentally different ways. The value that print returns is always None, a special Python value that represents nothing. The interactive Python interpreter does not automatically print the value None. In the case of print, the function itself is printing output as a side effect of being called. A nested expression of calls to print highlights the non-pure character of the function. >>> print(print(1), print(2)) 1 2 None None If you find this output to be unexpected, draw an expression tree to clarify why evaluating this expression produces this peculiar output. Be careful with print! The fact that it returns None means that it should not be the expression in an assignment statement. >>> two = print(2) 2 >>> print(two) None Pure functions are restricted in that they cannot have side effects or change behavior over time. Imposing these restrictions yields substantial benefits. First, pure functions can be composed more reliably into compound call expressions. We can see in the non-pure function example above that print does not return a useful result when used in an operand expression. On the other hand, we have seen that functions such as max, pow and sqrt can be used effectively in nested expressions. Second, pure functions tend to be simpler to test. A list of arguments will always lead to the same return value, which can be compared to the expected return value. Testing is discussed in more detail later in this chapter. Third, Chapter 4 will illustrate that pure functions are essential for writing concurrent programs, in which multiple call expressions may be evaluated simultaneously. For these reasons, we concentrate heavily on creating and using pure functions in the remainder of this chapter.. Both def statements and assignment statements set the binding of names to values, and any existing bindings are lost. For example, g below first refers to a function of no arguments, then a number, and then a different function of two arguments. >>> def g(): return 1 >>> g() 1 >>> g = 2 >>> g 2 >>> def g(h, i): return h + i >>> g(1, 2) 3 Our subset of Python is now complex enough that the meaning of programs is non-obvious. What if a formal parameter has the same name as a built-in function? Can two functions share names without confusion? To resolve such questions, we must describe environments in more detail.. So far, our environment consists only of the global frame. This environment diagram shows the bindings of the current environment, along with the values to which names are bound. The environment diagrams in this text are interactive: you can step through the lines of the small program on the left to see the state of the environment evolve on the right. You can also click on the "Edit code" link to load the example into the Online Python Tutor, a tool created by Philip Guo for generating these environment diagrams. You are encouraged to create examples yourself and study the resulting environment diagrams. A def statement also binds a name to the function created by the definition. The resulting environment after defining square appears below: Notice that the name of a function is repeated, once in the global frame, and once as part of the function itself. This repetition is intentional: many different names may refer to the same function, but that function itself has only one intrinsic name. However, looking up the value for a name in an environment only inspects bound names. The intrinsic name of a function does not play a role in look up. In the example we saw earlier, The name max is the intrinsic name of the function, and that's what you see printed as the value for f. In addition, both the names max and f are bound to that same function in the global environment. Function Signatures. Functions differ in the number of arguments that they are allowed to take. To track these requirements, we draw each function in a way that shows the function name and its formal parameters. The user-defined function square takes only x; providing more or fewer arguments will result in an error. A description of the formal parameters of a function is called the function's signature. The function max can take an arbitrary number of arguments. It is rendered as max(...). Regardless of the number of arguments taken, all built-in functions will be rendered as <name>(...), because these primitive functions were never explicitly defined. To evaluate a call expression whose operator names a user-defined function, the Python interpreter follows a computational process. As with any call expression, the interpreter evaluates the operator and operand expressions, and then applies the named function to the resulting arguments. Applying a user-defined function introduces a second local frame, which is only accessible to that function. To apply a user-defined function to some arguments: The environment in which the body is evaluated consists of two frames: first the local frame that contains formal parameter bindings, then the global frame that contains everything else. Each instance of a function application has its own independent local frame. To illustrate an example in detail, several steps of the environment diagram for the same example are depicted below. After executing the first import statement, only the name mul is bound in the global frame. First, the definition statement for the function square is executed. Notice that the entire def statement is processed in a single step. The body of a function is not executed until the function is called (not when it is defined). Next, The square function is called with the argument -2, and so a new frame is created with the formal parameter x bound to the value -2. Then, the name x is looked up in the current environment, which consists of the two frames shown. In both occurrences, x evaluates to -2, and so the square function returns 4. The "Return value" in the square() frame is not a name binding; instead it indicates the value returned by the function call that created the frame. Even in this simple example, two different environments are used. The top-level expression square(-2) is evaluated in the global environment, while the return expression mul(x, x) is evaluated in the environment created for by calling square. Both x and mul are bound in this environment, but in different frames. will see how this model can serve as a blueprint for implementing a working interpreter for a programming language. Let us again consider our two simple function definitions and illustrate the process that evaluates a call expression for a user-defined function. operand 0, square names a user-defined function in the global frame, while x names the number 5 in the local frame. Python applies square to 5 by introducing yet another local frame that binds x to 5. Using this environment, the expression mul(x, x) evaluates to 25. Our evaluation procedure now turns to operand 1, for which y names the number 12. Python evaluates the body of square again, this time introducing yet another local frame that binds x to 12. Hence, operand 1 evaluates to 144. Finally, applying addition to the arguments 25 and 144 yields a final return value for sum_squares: 169. This example illustrates many of the fundamental ideas we have developed so far. Names are bound to values, which are distributed across many independent local frames, along with a single global frame that contains shared names. A new local frame is introduced every time a function is called, even if the same function is called twice. All of this machinery exists to ensure that names resolve to the correct values at the correct times during program execution.. The. developer community. As a side effect of following these conventions, you will find that your code becomes more internally consistent. There are many exceptions to these guidelines, even in the Python standard library. Like the vocabulary of the English language, Python has inherited words from a variety of contributors, and the result is not always consistent. programmer should not need to know how the function is implemented in order to use it. The Python Library has this property. Many developers use the functions defined there, but few ever inspect their implementation. To master the use of a functional abstraction, it is often useful to consider its three core attributes. The domain of a function is the set of arguments it can take. The range of a function is the set of values it can return. The intent of a function is the relationship it computes between inputs and output (as well as any side effects it might generate). Understanding functions via their domain, range, and intent is critical to using them correctly in a complex program., but also harder to read. Python also allows subexpression grouping with parentheses, to override the normal precedence rules or make the nested structure of an expression more explicit. >>> (2 + 3) * (4 + 5) 45 evaluates to the same result as >>> mul(add(2, 3), add(4, 5)) 45 When it comes to division, Python provides two infix operators: / and //. The former is normal division, so that it results in a floating point, or decimal value, even if the divisor evenly divides the dividend: >>> 5 / 4 1.25 >>> 8 / 4 2.0 The // operator, on the other hand, rounds the result down to an integer: >>> 5 // 4 1 >>> -5 // 4 -2 These two operators are shorthand for the truediv and floordiv functions. >>> from operator import truediv, floordiv >>> truediv(5, 4) 1.25 >>> floordiv(5, 4) 1 You should feel free to use infix operators and parentheses in your programs. Idiomatic Python prefers operators over call expressions for simple mathematical operations. Functions are an essential ingredient of all programs, large and small, and serve as our primary medium to express computational processes in a programming language. So far, we have discussed the formal properties of functions and how they are applied. We now turn to the topic of what makes a good function. Fundamentally, the qualities of good functions all reinforce the idea that functions are abstractions. These guidelines improve the readability of code, reduce the number of errors, and often minimize the total amount of code written. Decomposing a complex task into concise functions is a skill that takes experience to master. Fortunately, Python provides several features to support your efforts. A function definition will often include documentation describing the function, called a docstring, which must be indented along with the function body. Docstrings are conventionally triple quoted. The first line describes the job of the function in one line. The following lines can describe arguments and clarify the behavior of the function: >>> def pressure(v, t, n): """Compute the pressure in pascals of an ideal gas. Applies the ideal gas law: v -- volume of gas, in cubic meters t -- absolute temperature in degrees kelvin n -- particles of gas """ k = 1.38e-23 # Boltzmann's constant return n * k * t / v When you call help with the name of a function as an argument, you see its docstring (type q to quit Python help). >>> help(pressure) When writing Python programs, include docstrings for all but the simplest functions. Remember, code is written only once, but often read many times. The Python docs include docstring guidelines that maintain consistency across different Python projects.. A consequence of defining general functions is the introduction of additional arguments. Functions with many arguments can be awkward to call and difficult to read. In Python, we can provide default values for the arguments of a function. When calling that function, arguments with default values are optional. If they are not provided, then the default value is bound to the formal parameter name instead. For instance, if an application commonly computes pressure for one mole of particles, this value can be provided as a default: >>> Boltzmann * Boltzmann_K * t / v The = symbol means two different things in this example, depending on the context in which it is used. In the first line above, = is the assignment operator. In the def statement header, = does not perform assignment, but instead indicates a default value to use when the pressure function is called. >>> Boltzmann_K, can be bound in the global frame. The expressive power of the functions that we can define at this point is very limited, because we have not introduced a way to make comparisons and to perform different operations depending on the result of a comparison. Control statements will give us this ability. They are statements that control the flow of a program's execution based on the results of logical comparisons. Statements differ fundamentally from the expressions that we have studied so far. They have no value. Instead of computing something, executing a control statement determines what the interpreter should do next. So far, we have primarily considered how to evaluate expressions. However, we have seen three kinds of statements already: assignment, def, and return statements. These lines of Python code are not themselves expressions, although they all contain expressions as components. Rather than being evaluated, statements are executed. Each statement describes some change to the interpreter state, and executing a statement applies that change. As we have seen for return and assignment statements, executing statements can involve evaluating subexpressions contained within them. Expressions can also be executed as statements, in which case they are evaluated, but their value is discarded. Executing a pure function has no effect, but executing a non-pure function can cause effects as a consequence of function application. Consider, for instance, >>> def square(x): mul(x, x) # Watch out! This call doesn't return a value. This example is valid Python, but probably not what was intended. The body of the function consists of an expression. An expression by itself is a valid statement, but the effect of the statement is that the mul function is called, and the result is discarded. If you want to do something with the result of an expression, you need to say so: you might store it with an assignment statement or return it with a return statement: >>> def square(x): return mul(x, x) Sometimes it does make sense to have a function whose body is an expression, when a non-pure function like print is called. >>> def print_square(x): print(square(x)) At its highest level, the Python interpreter's job is to execute programs, composed of statements. However, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. In general, Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <header>: <statement> <statement> ... <separating header>: <statement> <statement> ... ... We can understand the statements we have already introduced in these terms. Specialized evaluation rules for each kind of header dictate when and if the statements in its suite are executed. We say that the header controls its suite. For example, in the case of def statements, we saw that the return expression is not evaluated immediately, but instead stored for later use when the defined function is eventually called. We can also understand multi-line programs now. This definition exposes the essential structure of a recursively defined sequence: a sequence can be decomposed into its first element and the rest of its elements. The "rest" of a sequence of statements is itself a sequence of statements! Thus, we can recursively apply this execution rule. This view of sequences as recursive data structures will appear again in later chapters. The important consequence of this rule is that statements are executed in order, but later statements may never be reached, because of redirected control. Practical Guidance. When indenting a suite, all lines must be indented the same amount and in the same way (use spaces, not tabs). Any variation in indentation will cause an error. Originally, we stated that the body of a user-defined function consisted only of a return statement with a single return expression. In fact, functions can define a sequence of operations that extends beyond a single expression.: The effect of an assignment statement is to bind a name to a value in the first frame of the current environment. As a consequence, assignment statements within a function body cannot affect the global frame. The fact that functions can only manipulate their local environment is critical to creating modular programs, in which pure functions interact only via the values they take and return. Of course, the percent_difference function could be written as a single expression, as shown below, but the return expression is more complex. >>> def percent_difference(x, y): return 100 * abs(x-y) / x >>> percent_difference(40, 50) 25.0 So far, local assignment hasn't increased the expressive power of our function definitions. It will do so, when combined with other control statements. In addition, local assignment also plays a critical role in clarifying the meaning of complex expressions by assigning names to intermediate quantities. Python has a built-in function for computing absolute values. >>> abs(-2) 2 We would like to be able to implement such a function ourselves, but we have no obvious way to define a function that has a comparison and a choice. We would like to express that if x is positive, abs(x) returns x. Furthermore, if x is 0, abs(x) returns 0. Otherwise, abs(x) returns -x. In Python, we can express this choice with a conditional statement. This implementation of absolute_value raises several important issues: Conditional statements. A conditional statement in Python consists of a series of headers and suites: a required if clause, an optional sequence of elif clauses, and finally an optional else clause: if <expression>: <suite> elif <expression>: <suite> else: <suite> When executing a conditional statement, each clause is considered in order. The computational process of executing a conditional clause follows. If the else clause is reached (which only happens if all if and elif expressions evaluate to false values), its suite is executed. Boolean contexts. Above, the execution procedures mention "a false value" and "a true value." The expressions inside the header statements of conditional blocks are said to be in boolean contexts: their truth values matter to control flow, but otherwise their values are not assigned or returned. Python includes several false values, including 0, None, and the boolean value False. All other numbers are true values. In Chapter 2, we will see that every built-in kind of data in Python has both true and false values. Boolean values. Python has two boolean values, called True and False. Boolean values represent truth values in logical expressions. The built-in comparison operations, >, <, >=, <=, ==, !=, return these values. >>> 4 < 2 False >>> 5 >= 5 True This second example reads "5 is greater than or equal to 5", and corresponds to the function ge in the operator module. >>> 0 == -0 True This final example reads "0 equals -0", and corresponds to eq in the operator module. Notice that Python distinguishes assignment (=) from equality comparison (==), a convention shared across many programming languages. Boolean operators. Three basic logical operators are also built into Python: >>> True and False False >>> True or False True >>> not False True Logical expressions have corresponding evaluation procedures. These procedures exploit the fact that the truth value of a logical expression can sometimes be determined without evaluating all of its subexpressions, a feature called short-circuiting. To evaluate the expression <left> and <right>: To evaluate the expression <left> or <right>: To evaluate the expression not <exp>: These values, rules, and operators provide us with a way to combine the results of comparisons. Functions that perform comparisons and return boolean values typically begin with is, not followed by an underscore (e.g., isfinite, isdigit, isinstance, etc.). In addition to selecting which statements to execute, control statements are used to express repetition. If each line of code we wrote were only executed once, programming would be a very unproductive exercise. Only through repeated execution of statements do we unlock the full potential of computers. We have already seen one form of repetition: a function can be applied many times, although it is only defined once. Iterative control structures are another mechanism for executing the same statements many times. Consider the sequence of Fibonacci numbers, in which each number is the sum of the preceding two: 0, 1, 1, 2, 3, 5, 8, 13, 21, ... Each value is constructed by repeatedly applying the sum-previous-two rule. The first and second are fixed to 0 and 1. For instance, the eighth Fibonacci number is 13. We can use a while statement to enumerate n Fibonacci numbers. We need to track how many values we've created (k), along with the kth value (curr) and its predecessor (pred). Step through this function and observe how the Fibonacci numbers evolve one by one, bound to curr. Remember that commas seperate multiple names and values in an assignment statement. The line: pred, curr = curr, pred + curr has the effect of rebinding the name pred to the value of curr, and simultanously rebinding curr to the value of pred + curr. All of the expressions to the right of = are evaluated before any rebinding takes place. This order of events -- evaluating everything on the right of = before updating any bindings on the left -- is essential for correctness of this function. A while clause contains a header expression followed by a suite: while <expression>: <suite> To execute a while clause: In step 2, the entire suite of the while clause is executed before the header expression is evaluated again. In order to prevent the suite of a while clause from being executed indefinitely, the suite should always change some binding in each pass. A while statement that does not terminate is called an infinite loop. Press <Control>-C to force Python to stop looping. Testing a function is the act of verifying that the function's behavior matches expectations. Our language of functions is now sufficiently complex that we need to start testing our implementations. A test is a mechanism for systematically performing this verification. Tests typically take the form of another function that contains one or more sample calls to the function being tested. The returned value is then verified against an expected result. Unlike most functions, which are meant to be general, tests involve selecting and validating calls with specific argument values. Tests also serve as documentation: they demonstrate how to call a function and what argument values are appropriate. Assertions. Programmers use assert statements to verify expectations, such as the output of a function being tested. An assert statement has an expression in a boolean context, followed by a quoted line of text (single or double quotes are both fine, but be consistent) that will be displayed if the expression evaluates to a false value. >>> assert fib(8) == 13, 'The 8th Fibonacci number should be 13' When the expression being asserted evaluates to a true value, executing an assert statement has no effect. When it is a false value, assert causes an error that halts execution. A test function for fib should test several arguments, including extreme values of n. >>> def fib_test(): assert fib(2) == 1, 'The 2nd Fibonacci number should be 1' assert fib(3) == 1, 'The 3rd Fibonacci number should be 1' assert fib(50) == 7778742049, 'Error at the 50th Fibonacci number' When writing Python in files, rather than directly into the interpreter, tests are typically written in the same file or a neighboring file with the suffix _test.py. Doctests. Python provides a convenient method for placing simple tests directly in the docstring of a function. The first line of a docstring should contain a one-line description of the function, followed by a blank line. A detailed description of arguments and behavior may follow. In addition, the docstring may include a sample interactive session that calls the function: >>> def sum_naturals(n): """Return the sum of the first n natural numbers. >>> sum_naturals(10) 55 >>> sum_naturals(100) 5050 """ total, k = 0, 1 while k <= n: total, k = total + k, k + 1 return total Then, the interaction can be verified via the doctest module. Below, the globals function returns a representation of the global environment, which the interpreter needs in order to evaluate expressions. >>> from doctest import testmod >>> testmod() TestResults(failed=0, attempted=2) To verify the doctest interactions for only a single function, we use a doctest function called run_docstring_examples. This function is (unfortunately) a bit complicated to call. Its first argument is the function to test. The second should always be the result of the expression globals(), a built-in function that returns the global environment. The third argument is True to indicate that we would like "verbose" output: a catalog of all tests run. >>> from doctest import run_docstring_examples >>> run_docstring_examples(sum_naturals, globals(), True) Finding tests in NoName Trying: sum_naturals(10) Expecting: 55 ok Trying: sum_naturals(100) Expecting: 5050 ok When the return value of a function does not match the expected result, the run_docstring_examples function will report this problem as a test failure. When writing Python in files, all doctests in a file can be run by starting Python with the doctest command line option: python3 -m doctest <python_source_file> The key to effective testing is to write (and run) tests immediately after implementing new functions. It is even good practice to write some tests before you implement, in order to have some example inputs and outputs in your mind. A test that applies a single function is called a unit test. Exhaustive unit testing is a hallmark of good program design. We like square, but would become arduous for more complex examples like +> three arguments the upper bound n together with the functions term and next. We can use summation just as we would any function, and it expresses summations succinctly. Take the time to step through this example, and notice how binding cube and successor to the local names term and next ensures that the result 1*1*1 + 2*2*2 + 3*3*3 = 36 is computed correctly. In this example, frames which are no longer needed are removed to save space. Using an identity function that returns its argument, we can also sum natural numbers. >>> def identity(k): return k >>> def sum_naturals(n): return summation(n, identity, successor) >>> sum_naturals(10) 55 We can define pi_sum in terms of term and next functions, using our summation abstraction to combine components. We pass the argument 1e6, a shorthand for 1 * 10^6 = 1000000, to generate a close approximation to pi. >>> def pi_term(k): denominator = k * (k + 2) return 8 / denominator >>> def pi_next(k): return k + 4 >>> def pi_sum(n): return summation(n, pi_term, pi_next) >>> pi_sum(1e6) 3.1415906535898936. An iterative improvement algorithm begins with a guess of a solution to an equation. It repeatedly applies an update function to improve that guess, and applies an isclose comparison to check whether the current guess is "close enough" to be considered correct. >>> def improve(update, isclose, guess=1): while not isclose(guess): guess = update(guess) return guess One way to know if the current guess "isclose" is to check whether two functions, f and g, are near to each other for that-3): return abs(x - y) < tolerance The golden ratio, often called "phi", is a number that appears frequently in nature, art, and architecture. It can be computed via improve using the golden_update, and it converges when its successor is equal to its square. >>> def golden_update(guess): return 1/guess + 1 >>> def square_near_successor(guess): return near(guess, square, successor) Calling improve with the arguments golden_update and square_near_successor will compute an approximation to the golden ratio. >>> improve(golden_update, square_near_successor) 1.6180371352785146 By tracing through the steps of evaluation, we can see how this result is computed. First, a local frame for improve is constructed with bindings for update, isclose, and guess. In the body of improve, the name isclose is bound to square_near_successor, which is called on the initial value of guess. In turn, square_near_successor calls near, creating a third local frame that binds the formal parameters f and g to square and successor. Completing the evaluation of near, we see that the square_near_successor. Second, it is only by virtue of the fact that we have an extremely general evaluation procedure that small components can be composed into complex processes. Understanding that procedure. >>> phi = 1/2 + pow(5, 1/2)/2 >>> def near_test(): assert near(phi, square, successor), 'phi * phi is not near phi + 1' >>> def improve_test(): approx_phi = improve(golden_update, square_near_successor) assert approx_eq(phi, approx_phi), 'phi differs from its approximation' Extra for experts. We left out a step in the justification of our test. For what range of tolerance values e can you prove that if near(x, square, successor) is true with tolerance value e, then approx_eq(phi, x) is true with the same tolerance? x: >>> def average(x, y): return (x + y)/2 >>> def sqrt_update(guess, x): return average(guess, x/guess)(x): def sqrt_update(guess): return average(guess, x/guess) def sqrt_close(guess): return approx_eq(square(guess), x) x,.000000002151005 the environment first adds a local frame for sqrt and evaluates the def statements for sqrt_update and sqrt_close.
http://inst.eecs.berkeley.edu/~cs61a/book/chapters/functions.html
CC-MAIN-2017-09
refinedweb
7,178
53.61
merged into python2-xynehttpserver Search Criteria Package Details: python2-xynehttpserver 2012.12.24.2-4 Dependencies (1) Sources (2) Latest Comments Xyne commented on 2013-05-10 20:16 Xyne commented on 2012-11-28 00:20 alkino commented on 2012-10-30 14:40 Should be rename python2-xynehttpserver. olebowle commented on 2012-10-24 17:42 It works now, thanks. Xyne commented on 2012-10-24 16:44 The signature file was reuploaded to the server several hours ago. olebowle commented on 2012-10-24 08:54 The .sig file is missing. dvzrv commented on 2011-07-27 11:20 The md5sum has changed again. Please update the PKGBUILD Xyne commented on 2011-02-17 19:46 updated Kosava commented on 2011-02-17 00:09 It not work for me here is what said ==> Checking Runtime Dependencies... ==> Checking Buildtime Dependencies... ==> Retrieving Sources... -> Found python-xynehttpserver-2011.02.03.3.tar.gz ==> Validating source files with md5sums... python-xynehttpserver-2011.02.03.3.tar.gz ... FAILED ==> ERROR: One or more files did not pass the validity check! figue commented on 2010-10-06 07:29 Hi Xyne, any help to make it work with lastest python2 version? Thank you $ quickserve Traceback (most recent call last): File "/usr/bin/quickserve", line 19, in <module> from XyneHTTPServer import ThreadedHTTPServer, BaseHTTPRequestHandler ImportError: No module named XyneHTTPServer figue commented on 2010-10-06 07:26 Hi Xyne, any help to make it work with lastest python3 version? Thank you $ quickserve Traceback (most recent call last): File "/usr/bin/quickserve", line 19, in <module> from XyneHTTPServer import ThreadedHTTPServer, BaseHTTPRequestHandler ImportError: No module named XyneHTTPServer I have finally managed to rewrite this in Python 3. The code is cleaner and more robust now (minus whatever new bugs I have introduced). Multirange file requests are not properly supported, and file uploads should be more efficient, among other things. The new code is included in the python3-threaded_servers pages: I will likely merge this into that package as soon as I find the time to convert voracious.
https://aur.archlinux.org/packages/python2-xynehttpserver/?comments=all
CC-MAIN-2018-22
refinedweb
341
63.09
I don't see any new developments started with xbean, but there are still projects under active development which rely on it. ActiveMQ 5.x might be last one, not sure about others, and it does suffer because no investments in xbean. JAXB is fine, but I doubt if any custom type mapping will be ever able to provide such extensibility as Spring with its declarative configuration. Moving ActiveMQ 5 configuration schema to JAXB would be definitely a difficult task. Schema generated with my branch does not differ from old one. There are some constructions from xbean which fail xml schema validation after upgrading to Spring 4 with its XSD checks. I had no issues updating XML namespace declarations (xsi etc) but some parts of schema simply break. This includes map elements (MapMapping) which are reported as invalid content. I made an attempt to fix that and extend generator to create proper declarations of elements. My changes so far update xbean maven plugin, remove duplicate code and separate generators from namespace handlers. This allows to remove ant dependency from namespace handler runtime imports, let maven plugin use some simple plexus IoC for generator discovery. These are mainly code reorganizations to made xbean and downstream projects maintenance easier. Łukasz Dywicki > On 2 Aug 2018, at 16:25, Guillaume Nodet <gnodet@apache.org> wrote: > > > Over the last years, I have hardly seen anyone using the xbean-spring stuff anymore. I think most of custom namespaces have been implemented using JAXB instead. > I think one of the problem is that the xml tends to be ugly, so starting from the xml and using JAXB usually makes more sense. > I guess if you plan to use it in ActiveMQ, the generated schema has to be compatible with the previous ones, right ? Is that the case with your changes ? > > Guillaume > >> Le jeu. 2 août 2018 à 16:17, <luke@code-house.org> a écrit : >> Ladies and gentlemen, >> I started messing around XBean as its codebase is in moderate form. I’ve run into multiple issues while trying to get it running under Karaf 4.1 together with ActiveMQ and decided to push it forward. I spent last couple of days cleaning up duplicated code and refactoring maven plugin so it does not depend on any specific generator. There is still lots of things to do as there are several Spring tests which are failing. Due to stronger schema validation around 15 spring tests currently fails. This is because generated schema works only for basic elements and fails with embedded collections. I already started to reform that part and I should be able to update XsdGenerator. >> >> I would like to submit PR once I solve all the issues and test it with ActiveMQ would you accept my work? Due to amount of breaking changes I started 5.0.x branch (which might be good to start support Spring 4 or 5). >> There is one big commit so far in my GitHub fork:, which I can chunk into smaller (yet non compilable) commits in order to make history a bit clearer. >> >> Kind regards, >> Łukasz >> — >> Apache Karaf Commiter & PMC member >> luke@code-house.org >> Twitter: ldywicki >> Code-House - >> > > > -- > ------------------------ > Guillaume Nodet >
http://mail-archives.apache.org/mod_mbox/geronimo-dev/201808.mbox/%3CA019FDB3-5FE8-4445-9AA8-81357BDA8AAC@code-house.org%3E
CC-MAIN-2019-18
refinedweb
532
64.61
Using Read Excel(.xlsx) document using Apache POI , boolean as well as text cells. In the below example, i have used Apache POI...Read Excel(.xlsx) document using Apache POI In this section, you will learn how to read Excel file having .xlsx extension using Apache POI library Create Simple Excel(.xls) document using Apache POI POI library. In the given below example, we will going to create a simple excel...;. In the below example, i have used Apache POI version 3.7. For downloading...Create Simple Excel(.xls) document using Apache POI In this section, you Read Simple Excel(.xls) document using Apache POI Read Simple Excel(.xls) document using Apache POI In this section, you will learn how to read Excel file having .xls extension using Apache POI library. In the below example, we will read excel document having one sheet named as " Example jsp maps api example - JSP-Servlet jsp maps api example Can u give a complete working example on usage of JSP GOOGLE MAP API POI Word document (Letter Template) POI Word document (Letter Template) Dear Team, i need code for generating word document(letter format). i am unable to get the code for formats, font settings, letter type settings. please help me for the same. Thanks Find Records of The Rows Using POI Find Records of The Rows Using POI  ...; The methods used in this example are : getFirstCol(): This method is used...; Download this example JSP Database Example This example shows you how to develop JSP that connects to the database and retrieves the data from database. The retrieved data is displayed on the browser. Read Example JSP Database Example Thanks JSP date example insert checkbox in cell using POI api in java insert checkbox in cell using POI api in java I need to insert checkbox in excel cell using POI and java. Any one help me on this. Ashok S Set Data Format in Excel Using POI 3.0 Set Data Format in Excel Using POI 3.0  ... file using Java. POI version 3.0 provides a new feature for manipulating... Java. POI version 3.0 APIs provides user defined formatting facility and also - excel generating problem - JSP-Servlet jsp - excel generating problem Hi, I worked with the creating excel through jsp, which is the first example in this tutorial...:// In this page create excel sheet JSP Paging Example in Datagrid - JSP-Servlet JSP Paging Example in Datagrid Hi, I have tested JSP Paging Example... it successfully. When i try... on the url is customizable or not if Create and Save Excel File in JSP and saving Excel file from JSP application. In this example we are going... this example: Steps: Download the poi-bin-3.1-beta2-20080526.zip... Create and Save Excel File Find String Values of Cell Using POI Find String Values of Cell Using POI  ... of a string and a column value. The methods used in this example... table value 0 = Name Of Example String table value 1 = Status String table value i18n example i18n example Hi.. I need a code for jsp home where user can select his regional language like kannada, telugu , Tamil etc..so that jsp page gets displayed in that language:useBean id="user.../jsp/simple-jsp-example/UseBean.shtml...how can we use beans in jsp how can we use beans in jsp   Overview of the POI APIs Overview of the POI APIs Jakarta POI Jakarta provides Jakarta POI APIs.... In future Jakarta POI (Java API To Access Microsoft Format Files) will be able .(for example if i am selecting country as india in first dropdown list then in second jsp - JSP-Servlet /jsp/poi/createCell.shtml... of that row . It is like attendance.Give jsp code for dynamically ,by getting Empid JSP - JSP-Servlet JSP & Servlet Example Code Need example of JSP & Servlet Creating Shapes using Shape Groups Creating Shapes using Shape Groups In this section, you will learn how to create shapes using Apache POI library. EXAMPLE import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import Excel Sheet Header Excel Sheet Header In this section, you will learn how to create header on a sheet using Apache POI. In the given below example, we will create headers having different position on sheet.These headers will appear Excel Sheet Footer Excel Sheet Footer In this section, you will learn how to create footer on a sheet using Apache POI. In the given below example, we will create footers having different position on sheet. These footers will appear on the hardcopy If statement Example JSP If statement Example In this section with the help of example you will learn about the "If" statement in JSP. Control statement is used... a simple JSP program. Example : This example will help you to understand how to use Excel Splits Pane Feature Excel Splits Pane Feature In this section, you will learn how to split the excel sheet using Apache POI. Sometimes, you need to view more than one copy.... The given below example divide the viewing area of sheet into four areas JSON-JSP example JSON-JSP example In the previous section of JSON-Servlet example you have... will tell you how to use JSON to use it into JSP pages. In this example we have How to Create Excel Page Using JSP how to create excel page using jsp  ... using java .By going through the steps of this example we can create any number of excel pages. To make an excel page we can use third party POI APIs jsp code - JSP-Servlet jsp code sample code for change password example Old Password: new Password: Pagination in jsp - JSP-Servlet Pagination in jsp I need an example of pagination in JSP using display tag
http://www.roseindia.net/tutorialhelp/comment/91063
CC-MAIN-2014-49
refinedweb
963
65.22
Software and scripts we have written in the past. few a view viewer program with interactive regular expression filtering. It is similar to a combination of grep and less. c++int a wrapper to execute C++ source code files without explictely compiling them. Sony RM-X2S XMMS plugin. Use a Sony Joystick from car radios with XMMS. Win-O-Matic an OpenGL program to announce winners of competitions. scsiadd 1.97 lets you add and remove scsi devices from the Linux scsi subsystem on the fly. See the README and manpage for further details. If you like to do it graphical you can try scsiaddgui. If your system lacks the /proc/scsi/scsi file you have to enable the following option in your kernel configuration: Device Drivers->SCSI device support->legacy /proc/scsi/ support. /proc/scsi/scsi Device Drivers->SCSI device support->legacy /proc/scsi/ support cbmfs - a fuse filesystem to read Commodore 8-bit computer floppy images like .d64 and .d81. Those images can be mounted on an operating system with the fuse wrapper (for example linux) and files on the disk image can be accessed with your used tools. qtpod a GUI to control a Line6 POD 2.0 or POD Pro guitar effects processor. killcr will search through your directory recursivly and remove all DOS-style line feeds (\r\n) from your text files. It has a built in binary file check and file suffix check. Includes killws to strip traling whitespace from each line and addcr which adds DOS-style line feeds. dumphex 0.5 prints files and stdin in hex mode with arbitrary bytes per row. Includes a head and tail mode. lowerall a perl script, which will convert every file- and directoryname to lowercase, recursing through the subdirectories. truncate a small c program which truncates your files to a specified length. For further details see the man page. dojstream a C++ iostream compatible library. pdimport a Windows CE application which imports contacts and calendar information into you Pocket PC from xml files. apetag a command line program to extract APE Tags from a file. mp3recode.pl a perl script to encode audio data to mp3. Currently handles .wav, .mpc, .ogg, .wma and .mp3 input files. ID3V1 tags are preserved whenever possible. If you want to handle .mpc files, please download apetag (above) as well. Two perl scripts for m3u to pls conversion and vice versa. Use either m3u2pls or pls2m3u. gnomecomp is a shell script to compile Gnome 2.12 from sources. The first invocation should be sh gnomecomp -get which will download all needed files. If compilation aborts you can fix any errors and restart compilation with sh gnomecomp.Rant to the GNOME maintainers: I can't understand why you're not able to make a source download directoriy which contains all needed software. With every GNOME release at least one software is missing on the official ftp servers. sh gnomecomp -get sh gnomecomp buildapache is a shell script to build an Apache 1.3.x web server together with modperl, Embperl, SSL, modgzip and php5. And buildapache2 compiles Apache 2.2.x, mod_perl 2, Embperl 2, PHP5. dojhttpd is a http server that only generates one html page. It use this simple perl script when I make maintainance on my main http server so users are informed the web service is down. This program is intended to be started from inetd. Use the following inetd.conf line: www stream tcp nowait root /usr/sbin/dojhttpd. www stream tcp nowait root /usr/sbin/dojhttpd stripfilename will rename files so they should be easy usable with command line tools. rndls will search for files in directories given to the command line and print an arbitrary number of random files to stdout. A windows port of the the X11 program makedepend and a compiles GNU make for Windows. mount.nrg is a shell script to mount Nero Burning ROM images (.nrg) via the loopback device as iso9660 volumes on linux. beeper is a shell script which will beep every second forever. I use this after some long compile jobs to indicate it has finished ("make ; beeper"). pgm2cip.cpp is a small C++ program to convert grayscale images into the cip format used by Cisco's VOIP phones. Together with pgm2cip.sh which used ImageMagick's convert program you can convert any image to a cip. Currently the 2bit grayscale cip format is supported. t64dump dumps all files contained in a t64 file and optionally converts the t64 into a d64. sqlite3_pcre.c contains a function for SQLite 3 which lets you match string with regular expressions. The PCRE library is used which provides perl5 like regular expressions for C. Example: create table test ( s text ); insert into test values('this is a test'); insert into test values('foobar'); select * from test where pcre('i', s) > 0; -- will return the row with 'this is a test' sample2ps tools contains program to convert samples to postscript in various formats. multimidicast sends and receives MIDI from Alsa sequencers over network. Cisco XML Services for VoIP phones. wavinfo will display the structure and chunk info of wav files. A sample output: utopiaCriticalStop.wav RIFF 5816 bytes type:WAVE fmt 16 bytes type:"WAVE_FORMAT_PCM" channels:1 rate:22050 data 3724 bytes = 0:00:00 fact 4 bytes sampleLength: 1414285638 DISP 36 bytes text: Windows 95 Utopia Sound Scheme LIST 148 bytes INFO ICMT 70 bytes: Comments: Maz & Kilgore 208 W. 30th #701 New York, NY 10001 mazrob@panix.com ICOP 28 bytes: Copyright: 1995 Microsoft Corporation ISBJ 22 bytes: Subject: Utopia Critical Stop wavmerge will concatenate multiple wav files. wavshaper can amplify a .wav file to resemble the siluette of a picture. pas2ps a port of the pascal to postscript compiler by Dulith Herath. morkdump will dump the contents of files in the mozilla mork db format. htmltable2latex.pm is a perl module to convert html tables to LaTeX code. poor man's lint as a g++ wrapper. pkg-plist.sh is a shell script to help in writing the pkg-plist file for FreeBSD ports. dechexoctbin is a C++ program to convert numbers between 4 different base systems: decimal, hexadecimal, octal, binary. If you are annoyed by endless streams of "Re: Re: Re:" in your email subjects and you use procmail to filter your mails, here's a small mail filter written in perl to fix your subject lines: fixreaw.pl. It will cope with the german variant of "AW" as well. Use it in your .procmailrc with a recipe like this: :0 f | /usr/bin/fixreaw.pl :0 f | /usr/bin/fixreaw.pl sshhackfilter is my perl script to set Linux Firewall (iptables) rules for SSH brute force login attempts. dns_inc_serial is a small C program to increase the serial number in DNS zone files. A similar program is zsu. Use this program with my edit_zone shell script to fully automate editing, checking and reloading your bind zones. edit_zone fix_libtool_la a shell script which will change the location of libtool .la files in all your .la files on your harddisk. This comes handy when you have installed or moved a library and libtool now complains it can't find it anymore. cmi a shell script which will configure, make and install a program from its source archive. If you have a .tar.bz2 or .tar.gz archive getting the program up and running is a easy as typing cmi coolsoftware-1.3.tar.bz2. You can leave out the archive filename and run the script from the source directory. It will generate any missing autoconf script by running an autogen.sh if present. cmi coolsoftware-1.3.tar.bz2 A socket connect wrapper library which is used to lower the default system connect() timeout. setproctitle library 0.3.2 is an enhanced version of the library from Dmitry V. Levin which implements the setproctitle(3) function on Linux. We have added support for a leading "-" character to omit the program name in the process list and optimized memory access. On Linux kernel versions >= 2.6.9 you can use the prctl(2) function with PR_SET_NAME, however it only allows up to 16 characters. The setproctitle library usually allows for longer titles. Your code would look like: PR_SET_NAME #include <sys/prctl.h> prctl(PR_SET_NAME,"Program Name",0,0,0); Message Digest Aggregate Functions for PostgreSQL is a collection of functions bundeled as a module to extent PostgreSQL's cryptographic abilities. ext3create.sh is a shell script for convenient creation of ext3 file systems on Linux. You can also use vfatcreate.sh to create VFAT file systems on linux. A Linux shell script to scan from a Canon Lide 25 USB scanner. You need the scanimage program from SANE. scanimage A perl program top100.pl to convert the High Voltage SID Collection top 100 into WAV/MP3 with sidplay. And a similar program hvsc_convert.pl to convert a SID file from the High Voltage SID Collection to MP3. I still like using CVS for version control, mainly because of the simple repository format, which lets me fix any problems or software confusion my manipulating (text) files. And often you can get a confused repository working again, by simply deleting a ,v or restoring it from a backup. You might loose some history on that file, but at least you have a working repository again. Below you'll find some tools I use along CVS to ease my daily work. ,v vc is a simple shell script for unified command line handling for CVS, subversion, git. It tries to provide the same semantics for some often used commands to these version control software. cvsadd is a shell script to add files and directories recursively to a checked-out CVS directory. This script complements CVS's import command, since add doesn't not work recursively and import can only be used to create repositories. cvsstatus is a perl script to produce one line summaries of modified files. CVS MD5 Checksum is a hook program for a CVS repository to check the repository files for corruptions with MD5 checksums. To automatically set files as binary detected by their file extension I use this cvswrappers configuration.
http://llg.cubic.org/tools/index.html
CC-MAIN-2017-39
refinedweb
1,702
67.15
This blog post is about using python to execute code locally on the server in response to http GET requests. So far you are thinking so what? You are already crafting your comment and it is saying something like, "Google mod_python" or "Google mod_perl". You are right, the best way to do CGI is via mod_perl, mod_php or mod_perl. The problem is user access and chroot. Apache will execute server side scripts as the user / group defined in the main httpd.conf. In my case: apache / apache. Apache will also assume a document root of /var/www/ for scripts (on a Centos 5.5 box) even if the userdir module is in use. My problem was: How to get apache to execute scripts as dave:dave on doc root = /home/dave/. It was critical to get this working because the scripts in question interact with the .gnupg/pubkeyring and .gnupg/seckeyring files under /home/dave/.gnupg/. Basically, I was making some kind of web based PGP key server. A web based gui for remote users to manage keys. In the end I settled for python and the BaseHTTPServer. First of all a simple class that will accept a shell command, execute it and return the stdout. import popen2 class MyShellCommand: """execute a command, capture stdout and return it.""" def __callShellCmd(self, cmd): stdout, stdin = popen2.popen2(command) data = "" while True: c = stdout.read(1) if( c ): data += c else: break return data """concrete example""" def getPublicGPGKey(self, keyid): """TODO: Add logic to validate key id...""" command = "gpg -a --export '%s'" % keyid return self.__callShellCmd(command) Now that we have a utility to retrieve public keys from the gpg keyring, lets call it from a webservice that is owned and operated by user:group, dave:dave. import MyShellCommand import time import BaseHTTPServer, cgi """Configure host ip and port to listen on. Use high port for non root users.""" HOST_NAME = '127.0.0.1' PORT_NUMBER = 8080 class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler): def do_HEAD(s): s.send_response(200) s.send_header("Content-type", "text/html") s.end_headers() def do_GET(s): s.send_response(200) s.send_header("Content-type", "text/html") s.end_headers() """Get the path and find the parameters""" path, query_string = s.path.split('?', 1) params = dict(cgi.parse_qsl(query_string)) """create a shell object""" shell = MyShellCommand() """Validate the call being made""" if path == '/publickey': """Validate the parameters""" if params.has_key('id'): s.wfile.write('%s' % shell.getPublicGPGKey(params['id'])) """I dont like descriptive errors.""" else: s.wfile.write('An error occurred.') else: s.wfile.write('An error occurred.')) If you execute the above script, you will have a working webservice that responds nicely to only one specific set of GET data. Call it with a URL like this:
http://david-latham.blogspot.com/2010/10/
CC-MAIN-2019-51
refinedweb
452
69.18
#include <getquotajob.h> Detailed Description Gets resource limits for a quota root.ajob.h. Member Function Documentation The quota root that resource limit information will be fetched for. Definition at line 73 of file getquotajob.cpp. Set the quota root to get the resource limits for. - Parameters - - See also - GetQuotaRootJob Definition at line 67 of file getquotajob.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2019 The KDE developers. Generated on Thu Apr 18 2019 02:56:32 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/kdepim/kimap2/html/classKIMAP2_1_1GetQuotaJob.html
CC-MAIN-2019-26
refinedweb
104
51.34
This is a basic guide for deploying a LoopBack 4 (LB4) app to IBM Cloud. In the setup explained below, your app will use a provisioned Cloudant service when running on the IBM Cloud. NOTE: Production deployment to IBM Cloud is a much bigger topic with many possible options, refer to “IBM Cloud Continuous Delivery: Build, deploy, and manage apps with toolchains” for the details. Before we begin Make sure you have: - an account on IBM Cloud. If not, you can sign up here. - installed Cloud Foundry CLI Preparing your application We will be using the “todo” example from the loopback-next repository as a basis for the instruction. You can quickly clone the “todo” example app by running the command: lb4 example todo Then you can replace the default memory-based connector of the app with a Cloudant connector, so data is persisted. Step 1: Provisioning a Cloudant database service - Go to the IBM Cloud Catalog, select Cloudantunder All Categories> Databases. Name your Cloudant service name as myCloudant. Keep the defaults for region and resource group. Select “Use both legacy credentials and IAM” as the available authentication methods - Click Create. Step 2: Creating a database named todo. - Go to your IBM Cloud dashboard. - Click on myCloudantunder Services. Click Launch Cloudant Dashboard. In the Cloudant dashboard, click Create Databaseat the top of the page and name it as todo. Step 3: Updating your DataSource Update db.datasource.ts to use the Cloudant connector. The value for the url property is just a placeholder and does not need to have the correct credential because we will be binding the app with the Cloudant service once it’s pushed to IBM Cloud. const config = { name: 'db', connector: 'cloudant', url: '', database: 'todo', modelIndex: '', }; Install the loopback-connector-cloudant package. $ npm i loopback-connector-cloudant Step 4: Updating the application We will use the cfenvmodule to simplify some of the Cloud Foundry related operations. Install cfenvin the project directory. $ npm i cfenv Update the src/index.tsfile to the following to enable service binding. Add the 3 snippets as indicated below: import {TodoListApplication} from './application'; import {ApplicationConfig} from '@loopback/core'; // --------- ADD THIS SNIPPET --------- import {DbDataSource} from './datasources/db.datasource'; const cfenv = require('cfenv'); const appEnv = cfenv.getAppEnv(); // --------- ADD THIS SNIPPET --------- export async function main(options?: ApplicationConfig) { // --------- ADD THIS SNIPPET --------- // Set the port assigned for the app if (!options) options = {}; if (!options.rest) options.rest = {}; options.rest.port = appEnv.isLocal ? options.rest.port : appEnv.port; options.rest.host = appEnv.isLocal ? options.rest.host : appEnv.host; // --------- ADD THIS SNIPPET --------- const app = new TodoListApplication(options); // --------- ADD THIS SNIPPET --------- // If running on IBM Cloud, we get the Cloudant service details from VCAP_SERVICES if (!appEnv.isLocal) { // 'myCloudant' is the name of the provisioned Cloudant service const dbConfig = Object.assign({}, DbDataSource.defaultConfig, { url: appEnv.getServiceURL('myCloudant'), }); app.bind('datasources.config.db').to(dbConfig); } // --------- ADD THIS SNIPPET --------- await app.boot(); await app.start(); const url = app.restServer.url; console.log(`Server is running at ${url}`); return app; } Remove the prestartscript from package.json, since we don’t want to do any building on the cloud. Note: If you make more changes to the application after this point, remember to run npm run build to transpile the code before deploying. (Optional) At project root, create a file called .cfignorewith the following content: node_modules/ .vscode/ .git This step is optional, however, dependencies will be installed during deployment and thus node_moduleswill be generated. It makes the upload of node_modulesreductant and time consuming. Step 5: Deploying the application to IBM Cloud Use cf logincommand to login. If you’re using a federated user id, you can use the --ssooption. After you’ve been successfully logged in, you’ll see the CF API endpoint. API endpoint: (API version: 2.106.0) After logging in, you can run this command: cf push <<your-app-name>> The app name in the command is the Cloud Foundry application that will show up in the IBM Cloud dashboard. Step 6: Binding the Cloudant service to your application - Go to the IBM Cloud dashboard (). - Under Cloud Foundry Applications, you should see your application name. Click on it. - In the “Overview” tab, go to Connections> Create connection. myCloudantservice. - After the binding is done, you should see it from the Overviewpage. - You will be asked to restart your application. Step 7: Testing your endpoints - Go to your application page. If you’re not already in there, it can be found under Cloud Foundry Appsin the IBM Cloud dashboard. - Click Visit App URLto get the URL of your application. It will then bring you to the API explorer for testing your endpoints.
https://loopback.io/doc/en/lb4/Deploying-to-IBM-Cloud.html
CC-MAIN-2021-31
refinedweb
768
59.09
We've covered how a container gets selected, but how does Arquillian know how to locate or communicate with the container? That's where configuration comes in. You can come a long way with default values, but at some point you may need to customize some of the container settings to fit your environment. Let's see how this can be done with Arquillian. Arquillian will look for configuration settings in a file named arquillian.xml in the root of your classpath. If it exists it will be auto loaded, else default values will be used. This file is not a requirement. Let's imagine that we're working for the company example.com and in our environment we have two servers; test.example.com and hudson.example.com. test.example.com is the JBoss instance we use for our integration tests and hudson.example.com is our continuous integration server that we want to run our integration suite from. By default, Arquillian will use localhost, so we need to tell it to use test.example.com to run the tests. The JBoss AS container by default use the Servlet protocol, so we have to override the default configuration. That should do it! Here we use the JBoss AS 6.0 Remote container which default use the Servlet 3.0 protocol implementation. We override the default Servlet configuration to say that the http requests for this container can be executed over test.example.com:8181, but we also need to configure the container so it knows where to deploy our archives. We could for example have configured the Servlet protocol to communicate with a Apache server in front of the JBoss AS Server if we wanted to. Each container has different configuration options. 4 Commentscomments.show.hide Jun 10, 2011 Joshua Davis The XML schema reference doesn't work that well for me inside IntelliJ IDEA. I think it's because the target namespace in the XSD doesn't match the xmlns reference in the example. Here is what works for me: So, basically instead of Feb 25, 2012 Jürg Brandenberger This part is to terse to be helpful, please add the source of the configaration rules to allow adaption to other containers and some real world examples. For instance, how the arquillian.xml would look for a glassfish 3.1 embedded configuration? How and where to tell the glassfish embedded to find the jdbc driver to connect to a data source? How to configure a data source? what goes into the arquillian.xml and what has to be configured in the pom? Do the container configuration options have any relation with the arquillian.xml or are they pom only? Apr 12, 2013 Michel Graciano For me, to make the code completion works at NetBeans I needed to update the schema as follow: Jul 25, 2014 The Alchemist It's not just NetBeans, it's any IDE because the schema location had changed. I updated the example, thanks for the tip! :)
https://docs.jboss.org/author/display/ARQ/Container+configuration
CC-MAIN-2018-22
refinedweb
501
67.45
I am working my way through a c programming book and I am a little confused with the const modifier. It says in my book that after you declare a variable with the const modifier, the content of a variable cannot be changed. But if i run this program: #include <stdio.h> int main(int argc, char *argv[]) { int const x = 10; x = 54; printf("%i",x); system("PAUSE"); return 0; } It compiles fine and I was able to change the contents of the variable. The only thing that happend was my compiler (bloodshed dev c++) give me a warning about trying to change a read only variable. So is that the only purpose of the const modifier, to give you a warning if you try to change it?
http://cboard.cprogramming.com/c-programming/57240-const-modifier-printable-thread.html
CC-MAIN-2014-41
refinedweb
129
67.38
As defined by Amazon Web Services (AWS), Amplify is a set of products and tools with which mobile and front-end web developers can build and deploy AWS-powered, secure, and scalable full-stack apps. Also, you can efficiently configure their back ends, connect them to your app with just a few lines of code, and deploy static web apps in only three steps. Historically, because of their performance issues, managing images and videos is a daunting challenge for developers. Even though you can easily load media to an S3 bucket with AWS Amplify, transforming, compressing, and responsively delivering them is labor intensive and time consuming. Enter Cloudinary as an effective cloud-based media-management platform, with which you can efficiently and seamlessly create, manage, and deliver satisfactory experiences to all browsers and devices, regardless of their bandwidth. Uploading media to Cloudinary is effortless, after which it dynamically transforms them on the fly, largely heading off infrastructure- and maintenance-related concerns. Furthermore, Cloudinary offers software development kits (SDKs) for all popular programming languages. This post shows you how to build a blog, host it with Amplify, and transform its videos with Cloudinary. The code repository is on GitHub. As a prerequisite, set up an AWS account. For reference, see the Amplify documentation and this Amplify tutorial. Follow the steps below: Add Amplify to your terminal. Type: npm install -g @aws-amplify/cli Configure the new project. Type: amplify configure Create a React app as a starting point for the project. Type: npx create-react-app amplify-jamstack-cloudinary-video This step takes a while, after which you’ll see output on the Yarn commands you can run from the project directory. For more details, see this writeup on React toolchains. Go to the project directory and start the React app. Type: cd amplify-jamstack-cloudinary-video yarn start To see the demo running, run the local server with the command npm start. When the app finishes running, React displays its rotating icon. Tip: If you cannot connect to the correct instance, go to the ~/.aws directory and verify that the credentials you entered match those listed there. For more details, see the AWS documentation on named profiles. Now turn your React app into an AWS Amplify app. First, configure your AWS account with the connections required for your app. Type: amplify init In case of no response from Amplify, type ctrl+c to ensure that you have exited your local server. Amplify then displays the following prompts. Respond to each of them by typing Enter to select the default as shown. Below are my responses for your reference. ? Enter a name for the project amplifyjamstackcloud ? Enter a name for the environment dev ? Choose your default editor: Visual Studio Code ? ? Do you want to use an AWS profile? Yes ? Please choose the profile you want to use (Use arrow keys) ❯ default If this is your first time you set up an AWS account profile, I suggest you pick the defaults for your settings. However, if you’re on another AWS-related project that uses the AWS SDK, you might have a separate profile already. Ensure that you specify the correct one. Afterwards, AWS displays a confirmation: Adding backend environment dev to AWS Amplify Console app: d2hxdxps86f74m Go to AWS Amplify and you’ll see your new app amplifyjamstackcloud there, assuming that’s the name you specified. If you don’t see your app, you might be in the wrong geographical region. To double-check, pull down the drop-down menu in the upper-left corner. Now that your app is in the cloud, click General in the left navigation and note these details: - The app, called d2hxdxps86f74m, is identical to the one in the console. - The back end of this project and its location are displayed under Backend environments (see the screenshot below). AWS Amplify leverages AWS CloudFormation under the hood, a technique known as back end as code (BaC). aws-amplify is the main library that works with Amplify in apps. The @aws-amplify/ui-react package contains React-specific UI components, which you’ll leverage. Install it with this command: npm install aws-amplify @aws-amplify/ui-react Next, have React import Amplify and configure it according to the settings created with Amplify’s CLI tool, which is located in ./aws-exports. Type the following commands: Code language: JavaScript (javascript)Code language: JavaScript (javascript) import awsExports from "./aws-exports"; Amplify.configure(awsExports); Your app is now ready to call Amplify and, inherently, the AWS SDK. I suggest adopting GraphQL, which sends real-time notifications, as the database layer for your app. Feel free to use the REST API instead if you prefer. To add a database layer, type: amplify add api Amplify then displays several prompts. Below is my suggestion of the responses at the end of each of the prompts. To take this demo further later on, respond with multiple objects. ? Please select from one of the below mentioned services: GraphQL ? Provide API name: amplifyjamstackcloud ? Choose the default authorization type for the API API key ? Enter a description for the API key: sample ? After how many days from now the API key should expire (1-365): 365 ? Do you want to configure advanced settings for the GraphQL API No, I am done. ? Do you have an annotated GraphQL schema? No ? Choose a schema template: Single object with fields (e.g., “Todo” with ID, name, description) When prompted “Do you want to edit the schema now? (y/n),” typing y opens your default editor for the schema file. I typed n and ran the command below instead, opening all the project files in VSCode. code . The code for the back end you’re creating resides in the amplify directory. The newly created schema, which is in amplify/backend/api/amplifyjamstackcloud/schema.graphql, reads like this: Code language: JavaScript (javascript)Code language: JavaScript (javascript) id: ID! name: String! description: String } Replace Todo with Video and rename the file Videos. Code language: JavaScript (javascript)Code language: JavaScript (javascript) id: ID! name: String! description: String } Next, “push” the schema to the AWS cloud by typing this command: amplify push This process contains numerous steps, many of which are reflected in the amplify/backend/api/amplifyjamstackcloud/build directory. Any questions, feel free to contact me. Amplify then prompts you to reply to the questions below. Respond with the default for each of them. Code language: JavaScript (javascript)Code language: JavaScript (javascript) ✔ Successfully pulled backend environment dev from the cloud. Current Environment: dev ? Are you sure you want to continue? Yes The following types do not have '@auth' enabled. Consider using @auth with @model - Videos Learn more about @auth here: < GraphQL schema compiled successfully. Edit your schema at /Users/ajonp/web/amplify-jamstack-cloudinary-video/amplify/backend/api/amplifyjamstackcloud/schema.graphql or place .graphql files in a directory at /Users/ajonp/web/amplify-jamstack-cloudinary-video/amplify/backend/api/amplifyjamstackcloud/schema ? ⠼ Updating resources in the cloud. This may take a few minutes . . . After creating and executing CloudFormation in the cloud, the AWS Amplify CLI outputs two credentials and stores them in the src/aws-exports.js file: GraphQL endpoint: GraphQL API KEY: <example> Your React app can now connect with GraphQL with the endpoint and API key. Now create a simple app, largely according to the procedure described in the Amplify tutorial Getting Started. Type this command to start React in your browser: npm run start At the bottom of the form are the name and description you added, if any. To see the DynamoDB data and verify that the storage is local, log in to your AWS account and click the table that is displayed for your video name and description. In case of no such display, verify that you’re in the correct geographical region. To set up video on demand or streaming, follow the procedure in the tutorial AWS Amplify Video. Upload the video for this project with Cloudinary’s upload widget. Adopt the signed approach to leverage additional AWS Amplify features and better secure your Cloudinary account. Add an upload preset in Cloudinary by first clicking the gear icon in your console and then clicking the Upload tab near the top to go to console>/settings/upload. Scroll down to Upload presets and click Add upload preset. On the screen that is displayed: - Choose Signed from the pull-down menu under Signing Mode. - Specify a folder name (e.g., example folder) under Folder. Note the name of the upload preset at the top and your cloud name in your Cloudinary dashboard for use later. To add the upload code, edit your public/index.html file, as follows: 1. Add Cloudinary’s upload widget by adding this script to the header section: <script src="<</script> 2. Add the variable below for the upload widget. Be sure to replace my_cloud_name and my_preset with their values. const uploadWidget = window.cloudinary.createUploadWidget({ cloudName: 'my_cloud_name', uploadPreset: 'my_preset'}, (error, result) => { if (!error && result && result.event === "success") { console.log('Done! Here is the image info: ', result.info); } } ); 3. Add the <button> tag anywhere in the file for opening the widget. I recommend placing that code above that for the Submit button. <button style={styles.button} onClick={addVideo}>Create Video</button> 4. Add the function below for opening the widget: Code language: CSS (css)Code language: CSS (css) uploadWidget.open(); } At this point, clicking the Upload Video button might trigger the error message below. That’s because you must first sign this upload for security. See the next section for the procedure. To obtain a signature from Cloudinary, create an API call with AWS Amplify and a Lambda datasource to enable a return of the request. First, create a function by running this command: amplify add function In response to the top five questions of the nine that are displayed, specify the parameters of the Lambda function (serverless function): `cloudinarysignature` `cloudinarysignature` `NodeJS` `Hello World` Respond with No to the bottom four questions. See the screenshot below. Next, connect the Lamba datasource to your AppSync API by adding the line below to the amplify/backend/api/amplifyjamstackcloud/schema.graphql file: type Query { cloudinarysignature(msg: String): String @function(name: "cloudinarysignature-${env}") } ${env} in the above code makes available your development environment for use, especially if you’d like to test your code later in a staging or product environment. Now load the updates to AWS with this command: amplify push AWS then displays the output below, showing the new function to be created and the GraphQL API to be updated. Type Y in response to all the prompts. The process that follows takes a few minutes. Tip: Type amplify push --y to skip the above questions in the future. Note that your stack is displayed in both the AWS console and CloudFormation. Below is an example. For a lighter version (example below), go to Amplify, select your project, and choose backend > dev. After performing a push, Amplify sends you a message with your GraphQL endpoint and API key, which is also a confirmation that Amplify has updated your code, including your GraphQL values, locally. A new query called cloudinarysignature, with which you call your Lambda function through AppSync, is now in your src/graphql/queries.js file. Add this code, which calls your GraphQL endpoint, to your App.js file: //"); } } Then update showWidget to call this post before opening our dialog. You could put this anywhere but I thought this would be easy. const showWidget = () => { fetchCloudinarySignature(); uploadWidget.open(); } Lambda then returns the “Hello from Lambda!” response you requested. {"data":{"cloudinarysignature":"{statusCode=200, body=\\"Hello from Lambda!\\"}"}} For details on this topic, see the documentation on Cloudinary’s Node.js SDK. To update the Lambda function to call Cloudinary, follow these steps: 1. Add the Cloudinary SDK by running npm install cloudinary in the directory amplify/backend/function/cloudinarysignature/src. Tip: If you are using VSCode, just type the command and you’ll be taken to that directory. 2. Add the following code to the amplify/backend/function/cloudinarysignature/src/index.js file: /* eslint-disable no-unused-vars */ /* eslint-disable no-undef */ const cloudinary = require("cloudinary").v2; exports.handler = async (event) => { console.log(event); const secret = process.env.CLOUDINARY_API_SECRET; const response = { statusCode: 400, body: `Missing CLOUDINARY_API_SECRET`, }; if (!secret) { return response; } const timestamp = Math.round(new Date().getTime() / 1000); const signature = await cloudinary.utils.api_sign_request( JSON.parse(event.arguments.msg), secret ); response.body = signature; return JSON.stringify(response); }; Clicking the Upload Video button now triggers the error message that your Cloudinary key is missing: {"data":{"cloudinarysignature":"{statusCode=400, body=Missing CLOUDINARY_API_SECRET}"}} 3. Get the Cloudinary key from your Cloudinary dashboard, go to the AWS Console, and click Lambda > Functions. 4. Click the cloudinarysignature-dev function on the list. Scroll down to the Environment variables section and click Edit. 5. In the Edit environment variables screen (see below), fill in the CLOUDINARY_API_SECRET field with your API key and click Save. Keep your API key confidential and do not put it in your code repository. As a test, go to your React app at and click Upload Video to ensure that React returns your secret key. Now the call is ready for use with your creatUploadWidget function request. Below is the complete content of the App.js file. /* src/App.js */ import React, { useEffect, useState } from "react"; import Amplify, { API, graphqlOperation } from "aws-amplify"; import { createVideo } from "./graphql/mutations"; import { listVideos, cloudinarysignature } from "./graphql/queries"; import awsExports from "./aws-exports"; Amplify.configure(awsExports); const initialState = { name: "", description: "", cloudinary: null, }; //"); } } const App = () => { const [formState, setFormState] = useState(initialState); const [videos, setVideos] = useState([]); useEffect(() => { fetchVideos(); }, []); function setInput(key, value) { setFormState({ ...formState, [key]: value }); } const uploadWidget = window.cloudinary.createUploadWidget( { cloudName: "ajonp", uploadPreset: "dxf42z9k", }, (error, result) => { if (!error && result && result.event === "success") { console.log("Done! Here is the video info: ", result.info); setInput("cloudinary", JSON.stringify(result.info)); } if (error) { console.log(error); } } ); const showWidget = () => { uploadWidget.open(); }; async function fetchVideos() { try { const videoData = await API.graphql(graphqlOperation(listVideos)); const videos = videoData.data.listVideos.items; videos.map((video) => { video.cloudinary = JSON.parse(video.cloudinary); }); setVideos(videos); } catch (err) { console.log("error fetching videos"); } } async function addVideo() { try { if (!formState.name || !formState.description) return; const video = { ...formState }; setVideos([...videos, video]); setFormState(initialState); await API.graphql(graphqlOperation(createVideo, { input: video })); } catch (err) { console.log("error creating video:", err); } } return ( <div style={styles.container}> <h2>Amplify Videos</h2> <button style={styles.uploadButton} <button style={styles.button} onClick={addVideo}> Add Video to List </button> {videos.map((video, index) => ( <div key={video.id ? video.id : index} style={styles.video}> <p style={styles.videoName}>{video.name}</p> <p style={styles.videoDescription}>{video.description}</p> <div style={styles.vids}> <video controls muted <source src={video.cloudinary.secure_url}</source> </video> </div> </div> ))} </div> ); }; const styles = { vids: { maxWidth: "800px", }, container: { width: 400, margin: "0 auto", display: "flex", flexDirection: "column", justifyContent: "center", padding: 20, }, video: { marginBottom: 15 }, input: { border: "none", backgroundColor: "#ddd", marginBottom: 10, padding: 8, fontSize: 18, }, videoName: { fontSize: 20, fontWeight: "bold" }, videoDescription: { marginBottom: 0 }, button: { backgroundColor: "black", color: "white", outline: "none", fontSize: 18, padding: "12px 0px", }, uploadButton: { margin: "22px" }, }; export default App; Be sure to update your GraphQL by adding Cloudinary’s JSON: type Video @model { id: ID! name: String! description: String cloudinary: AWSJSON } type Query { cloudinarysignature(msg: String): String @function(name: "cloudinarysignature-${env}") } Follow these three simple steps: 1. Build your app. Type: npm run build 2. Add the hosting capability. Type: amplify add hosting Respond to the two prompts that are displayed as follows: 3. Push the content to the web. Type: amplify publish That URL at the bottom, is your new app. Feel free to rename it with a custom domain. Here’s a demo of the app. Also, the tutorial Getting Started With AWS Amplify is a handy reference. A few suggestions: - Add the Cloudinary JavaScript loader to spotlight the actual video. - Add an authentication process to restrict the publishing privilege to the authorized people only. - Instead of uploading videos with the Cloudinary upload widget, upload to S3 and then update with the Cloudinary SDK from a Lambda trigger. - For a real-time feel whenever changes occur, update the list through a subscription model.
https://cloudinary.com/blog/amplify_your_jamstack_with_video
CC-MAIN-2022-21
refinedweb
2,690
50.63
You can call a JavaScript function from Go and call a Go function from WebAssembly: package main // This calls a JS function from Go. func main() { println("adding two numbers:", add(2, 3)) // expecting 5 } // This function is imported from JavaScript, as it doesn't define a body. // You should define a function named 'main.add' in the WebAssembly 'env' // module from JavaScript. func add(x, y int) // This function is exported to JavaScript, so can be called using // exports.multiply() in JavaScript. //go:export multiply func multiply(x, y int) int { return x * y; } Related JavaScript would look something like this: // Providing the environment object, used in WebAssembly.instantiateStreaming. env: { 'main.add': function(x, y) { return x + y } // ... other functions } // Calling the multiply function: console.log('multiplied two numbers:', wasm.exports.multiply(5, 3)); You can also simply execute code in func main(), like in the standard library implementation of WebAssembly. If you have tinygo installed, it’s as simple as providing the correct target: tinygo build -o wasm.wasm -target wasm ./main.go If you’re using the docker image, you need to mount your workspace into the image. Note the --no-debug flag, which reduces the size of the final binary by removing debug symbols from the output: docker run -v $GOPATH:/go -e "GOPATH=/go" tinygo/tinygo tinygo build -o /go/src/github.com/myuser/myrepo/wasm.wasm -target wasm --no-debug /go/src/github.com/myuser/myrepo/wasm-main.go Make sure you copy wasm_exec.js to your runtime environment: docker run -v $GOPATH:/go -e "GOPATH=/go" tinygo/tinygo /bin/bash -c "cp /usr/local/tinygo/targets/wasm_exec.js /go/src/github.com/myuser/myrepo/ More complete examples are provided in the wasm examples. Execution of the contents require a few JS helper functions which are called from WebAssembly. We have defined these in tinygo/targets/wasm_exec.js. It is based on $GOROOT/misc/wasm/wasm_exec.js from the standard library, but is slightly different. Ensure you are using the same version of wasm_exec.js as the version of tinygo you are using to compile. The general steps required to run the WebAssembly file in the browser includes WebAssembly.instantiateStreaming, or WebAssembly.instantiate in some browsers: const go = new Go(); // Defined in wasm_exec.js const WASM_URL = 'wasm.wasm'; var wasm; if ('instantiateStreaming' in WebAssembly) { WebAssembly.instantiateStreaming(fetch(WASM_URL), go.importObject).then(function (obj) { wasm = obj.instance; go.run(wasm); }) } else { fetch(WASM_URL).then(resp => resp.arrayBuffer() ).then(bytes => WebAssembly.instantiate(bytes, go.importObject).then(function (obj) { wasm = obj.instance; go.run(wasm); }) ) } If you have used explicit exports, you can call them by invoking them under the wasm.exports namespace. See the export directory in the examples for an example of this. In addition to the JavaScript, it is important the wasm file is served with the Content-Type header set to application/wasm. Without it, most browsers won’t run it. package main import ( "log" "net/http" "strings" ) const dir = "./html" simple server serves anything inside the ./html directory on port 8080, setting any *.wasm files Content-Type header appropriately. For development purposes (only!), it also sets the Cache-Control header so your browser doesn’t cache the files. This is useful while developing, to ensure your browser displays the newest wasm when you recompile. In a production environment you probably wouldn’t want to set the Cache-Control header like this. Caching is generally beneficial for end users. Further information on the Cache-Control header can be found here:
https://tinygo.org/webassembly/webassembly/
CC-MAIN-2019-30
refinedweb
587
52.46
Before we proceed, we need to add a second forest. You will need yet another Windows Server 2003 computer to proceed. Give this new server an IP address of 192.168.1.75 and a name of DCA . Be sure to set the computer's DNS server address to the same value as its IP address. After installing Server 2003, promote the server to domain controller status as as outlined in Chapter 2 using the following guidelines: Create a new domain controller for a new domain Name the domain piggy .wig Create a new domain in a new forest When asked, be sure to choose Install and configure the DNS server on this computer Once your domain controller for the piggy.wig domain is configured, you have not only created a new domain, you have also created a new forest (albeit a forest of one domain and one domain controller). We now need to make sure that these two domains have common ground in which to resolve DNS namespaces. On the new domain controller DCA in the piggy.wig domain, open the DNS administration tools located on Start ˆ’ > Administrative Tools . Right-click the DCA icon on the left pane of the window and choose Properties . Click the Forwarders tab. This window allows you to specify specific domains that are not handled by the piggy.wig DNS server. For each of these domains, we can specify a DNS server in which to forward requests made by clients . Since each domain controller in all three domains in the guinea.pig forest are essentially copies of each other (due to replication), we can point the piggy.wig DNS server to any of the domain controllers in that forest (DC01, DC02, or DC03). Next to the DNS Domain field, click New and add type guinea.pig , hitting Enter . The name guinea.pig appears in the domain list. Click guinea.pig once. Click once in the field labeled Selected domain's forwarder IP address list and enter the IP address of a DNS server in the guinea.pig domain. For this example, we use DC01's IP of 192.168.1.1 . Click Add . Repeat this process for austin.guinea.pig and denver.guinea.pig, pointing both of those domains to DC01 (192.168.1.1). In addition to forwarding requests for all three domains to DC01, you can also add multiple DNS forwards for each domain. For example, if the DNS server DC01 in guinea.pig is down, we can direct the piggy.wig DNS server to try DC02. If DC02 is down, we could instruct piggy.wig to try DC03. Clients in piggy.wig are now able to access objects in the guinea.pig forest. When all is said and done, you should see something similar to this screenshot. Click OK when finished: On DC01 in the guinea.pig domain, add a forwarder to point DC01's DNS server to piggy.wig's DNS server. Do the same for DC02 and DC03. This way, clients in denver, austin, and guinea.pig are able to access objects in piggy.wig.
https://flylib.com/books/en/1.239.1.51/1/
CC-MAIN-2019-13
refinedweb
517
77.03
You want to write a server that handles multiple clients from within the one process, without using Perl 5.8's threads or the complexity of non-blocking I/O. Use the cooperative multitasking framework POE (available from CPAN) and the accompanying POE::Component::Server::TCP module to create the server for you: #!/usr/bin/perl use warnings; use strict; use POE qw(Component::Server::TCP); # Start a TCP server. Client input will be logged to the console and # echoed back to the client, one line at a time. POE::Component::Server::TCP->new ( Port => $PORT_NUMBER, # port to listen on ClientInput => \&handle_input, # method to call with input ); # Start the server. $poe_kernel->run( ); exit 0; sub handle_input { my ( $session, $heap, $input ) = @_[ SESSION, HEAP, ARG0 ]; # $session is a POE::Session object unique to this connection, # $heap is this connection's between-callback storage. # New data from client is in $input. Newlines are removed. # To echo input back to the client, simply say: $heap->{client}->put($input); # and log it to the console print "client ", $session->ID, ": $input\n"; } POE is a cooperatively multitasking framework for Perl built entirely out of software components. POE doesn't require you to recompile the Perl interpreter to support threads, but it does require you to design your program around the ideas of events and callbacks. Documentation for this framework is available at. It helps to think of POE as an operating system: there's the kernel (an object responsible for deciding which piece of code is run next) and your processes (called sessions, implemented as objects). POE stores the kernel object in the variable $poe_kernel, which is automatically imported into your namespace. Each process in your operating system has a heap, memory where the variables for that process are stored. Sessions have heaps as well. In an operating system, I/O libraries handle buffered I/O. In POE, a wheel handles accepting data from a writer and sending it on to a reader. There are dozens of prebuilt sessions (called components) for servers, clients, parsers, queues, databases, and many other common tasks. These components do the hard work of understanding the protocols and data formats, leaving you to write only the interesting codewhat to do with the data or what data to serve. When you use POE::Component::Server::TCP, the component handles creating the server, listening, accepting connections, and receiving data from the client. For each bit of data it receives, the component calls back to your code. Your code is responsible for parsing the request and generating a response. In the call to POE::Component::Server::TCP's constructor, specify the port to listen on with Port, and your code to handle input with ClientInput. There are many other options and callbacks available, including Address to specify a particular interface address to listen on and ClientFilter to change its default line parser. Your client input subroutine is called with several parameters, but we use only three: the POE session object representing this connection, the heap for this session, and the latest chunk of input from the client. The first two are standard parameters supplied by POE to all session calls, and the last is supplied by the server component. The strange assignment line at the start of handle_input merely takes a slice of @_, using constants to identify the position in the method arguments of the session, heap, and first real argument. It's a POE idiom that lets the POE kernel change the actual method parameters and their order, without messing up code that was written before such a change. my ( $session, $heap, $input ) = @_[ SESSION, HEAP, ARG0 ]; The session's heap contains a client shell that you use for communicating with the client: $heap->{client}. The put method on that object sends data back to the client. The client's IP address is accessible through $heap->{remote_ip}. If the action you want to perform in the callback is time-consuming and would slow down communication with other clients that are connected to your server, you may want to use POE sessions. A session is an event-driven machine: you break the time-consuming task into smaller (presumably quicker) chunks, each of which is implemented as a callback. Each callback has one or more events that trigger it. It's the responsibility of each callback to tell the kernel to queue more events, which in turn pass execution to the next callback (e.g., in the "connect to the database" function, you'd tell the kernel to call the "fetch data from the database" function when you're done). If the action cannot be broken up, it can still be executed asynchronously in another process with POE::Wheel::Run or POE::Component::Child. POE includes non-blocking timers, I/O watchers, and other resources that you can use to trigger callbacks on external conditions. Wheels and Components are ultimately built from these basic resources. Information on POE programming is given at, including pointers to tutorials given at various conferences. It can take a bit of mental adjustment to get used to the POE framework, but for programs that deal with asynchronous events (such as GUIs and network servers) it's hard to beat POE for portability and functionality. The documentation for the CPAN modules POE, POE::Session, POE::Wheel, and POE::Component::Server::TCP;; Recipe 17.14
http://etutorials.org/Programming/Perl+tutorial/Chapter+17.+Sockets/Recipe+17.15+Writing+a+Multitasking+Server+with+POE/
CC-MAIN-2017-30
refinedweb
895
51.78
- Virtual Function Basics - Pointers and Virtual Functions - Summary I did it my way. Frank Sinatra Polymorphism refers to the ability to associate many meanings to one function name by means of a special mechanism known as virtual functions or late binding. Polymorphism is one of the fundamental mechanisms of a popular and powerful programming philosophy known as object oriented programming. Wow, lots of fancy words! This article will explain them. Virtual Function Basics virtual adj. 1. Existing or resulting in essence or effect though not in actual fact, form, or name... The American Heritage Dictionary of The English Language, Third Edition A virtual function is so named because it may, in a sense to be made clear, be used before it is defined. Virtual functions will prove to be another tool for software reuse. Late Binding Virtual functions are best explained by am example. Suppose you are designing software for a graphics package that has classes for several kinds of figures, such as rectangles, circles, ovals, and so forth. Each figure might be an object of a different class. For example, the Rectangle class might have member variables for a height, width, and center point, while the Circle class might have member variables for a center point and a radius.. However, because the functions belong to the classes, they can all be called draw. If r is a Rectangle object and c is a Circle object, then r.draw( ) and c.draw( ) can be functions implemented with different code. All this is not news, but now we move on to something new: virtual functions defined in the parent class Figure. Now, the parent class Figure may have functions that apply to all figures. For example, it might. When you think of using the inherited function center with figures of the classes Rectangle and Circle, you begin to see that there are complications here. To make the point clear and more dramatic, let's suppose the class Figure is already written and in use and at some later time you add a class for a brand new kind of figure, say the class Triangle. Now Triangle can be a derived class of the class Figure, and so the function center will be inherited from the class Figure and so the function center should apply to (and perform correctly for!) all Triangles. But there is a complication. The function center uses draw, and the function draw is different for each type of figure. The inherited function center (if nothing special is done) will use the definition of the function draw given in the class Figure, and that function draw does not work correctly for Triangles. We want the inherited member function center to use the function Triangle::draw rather than the function Figure::draw. But the class Triangle and so the function Triangle::draw was not even written when the function center (defined in the class Figure) was written and even compiled! How can the function center possibly work correctly for Triangles? The compiler did not know anything about Triangle::draw at the time that center was compiled! The answer is that it can apply provided draw is a virtual function. When you make a function virtual, you are telling the compiler "I do not know how this function is implemented. Wait until it is used in a program, and then get the implementation from the object instance." The technique of waiting until run time to determine the implementation of a function is often called late binding or dynamic binding. Virtual functions are the way C++ provides late binding. But enough introduction. We need an example to make this come alive (and to teach you how to use virtual functions in your programs). In order to explain the details of virtual functions in C++, we will use a simplified example from an application area other than drawing figures. Virtual Functions in C++ Suppose you are designing a record-keeping program for an automobile parts store. You want to make the program versatile, but you are not sure you can account for all possible situations. For example, you want to keep track of sales, but you cannot anticipate all types of sales. At first, there will only be regular sales to retail customers who go to the store to buy one particular part. However, later you may want to add sales with discounts or mail order sales with a shipping charge. All these sales will be for an item with a basic price and ultimately will produce some bill. For a simple sale, the bill is just the basic price, but if you later add discounts, then some kinds of bills will also depend on the size of the discount. Now your program will need to compute daily gross sales, which intuitively should just be the sum of all the individual sales bills. You may also want to calculate the largest and smallest sales of the day or the average sale for the day. All these can be calculated from the individual bills, but many of the functions for computing the bills will not be added until later, when you decide what types of sales you will be dealing with. To accommodate this, we make the function for computing the bill a virtual function. (For simplicity in this first example, we assume that each sale is for just one item, although with derived classes and virtual functions we could, but will not here, account for sales of multiple items.) Displays 1 and 2 contain the interface and implement for the class Sale. Display 1Interface for the Base Class Sale //This is the header file sale.h. //This is the interface for the class Sale. //Sale is a class for simple sales. #ifndef SALE_H #define SALE_H namespace SavitchSale { class Sale { public: Sale( ); Sale(double thePrice); double getPrice( ) const; void setPrice(double newPrice); virtual double bill( ) const; double savings(const Sale& other) const; //Returns the savings if you buy other instead of the calling object. private: double price; }; bool operator < (const Sale& first, const Sale& second); //Compares two sales to see which is larger. }//SavitchSale #endif // SALE_H Display 2Implementation of the Base Class Sale //This is the file sale.cpp. //This is the implementation for the class Sale. //The interface for the class Sale is in the file sale.h. #include <iostream> #include "sale.h" using std::cout; namespace SavitchSale { Sale::Sale( ) : price(0) { //Intentionally empty } Sale::Sale(double thePrice) { if (thePrice >= 0) price = thePrice; else { cout << "Error: Cannot have a negative price!\n"; exit(1); } } double Sale::bill( ) const { return price; } double Sale::getPrice( ) const { return price; } void Sale::setPrice(double newPrice) { if (newPrice >= 0) price = newPrice; else { cout << "Error: Cannot have a negative price!\n"; exit(1); } } double Sale::savings(const Sale& other) const { return (bill( ) - other.bill( )); } bool operator < (const Sale& first, const Sale& second) { return (first.bill( ) < second.bill( )); } }//SavitchSale For example, Displays 3 and 4 show the derived class DiscountSale. Display 3Interface for the Derived Class DiscountSale //This is the file discountsale.h. //This is the interface for the class DiscountSale. #ifndef DISCOUNTSALE_H #define DISCOUNTSALE_H #include "sale.h" namespace SavitchSale { class DiscountSale : public Sale { public: DiscountSale( ); DiscountSale(double thePrice, double theDiscount); //Discount is expressed as a percent of the price. //A negative discount is a price increase. double getDiscount( ) const; void setDiscount(double newDiscount); double bill( ) const; private: double discount; }; }//SavitchSale #endif //DISCOUNTSALE_H Display 4Implementation for the Derived Class DiscountSale //This is the implementation for the class DiscountSale. //This is the file discountsale.cpp. //The interface for the class DiscountSale is in the header file discountsale.h. #include "discountsale.h" namespace SavitchSale { DiscountSale::DiscountSale( ) : Sale( ), discount(0) { //Intentionally empty } DiscountSale::DiscountSale(double thePrice, double theDiscount) : Sale(thePrice), discount(theDiscount) { //Intentionally empty } double DiscountSale::getDiscount( ) const { return discount; } void DiscountSale::setDiscount(double newDiscount) { discount = newDiscount; } double DiscountSale::bill( ) const { double fraction = discount/100; return (1 - fraction)*getPrice( ); } }//SavitchSale How does this work? In order to write C++ programs, you can just assume it happens by magic, but the real explanation was given in the introduction to this section. When you label a function virtual, you are telling the C++ environment "Wait until this function is used in a program, and then get the implementation corresponding to the calling object." Display 5 gives a sample program that illustrates the virtual function. Display 5Use of a Virtual Function //Demonstrates the performance of the virtual function bill. #include <iostream> #include "sale.h" //Not really needed, but safe due to ifndef. #include "discountsale.h" using std::cout; using std::endl; using std::ios; using namespace SavitchSale; int main( ) { Sale simple(10.00);//One item at $10.00. DiscountSale discount(11.00, 10);//One item at $11.00 with a 10% discount. cout.setf(ios::fixed); cout.setf(ios::showpoint); cout.precision(2); if (discount < simple) { cout << "Discounted item is cheaper.\n"; cout << "Savings is $" << simple.savings(discount) << endl; } else cout << "Discounted item is not cheaper.\n"; return 0; } //The objects discount and simple use different code for the member function //bill when the less-than comparison is made. Similar remarks apply to savings. Sample Dialogue Discounted item is cheaper. Savings is $0.10 Programming Tip: The Virtual Property Is Inherited The property of being a virtual function is inherited. For example, since bill was declared to be virtual in the base class Sale (Display 1), the function bill is automatically virtual in the derived class DiscountSale (refer to Display 3). So, the following two declarations of the member function bill would be equivalent in the definition of the derived class DiscountSale: double bill( ) const; virtual double bill( ) const; Thus, if SuperDiscountSale is a derived class of the class DiscountSale that inherits the function savings, and if the function bill is given a new definition for the class SuperDiscountSale, then all objects of the class SuperDiscountSale will use the definition of the function bill given in the definition of the class SuperDiscountSale. Even the inherited function savings (which includes a call to the function bill) will use the definition of bill given in SuperDiscountSale whenever the calling object is in the class SuperDiscountSale. Programming Tip: When to Use a Virtual Function There are clear advantages to using virtual functions and no clear disadvantages that we have seen so far. So, why not make all member functions virtual? In fact, why not define the C++ compiler so that (like some other languages, such as Java) all member functions are automatically virtual? The answer is that there is a large overhead to making a function virtual. It uses more storage and makes your program run slower than if the function were not virtual. That is why the designers of C++ gave the programmer control over which member functions are virtual and which are not. If you expect to need the advantages of a virtual member function, then make that member function virtual. If you do not expect to need the advantages of a virtual function, then your program will run more efficiently if you do not make the member function virtual. Pitfall: Omitting the Definition of a Virtual Member Function It is wise to develop incrementally. This means code a little, test a little, then code a little more, and test a little more, and so forth. However, if you try to compile classes with virtual member functions, but do not implement each member, you may run into some very-hard-to-understand error messages, even if you do not call the undefined member functions! If any virtual member functions are not implemented before compiling, then the compilation fails with error messages similar to this: Undefined reference to Class_Name virtual table. Even if there is no derived class and there is only one virtual member function, but that function does not have a definition, then this kind of message still occurs. What makes the error messages very hard to decipher is that without definitions for the functions declared virtual, there will be further error messages complaining about an undefined reference to default constructors, even if these constructors really are already defined. Of course, you may use some trivial definition for a virtual function until you are ready to define the "real" version of the function. This caution does not apply to pure virtual functions, which we discuss in the next section. As you will see, pure virtual functions are not supposed to have a definition. Abstract Classes and Pure Virtual Functions You can encounter situations where you want to have a class to use as a base class for a number of other classes, but you do not have any meaningful definition to give to one or more of its member functions. When we introduced virtual functions we discussed one such scenario. Let's review it now. Suppose you are designing software for a graphics package that has classes for several kinds of figures, such as rectangles, circles, ovals, and so forth. Each figure might be an object of a different class, such as the Rectangle class or the Circle class.. If r is a Rectangle object and c is a Circle object, then r.draw( ) and c.draw( ) can be functions implemented with different code. Now, the parent class Figure may. By making the member function draw a virtual function, you can write the code for the member function Figure::center in the class Figure and know that when it is used for a derived class, say Circle, the definition of draw in the class Circle will be the definition used. You never plan to create an object of type Figure. You only intend to create objects of the derived classes such as Circle and Rectangle. So, the definition that you give to Figure::draw will never be used. However, based only on what we covered so far, you would still need to give a definition for Figure::draw, even though it could be trivial. If you make the member function Figure::draw a pure virtual function, then you do not need to give any definition to that member function. The way you make a member function into a pure virtual function is to mark it virtual and to add the annotation = 0 to the member function declaration as in the following example: virtual void draw( ) = 0; Any kind of member can be made a pure virtual function. It need not be a void function with no parameters as in our example. A class with one or more pure virtual functions is called an abstract class. An abstract class can only be used as a base class to derive other classes. You cannot create objects of an abstract class, since it is not a complete class definition. An abstract class is a partial class definition because it can contain other member functions that are not pure virtual functions. An abstract class is also a type, so you can write code with parameters of the abstract class type and it will apply to all objects of classes that are descendents of the abstract class. If you derive a class from an abstract class it will itself be an abstract class unless you provide definitions for all the inherited pure virtual functions (and also do not introduce any new pure virtual functions). If you do provide definitions for all the inherited pure virtual functions (and also do not introduce any new pure virtual functions) the resulting class is a not an abstract class, which means you can create objects of the class. Programming Example: An Abstract Class Display 6 shows the interface for the class Employee. Display 6Interface for the Abstract Class Employee //This is the header file employee.h. //This is the interface for the abstract class Employee. #ifndef EMPLOYEE_H #define EMPLOYEE_H #include <string> using std::string; namespace SavitchEmployees { class Employee { public: Employee( ); Employee(string theName, string theSsn); string getName( ) const; string getSsn( ) const; double getNetPay( ) const; void setName(string newName); void setSsn(string newSsn); void setNetPay(double newNetPay); virtual void printCheck( ) const = 0; private: string name; string ssn; double netPay; }; //The implementation for this class is the same as in Chapter 14, //except that no definition is given for the member function printCheck( ) }//SavitchEmployees #endif //EMPLOYEE_H The word virtual and the = 0 in the member function heading tell the compiler this is a pure virtual function and so the class Employee is now an abstract class. The implementation for the class Employee includes no definition for the class Employee::printCheck (but otherwise the implementation of the class Employee is the same as before). It makes sense that there is no definition for the member function Employee::printCheck, since you do not know what kind of check to write until know what kind of employee you are dealing with.
http://www.informit.com/articles/article.aspx?p=26063&amp;seqNum=3
CC-MAIN-2017-09
refinedweb
2,791
52.19
Perforce Defect Tracking Integration Project defect_trackerclass defect_tracker_issueclass defect_tracker_fixclass defect_tracker_filespecclass translatorclass This is the Perforce Defect Tracking Integration (P4DTI) Integrator's Guide. It explains how to extend the P4DTI to work with defect tracking systems that aren't supported by the standard distribution, or adapt the P4DTI to work with a supported defect tracker but in some way that isn't supported. The intended readership is developers adapting or extending the P4DTI, and project staff. This manual is not confidential. The Integration Kit is a copy of the development sources for the P4DTI. The directory layout is summarized in the index to the kit. I use some words in a precise way to express the importance of an instruction. I say "must" when the instruction is critical. This means that the integration will fail if the instruction is not followed. I say "should" when the instruction is essential. This means that integration will be of noticeably lower quality than the supported integrations if the instruction is not followed. However, it won't fail. I say "may" when the instruction is optional. This means that the integration will not suffer much if you don't follow the instruction. This section gives an overview of the requirements, architecture and design of the P4DTI, with references to the documents that provide more detail. You must have a good overall understanding of the P4DTI in order to extend or adapt it. This manual assumes you are familiar with the following subjects: The jobs subsystem of Perforce, and the relationship between jobs, fixes and changelists [Perforce 2001-06-18a, 10]. How the P4DTI works, from the administrator's point of view. I strongly recommend that you download, install, configure and run one of the supported integrations, following the Perforce Defect Tracking Integration Administrator's Guide [RB 2000-08-10a], so that you know what the administrator has to know and do, where the data is stored, what problems can occur. How the P4DTI works, from the user's point of view. I strongly recommend that you try out one of the supported integrations, carrying out all the tasks in the Perforce Defect Tracking Integration User's Guide [RB 2000-08-10b], so that you know what it's like to use, and what benefit the users get. The programming language Python. See the Python web site <> for downloads and documentation. If you're new to Python, try the tutorial [van Rossum 2000-10-16], or the book Programming Python [Lutz 1996]. The five most important requirements are these [GDR 2000-05-24, 1-5]: Defect tracker state is consistent with the state of the product sources. The defect tracking integration makes the jobs of the developers and managers easier (i.e. make it easier for them to produce a quality product etc.). It is easy to discover why the product sources are the way they are, and why they have changed, in terms of the customer requirements. The interface that allows Perforce to be integrated with defect tracking systems is public, documented, and maintained. The integration provides the ability to ask questions involving both the defect tracking system and the SCM system. The P4DTI meets requirement 1 and requirement 5 by replicating data between the defect tracker and Perforce (see section 2.3). It meets requirement 2 by making it possible for developers to do their routine defect tracking activity entirely from Perforce (by making the defects available through Perforce's jobs interface). It meets requirement 3 by supplying a user guide [RB 2000-08-10b] that describes a development process in which issues are linked to changes by making fixes in Perforce. It meets requirement 4 by making the project sources and documents available to the public. See the Perforce Defect Tracking Integration Project Requirements [GDR 2000-05-24] for a full and maintained set of requirements and references to their original sources. The P4DTI meets these requirements using a replication architecture [RB 2000-08-10c]. A replicator process repeatedly polls two databases (Perforce and the defect tracker) and copies entities from one to the other. This makes and keep them consistent, to meet requirement 1; it makes them available to users of both systems, to meet requirement 2; and it makes them available for queries combining data from both systems, to meet requirement 5. The replicator replicates four relations: Issues are replicated from the defect tracker to Perforce (where they appear as jobs). Changes to issues and jobs are replicated in both directions, but the Perforce jobs are considered to be a subsidiary copy of the real data in the defect tracker. This means that when the two databases differ (for example, because they have been changed simultaneously) the defect tracker is considered to be definitive. Changelist descriptions are replicated from Perforce to the defect tracker. Fixes (links between issues and changelists) are replicated in both directions. Filespecs (links between issues and files) are replicated in both directions. The P4DTI replicates the filespec relation in order to support use cases like "Associate revisions of documents with task" [GDR 2000-05-03, 6.2] and "Check out copies of revisions of documents associated with task" [GDR 2000-05-03, 6.3] and to support defect trackers like DevTrack by TechExcel that provide a revision control interface based on associating documents with an issue. However, because the supported defect trackers (TeamTrack and Bugzilla) have no such feature, and because alpha and beta testing showed no demand for use cases involving associating documents with tasks, we haven't made any use of this relation (for example, it's not documented in the user's guide). However, it's there if you need it for integrating with your defect tracker. The replicator is designed to be highly independent of both Perforce and the defect tracker, using public interfaces wherever possible, so that the integration doesn't have to change frequently to keep up with the systems it integrates (requirement 27) and the cost of maintenance is low (requirement 30). It runs as a separate process and uses public protocols to access both databases. It doesn't require any special support from either system (though users benefit if the defect tracker provides an interface to Perforce fixes; (see section 10). The replicator is written in the interpreted programming language Python, a portable, stable, readable and open programming language (to meet requirement 21, requirement 24, requirement 25, and requirement 26.). Figure 1 below shows the broad outlines of how the replicator is constructed. Parts in black are shared by the integrations with all defect trackers. The components in red are the components that you must write in order to integrate with your defect tracker. If you need to modify any other components to integrate with your defect tracker, that's a defect in the integration kit. Please report it (see section 12.1) or make the necessary modifications and submit them as a contribution (see section 12.2). This section gives an overview of the work required in adapting an existing integration or developing a new integration. Someone might already have developed the integration or adaption that you plan to work on. Take a look at the P4DTI contributions page <>. Someone might be currently be working on the integration or adaption that you plan to work on. If so, Perforce support may know about them. The feature you want may in fact be part of the supported P4DTI product and it is missing from the manuals or the manuals are unclear. If so, Perforce support can tell you. And if the manuals are unclear or missing information, then please submit a defect report (see section 12.1). The Perforce Defect Tracking Integration Kit is a supported product. If you have trouble adapting the P4DTI or developing an integration after following the instructions in here, contact Perforce support for help. Ravenbrook Limited may be able to develop or consult on adaptions and extensions to the P4DTI. You may need to adapt the P4DTI to work with a supported defect tracker but in some way that isn't supported. For example: In these and many similar cases, make the P4DTI do what you want by writing a "configuration generator" (see section 8.6). But don't skip straight to that section. At least skim the rest of the manual. You'll need to understand many of the details in order to write a configuration generator, especially how to write translator classes (see section 7.5) and how the configuration works (see section 8). Follow these steps to integrate Perforce with a new defect tracker: Choose a name for the integration. This should be the name of the defect tracker, for example "TeamTrack" or "Bugzilla". This name (when converted to lower case) must be used as part of the names of modules making up the integration (see section 7 and section 8). Decide which of the optional features you are going to support (see section 3.5). Provide full implementations of these components: A documented design for extensions for the defect tracker database schema (see section 4); A Python interface to the defect tracker (see section 6); A defect tracker module (see section 7); A configuration generator (see section 8). You should develop and apply tests (both automated and manual) of your integration (see section 9). You should provide a defect tracker interface to the Perforce relations, if possible (see section 10). You must adapt or extend these components: The configuration module config.py (see section 8.5). The Administrator's Guide (see section 11); The User's Guide (see section 11); All other components are designed to be portable between defect trackers. If your integration cannot be made to work without changing the portable components, then there is a defect in the P4DTI Integration Kit. Please report this (see section 12.1). Once all the work outlined above is completed and tested to your satisfaction, you should make your work available to the community so that others can benefit from your efforts (see section 12.2). I estimate that at least 10 weeks of effort are required to develop, test, document and release a new integration [GDR 2000-05-30]. You must extend the database schema by adding new fields to the issue relation (see section 4.1), and adding three new relations: the changelist relation (see section 4.2), the fixes relation (see section 4.3), and the filespecs relation (see section 4.4). You should add another relation to the database, to store the replicator state and configuration (see section 4.5). These schema extensions must be documented so that users of your integration can implement database queries and reports that use this data, to meet requirement 5. These relations should be stored in separate tables if possible, to most easily support queries and reporting using standard database tools. However, some defect trackers do not support this. Example. TeamTrack release 4.5 doesn't support the addition of tables to its database schema, so the TeamTrack schema extensions squash these relations into a single table, using a type field to distinguish them [GDR 2000-09-04, 2.1]. The design must support multiple replicators replicating from a single defect tracker, and support a single replicator replicating to multiple Perforce servers from one defect tracker, in order to meet requirement 96. To support this, each relation includes a replicator identifier which identifies the replicator which is handling replication for that record, and a Perforce server identifier, which identifies the Perforce server that the record is replicated to. Examples. The TeamTrack database schema extensions [GDR 2000-09-04] and the Bugzilla database schema extensions [NB 2000-11-14b]. The issue relation must be extended with these fields: You may add these fields to the defect tracker's issue table, or you may store them in a separate table and use the issue key to relate the two tables. Examples. The TeamTrack integration adds the new fields to the existing TS_CASEStable [GDR 2000-09-04, 3.1]. The Bugzilla integration creates a table p4dti_bugscontaining the new fields and associates them with the bugstable using the bug_idfield [NB 2000-11-14b]. .) The associated filespecs relation has these fields: By design, the replicator has no internal state. This is to make the replicator robust against losing a network connection, or the machine it's running on crashing in the middle of a replication: when the network comes back up or it starts again, it tries the replication again [GDR 2000-09-13, 2.9]. This design principle helps to meet requirement 1 (consistency between databases). This means that if you need to store information, such as a record of which changes have been replicated (see section 4.6) you must store it in the defect tracker's database. The replicator also needs to pass information to the defect tracker, to support an interface from the defect tracker to Perforce (see section 10). There are three configuration parameters which should be communicated to the defect tracker by storing them in a configuration table: changelist_url, job_url, and p4_server_description. The replicator works by repeatedly polling the databases, so you must provide a way to tell it which issues have changed since the last time it polled. Here are some strategies: If the defect tracker has a changes table which records the history of changes to issues, then store a record number in the replicator state that gives the last record in the changes table that has been replicated. Example. The TeamTrack integration uses this approach [GDR 2000-09-04, 3.5]. If the defect tracker has a last modified date field in the issue table, store the value of this field at the point when the replicator was last replicated. Then you can fetch the changed issues by looking for issues whose last modified date is greater than the last replicated date. This is likely to be less efficient than solution 1. Modify the defect tracker so that it supports solution 1 or 2. If all else fails, store a "shadow" table of issues, containing copies of the issue records as they were when last modified. Then you can find changed issues by finding differing corresponding records. This is likely to be very inefficient. The replicator needs to distinguish the changes it made from changes made by other users of the defect tracker. Otherwise it attempts to replicate its own changes back to Perforce. This won't actually end up in an infinite loop of replication, since when it replicates back it discovers that there are no changes to be made, and so not actually do anything. However, this double replication gives twice the opportunity for conflicts, and hence annoying e-mail messages for the users of the P4DTI (see Ravenbrook issue job000016). Here are some strategies: Suppose that the defect tracker has separate concepts of "logged in user" and "user who is making the change". In this case, make a special user to represent the replicator and have the replicator log in as that user. The replicator's changes show up with logged in user being the replicator user; all other changes need to be replicated. Example. The TeamTrack integration uses this approach [GDR 2000-09-04, 5]. Store a table listing the changes that were made by the replicator. Any other changes need to be replicated. If the defect tracker has a last modified date field in the issue table, store the value of this field at the point when the replicator was last replicated. Then an issue has been changed by someone else if its last modified date differs from the last replicated date. The replicator replicates user fields in issues, changelists and fixes (for example, the owner of an issue or the user who submitted a changelist) by applying a user translation function (see section 7.5.4). When a defect tracker user has no unknown users in changelists and fixes to the special TeamTrack user 0 (representing "no user"). When there's an unknown user in an issue, the integration rejects the attempt to replicate it by raising an error. This section covers coding conventions followed in the P4DTI. You should follow these conventions in your adaptions and extensions. They make your code more reliable and easier to debug, and make it easier for users to diagnose problems and fix them. If you contribute your code for inclusion in the P4DTI (see section 12.2) then it is easier for us to integrate your contribution. Examples. Look at the TeamTrack module, dt_teamtrack.py, and the Bugzilla module, dt_bugzilla.pyfor uses of all conventions and features covered in this section. The message.py module defines a class of messages. You must use this class when writing messages to the replicator's log (see section 5.3). You should use this class when raising errors (see section 5.4). You create a message like this: import message id = 123 text = "Constructed a test message." priority = message.DEBUG product = "Test" msg = message.message(id, text, priority, product) The four arguments to the constructor are as follows: id A message identifier (an integer). This is unique among all messages generated by the product. text The text of the message (a string). priority The level of importance of the message. This must be one of the constants in the following table: product The name of software product which generated the message. For the supported P4DTI, this must be "P4DTI". You can format a message as text by converting the message object to a string: >>> str(msg) "(Test-123X) Constructed a test message." Note that a check digit has been appended to the message identifier. (The check digit uses a mod-11 algorithm similar to that used in ISBNs [ISO 2108], so the check digit can be 0-9 or X.) The idea of the check digit is so that Perforce support can ask users for the message identifier of the error that they are reporting. The check digit makes it very likely that if the error is misreported or misheard the mistake is detected. You can wrap a message to some number of columns by calling its wrap method: >>> print msg.wrap(25) (Test-123X) Constructed a test message. You may create each message when you need it, but you should use a message catalog. A catalog helps you keep message identifiers distinct and internationalizes your code. A message catalog is a dictionary that maps message identifier to a tuple of two elements: the message priority, and a format string that can be used to build the message text. For example: # Test catalog in English test_en_catalog = { 123: (message.DEBUG, "Constructed a test message."), 124: (message.CRIT, "Couldn't connect to defect tracker on host '%s'."), 125: (message.ERR, "User '%s' has no permission to edit issue '%s'."), 126: (message.INFO, "Replicated issue %d."), # ... } Note that a message catalog must not have an entry for message id 0. That's reserved for errors from the catalog implementation. Once you have a message catalog for a product, you should build a message factory that dispenses messages from that catalog, like this: import message product = "Test" factory = message.catalog_factory(test_en_catalog, product) Now you can construct a message by calling the factory's new method and passing the message identifier, and the arguments for the format string: msg1 = factory.new(124, 'dt.ravenbrook.com') msg2 = factory.new(125, ('gdr', 'BUG00123')) See the catalog.py module for the P4DTI catalog and message factory. The P4DTI logs its progress and errors by creating messages (see section 5.1) and sending them to a "logger": that is, an instance of the logger class defined by the logger.py module. The logger module defines classes for logging to files, to standard output, and to the system log on Unix. The multi_logger class directs a single message to several loggers. Each logger class takes a priority argument on instantiation: only messages with this priority, or a higher priority, appear in the log. You should log as many debugging messages as you like (by default the log_level configuration parameter is message.INFO so these messages won't appear). You should log informational messages sparingly, and only when you actually make a change in a database. You should not log error messages, but should raise them as exceptions instead (see section 5.4); the replicator logs them for you when it catches them. To add a message to a log, create a message object (see section 5.1) and pass it to the logger's log message: import logger # Log messages of priority INFO and higher to test.log: logger_object = logger.file_logger("test.log", message.INFO) msg = factory.new(126, issue_id) logger_object.log(msg) The configuration generator (see section 8) must construct a logger object for use by the replicator. The same logger object should be used by the defect tracker module (see section 7) as well, so that all messages are collected in the same place. You must allow the P4DTI administrator to control the volume of log messages by setting the log_level configuration parameter. In the P4DTI, errors are indicated by raising a Python exception, not by returning an exceptional value. Raise an error using a string as the exception object, and a message object (see section 5.1) as the message. For example: error = "Example error" # ... raise error, factory.new(124, 'dt.ravenbrook.com') It doesn't make any difference what priority you give to the message, but it is conventional to use message.CRIT when the replicator stops (for example, configuration errors), and message.ERR when the replicator continues (for example, untranslatable fields or permission failures). You should include in each file of source code: The author. An introduction explaining what the file is intended to achieve: for example, which requirements does it help to meet? A references section listing the sources you've used in preparing the code and which other people need to sources. It must be possible to carry out this merge reliably and without introducing errors. That means being able to consider each change separately, evaluate it and make a decision about how to merge it. To enable us to do so, follow the following rules: Don't delete stuff. Comment it out or skip it. Don't fiddle with the formatting of code or comments. It creates bogus conflicts that create extra work when merging. Add comments explaining why a change was made. Sign the comments with your name and the date. Explain why you had to make the change. Refer to defect reports when fixing them. You'll need a way for Python to read and write defect tracking records. If the defect tracker has an API of some sort, you'll need to use that; if not, you'll have to read and write the database directly, using one of the Python database interfaces. Your defect tracker interface [GDR 2000-08-08]. Example. Bugzilla has no API: you have to understand the Bugzilla database schema [NB 2000-11-14a] and connect directly to the MySQL database. The Bugzilla integration uses a wrapper module that encapsulates the direct database operations as defect tracker oriented functions like update_bug. See bugzilla.pyand its design [NB 2000-11-14c]. You must create a module called dt_defect_tracker.py (where defect_tracker is the lower-case form of the name your chose for your defect tracker (see section 3)) that implements these classes: The defect tracker interface itself: a subclass of dt_interface.defect_tracker (see section 7.1). Defect tracker issue: a subclass of dt_interface.defect_tracker_issue (see section 7.2). If you support the fixes feature, defect tracker fix: a subclass of dt_interface.defect_tracker_fix (see section 7.3). If you support the filespecs feature, defect tracker filespec: a subclass of dt_interface.defect_tracker_filespec (see section 7.4). A translator between dates in the defect tracker and Perforce: a subclass of translator.translator (see section 7.5.1). A translator between multi-line text fields in the defect tracker and Perforce: a subclass of translator.translator (see section 7.5.3). A translator between users in the defect tracker and Perforce: a subclass of translator.user_translator (see section 7.5.4). Any other translator classes that argument is an object whose attributes are the configuration parameters for the defect tracker. See section 8 for the details of how configuration parameters end up in this object. This method should check that all configuration parameters are supplied and have valid values. Use the methods in check_config.py for basic checks. The required parameters should certainly include changelist_url, job_url, p4_server_description, rid, sid, and start_date, but may include others, either supplied by the P4DTI administrator in config.py or generated by the configuration generator.. all_issues(self) Return a cursor (see section 7.6) that fetches all defect tracking issues that either (a) are replicated by this replicator or (b) are not replicated and have been modified since the starting point for replication (that is, the date given by the P4DTI administrator in the start_date parameter). Include in the cursor: Issues replicated by this replicator (that is, the replicator identifier for those issues matches the rid configuration parameter); Issues not replicated by any replicator (that is, the replicator identifier for those issues is blank) and changed since the start date. Omit from the cursor: Issues replicated by a different replicator (that is, the replicator identifier for those issues differs from the rid configuration parameter); Issues not replicated by any replicator and unchanged since the start date. Each element fetched by the returned cursor must belong to your subclass of the dt_interface.defect_tracker_issue class (see section 7.2). changed_entities(self) This method is called at the start of each replication cycle to determine what work there is to do. The method poll_start is called just before this. It must return a tuple of three elements: A cursor (see section 7.6) that fetches the defect tracking issues that require replication. Each element fetched by the returned cursor must belong to your subclass of the dt_interface.defect_tracker_issue class (see section 7.2). Include in the cursor: Issues replicated by this replicator (that is, the replicator identifier for those issues matches the rid configuration parameter); Issues not replicated by any replicator (that is, the replicator identifier for those issues is blank). The replicator considers these issues as candidates for replication. Omit from the cursor: Issues replicated by a different replicator (that is, the replicator identifier for those issues differs from the rid configuration parameter). Issues known to be up to date with Perforce; either because they are unchanged since they were last replicated, or because they have only been changed by the replicator (see section 4.6 and section 4.7). The empty list [ ]. (This is for symmetry with the Perforce interface, which returns a list of changelists. Since changelists are not editable in the defect tracker, there's nothing that can be returned here, hence the empty list.) A marker. This must be some token that identifies what has been done on this poll. At the end of the replication cycle it is passed to mark_changes_done. Example. The TeamTrack integration uses the record number of the last record in the TS_CHANGEStable that the replicator looked at as the marker indicating what it's done. See dt_teamtrack.pyand the design [GDR 2000-09-04, 3.5]. This method must not record that the issues it returns have been considered for replication or replicated. The replicator can encounter an error during the course of replication that prevents it from making any progress (Perforce can go down, the defect tracker can go down, the replicator can crash). When the system comes back up, the replicator must re-consider these issues and possibly replicate them again. This helps keep the databases consistent (requirement 1) and is consistent with the design principle that the replicator must have no internal state (see section 4.5). Recording that issues have been replicated must be left for the end of each replication cycle, when the marker (the third item in the tuple) is passed to mark_changes_done. init(self) This method is called each time the replicator starts. The method must initialize the defect tracking database so that it is ready to start replication. The tables and fields in your schema extensions (see section 4) must be added if they are not yet present. issue(self, issue_id) Return the defect tracking issue identified by the issue_id argument, or None if there is no such issue. The returned issue (if any) must belong to your subclass of the dt_interface.defect_tracker_issue class (see section 7.2). The issue_id argument is a string identifying the issue (see section 7.2.1). mark_changes_done(self, marker) This method is called at the end of each replication cycle, when all issues have been replicated. The marker argument is the third item in the tuple returned by the changed_entities method at the start of the replication cycle. This method must now record that it has considered all changes up to the start of this replication cycle and replicated them successfully, so that at the next replication cycle it can ignore these changes and consider a new set of changes (see section 4.) Replicate a changelist to the defect tracker database (see section 4.2). The arguments specify the changelist; these arguments correspond to a subset of the fields in the changelist relation in the Perforce database (the names of the actual files changed, and their new revision numbers, are not replicated).). The dt_bugzilla.pymodule defines a class bugzilla_bug(issues are called "bugs" in Bugzilla). The replicator needs a unique identifier for each issue in the defect tracker. This must be a string, so that it can be stored in the P4DTI-issue-id field in the Perforce jobspec [GDR 2000-09-13, 4.2]. The replicator gets the identifier from an issue's id method. Later, it may pass the identifier to the defect tracker's issue method. Example. TeamTrack uniquely identifies issues by their record number in the database. So in dt_teamtrackmodule, the issue identifier is the string conversion of the record number. The replicator considers an issue to consist of a collection of named fields, with a value for each field. Instances of the defect_tracker_issue subclass must support at least the __getitem__ method, so that the replicator can get the value for a field in an issue using the expression issue["fieldname"]. You may want to implement the whole of the Python dictionary interface for your own use, but the replicator only uses __getitem__. A subclass of dt_interface.defect_tracker_issue must define the following methods: __getitem__(self, field) Return the value of the field named by the field argument. Raise KeyError if the issue has no such field. __str__(self) Return a string describing the issue, suitable for presentation to a user or administrator in a report. Having several lines of the form "field name: value").). corresponding_id(self) If this issue has been replicated, return the name of the Perforce job to which this issue is replicated. If this issue has not yet been replicated, return the name for the Perforce job to which this issue.) Return a list of the fixes for this issue. Each item in the list belongs to your subclass of the defect_tracker_fix class (see section 7.3). id(self) Return a string that can be used to uniquely identify this issue among all the issues in the defect tracker and to fetch it in future (see section 7.2.1). readable_name(self) Return a string giving a human-readable name for the issue. This name is only used in logs and e-mail messages. rid(self) Return the replicator identifier of the replicator that is in charge of replicating this issue, or the empty string if the issue is not being replicated. setup_for_replication(self, jobname) Set up the issue for replication. That is, record that the issue is replicated by this replicator and record any other information in the database that is needed to replicate this issue. You must do at least these three steps: Record that the issue is replicated by this replicator, so that in the future its rid method returns the correct replicator identifier (this is the rid parameter in the configuration passed to the defect tracker class when it was instantiated). Record the Perforce server identifier of the Perforce server it is replicated to (this is the sid parameter in the configuration passed to the defect tracker class when it was instantiated). Record that the issue is replicated to the Perforce job named by the jobname argument, so that in future its corresponding_id method returns jobname. See section 4.1. update(self, user, changes) Update the issue in the defect tracker's database. The user argument is the user who made the change. It has been converted by the user translator (see section 7.5.4). The changes argument is a dictionary of the changes that must be applied to the issue. The keys of the dictionary are the names of the fields that have changed; the values are the new values for those fields. Each value in the dictionary has been converted by the appropriate translator. If changes is the empty dictionary, then do nothing. If the defect tracker supports transitions in a workflow, then this method should deduce the transition to apply (if any) based on the old and new values for the issue fields. Example. The TeamTrack integration attempts to find and apply a transition when the STATEfield changes. It looks at all the available transitions for the issue and selects the transition that results in the correct new state. Example. Bugzilla doesn't have transitions, so there's no need for the Bugzilla integration to deduce one. This method must check that the proposed change to the issue is legal in the defect tracker. (The changed fields have been converted by their translators, so each is legal individually, but the defect tracker may be more stringent, for example it may require a field not to have a value when the issue is in a particular state.) It must also check that the user has permission to make the proposed change. It's best if you can call a function in the defect tracker's API to apply the defect tracker's own rules (this is likely to be robust and maintainable), but if there's no such function, then you must do your best to emulate the defect tracker's checks. If the issue can't be updated (for example, because the user doesn't have permission to make the change, or because no workflow transition can be discovered, or because the proposed change is illegal in some way) then this method must raise an error. Example. The TeamTrack integration calls the TSServer::Transitionmethod). Examples. The dt_teamtrack.pymodule defines a class teamtrack_fix. The dt_bugzilla.pymodule defines a class bugzilla_fix. A subclass of dt_interface.defect_tracker_fix must define the following methods: change(self) Return the change number for the fix, an integer. delete(self) Delete the fix in the defect tracker so that the change is no longer linked to the issue. status(self) Return the status of the fix, a string. update(self, change, client, date, status, user) Update this fix in the defect tracker so that has the given fields. If the fields are unchanged, do nothing. This method is called when someone makes a new fix between the change and issue of an existing fix (for example, the status used to be "open", but now is "closed"). Since there can be only one fix for a given change and issue, the replicator updates the fix rather than creating a new fix.).). Examples. The dt_teamtrack.pymodule defines a class teamtrack_filespec. The dt_bugzilla.pymodule defines a class bugzilla_filespec. A subclass of dt_interface.defect_tracker_filespec must define the following methods: delete(self) Delete the filespec record so that the issue is no longer associated with the filespec. name(self) Return the filespec, a string. translator.translatorclass A subclass of translator.translator translates values of a particular type between the defect tracker and Perforce. You should define a translator for each field type in the defect tracker that you want the P4DTI administrator to be able to replicate. You must define translators for dates (see section 7.5.1), multi-line text fields (see section 7.5.3), and users (see section 7.5.4). If your defect tracker has any concept of the state of an issue, then you must define a translator for states (see section 7.5.2). Example. The TeamTrack integration defines, in addition to the three required translators, translators for: fields that cross-reference an auxiliary table like TS_PROJECTS; elapsed time fields; selection fields; and the STATEfield. The translator base class doesn't know anything about Perforce; all it knows is that it is translating between two defect trackers, called 0 and 1. In the P4DTI, defect tracker 1 is always Perforce, but we haven't limited the design of the translator class by requiring that it is. Each subclass of translator.translator must define the following methods: translate_0_to_1(self, value, dt0, dt1, issue0=None, issue1=None) Return value, suitably translated from defect tracker 0 to defect tracker 1. If translation is not possible, raise an error. The job in Perforce to which the value is going, or None if the value isn't going to a job (represented by an instance of a subclass of dt_interface.defect_tracker_issue). This method takes defect trackers as arguments because it may need to query the defect tracker to carry out the translation. Example. In the TeamTrack integration, the single select translator needs to read the TS_SELECTIONStable to discover the available selections. To do this it calls the private method read_selectionsin dt0. This method takes issues as arguments because some translators need to know about the whole issue in order to carry out the translation. Example. In the TeamTrack integration the state translator needs to know the project to which the issue belongs (because different projects may have different states with the same name which correspond to the same Perforce state). Many translators can ignore the dt0 and dt1 arguments; most can ignore the issue0 and issue1 arguments. translate_1_to_0(self, value, dt0, dt1, issue0=None, issue1=None) Return value, suitably translated from defect tracker 1 to defect tracker 0. If translation is not possible, raise an error. The job in Perforce from which the value comes, or None if the value doesn't come from a job (represented by an instance of a subclass of dt_interface.defect_tracker_issue). Warning. Be careful not to assume that the dictionary representing the job has all fields present. It's possible that it has only a subset of fields. So don't write issue1['Spong'], write if issue1.has_key('Spong'): or issue1.get('Spong', default_value). You must define a date translator class, a subclass of translator.translator, to translate dates between your defect tracker and Perforce. When translating to Perforce: An empty or null date field must be translated to the empty string. Any other date must be translated to a string looking like "2000/12/31 23:59:59" (you can do this by calling time.strftime with "%Y/%m/%d %H:%M:%S" as the first argument). When translating from Perforce: The empty string must be translated to an empty or null date field. A string in the format "2000/12/31 23:59:59" specifies the calendar date. (This form is used by changelists and jobs.) A string consisting only of digits specifies the number of seconds since 1970-01-01 00:00:00 UTC. (This form is used by fixes.) Timezones. When Perforce creates a timestamp for a changelist or for a field in job with a preset of $now, it uses local time on the Perforce server. For other date fields in jobs, Perforce just stores the date the user entered, without conversion. Your date translator must make sure that its translations in the two directions are inverses of each other. The simplest way to do this is to follow the same principle as the Perforce server: just treat the date as you get it, without conversion. Example. TeamTrack specifies all dates as seconds since 1970-01-01 00:00:00, so the TeamTrack integration uses time.strftimeto convert from TeamTrack to Perforce, and either time.mktimeor simply intto convert from Perforce to TeamTrack. If your defect tracker has a concept of states for issues, then you must define a state translator class, a subclass of translator.translator. The state field in Perforce should be a "select" field (see section 8.4) so the values for this field should be legal selections in Perforce. This means no whitespace, hashes, double quotes, semicolons or slashes. Since the defect tracker probably allows these character to appear in state names, you must convert them somehow. We have provided a translator to do this conversion: it is the keyword_translator class in the translator.py module. You shouldn't just use the keyword translator as your state translator, since all it does is to convert strings. You should develop a translator that checks that applies the keyword translator, checks that the converted state is legal and raises an error if it is not. You must define a text translator class, a subclass of translator.translator, to translate multi-line text fields between your defect tracker and Perforce. This translator must translate line endings (if needed). Perforce uses newline ( "\n") as the line ending; values always end in a newline (unless the field is empty); values never end in more than one newline. Example. TeamTrack uses a carriage return plus a newline ( \r\n) as its line ending, and there need not be a final newline. Example. The MySQL database interface converts newlines if necessary, so the Bugzilla integration uses translator.translator(a translator which does nothing) for its text translator. You must define a user translator class, a subclass of translator.user_translator, to translate users between your defect tracker and Perforce. It is important not to assume that userids are the same in Perforce and the defect tracker, because an organization may have different policies for assigning userids in the two systems, or there may be legacy users from a previous policy. The TeamTrack and Bugzilla integrations translate between users based on their e-mail addresses. Your integration should do the same if possible and appropriate. When translating from the defect tracker to Perforce: Map the defect tracker user to a Perforce user with the same e-mail address, if there is one. Otherwise, map the defect tracker user to the Perforce user with the same userid, if there is one. Otherwise, return the defect tracker userid unchanged (assuming it is valid syntactically as a Perforce userid; if it isn't, apply the keyword translator (see section 7.5.2) to it). When translating from Perforce to the defect tracker: Map the Perforce user to a defect tracker user with the same e-mail address, if there is one. Otherwise, map the Perforce user to the defect tracker user with the same userid, if there is one. Otherwise, if translating the user in a changelist or fix, map the Perforce user to some dummy defect tracker user (see section 4.8). You can tell that you're translating a changelist or fix rather than an issue because the issue0 argument to the translate_1_to_0 method is None. Otherwise, you're translating a user field in an issue and you can't find a match either by e-mail address or by name. Raise an error. Each subclass of translator.user_translator must define the following method: unmatched_users(self) This method should examine all the users in the defect tracker and Perforce and return a report on the users in each system that have no corresponding userid in the other. It must return a tuple. The first four elements of the tuple must be as follows: A dictionary of users in the defect tracker that have no corresponding userid in Perforce. The keys of the dictionary are strings naming the defect tracker userids; the values of the dictionary are the e-mail addresses of the defect tracker users. A dictionary of users in Perforce that have no corresponding userid in the defect tracker. The keys of the dictionary are the Perforce userids; the values of the dictionary are the e-mail addresses of the Perforce users. A comment (a string or message) about the users in the first dictionary explaining how they are treated by this user translator. Example. The TeamTrack integration says, "These TeamTrack users will appear as themselves in Perforce even though there is no such Perforce user." A comment (a string or message) about the users in the second dictionary explaining how they are treated by this user translator. Example. The TeamTrack integration says, "These Perforce users will appear in TeamTrack as the user (None). It will not be possible to assign issues to these TeamTrack integration says, "These TeamTrack users have duplicate e-mail addresses. They may have been matched with the wrong Perforce user." A comment (a string or message) about the Perforce users with duplicate e-mail addresses, explaining what the problem is. If there are no such users, then you may specify None here. Example. The TeamTrack integration says, "These Perforce users have duplicate e-mail addresses. They may have been matched with the wrong TeamTrack user." This method is called each time the replicator is started. The results are used to compose an e-mail to the P4DTI administrator reporting on unmatched users. The all_issues and changed_entities methods return cursors. A cursor is a representation of the result set of a query into a database. It has the following method: fetchone(self) Return the next item in the result set, or None if there are no more items. This section describes how to configure the P4DTI to work with your extension. To understand how the configuration works, see [GDR 2000-09-13, 5]. You must write a configuration generator for your defect tracker. This must be a module called config_defect_tracker.py, where defect_tracker is the name you chose for your defect tracker (see section 3), converted to lower case. It must provide the following function: configuration(config) The config argument is a module whose members are the configuration parameters specified by the P4DTI administrator in config.py. It must check all the user configuration parameters that are specific to your defect tracker. It must return a that. The revised configuration module must include the following parameter for the Perforce interface. (This is in addition to the parameters p4_client_executable, p4_password, p4_port, and p4_user which came from the user configuration.) logger This is a logger object (see section 5.3) to which log messages are written. It must log to log_file if that is specified, to standard output, and to any appropriate system logging facility. It must respect the log_level. The revised configuration module must include the following parameters for the replicator. (These are in addition to the administrator_address, p4_user, poll_period, replicate_p, replicator_address, rid, and smtp_server parameters which came from the user configuration.) date_translator A date translator instance (see section 7.5.1). field_map A description of how fields map from the defect tracker to Perforce and back again. It is a list of tuples, one for each field to be replicated. Each tuple has three elements: The name of the field in the defect tracker. The name of the field in Perforce. A translator instance (see section 7.5) that can be used to translate between values in the two fields. The field map must match the defect tracker database and the jobspec configuration parameter.". jobspec The Perforce jobspec which the replicator is going to use, or None if the Perforce jobspec is left unchanged. 2001-06-18 TeamTrack integration, Perforce's five required fields are specified in the jobspec like this: (101, "Job", "word", 32, "required", None, None, "The job name."), (102, "State", "select", 32, "required", "_new", "_new/assigned/resolved/verified/deferred", " The state of the job in the TeamTrack workflow."), (103, "Owner", "word", 32, "required", "$user", None, "The person responsible for taking action."), (104, "Date", "date", 20, "always", "$now", None, "The date this job was last modified."), (105, "Title", "text", 0, "required", "$blank", None, ""),, "TeamTrack issue database identifier. Do not edit!"), (194, "P4DTI-user", "word", 32, "always", "$user", None, "Last user to edit this job. You can't edit this!"), These fields have high numbers so that they appear at the bottom of the jobspec where people don't have to look at them. The remainder of the jobspec should be filled in with the fields that the P4DTI administrator has specified for replication in the replicated_fields configuration parameter (for Bugzilla or TeamTrack). Make sure that the values for "select" fields are legal in Perforce (see section 7.5.2). name of the host on which the defect tracker runs, or the user to connect to the database as), then you must adapt the config.py module, as follows. Add a new # dt_name = "Defect_Tracker" line near the start of section 2, to indicate that your defect tracker integration is available. Add a new subsection to section 3, starting elif dt_name == "Defect_Tracker":. This should contain default values for the configuration parameters required only by your integration. Add a history entry to Appendix B explaining what you've done. Warning: The configuration methods in this section are not supported by Perforce or TeamShare. This section describes techniques you can use if you want to adapt a supported integration to do something that's not supported. Here are some of the things that are possible by making your own configuration. Here's are the steps you need to follow to make your own configuration: Choose a name for your configuration: my_configuration, say. Edit config.py, adding the line configure_name='my_configuration' Make a new module configure_my_configuration.py. Make your new module into a configuration generator (see section 8.1). See below for some examples. The best approach to making a configuration generator is to use an existing one and modify its output. That way, you benefit from improvements and corrections to the configuration generator in future releases of the P4DTI. Suppose that you. The following configuration generator achives the above: import configure_teamtrack import re import string convert = { 'State': 'Status', 'Owner': 'User', 'Title': 'Description', 'Description': 'Long_Description', } def configuration(config): config = configure_teamtrack.configuration(config) # Convert field names in the jobspec. f = config.jobspec[1] for i in range(len(f)): if convert.has_key(f[i][1]): f[i] = (f[i][0], convert[f[i][1]]) + f[i][2:] # Convert Perforce field names in field_map to match the jobspec. f = config.field_map for i in range(len(f_map)): if convert.has_key(f_map[i][1]): f[i] = (f[i][0], convert[f[i][1]], f[i][2]) return. The following configuration generator achievesType) for (t,p) in state_pairs: if value == t: return p raise error, factory.new(1, value) def translate_1_to_0(self, value, dt0, dt1, issue0 = None, issue1 = None): assert isinstance(value, types.StringType) for (t,p) in state_pairs: if value == p: return t raise error, factory.new(2, value) def configuration(config): config = configure_teamtrack.configuration(config) # Tell the replicator not to update the jobspec. config.jobspec = None # Make a field_map that works with my existing jobspec. config.field_map = [ ('STATE', 'Status', my_state_translator()), ('OWNER', 'User', config.user_translator), ('TITLE', 'Description', translator.translator()), ('DESCRIPTION', 'User_Impact', config.text_translator), ] return config Note the use of coding conventions in this example: message catalogs (see section 5.2) and raising exceptions when a value can't be translated (see section 5.4). Warning. If you leave your Perforce jobspec unchanged, you must check that it is compatible with the P4DTI. The reason for this is that the replicator uses p4 -G job -o jobname to get a job from Perforce; this command applies more stringent checking than p4 job -o jobname. Check that the "Presets" for each select field is valid for that field (that is, it appears as one of the "Values" for that field). Some organizations set up a jobspec with a field like this: Fields: 120 Severity select 20 required Values: Severity critical/essential/optional Presets: Severity setme Their intention is that since "setme" is not a legal value for the Severity field, the person submitting the job must give it a value; they can't just ignore it and leave it with the default value. However, this won't work with the P4DTI, because the command p4 -G job -o won't even give you a blank job form; instead it gives you an error message. To build the P4DTI, follow the release build procedure [GDR 2000-10-17]. This procedure uses automated support from the build.py tool; this is documented in [GDR 2001-07-13]. You may adapt these three documents so that your new integration built by the same procedure as the other integrations in the P4DTI. To test the P4DTI, follow the release test procedure [RB 2001-03-21]. This uses the sample data and automated tests in the test/ directory of the integration kit. See [GDR 2001-07-02] for the test design. You may adapt the existing tests so that they test your integration. The defect tracker should display, for each issue that is replicated, a description of the Perforce server to which the issue is replicated. Use the configuration parameter p4_server_description which you should have stored in a table in the defect tracker (see section 4.5). The defect tracker should display the jobname of the job to which the issue is replicated. The jobname should be a link to the URL given by the job_url configuration parameter, with the jobname inserted. This configuration parameter is defined in the Administrator's Guide as being suitable for passing to sprintf as the format string: it must have one %s format specified (for which the jobname is substituted) and it may have any number of doubled percent signs %% (which must become single percent signs in the resulting URL) [RB 2000-08-10a, 5.1]. The defect tracker should display on each issue description page a table of fixes for that issue (if there are any). The table should look like the table below. Points to note about this table: Pending changelists are distinguished from submitted changelists. This is important because the effect of a pending changelist does not happen until the changelist is submitted. So in the above table the status of the job is still "open" but it is understood that when changelist 5634 is submitted it becomes "closed". The user and date are for the change (not for the fix). Knowing when the change was made and by whom is much more important than knowing when the change was linked with the job. The user is the defect tracker user who corresponds to the Perforce user who made the change. The change number is a link to the URL given by the changelist_url configuration parameter, with the change number inserted. This configuration parameter is defined in the Administrator's Guide as being suitable for passing to sprintf as the format string: it must have one %d format specified (for which the change number is substituted) and it may have any number of doubled percent signs %% (which must become single percent signs in the resulting URL) [RB 2000-08-10a, 5.1]. All the fixes for an issue are replicated by the same replicator and from the same Perforce server as the issue itself. So when building this table you only need to select records with the same replicator identifier and Perforce server identifier as the issue. A single defect tracker may replicate issues to several Perforce servers (see section 4). Each Perforce server has a different changelist URL. So it is important to select the URL for the correct Perforce server (namely the one to which the issue is replicated) when making this table. When adding material relating to your defect tracker to the manuals, surround each section with the HTML tags <div class="defect_tracker"> and </div>. This makes the material for a particular defect tracker easy to find, extract and check, to meet requirement 32. You must adapt the Perforce Defect Tracking Integration Administrator's Guide [RB 2000-08-10a] to describe your integration, as described in the list below. Add a new subsection to section 3, specifying the software and procedural prerequisites for using your defect tracker with the P4DTI. If your integration requires a new installation procedure, or installs on a new platform, update section 4. Add your new configuration parameters to section 5.1. Add a new subsection to section 5, explaining how to configure your defect tracker for the P4DTI. Add a new item to the list in section 10, explaining how to uninstall your integration and return your defect tracker to its original state. Add the error messages that your code can produce to section 11.2 (include the product, message identifier and check digit just like the other errors messages in that section). Add likely error messages from systems with which your code interacts to section 11.3 (for example, errors from the defect tracker, or from your database interface). Add references to documentation for your defect tracker, and any other supporting materials that you referred to, to appendix A. Add a history entry to appendix B explaining what you've done. If you provided an interface from your defect tracker to the Perforce fixes relation (see section 10), then you must adapt the Perforce Defect Tracking Integration User's Guide [RB 2000-08-10b] to describe your integration, as follows: Add a paragraph to section 10.3 explaining how to access Perforce fixes from the defect tracker. Defects in the P4DTI Kit include (but aren't limited to): An essential piece of information can't be found in this manual or in the design documents it refers to. Inconsistencies between this manual, the design documents it refers to, and the sources they document. Defects in the P4DTI sources or in the test cases. Please report any defects you find to Perforce support, so that they can be fixed and the product improved. Please provide the following information with your defect report: The release of the P4DTI Kit you are using (look in the readme.txt that came with the P4DTI Kit to identify the release). The name and release of the defect tracker you are integrating with. If you're reporting a defect in documentation: What you're trying to do. The information you need. Where you expected to find it. Where else you looked for it. If you're reporting a defect in the code: What you did immediately prior to the defect's occurrence. What you think should have happened. What actually happened. The Perforce release you are using. Any source code you've added or modified, including your config.py file. A section of the P4DTI log that includes the error that you're reporting and some context around that error. Copies of any related e-mail messages generated by the P4DTI. Please send your contributions (fixes, adaptions and extensions) to Perforce support. Please include the following: A description of your contribution: what it is designed to achieve; which files you've changed; which files you've added. The release of the P4DTI Kit you have been developing against (look in the readme.txt that came with the P4DTI Kit to identify the release). The complete P4DTI Kit, including your modifications and additions. Make a tarball or a ZIP archive of the whole P4DTI Kit directory. (Please do this even if you've only changed a couple of files. This allows us to add your contribution to Perforce and use p4 diff2 to see exactly what changes you've made.) What you are prepared for us to do with your contribution. Are you willing for us to make it available for distribution from Perforce or Ravenbrook's web site? Are you willing for us to incorporate it into the P4DTI and maintain and support it? Have you made it available under an open source license? This section lists significant changes in the specification that a defect tracker has to meet. Changes are upwards compatible (in the sense that a defect tracker module that worked with the previous release of the integration kit will work with the new release) unless stated otherwise. The all_issues and changed_entities methods of the defect_tracker class (see section 7.1) must now return cursors (see section 7.6), not lists of issues. The purpose of this change is to allow the replicator to work if there are more issues in the defect tracker than)./1.5/manual/ig/index.html#3 $
http://www.ravenbrook.com/project/p4dti/release/1.5.1/ig/
crawl-003
refinedweb
10,143
55.13
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives Like math puzzles and physics puzzles, there are many different kinds of programming puzzles. One way to classify programming puzzles is by the skills they exercise or the concepts they illustrate: knowledge of a language, designing a new algorithm, understanding a specification, etc. On LtU, I'm interested in puzzles that are not specific to a language yet rely on a PL notion. Here's one attempt of mine, inspired by my student Jun Dai: Your stair-climbing robot has a very simple low-level API: the "step" function takes no argument and attempts to climb one step as a side effect. Unfortunately, sometimes the attempt fails and the robot clumsily falls one step instead. The "step" function detects what happens and returns a boolean flag: true on success, false on failure. Write a function "step_up" that climbs one step up (by repeating "step" attempts if necessary). Assume that the robot is not already at the top of the stairs, and neither does it ever reach the bottom of the stairs. How small can you make "step_up"? Can you avoid using variables (even immutable ones) and numbers? What do you think? What is your solution? How are the solutions related to each other? (If "step" fails with a fixed probability, then how many times does "step_up" expect to call "step"?) More importantly, what is your favorite PL puzzle that is not (terribly) language-specific? redacted. I don't want to be a spoiler. What do you think? What is your solution? Edit: I originally posted my solutions and analysis here, but took James's lead and decided to stick them somewhere else. Here they are. How are the solutions related to each other? If you CPS-convert and then defunctionalize the second program I think you'll see that the continuation data-type is isomorphic to the natural numbers. So the two programs should be roughly isomorphic from this point of view. In more operational terms you could say that the chain of activation records is being used as an implicit counter: a non-tail call (frame push) corresponds to a counter increment and a non-tail return (frame pop) corresponds to a counter decrement. If "step" fails with a fixed probability, then how many times does "step_up" expect to call "step"? Let the success probability be p and q = 1-p be its complement, the failure probability. The probability of succeeding in 1 step is p. You cannot succeed in 2 steps. The probability of succeeding in 3 steps is q*p^2. You cannot succeed in 4 steps. The probability of succeeding in 5 steps is q^2*p^3. The general pattern (easily provable) is that succeeding in 2n steps has probability p*(p*q)^(2n). So we have to sum the series sum 2n*p*(p*q)^(2n). First, let's rewrite this as 2p sum n*((p*q)^2)^n. Setting r = (p*q)^2, this reduces to sum n*r^n. Since r is constant with respect to n this can be summed using the usual differentiation trick for geometric series taught to freshmen. Edit: Made some silly mistakes in this analysis. Even assuming my overall analysis was correct the terms would be (2n+1)*(stuff) rather than 2n*(stuff). But I missed something else too: a factor to account for the different permutations. Once you put that in the series gets messy, so I withdraw my analysis. You can find a solution with Google by looking for a derivation of the "mean return time" for random walks. This isn't an instance of the canonical random walk problem but it's pretty close. More importantly, what is your favorite PL puzzle that is not (terribly) language-specific? I wouldn't say it's my favorite, but Danvy often talks about the problem of testing the balance of a Calder mobile in a single traversal. Not very difficult, but it's kind of cute. Is this code I just wrote a solution to "the problem of testing the balance of a Calder mobile in a single traversal", or did I miss something else? If the tagging and untagging with Just is undesired, one can Church-encode the Maybe type into continuation-passing or exception-throwing code... I'd like to see even more puzzles. Yes, that looks about right. I said it was easy! The relevance to programming languages is that there are some nice ways of deriving and relating the different solutions that involve continuations. I believe Danvy has discussed this problem in a number of places. This note (in French) is dedicated to it: Sur un exemple de Patrick Greussay,. The expect number of calls to step() will be, in a recurrence, X = p*1 + (1-p)*(1 + 2X) Solving for X, you get X = 1/(2*p -1) This looks right since X=1 when p=1 and it goes to infinity when p falls to 0.5 f(0) = 0 f(n) = 1 + p * f(n-1) + (1-p) * f(n+1) f(2) = 2f(1). In your solution, I believe you are counting those where you go up 2 steps then fall back down 1, to get up a step. But any implementation would have stopped already! So you get infinitely many solutions because you look at all those possible futures. This is clearly one of my favorites: You have a simple linked list with more than 15 elements and should return from it the 15th element. And don't even dare thinking about iterating the list twice! Eric Bodden wrote: ``You have a simple linked list with more than 15 elements and should return from it the 15th element. And don't even dare thinking about iterating the list twice!'' As others noted, you probably mean the 15th element from the end. The problem doesn't seem to be too hard; it can be solved in constant space. I guess the trick is to traverse the same list using two pointers, one of them is suitably delayed. -- Nth element of the list, from the end, assuming the list is longer -- than n. lastN n lst = loop lst (drop n lst) where loop (h0:_) [] = h0 loop (_:t0) (_:t1) = loop t0 t1 test1 = lastN 15 (reverse [1..30]) test2 = lastN 15 [1..100] This is just the generalization of Haskell's standard function `last'. Your function loop is equivalent to \xs ys -> last (zipWith const xs ys). loop \xs ys -> last (zipWith const xs ys) You are off by one. You have a labeled tree, which (if my Haskell is not too rusty) is defined by: data LTree a = Leaf String a | Branch String [LTree a] That might correspond to a normal file system, ignoring . and .., where the leaves are files and the branches are directories. You are to produce an outline view of this tree, showing only those leaves which satisify some predicate function and the branches necessary to reach those leaves. You should assume that the tree is wide and shallow. data LTree a = Leaf String a | Branch String [LTree a] I find this problem interesting, in part because it is practical (my original implementation was to produce a view of an Apache configuration file showing only directives which pertained to a given module), but mostly because the nature of the solution seems to vary considerably between languages. For example, the C solution (or at least a C solution) requires no heap allocation. I was musing on the issue of language-specific solutions while watching Chris Rathman translate SICP into various languages, including Lua. Lua does not have a primitive datatype for linked lists; instead, it has a primitive mapping type (tables). In Lua, the 15th last element problem above would be trivial: function nthlast(t, n) return t[#t+1-n] end which rather avoids the elegance of the intended solution. The (well, my) Lua solution to the filtered tree problem uses a single temporary table and thereby avoids a certain amount of list reversing which seems to show up in solutions in other languages. function nthlast(t, n) return t[#t+1-n] end module Prune where data LTree a = Leaf String a | Branch String [LTree a] deriving (Eq, Show) -- A simple solution prune :: (a -> Bool) -> LTree a -> Maybe (LTree a) prune pred tree@(Leaf _ a) = if pred a then Just tree else Nothing prune pred (Branch label branches) = if null pruned then Nothing else Just (Branch label pruned) where pruned = [ branch | Just branch <- map (prune pred) branches ] -- For testing binary :: Int -> Int -> LTree Int binary n k = if k > 0 then Branch bit [binary n' k', binary (succ n') k'] else Leaf bit n where bit = show (n `mod` 2) n' = n + n k' = pred k This is indeed an interesting problem; it is also the excellent illustration of `stack marks' or stack traversal facility described in the paper `Delimited dynamic binding' (see, in particular Section 6.2 of the paper and the `nub' example). The OCaml code of the solution is available here assuming the Dynvar library introduced in that paper, and whose source is available from that web site. The source code contains a few tests and their output. Posting a lot of source code at LtU doesn't seem suitable (nor appropriate). It is easy to see that the problem can indeed be solved without any heap allocation, requiring only O(tree-depth) stack space -- provided that Dynvar library (actually, the underlying shift/reset implementation were a bit smarter -- like that in Chez Scheme). I've posted two K solutions to this clever little problem here: Subtree where leaf x is f x Solution 1 is what I would call "semi-recursive". Here's the idea. We represent a tree with a nested dictionary, which is one of K's recursive datatypes (the other being a general list.) Then (recursive part) we compute a list of paths to the leaves. From here we proceed non-recursively to (i) extract the data at the leaves as a vector, (ii) apply the predicate to the vector (an array operation, which returns a boolean vector), and (iii) construct a new dictionary consisting of just those paths to leaves which satisfy the predicate. Solution 2 is non-recursive, and uses a more economical representation of a tree as three equal-length vectors: T, where T[i]=j means that node i is a child of node j (T[0] = 0 = the root of the tree); D, where D[i] = the data at leaf i (if i is not a leaf, then D[i] is NaN); and N, where N[i] = the label of node i. Solution 2 is presented as "open code": t, d, and n together represent the input tree, f is the predicate, and T, D, and N represent the output subtree. (Wrapping the code in a function is trivial, but I assume that readers who execute the solution will want to examine the intermediate values.) Solution 1 is shorter, and probably easier to understand. (well ...) It was certainly easier to write, being quasi-idiomatic, and familiar to all seasoned k programmers. Probably no more than 10 minutes work, most of which went towards the nasty-looking (recursive) pretty-printer (and the comments). Solution 2 contains a few subtleties, and required more time and testing to get right. But solution 2 scales quite well. I invite others to test their implementations on larger trees having varying width/depth. e.g. 500,000 nodes, 10,000 of which are leaves. Note added. I've included a fully recursive solution for comparison: st:{(./)._n,+{:[5=4:z;,/_f'[x,/:!z;y;z[]];y z;,(x;:;z);()]}[();y;x]} st descends until it finds a leaf z. Where y is the predicate, if y z is true, st returns (x;:;z), where x is the path to z. If y z isn't true, st returns null. The result is amended to produce the desired subtree. Note added. I've also included a non-recursive K4 solution, which is half the code of the K2 version. And finally, a solution in the Q dialect of K4: select from t where n in raze({exec p from t where n in x}scan)each exec n from t where f d Further details in the paper linked above. I gather that this particular puzzle is older than dirt, though it's new to me. I first came across the Josephus Flavius' problem in the CTM book. Actually I like it not so much for the problem itself, but rather for the declarative solutions in Oz and Alice ML. (I stumbled across the cut-the-knot website that looks to be an interesting resource for puzzle pontification). Here's a cute little Haskell solution: josephus init nth = last alive where alive = init ++ kill (length init) alive kill 0 line = [] kill (n+1) line = take (nth-1) line ++ kill n (drop nth line) Then you can solve the canonical problem instance by calculating, e.g. josephus [1..40] 3 Here is a link to my solution in Haskell. I could shorten it by making it less readable (and I might just do that as an annotation. That looks like one pass to me. A quick answer, for the Erlang emulator. :) "Step" gives 4/5 chance of success, "Step_up" returns a history of attempts. Try "Nontrivial_steps(Steps(100))". void step_up() { if(!step()) { step_up(); step_up(); } } ...when the going gets tough, this'll blow the stack. (Cue "The Importance of TCO" from Anton...) It's non-deterministic by nature of the problem, but if the first recursive call succeds it's just as likely the second one will too. So, your point was? As the success probability of step() goes down, the expected number of times we'll need to call it goes up (see recurrence analyses above). In this implementation (as in others) step() is called once per call to step_up(), so the expected number of calls to step() is related (though not identical to) the expected stack depth. (Another interesting exercise might be to precisely characterize this relationship.) In any case, if the success probability is low enough, this Java code will certainly result in a StackOverflowError. The point of this was that the equivalent Scheme code (linked from one of the first comments above) is guaranteed not to suffer from this problem (since Scheme implementations are required to execute tail calls in constant space). However, now that I think about it more carefully, I realize that I was wrong about this. TCO doesn't matter in this case, as the first recursive call is not in tail position. So the Scheme version will also likely run out of space sooner than a version using a loop and explicit counter. Perhaps this is what you meant... I'm having a little trouble understanding your comment. TCO is a very useful tool. Another excellent and underappreciated tool is a very big stack, i.e. being able to shove a heap worth of stuff onto your stack when it suits you. For example, here is the standard non-tail-recursive map function in Erlang's lists module: map map(F, [H|T]) -> [F(H)|map(F, T)]; map(F, []) when is_function(F, 1) -> []. and here's mapping over a million-element list: 6> L = lists:seq(1, 1000000), ok. ok 7> lists:map(fun(X) -> X + 1 end, L), ok. ok No fuss no muss. But try it in your language! My language Kogut handles large stacks just fine. They are silently resized on overflow, silently shrunk by GC, and they aren't scanned as a whole by minor GC. Perhaps this is what you meant... I'm having a little trouble understanding your comment. Yes, that's what I meant. Even in a language without tail call optimization, if the first recursive call - which is not in tail position - succeeds, its just as likely that the second recusive call - which is in tail position - also does. void stepup () { while (!step()) { stepup(); } } Now 50% better! If it falls 1 step, it should go up 2 steps. Notice the while loop. Yes, I know, it's not natural! This version consumes stack space proportional to the number of steps it needs to go up, as opposed to the number of steps it attempts. This is a substantial difference if the probability of success is only somewhat greater than or equal to 1/2. Unfortunately, sometimes the attempt fails and the robot clumsily falls one step instead. For every return of false by step(), the number of needed successful calls to step() increments. procedure step_up; var level: integer; begin level := -1; while level < 0 do if step then inc(level) else dec(level); end; or with standard tail-recursion: procedure step_up_from(level: integer); begin if level = 0 then exit; if step then step_up_from(level+1) else step_up_from(level-1); end; I would generally not assume that the robot will never fall twice in a row, since the problem does not say that it never will. There's no guarantees of the sort in real life. pkhuong's code is correct, even if the robot steps down twice in a row.. As for the stack consumed, I would say that it is proportional to the "maximum" number of failed step up. I didn't realize he used a while() instead of an if(). void step_up() { while( !step() ){ step_up(); } } yes stack overflow is a problem, but then so is battery life, so forget about it. I'm rather partial to the problem of forward references as it has a very slick solution using laziness and circular programming. Usually, I use it as an example using a simple program that does the following: The program takes lines of the form define <var> ([*]<var>)+ i.e. define y a *y The intent is that y is defined as being the literal 'a' followed by the contents of y, i.e. y = a a a a a a a ... in this example. A very simple version can be written in about 5 lines of typical Haskell code. A somewhat more involved version and some discussion is located at. One way to make this more of a "puzzle" is the following: You can change the straightforward backward references only solution into the full solution with just a very minor change... if your language is lazy. I have a nice solution to this using the function 'loeb' I defined here. 'loeb' is a cycle fixing function that can be used with all kinds of containers. (The type of 'loeb' corresponds to that of Loeb's theorem in the modal logic GL). So here's a solution (minus the parser): import Maybe loeb :: Functor a => a (a x -> x) -> a x loeb x = fmap (\a -> a (loeb x)) x data Dict a = Dict { unDict :: [(String,a)] } deriving Show instance Functor Dict where fmap f (Dict ((a,b):abs)) = Dict ((a,f b):unDict (fmap f (Dict abs))) fmap f (Dict []) = Dict [] f +++ g = \x -> f x ++ g x get a (Dict b) = fromJust $ lookup a b test = loeb (Dict [("a",const "x"+++get "b"),("b",const "y"+++get "a")]) Note how the argument to 'loeb' isn't circular, so it's easy to produce from a parser. In this case I'm solving: define a x *b define b y *a (The cool thing is that I'd been hunting for an application of 'loeb' and now I have one, a practical one even.) Cool! Your post inspired me to see if it's possible to encode the same solution in OCaml. Indeed it is, provided you have a (lazy enough) implementation of lazy lists. In this case, Run is a naive implementation of catenable lists, with thunked constructors. Encoding Haskell type classes as ML Functors: Run module type FUNCTOR = sig type 'a t val map : ('a -> 'b) -> ('a t -> 'b t) end module Loeb (F : FUNCTOR) = struct let rec loeb x = F.map (fun a -> a (lazy (loeb x))) x end module Dict = struct type 'a t = (string * 'a) list let map f (d:'a t) : 'b t = List.map (fun (a, b) -> a, f b) d let get x (d:'a t) = List.assoc x d end (* define var = lit (lit|var)+ *) let parse d = let compile l env = List.fold_right (fun e r -> let tail () = r in match e with | `Lit c -> Run.cons c tail | `Var w -> Run.append (fun () -> Dict.get w (Lazy.force env)) tail) l Run.Nil in let module L = Loeb(Dict) in L.loeb (Dict.map compile d) As you remarked, the compilation proceeds from an absolutely flat representation. Note that I had to control the depth of evaluation in Loeb, and force the promise in the application code, as required. The result: Loeb # let test = parse [ "a", [`Lit 'x'; `Var "b"]; "b", [`Lit 'y'; `Var "a"]; ] ;; val test : char Run.t Dict.t = [("a", Run.Cons ('x', <lazy>)); ("b", Run.Cons ('y', <lazy>))] # Run.take 6 (Dict.get "a" test) ;; - : char list = ['x'; 'y'; 'x'; 'y'; 'x'; 'y'] (define (climb N) (if (> N 0) (if (step) (climb (- N 1)) (climb (+ N 1))))) (define (stepup) (climb 1)) If we can assume an lcons that evaluates its car and cdr lazily, here is one that can work (in pseudo-but-nearly-valid-scheme) - lcons car cdr ; A step function which fails 50% of the time. (define (step) (if (= 0 (rand 2)) (begin (display "DOWN") #f) (begin (display "UP") #t))) ; Gets the second element of a list s, ; forcing the value of the first element if ; the list happens to be lazy. (define (forced-next s) (first s) (first (rest s))) ; (steps) generates a sequence of ; successful upward steps. To climb N steps, ; just take N elements from (steps). (define (steps) (lcons (if (step) #t (forced-next (rest (steps)))) (steps))) I don't think this is stack safe, but what the heck, laziness is cool :) It is not terribly difficult to implement lazy-cons in R5RS, using syntax-rules and lambdas... I should be able to express it using delay and force, but ... I'm too lazy :) Interesting what you can stumble upon on the web. Two years later, here is my solution. In C: void step_up() { while (!step()) step_up(); } This provides an alternative complexity analysis: P(loop terminates) = p P(loop terminates after n calls to step) = p(1-p)^(n-1) E{n} = 1/p Since each failed test results in a call to step_up, we expect 1/p - 1 recursions. So the recurrence relation is X = 1/p + (1/p - 1)X, which solves as X = 1 / (2p - 1) - exactly what kenhirsch got. Thanks for resurrecting the thread. I hadn't seen it and it's neat. I'm not sure that you can really call for a solution "not using variables or numbers." By a diagonalization argument you can show that the problem can't be solved by any finite state automata -- you need at least a stack machine. You have to encode an arbitrary non-negative integer in mutable (or infinitely extensible) state, one way or another. So, I would hope a student who gave a well presented diagonalization or similar argument could get nearly full credit, with extra credit if the student also gives the "but this is the answer you were looking for" solution. -t Recursive: stepUp() { while ( ! step() ) stepUp(); } Iterative: stepUp() { for ( int i = 1 ; i > 0 ; i -= step() ? 1 : -1 ); } -1 [step [1 +] [1 -] if] whilenz So, two years and a half after the article was posted, in Haskell: import System.Random (randomRIO) -- I implemented this function so I would have something to test it with. step :: IO Bool step = putStr "step " >> randomRIO (True, False) >>= putDirection where putDirection x = (if x then putStrLn "up" else putStrLn "down") >> return x -- This is the actual function. stepUp :: IO () stepUp = step >>= (\x -> if x then return () else stepUp >> stepUp) Maybe step_up() could be given by this BNF: S ::= A true A ::= false A true | [empty] using step() as the lexer. How about: S := true | false S S So that FFTTFTFTT is also possible. Direct transliteration of the Java if-based code. The while-based code would be: S := true | { false S } Sorry, that's wrong, should have been: S := { false S } true I tried to transform the (1) to (2) and back: (1) S ::= t | f S S (2) S ::= { f S } t (1) S ::= t | f S S (2) S ::= { f S } t From (1) to (2) we can "rewrite" the last S: (3) S ::= t | f S t | f S f S S (4) S ::= t | f S t | f S f S t | f S f S f S S The last S will eventually rewrite to t, so we can express it as: (2) S ::= { f S } t This can actually be "done" mechanically and I guess that's how the tail recursion is transformed to a cycle. (3) S ::= t | f S t | f S f S S (4) S ::= t | f S t | f S f S t | f S f S f S S The (2) to (1) way, if we apply the definition of { }: (5) S ::= A t (6) A ::= eps | f S A If we substitute the A in (5): (7) S ::= t | f S A t But the last "A t" in (7) is what S in (5) rewrites to, so we can, hmm, write it as: (1) S ::= t | f S S This way doesn't seem to be so "mechanizable". But who wants to transform a cycle to a recursion. (5) S ::= A t (6) A ::= eps | f S A (7) S ::= t | f S A t void stepup() { while (!step()) { stepup(); } } I see a lot of people came to the same solution :)
http://lambda-the-ultimate.org/node/1872
CC-MAIN-2017-22
refinedweb
4,340
70.33
- Advertisement destluckMember Content Count11 Joined Last visited Community Reputation114 Neutral About destluck - RankMember Functions destluck replied to destluck's topic in For Beginners's ForumPerfect tyvm for the fast response :) Functions destluck posted a topic in For Beginners's ForumAlright im not to sure if im using the right terms. Anyhow i wanted to know if there is a difference in declaring/initializing a function before the main or after it. - Is it better to declare it before main or after? - What affect does it cause? example: // function delcared before main() char askYesNo1() { ......... } // main function int main() { return 0; } // function delcared after main() char askYesNo2() { .... } Any idea what i did wrong? destluck replied to destluck's topic in For Beginners's Forumty Any idea what i did wrong? destluck posted a topic in For Beginners's ForumAlright i keep getting 2 errors: Error 1 error LNK2005: _main already defined in main.obj Error 2 error LNK1169: one or more multiply defined symbols found #include<iostream> #include <string> #include <random> #include <time.h> using namespace std; class Player { public: string mFirstName; string mLastName; string mStreetName; string mGender; int mAdress; int mAge; }; Player GetPlayerFromConsole() { Player npc; cout << " New Player Signup: " << endl; cout << "Enter your first name: " << endl; cin >> npc.mFirstName; cout << "Enter your last name: " << endl; cin >> npc.mLastName; cout << " Enter your Age: " << endl; cin >> npc.mAge; cout << "Enter your Gender: " << endl; cin >> npc.mGender; cout << "Enter your Street name: " << endl; cin >> npc.mStreetName; cout << "Enter your adress: " << endl; cin >> npc.mAdress; return npc; } int main() { Player newPlayer; newPlayer = GetPlayerFromConsole(); cout << "New Player info Sheet: " << endl; cout << "Player name: " << newPlayer.mFirstName << endl; cout << newPlayer.mFirstName << endl; cout << "Player Adress: " << newPlayer.mAdress << newPlayer.mStreetName << endl; cout << "Player Gender: " << newPlayer.mGender << endl; cin.get(); system("pause"); return 0; } - thx for all the great posts and links reading trough them as we speak :) great stuff - Thx ill give it a read. - hehe well that's reassuring i think ... when i finish work in 1h ill be able to pinpoint exactly the parts im having issues with. Normally i learn by knowing what specific things are used for in a game based environment. Keep getting stuck destluck posted a topic in For Beginners's ForumAlright i have been trying to learn C++ for a while now. Been buying VTM's and books and i always seem to get stuck. Always around the chapters 6-7 and that turns out to be where pointers kick in. Maybe the books i have a just bad or im just a real slow learner. Anyhow i was wondering if anyone could link me a few things i can read or tutorials that helped you learn C++. If anyone is willing to coach me a bit that would be awesome :) . Any info or help is more then welcomed sick of being stuck or not able to wrap my head around some issues i have. DestLuck Image detection destluck replied to destluck's topic in General and Gameplay Programmingthis for the info yah basicly the anti-macro screen pops up randomly and shows 3-5 icons. Then you have a list of icons at the bottom and you must click the icons that match. as you can see in this picture there 3 icons/images from the tops have to be matched with the ones on the bottom. [url=][/URL] Image detection destluck posted a topic in General and Gameplay Programmingwhat would be the best language/method to get image detection to work? The anti-Macro brings up a screen with images/icons that you need to match. The icons keep changing. this video should show what i need the program to do. ** Link ** *** *** thx for the info LF Programmer to Hire! destluck posted a topic in General and Gameplay ProgrammingLooking to Hire a programmer to help me bypass a game issue. You tell me the price for the project and i will pay you for it. Game: Ashen Empires Payment method: Paypal Payment: Milestone this video should show what i need the program to do. [mod edit: link redacted] - Advertisement
https://www.gamedev.net/profile/221915-destluck/
CC-MAIN-2019-47
refinedweb
679
73.58
On Mon, 04 Jan 2010 21:27:12 -0800, Nav wrote: > @ Steven.... > "No, you're confused -- the problem isn't with using the global > namespace. > The problem is that you don't know what names you want to use ahead of > time. " > > Actually I know what the names would be and how I want to use them. You said earlier: "I have a class of let's say empty bottle which can have a mix of two items. I want to create let's say 30 of these objects which will have names based on the 2 attributes (apple juice, beer, grape juice, beer, etc) that I provide from a list." Your description is confusing to me. What on earth is an empty bottle which has a mix of two items in it? Surely that means it's not empty any more? But putting that aside: "All the objects are a mix of (1 of three alcohols) and (1 of 10 juices), so I don't want to go through typing in the names of all the objects (which would be totally stupid)." Right... so your problem isn't that you don't know what the variable names is, but there are too many to comfortably enumerate in the source code. The answer is, again, avoid named variables. Instead of (say): gin_apple_strawberry = Bottle('gin', 'apple') gin_apple_orange = Bottle('gin', 'orange') # etc. again you should use a list or dict: bottles = [] for alcohol in ('gin', 'beer', 'wine'): for fruit in ('apple', 'banana', 'blueberry', 'strawberry', 'orange', 'peach'): bottles.append(Bottle(alcohol, fruit)) for bottle in bottles: process(bottle) -- Steven
https://mail.python.org/pipermail/python-list/2010-January/563277.html
CC-MAIN-2016-36
refinedweb
266
80.01
Continuing the ongoing saga… Using the --silence hackery from Jacob Berkman I found a way to finally STFU at least libtool. In your configure.ac add: changequote(,)dnl LIBTOOL="\\$(QUIET_LT)${LIBTOOL}" changequote([,])dnl Add a Makefile.decls file to the root of your project, containing: QUIET_LT = @echo ' ' LIBTOOL $@; And include it in every Makefile.am: include $(top_srcdir)/Makefile.decls This will silence all libtool invocations; you can make this all conditional, obviously: just sorround the QUIET_* declarations with if VARIABLE...endif and define VARIABLE using AM_CONDITIONAL in your configure template. Now, I’ll just have to find a way to make gcc shut up1, and so the saga will end with a third chapter. (continuing) The wasted build cycle is why I am not satisfied with there being some way to turn off the output suppression, but insist that the default — no, only — behavior of the Makefile should be to print all the commands exactly as executed. So, since I’m being all down on your perfectly reasonable desire for build logs where the actual compiler diagnostics stand out more, let me offer an alternative. What you should be doing is writing a wrapper for ‘make’ which runs its output through a filter that substitutes the short abbreviations you want for each unwieldy command you don’t like. This only affects your personal view of the build, so it doesn’t have any of the problems I am concerned with. And it also avoids having to figure out how to suppress the command lines for each tool individually. I see I never answered your questions from last time (sorry) so let me try again to explain why this is a terrible idea… Yes, what you’re doing leaves the compiler diagnostic messages intact, but there are failure modes, especially of automated build systems, where you must be able to see the command lines as well in order to fix the problem. An actually-happened-to-me example: GCC used to hide the exact commands for some moderately complicated shell operations embedded in the makefile. These made … some assumption I don’t remember anymore … which was true for the typical build environment, but untrue in an automated build system used by a company I consulted for, that was (a) very slow, (b) distributed across several dozen machines in a desperate attempt to compensate for the slowness, and (c) inaccessible to the engineers, because the project managers didn’t trust them to check all their changes into ClearCase otherwise. For failed builds, all we could have was the ‘make’ output. If I remember correctly, I wasted four or five build cycles on experiments — each taking not quite long enough to get anything else done in the meantime — before I gave up and ripped out all the @ signs in the Makefile, which made the cause of the problem obvious in the build logs and the fix straightforward. Now you might say that this is a scenario from a dysfunctional, dying company, and one involving GCC, whose makefiles can be used to scare small children; your software won’t ever be built in such an awkward environment, and won’t have makefiles like that anyway. But the Debian build-daemon network works exactly the same way as the build system at this company. You can’t do trial builds on it, you can only make new package uploads; you can’t see how the systems are configured; if a build fails, all you get is the logs. And Automake’s compiler invocations are already bad enough to make me worry about this kind of problem — If $(srcdir)is wrong, or if somehow a file named roster_delta.cchas gotten created in the build directory when it shouldn’t have, this is going to go off the rails. And if all I can see in the build logs is then I’m hosed, aren’t I? And I get to waste at least one build cycle digging through your makefiles to find out what crazy thing you’ve done to suppress the real command lines and turn it off. You might want to turn off smart quotes for this entry so people don’t copy and paste them by accident.
http://blogs.gnome.org/ebassi/2008/01/25/paint-the-silence2/
CC-MAIN-2014-52
refinedweb
704
53.75
namespace problem on IE8 [Solved] Hi I was happy to try Ext: 4.2 when I found a strange thing in Internet Explorer 8 : I was unable to use the namespace. For Example, this tiny sample send a pop-up on fire-fox. But the Javascript crash on the assignation line. What have I done wrong ? Thank for any advice. HTML Code: <html> <header> <script type="text/javascript" src="ext.js"></script> <script type="text/javascript"> Ext.onReady(function() { Ext.ns("DoD.var"); DoD.var.urlRoot = 'foo'; alert(DoD.var.urlRoot); }); </script> </header> <body> </body> </html> Last edited by norto; 4 Apr 2013 at 12:14 AM. Reason: Solved - Join Date - Apr 2007 - Location - Sydney, Australia - 17,690 - Vote Rating - 769 'var' is a reserved word, use something else.Evan Trimboli Sencha Developer Twitter - @evantrimboli Don't be afraid of the source code!
https://www.sencha.com/forum/showthread.php?260402-namespace-problem-on-IE8
CC-MAIN-2015-27
refinedweb
143
69.89
Setting autodoc_tree_index_modules makes documentation builds fail Bug Description The arguments originally being passed into sphinx.apidoc specified '.' as the path to index. Unfortunately this includes the setup.py module. Sphinx dies while trying to process the setup.rst likely because the setup.py module calls setuptools.setup() when imported causing some sort of recursion. The final result is something like: 2013-12-08 21:08:12.088 | reading sources... [ 80%] api/setup 2013-12-08 21:08:12.100 | /usr/lib/ 2013-12-08 21:08:12.101 | warnings.warn(msg) 2013-12-08 21:08:12.102 | /usr/lib/ 2013-12-08 21:08:12.102 | warnings.warn(msg) 2013-12-08 21:08:12.103 | usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] 2013-12-08 21:08:12.103 | or: setup.py --help [cmd1 cmd2 ...] 2013-12-08 21:08:12.104 | or: setup.py --help-commands 2013-12-08 21:08:12.104 | or: setup.py cmd --help 2013-12-08 21:08:12.104 | 2013-12-08 21:08:12.105 | error: invalid command 'build_sphinx' 2013-12-08 21:08:12.622 | ERROR: InvocationError: '/home/ I did submit a patch (https:/ diff --git i/pbr/packaging.py w/pbr/packaging.py index a066d3b..37b41c6 100644 --- i/pbr/packaging.py +++ w/pbr/packaging.py @@ -636,6 +636,9 @@ try: from sphinx import config from sphinx import setup_command + import sys + sys.modules[ + class LocalBuildDoc( This is sort of a hack and I'm not entirely sure where the best place to put those lines. Right now this is not on Keystone's critical path because I have created a workaround: http:// An even simpler potential workaround is to add 'api/setup.rst' to the exclude_patterns list in Sphinx's conf.py. Pbr will still make it generate doc/source/ @Benjamin, I don't think that will fix the problem. It's not just that I don't want the setup to be a part of the code documentation, it's that the process of creating the docs will import setup.py. It may be possible to use your idea and wrap our setup function invocation in "if __name__ == '__main__'" to get around both issues. We successfully use this feature in several projects now. Is this still a problem? As far as I know it is still a problem. Other projects are using autodoc_ We have some patches up for oslo projects that use autodoc_ Doug, that's a different configuration option. It's autodoc_ Here is the bug in action! With an invalid exclude path set the documentation built correctly, but included the setup module: http:// When I corrected the commit to use api/setup.rst as recommended I get an error because Keystone we turn warnings into errors. From https:/ 2014-09-23 22:59:46.925 | raise SphinxWarning( 2014-09-23 22:59:46.926 | sphinx. 2014-09-23 22:59:46.926 | 2014-09-23 22:59:47.066 | ERROR: InvocationError: '/home/ Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit 0d6bfaf2e3f8397 Author: David Stanek <email address hidden> Date: Tue Sep 9 20:25:27 2014 +0000 Adds option for excluding files from autodoc trees The arguments originally being passed into sphinx.apidoc specify '.' as the path to index. Unfortunately this includes the setup.py module. This causes Sphinx to complain or break depending on the configuration. This patch ignores setup.py by default and allows the project to override it in their setup.cfg. Change-Id: I7c164d42a096ba Closes-Bug: #1260495 Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit 045e47938f44574 Author: David Stanek <email address hidden> Date: Mon Sep 15 18:38:25 2014 +0000 Removes temporary fix for doc generation A temporary fix was added to get around a bug in how pbr handles its autodoc_ longer need the work around. Change-Id: Id8274ef5c244bf Closes-Bug: #1260495 Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit 6f98a9e2bd7eed7 Author: David Stanek <email address hidden> Date: Wed May 13 12:01:16 2015 +0000 Removes temporary fix for doc generation A temporary fix was added to get around a bug in how pbr handles its autodoc_ longer need the work around. Change-Id: I6af0fdd6d1efac Closes-Bug: #1260495 Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit 07292c2d8437ca3 Author: Matt Riedemann <email address hidden> Date: Wed Jul 8 07:35:20 2015 -0700 Add more documentation around building docs After digging in pbr and sphinx source code for a day to figure out what I was doing wrong, let's update the pbr docs with respect to building docs using the autodoc features in pbr. Specifically, document the autodoc_ options and point out that you will probably need to set the exclude_ modules from the docs build and warnerrors=True. Also provide some links to the Sphinx docs for more details. Related-Bug: #1260495 Related-Bug: #1472276 Change-Id: Ib43830d08a156f Adding Keystone to make sure we update our code to use this when committed and remove the temporary extension.
https://bugs.launchpad.net/python-keystoneclient/+bug/1260495
CC-MAIN-2019-22
refinedweb
834
67.55
Control.LVish.DeepFrz Description The DeepFrz module provides a way to return arbitrarily complex data structures containing LVars from Par computations. The important thing to know is that to use runParThenFreeze to run a Par computation, you must make sure that all types you return from the Par computation have DeepFrz instances. This means that, if you wish to return a user-defined type, you will need to include a bit of boilerplate to give it a DeepFrz instance. Here is a complete example: import Control.LVish.DeepFrz data MyData = MyData Int deriving Show instance DeepFrz MyData where type FrzType MyData = MyData main = print (runParThenFreeze (return (MyData 3))) Synopsis The functions you'll want to use runParThenFreeze :: DeepFrz a => Par Det NonFrzn a -> FrzType aSource Under normal conditions, calling a freeze operation inside a Par computation makes the Par computation quasi-deterministic. However, if we freeze only after all LVar operations are completed (after the implicit global barrier of runPar), then we've avoided all data races, and freezing is therefore safe. Running a Par computation with runParThenFreeze accomplishes this, without our having to call freeze explicitly. In order to use runParThenFreeze, the type returned from the Par computation must be a member of the DeepFrz class. All the Data.LVar.* libraries should provide instances of DeepFrz already. Further, you can create additional instances for custom, pure datatypes. The result of a runParThenFreeze depends on the type-level function FrzType, whose only purpose is to toggle the s parameters of all IVars to the Frzn state. Significantly, the freeze at the end of runParThenFreeze has no runtime cost, in spite of the fact that it enables a deep (recursive) freeze of the value returned by the Par computation. runParThenFreezeIO :: DeepFrz a => Par d NonFrzn a -> IO (FrzType a)Source This version works for nondeterministic computations as well. Of course, nondeterministic computations may also call freeze internally, but this function has an advantage to doing your own freeze at the end of a runParIO: there is an implicit barrier before the final freeze. Further, DeepFrz has no runtime overhead, whereas regular freezing has a cost. Some supporting types DeepFreezing is a type-level (guaranteed O(1) time complexity) operation. It marks an LVar and its contents (recursively) as frozen. DeepFreezing is not an action that can be taken directly by the user, however. Rather, it is the final step in a runParThenFreeze invocation. Associated Types type FrzType a :: *Source This type function is public. It maps pre-frozen types to frozen ones. It should be idempotent. Instances An uninhabited type that signals that an LVar has been frozen. LVars should use this in place of their s parameter. Instances
http://hackage.haskell.org/package/lvish-1.0.0.6/docs/Control-LVish-DeepFrz.html
CC-MAIN-2016-44
refinedweb
448
53.81
Contrary to what most developers think, tree shaking isn’t very complicated. The discussion around the nomenclature (dead code elimination vs. tree shaking) can introduce some confusion, but this issue, along with some others, is clarified throughout the article. As JavaScript library authors, we want to achieve the most lightweight code bundle possible. In this post, I’ll walk you through the most popular patterns that deoptimize your code as well as share my advice on how to tackle certain cases or test your library. A bit of theory Tree shaking is a fancy term for dead code elimination. There is no exact definition of it. We can treat it as a synonym for dead code elimination or try to put only certain algorithms under that umbrella term. If we look at the definition listed on the webpack's docs page, it seems to be mentioning both approaches. “Tree shaking is a term commonly used in the JavaScript context for dead-code elimination. It relies on the static structure of ES2015 module syntax, i.e. import and export.” The first sentence implies it's a synonym while the second one mentions some specific language features that are used by this algorithm. Nomenclature dispute “Rather than excluding dead code (dead code elimination), we’re including live code (tree shaking elimination)”, distinguishes Rich Harris in his excellent post on the topic. One practical difference between both approaches is that the so-called tree shaking usually refers to the work done by bundlers, whereas dead code elimination is performed by minifiers, like Terser. As a result, the whole process of optimizing the final output often has 2 steps if we are discussing the creation of production-ready files. In fact, webpack actively avoids doing dead code eliminations and offloads some of that work to Terser while dropping only the necessary bits. All of this is to make the work easier for Terser, as it operates on files and has no knowledge of modules or the project structure. Rollup, on the other hand, does things the hard way and implements more heuristics in its core, which allows for generating less code. It's still advised to run the resulting code through Terser, though, to achieve the best overall effect. If you ask me, there is little point in arguing which definition is correct. It’s like battling over whether we should say function parameters or function arguments. There’s a difference in meaning, but people have been misusing the terms for so long that these terms became interchangeable in everyday use. Speaking of tree shaking, I understand Rich's point, but I also think that trying to distinguish separate approaches has introduced more confusion than clarification, and that ultimately, both techniques check the exact same things. That is why I'm going to use both terms interchangeably throughout this post. Why even bother? The frontend community often seems to be obsessed with the size of JavaScript bundles that we ship to our clients. There are some very good reasons behind this concern, and we definitely should pay attention to how we write code, how we structure our applications, and what dependencies we include. The primary motivating factor is to send less code to the browser, which translates to both faster download and execution, which in turn means that our sites can be displayed or become interactive faster. No magic The currently popular tools like webpack, Rollup, Terser, and others don't implement a lot of overly complicated algorithms for tracking things through function/method boundaries, etc. Doing so in such a highly dynamic language as JavaScript would be extremely difficult. Tools like Google Closure Compiler are much more sophisticated, and they’re capable of performing more advanced analysis, but they’re rather unpopular and tend to be hard to configure. Given that there is not that much magic involved in what those tools do, some things simply cannot be optimized by them. The golden rule is that if you care about the bundle size, you should prefer composable pieces rather than functions with tons of options or classes with a lot of methods, and so on. If your logic embeds too much and your users use only 10% of that, they will still pay the cost of the whole 100% – using the currently popular tooling there is just no way around it. General view on how minifiers and bundlers work Any given tool performing static code analysis operates on the Abstract Syntax Tree representation of your code. It's basically the source text of a program represented with objects which form a tree. The translation is pretty much 1 to 1, and converting between the source text and AST is semantically reversible – you can always deserialize your source code to AST and later serialize it back to the semantically-equivalent text. Note that in JavaScript things like whitespaces or comments don’t have semantic meaning and most tools don't preserve your formatting. What those tools have to do is figure out how your program behaves, without actually executing the program. It involves a lot of book-keeping and cross-referencing deduced information based on that AST. Based on that, tools can drop certain nodes from the tree once they prove that it won't affect the overall logic of the program. Side effects Given the language you use, certain language constructs are better than others for static code analysis. If we consider this very basic program: function add(a, b) { return a + b } function multiply(a, b) { return a * b } console.log(add(2, 2)) We can safely say that the whole multiply function isn’t used by this program and therefore doesn’t need to be included in the final code. A simple rule to remember is that a function can almost always be safely removed if it stays unused because a mere declaration doesn’t execute any side effects. Side effects are the most vital part to understand here. They are what actually affects the outer world, for example, a call to a console.log is a side effect because it yields an observable outcome of a program. It wouldn’t be OK to remove such a call as users usually expect to see it. It's hard to list all possible side effect types a program might have, but to name a few: - Assigning a property to a global object like window - Changing all other objects - Calling many builtin functions, like fetch - Calling user-defined functions that contain side effects The code that has no side effects is called pure. Minifiers and bundlers have to always assume the worst and play safe since removing any given line of code incorrectly can be very costly. It can tremendously alter the program's behavior and waste people's time on debugging bizarre problems that manifest only on production. (Minifying the code during development is not a popular choice.) Popular deoptimizing patterns and how to fix them As mentioned at the beginning, this article is dedicated primarily to library authors. Application development usually focuses on functionality, rather than optimization. Over-optimizing the aspects mentioned below in the application code is generally not advised. Why? The application codebase should contain only the code that’s actually in use – profits coming from the implementation of eyebrow-raising techniques would be negligible. Keep your apps simple and understandable. 💡 It's really worth noting that any advice given in this article is only valid for the initialization path of your modules, for what gets executed right away when you import a particular module. Code within functions, classes, and others is mostly not a subject of this analysis. Or to put it differently, such code is rarely unused and easily discoverable by linting rules like as no-unused-vars and no-unreachable. Property access This might be surprising, but even reading a property cannot be dropped safely: const test = someFunction() test.bar The problem is that the bar property might actually be a getter function, and functions can always have side effects. Given that we don't know much about someFunction, as its implementation might be too complex to be analyzed, we should assume the worst-case scenario: this is a potential side effect and as such cannot be removed. The same rule applies when assigning to a property. Function calls Note that even if we were able to remove that property read operation, we'd still be left with the following: someFunction() As the execution of this function potentially leads to side effects. Let's consider a slightly different example that might resemble some real-world code: export const test = someFunction() Assume that thanks to the tree shaking algorithms in a bundler, we already know that test isn’t used and thus can be dropped, which leaves us with: const test = someFunction() A simple variable declaration statement doesn't contain any side effects either, therefore it can be dropped as well: someFunction() In a lot of situations, however, the call itself cannot be dropped. Pure annotations Is there anything that can be done? It turns out that the solution is quite simple. We have to annotate the call with a special comment that the minifying tool will understand. Let's put it all together: export const test = /* #__PURE__ */ someFunction() This little thing tells our tools that if the result of the annotated function stays unused, then that call can be removed, which in turn can lead to the whole function declaration being dropped if nothing else refers to it. In fact, parts of the runtime code generated by bundlers are also annotated by such comments, leaving the opportunity of the generated code being dropped later. Pure annotations vs. property access Does /* #__PURE__ */ work for getters and setters? Unfortunately not. There isn’t much that can be done about them without changing the code itself. The best thing you could do is to move them to functions. Depending on the situation, it might be possible to refactor the following code: const heavy = getFoo().heavy export function test() { return heavy.compute() } To this: export function test() { let heavy = getFoo().heavy return heavy.compute() } And if the same heavy instance is needed for all future calls, you can try the following: let heavy export function test() { // lazy initialization heavy = heavy || getFoo().heavy return heavy.compute() } You could even try to leverage #__PURE__ with an IIFE, but it looks extremely weird and might raise eyebrows: const heavy = /* #__PURE__ */ (() => getFoo().heavy)() export function test() { return heavy.compute() } Relevant side effects Is it safe to annotate side-effectful functions like this? In the library context, it usually is. Even if a particular function has some side effects (a very common case after all), they are usually only relevant if the result of such a function stays used. If the code within a function cannot be safely dropped without altering the overall program's behavior, you should definitely not annotate a function like this. Builtins What might also come as a surprise is that even some well-known builtin functions are oftentimes not recognized as "pure" automatically. There are some good reasons for that: - The processing tool cannot know in what environment your code will actually get executed, so, for example, Object.assign({}, { foo: 'bar' })could very well just throw an error, like "Uncaught TypeError: Object.assign is not a function". - The JavaScript environment can be easily manipulated by some other code the processing tool isn’t aware of. Consider a rogue module that does the following: Math.random = function () { throw new Error('Oops.') }. As you can see, it's not always safe to assume even the basic behavior. Some tools like Rollup decide to be a little bit more liberal and choose pragmatism over guaranteed correctness. They might assume a non-altered environment, and in effect, allow to produce more optimal results for the most common scenarios. Transpiler-generated code It's rather easy to optimize your code once you sprinkle it with the #__PURE__ annotations, given you’re not using any additional code-transpiling tools. However, we often pass our code through tools like Babel or TypeScript to produce the final code that will get executed, and the generated code cannot be easily controlled. Unfortunately, some basic transformations might deoptimize your code in terms of its treeshakeability, so sometimes, inspecting the generated code can be helpful in finding those deoptimization patterns. I’ll illustrate, what I mean, with a simple class having a static field. (Static class fields will become an official part of the language with the upcoming ES2021 specification, but they are already widely used by developers.) class Foo { static defaultProps = {} } Babel output: class Foo {} _defineProperty(Foo, "defaultProps", {}); TypeScript output: class Foo {} Foo.defaultProps = {}; Using the knowledge gained throughout this article, we can see that both outputs have been deoptimized in a way that might be hard for other tools to handle properly. Both outputs put a static field outside the class declaration and assign an expression to the property – either directly or through the defineProperty call (where the latter is more correct according to the specification). Usually, such a scenario isn’t handled by tools like Terser. sideEffects: false It’s been quickly realized that tree shaking can automatically yield only some limited benefits to the majority of users. The results are highly dependent on the included code since a lot of the code in the wild uses the above-mentioned deoptimizing patterns. In fact, those deoptimizing patterns aren’t inherently bad and most of the time shouldn’t be seen as problematic; it’s normal code. Making sure that code isn’t using those deoptimizing patterns is currently mostly a manual job, so maintaining a library tree-shakeable tends to be challenging in the long run. It’s rather easy to introduce harmless-looking normal code that will accidentally start retaining too much. Therefore, a new way to annotate the whole package (or just some specific files in a package) as side-effect-free has been introduced. It's possible to put a "sideEffects": false in a package.json of your package to tell bundlers that files in that package are pure in a similar sense that was described previously in the context of the #__PURE__ annotations. However, I believe that what it does is vastly misunderstood. It doesn't actually work like a global #__PURE__ for function calls in that module, nor does it affect getters, setters, or anything else in the package. It's just a piece of information to a bundler that if nothing has been used from a file in such a package, then the whole file can be removed, without looking into its content. To illustrate the concept, we can imagine the following module: // foo.js console.log('foo initialized!') export function foo() { console.log('foo called!') } // bar.js console.log('bar initialized!') export function bar() { console.log('bar called!') } // index.js import { foo } from './foo' import { bar } from './bar' export function first() { foo() } export function second() { bar() } If we only import first from the module, then the bundler will know it can omit the whole ./bar.js file (thanks to the "sideEffects": false flag). So, in the end, this would be logged: foo initialized! foo called! This is quite an improvement but at the same time, it's not, in my humble opinion, a silver bullet. The main problem with this approach is that one needs to be extra careful about how the code is organized internally (the file structure, etc.) in order to achieve the best results. It’s been common advice in the past to "flat bundle" library code, but in this case, it’s to the contrary – flat bundling is actively harmful to this flag. This can also be easily deoptimized if we decide to use anything else from the ./bar.js file because it will only be dropped if no export from the module ends up being used. How to test this Testing is hard, especially since different tools yield different results. There are some nice packages that can help you, but I've usually found them to be faulty in one way or another. I usually try to manually inspect the bundles I get after running webpack & Rollup on a file like this: import 'some-library' The ideal result is an empty bundle – no code in it. This rarely happens, therefore a manual investigation is required. One can check what got into the bundle and investigate why it could have happened, knowing what things can deoptimize such tools. With the presence of "sideEffects": false, my approach can easily produce false-positive results. As you may have noticed, the import above doesn't use any export of the some-library, so it's a signal for the bundler that the whole library can be dropped. This doesn't reflect how things are used in the real world, though. In such a case I try to test the library after removing this flag from its package.json to check what would happen without it and to see if there’s a way to improve the situation. Happy tree shaking! Don't forget to check our other content on dev.to! If you want to collaborate with us on expanding the area of business messaging, visit our Developer Program! Discussion (2) Great article! Thank you for it! Awesome explanation of DCE. Here's something I wrote for beginners though not as detailed as your article: devopedia.org/dead-code
https://dev.to/livechat/tree-shaking-for-javascript-library-authors-4lb0
CC-MAIN-2021-21
refinedweb
2,919
52.6
17 June 2013 18:00 [Source: ICIS news] HOUSTON (ICIS)--Here is Monday’s midday ?xml:namespace> CRUDE: Jul WTI: $98.12/bbl, up 27 cents; Aug Brent: $106.22/bbl, up 29 cents NYMEX WTI crude futures worked modestly higher, tracking a rally in the stock market and receiving underlying support from geopolitical concerns that the Syrian civil war could escalate throughout the region. WTI topped out at $98.74/bbl before running out of steam. RBOB: Jul $2.8819/gal, down by 1.48 cents/gal Reformulated blendstock for oxygen blending (RBOB) gasoline futures were slightly lower in early trading. Prices were volatile, trading on either side of Friday’s close throughout, as traders were concerned that supplies would surge in the Midwest now that a large refinery in NATURAL GAS Jul: $3.826/MMBtu, up 9.3 cents The July contract on the NYMEX natural gas market surged through Monday morning trading, boosted by warmer weather forecasts for the US northeast and Midwest over the coming two weeks. ETHANE: higher at 23.75 cents/gal Ethane spot prices were higher as natural gas futures strengthened in early trading. AROMATICS: Benzene wider at $4.25-4.38/gal Prompt benzene spot prices were discussed within a wider range early in the day, as offers firmed. The morning range was further apart from $4.25-4.30/gal FOB (free on board) the previous session. OLEFINS: Ethylene steady at 59-60 cents/lb, PGP offered higher at 64.5-66.0 cents/lb US June ethylene spot prices were steady at 59-60 cents/lb as market players continued to wait for more information from the Williams
http://www.icis.com/Articles/2013/06/17/9679296/noon-snapshot---americas-markets-summary.html
CC-MAIN-2015-14
refinedweb
278
67.65
I am working on desktop game so that want to move focus using arrow keys. Default initial button focus, I have already setup. As well by pressing arrow keys controls is also moving from one button to other but I want to set up scale up and down animation. When button got focus, I want to do scale up for that button. When button lost focus, I want to do normal scale for that button. <-- I want to implement this thing. I found this way but this don't contains scale up and down feature. This contains sprite change functionality. For UI setup, I have used new Unity UI, so my all UI controls belongs to that. There is no question here. Edit your post to fill out more information including what you have tried so far. @meat5000, what I need to do that I don't know!! So that I asked here. I thought Unity has definitely some mechanism exist for this stuff that I don't know. Well, by 'focus' do you mean Highlighted or Selected Look at the selectable class, perhaps, and just place your scaling code within the OnSelect callback or just use the isHighlighted flag. Answer by meat5000 · May 06, 2016 at 12:20 PM This is actually very easy. Just put this on each Selectable Object you wish to scale on Select. using UnityEngine; using System.Collections; using UnityEngine.UI; using UnityEngine.EventSystems; public class ScaleButton : MonoBehaviour, ISelectHandler, IDeselectHandler{ Vector2 offMax; Vector2 offMin; RectTransform myRT; void Start () { myRT = transform as RectTransform; offMax = myRT.offsetMax; offMin = myRT.offsetMin; } public void OnSelect(BaseEventData data) { myRT.offsetMax = offMax * 1.1f; myRT.offsetMin = offMin * 1.1f; Debug.Log(gameObject.name + " Selected"); } public void OnDeselect(BaseEventData data) { myRT.offsetMax = offMax; myRT.offsetMin = offMin; Debug.Log(gameObject.name + " Deselected"); } } Get the transform as RectTransform and cache the current offset values. Scale the offsets in Select and put them back to the cached value on Deselect. @meat5000, Thanks for your awesome reply but I have one question now. If I have 6 buttons then I need to write 6 scripts like this? any way exist through which I can detect button as well. Simply use this to detect which button is pressed. public void OnSelect(BaseEventData data) { if(data.selectedObject.name == "$$anonymous$$yButton") { myRT.offset$$anonymous$$ax = off$$anonymous$$ax * 1.1f; myRT.offset$$anonymous$$in = off$$anonymous$$in * 1.1f; } } No, you dont do like that! You create 1 script, and apply the same script to each button. on select event it will trigger own instance of the script, you dont even need to care of the button name @meat5000, why we are changing offset value here? I want scale up down of selected object. As I mentioned in a comment, actually changing the scale of the RectTransform can alter the sprite in a way that prevents click detection in the correct areas. The proper way to scale a UI element is to adjust the corners of the rectangle with respect to the position of the Anchors. Did you try the script? The buttons will grow by 10% when clicked. The pointers in the Standalone Input module take care of the touch and mouse, since the Touch Input $$anonymous$$odule was deprecated. Check out the class in the Scripting API for more info. The end result will be the same. The offset values are a normalised scale respective to the position of the anchors. You should place the anchors where you want your reference box to be for that element. For example you can position your anchors at the biggest size you want the element to be and in the correct position and then use an offset value of 0.9 and 1 ins$$anonymous$$d of 1 and 1.1. Really using the offsets is just one way of doing it. If you read the RectTransform docs you will discover many more useful variables and methods to help you get the behaviour you want. The important thing in this answer was the Implementation of the interfaces in order to use the desired functions for each behaviour. What you do within those functions is up to you. Just remember that there is much useful information provided to these Interfaces through BaseEventData and PointerEventData. Answer by Briksins · May 06, 2016 at 11:32 AM You need to use custom script, where you detect when button in focus According to API "Button" Class implements ISelectHandler which has method OnSelect, it should fire off each time when your button selected. Once you capture this event you need to get selected button RectTransform component and scale from there @Briksins, Okay I understand your custom script concept. but how to detect particular controls focus enter and focus leave? just updated my response with exact class names, interfaces and method name Note that some users report that adjusting the scale of a UI element messes with the click detection. Scale using anchors etc. @Briksins and @meat5000, now let me implement all this. @Briksins, this one also cool answer, I up voted your answer too. "No, you dont do like that! You create 1 script, and apply the same script to each button. on select event it will trigger own instance of the script, you dont even need to care of the button 68 People are following this question. RPG Maker style menu with UI buttons that are selected by controller 0 Answers Create a new event for Button UI using script. 1 Answer UI Button onclick not detected 0 Answers Holding down UI Button and shooting raycast from touch position doesn't work simultaneously:(( 0 Answers Selecting a selectable using script doesn't seem to work anymore? 1 Answer EnterpriseSocial Q&A
https://answers.unity.com/questions/1182238/ui-element-got-focus.html
CC-MAIN-2022-40
refinedweb
956
66.94
Tuple in list has less items than the function needs Python 3 I'm kinda new to Programming and Python and I'm self learning before going to uni so please be gentle, I'm a newbie. I hope my english won't have too many grammatical errors. Basically I had this exercise in a book I'm currently reading to take a list of tuples as a function parameter, then take every item in the each tuple and put it to 2nd power and sum the items up. My code looks like this and works good if my function call includes the same amount of arguments as the function for loop requires: def summary(xs): for x,y,z in xs: print( x*x + y*y + z*z) xs =[(2,3,4), (2,-3,4), (1,2,3)] summary(xs) However, If I use a list with less tuples than the function definition, I get an error: ValueError : not enough values to unpack(expected 3, got 0): xs =[(2,3,4), (), (1,2,3)] I would like to know how to make a function that would accept a tuple I shown before () - with no tuples, and the function would return 0. I have been trying multiple ways how to solve this for 2 days already and googling as well, but it occurs to me I'm either missing something or I'm not aware of a function i could use. Thank you all for the help. One way is to iterate over the tuple values, this would also be the way to tackle this problem in nearly every programming language: def summary(xs): for item in xs: s = 0 for value in item: s += value**2 print(s) Or using a list comprehension: def summary(xs): for item in xs: result = sum([x**2 for x in item]) print(result) also note that sum([]) will return 0 for an empty iterable. 5. Data Structures, Remove the first item from the list whose value is x. A list comprehension consists of brackets containing an expression followed by a for clause, then zero or apply a function to all the elements >>> [abs(x) for x in vec] [4, 2, 0, 2, Tuples may be nested: u = t, (1, 2, 3, 4, 5) >>> u ((12345, 54321, 'hello! Learn how to code in Python. Python 3 List Methods & Functions. Returns the position of the first list item that has a bytes, tuple, list, or range) or Well, the issue is that you don't have enough indices in your inner tuple to unpack into three variables. The simplest way to go around it is to manually unpack after checking that you have enough variables, i.e.: def summary(xs): for values in xs: if values and len(values) == 3: x, y, z = values # or don't unpack, refer to them by index, i.e. v[0], v[1]... print(x*x + y*y + z*z) else: print(0) Or use a try..except block: def summary(xs): for values in xs: try: x, y, z = values # or don't unpack, refer to them by index, i.e. v[0], v[1]... print(x*x + y*y + z*z) except ValueError: # check for IndexError if not unpacking print(0) Why no tuple comprehension? - Users, It was only many versions later, in Python 3, that dict and set (24, 'word', 2.5). rather than a long sequence of homogeneous items, like lists, so there is less need for a tuple comprehension. If you need a tuple, use a Do you know what does this function do and what are its performance characteristics? Instead of permanently deleting a tuple, Python moves it to a free list if the tuple has less than 20 items. >>> a = (1,2,3) >>> id(a) 140235617388224 >>> del a >>> b = (4,5,6) >>> id(b) One way is to use try / except. In the below example, we use a generator and catch occasions when unpacking fails with ValueError and yield 0. While you are learning, I highly recommend you practice writing functions which return or yield rather than using them to def summary(xs): for item in xs: try: yield sum(i**2 for i in item) except ValueError: yield 0 xs = [(2,3,4), (), (1,2,3)] res = list(summary(xs)) print(res) [29, 0, 14] Or to actually utilise the generator in a lazy fashion: for i in summary(xs): print(i) 29 0 14 5. Data Structures, list.remove(x): Remove the first item from the list whose value is x. It is an If the expression would evaluate to a tuple, it must be parenthesized. Here we take a list of numbers and return a list of three times each number: For example, a < b == c tests whether a is less than b and moreover b equals c. When comparing the built-in functions for Python Tuple and the list, Python Tuple has lesser pre-defined built-in functions than the lists. A few of the advantages of lists against the Python Tuple are that the list can be defined for variable lengths and can be copied from one destination to another, but Python Tuple can have only fixed You should use the "len > 0" condition. This code should work for any list or tuple length : def summary(xs): for tup in xs: prod = [a*a for a in tup if len(tup)>0] print(sum(prod)) Note that I defined a "prod" list in order to use "sum" so that it is not calculated the hard way. It replaces your "x* x + y* y + z* z" and works for any tuple length. Optimization tricks in Python: lists and tuples, Python has two similar sequence types such as tuples and lists. a = (1,2,3) >>> a[0] = 10 Traceback (most recent call last): File "<stdin>", line 1, working with arguments and parameters; returning 2 or more items from a function If a tuple no longer needed and has less than 20 items instead of deleting. It often pays to separate your algorithm into functions that just do one thing. In this case a function to sum the squares of a list of values and a function to print them. It is very helpful to keep your variable names meaningful. In this case your xs is a list of lists, so might be better named xss import math def sum_of_squares(xs): return sum(map(math.sqr, xs)) def summary(xss): for xs in xss: print sum_of_squares(xs) xss = [(2,3,4), (), (1,2,3)] summary(xss) or map(print, sum(map(math.sqr, (x for x in xs)))) 18 Most Common Python List Questions, Discover how to create a list in Python, select list elements, the difference This is why lists, strings, tuples, and sets are called “iterables”. a certain element is part of your sequence) will go faster with sets than with lists. It's an anonymous function, so you need to pass it your list element to make sure In above code we have tuple a and list b with same items but the size of tuple is less than the list. Different Use Cases At first sight, it might seem that lists can always replace tuples. Python Tuples Tutorial, Similar to Python lists, tuples are another standard data type that allows you to It is helpful to know some built-in functions with tuples and you will see some Tuples are initialized with () brackets rather than [] brackets as with lists. Tuple 'n_tuple' with a list as one of its item. n_tuple = (1, 1, [3,4]) #Items Python has 3 methods for deleting list elements: the item at index 3, from the list, you could use: list can be changed or modified after its creation according to needs whereas tuple has Lists and Tuples in Python – Real Python, You'll cover the important characteristics of lists and tuples in Python 3. In short, a list is a collection of arbitrary objects, somewhat akin to an array in many other Lists can even contain complex objects, like functions, classes, and modules, which You specify the index of the item to remove, rather than the object itself. A. Tuples, Syntactically, a tuple is a comma-separated list of values: Without the comma Python treats ('a') as an expression with a string in parentheses that Another way to construct a tuple is the built-in function tuple. print t[1:3] ('b', 'c') be checked to see if it is greater than, less than or equal to another value of the same type. Stack Overflow Public questions and answers; Teams Private questions and answers for your team; Enterprise Private self-hosted questions and answers for your enterprise; Talent Hire technical talent
http://thetopsites.net/article/50709759.shtml
CC-MAIN-2020-40
refinedweb
1,461
61.09
This section demonstrates you the use of close() method. Description of code:Streams represent resources which is to be clean up explicitly. You can done this using the method close(). This method automatically flush out the stream. It is necessary to close the stream after performing any file operation before exiting the program otherwise you could lose buffered data. In the given example, we have used BufferedWriter class along with FileWriter class to write some text to the file. The method write() of BufferedWriter class writes the text into the file. The method newLine() writes the line separator and using the method close(), we have closed the stream and keep the data safe. Here is the code: import java.io.*; public class FileClose { public static void main(String[] args) throws Exception { File file = new File("C:/data.txt"); if (file.exists()) { BufferedWriter bw = new BufferedWriter(new FileWriter(file, true)); bw.write("Welcome"); bw.newLine(); bw.close(); } } } In the above code, we have used close() method to flush out the stream. It is essential as it could leak the resources.
http://www.roseindia.net/tutorial/java/core/files/fileclose.html
CC-MAIN-2014-41
refinedweb
179
76.32
Pádraig Brady wrote: > On 09/04/10 14:41, jeff.liu wrote: >> Hello All, >> >> Please ignore the previous patchsets, there is an issue I just fixed. >> >> The revised version were shown as following: >> >> From: Jie Liu <address@hidden> >> Date: Fri, 9 Apr 2010 21:31:27 +0800 >> Subject: [PATCH 1/2] Add fiemap.h for fiemap ioctl(2) support. >> >> diff --git a/src/fiemap.h b/src/fiemap.h >> new file mode 100644 >> index 0000000..d33293b >> --- /dev/null >> +++ b/src/fiemap.h > > I guess we should emulate this only when fiemap.h is not available > For now, I do check as below, #ifndef HAVE_FIEMAP # include "fiemap.h" #endif >> +# include <linux/types.h> > > The emulation should not need the above #include > Thanks for pointing this out. >> From: Jie Liu <address@hidden> >> Date: Fri, 9 Apr 2010 21:38:23 +0800 >> Subject: [PATCH 1/1] Add fiemap copy for cp(1). >> This feature is intended to for optimization of backup sparse files. >> >> Fiemap copy can be trigger via 'cp --fiemap=[WHEN]', if 'fiemap=auto' >> specify and >> the underlying FS does not support FIEMAP or fiemap copy failed, fall back to >> normal copy. > > I'm not convinced this feature needs any options. > I.E. any reason for not always trying this and falling back if not possible? > At first, I only consider to optimize the sparse file copy with the benefits of fiemap (i.e., avoid examining holes). If this feature does not affect the cp(1) semantics, maybe we can always do fiemap copy. I have done some tests, the performance of fiemap copy is pretty much the same thing by comparing to the normal copy for the regular files. > cheers, > Pádraig. Cheers, -Jeff
https://lists.gnu.org/archive/html/coreutils/2010-04/msg00011.html
CC-MAIN-2019-13
refinedweb
282
75.91
Aug 01, 2011 11:58 AM|drazic19|LINK Hi, I have a typical scenario where I have a parent entity that can map to multiple child entities. On the parent edit form I have a ListBoxFor that shows all the possible child entities. The user can select more than one option and post the form. At the moment i've got it so the return types are an int (the parent id) and formcollection. I then use UpdateModel to perform the update and then save. All is good apart from the fact the ListBox is ignored completely. I can see in the formcollection that the listbox is listed in the keys but how do I get it, presumable cast it and iterate through the selected items create child entities for each? Model is: public class RouteFormViewModel { //Properties public Route Route { get; private set; } public IEnumerable<SelectListItem> RouteTypes { get; set; } //Constructor public RouteFormViewModel(Route route) { GenericRepository genericRepository = new GenericRepository(); Route = route; RouteTypes = genericRepository.FindAllRouteTypes(); } } The View is: <p> <%: Html.Label("", "Categories") %> <%: Html.ListBoxFor(model => model.Route.RouteTypeMappings, Model.RouteTypes) %> </p> The POST Controller is: public ActionResult Edit(int id, FormCollection collection) { //Get Route Route route = routeRepository.GetRoute(id); // check model state if (ModelState.IsValid) { try { //update route UpdateModel(route, "Route"); // ** Need to save all selected items from listbox ** routeRepository.Save(); return RedirectToAction("View", new { id = route.rID }); } catch (Exception e){ return View(new RouteFormViewModel(route)); } } return View(new RouteFormViewModel(route)); } Any help of suggestions would be great. Thanks, Michael mvc All-Star 49334 Points Aug 01, 2011 04:06 PM|bruce (sqlwork.com)|LINK what is the definition of RouteTypeMappings? it should be a List<> or array. mvc All-Star 156233 Points Moderator MVP Aug 01, 2011 08:30 PM|ignatandrei|LINK; } All-Star 49334 Points Aug 01, 2011 09:21 PM|bruce (sqlwork.com)|LINK ignatandrei; } in the above code RouteTypes is the dropdown's list values, so IEnumerable is fine. Aug 04, 2011 03:02 PM|drazic19|LINK Hi Bruce, Indeed you're right about RouteTypes being a simple dropdown and therefore IEnumerable works fine. To answer your previous question RouteTypeMapping (poorly named) is meant to be a collection of RouteTypes attached to the main Route entity. It's a one to many relationship. The RouteTypeMapping is a simple table of IDs - mappingID, routeID, routeTypeID. The goal is to post back the updated Route plus it's RouteTypeMapping child items. Thanks for the help. 4 replies Last post Aug 04, 2011 03:02 PM by drazic19
http://forums.asp.net/t/1706029.aspx
CC-MAIN-2014-35
refinedweb
417
57.57
As we learned in the previous lesson, dealing with Exceptions properly is a big part of keeping our applications stable, user friendly, and performant. In this lesson, we'll learn how to create custom exceptions we can use to tailor our applications even more to our user's needs. As we don't have a frontend for our application, this will help us out a lot - after all, what if someone sends data to our application that doesn't arrive in the right format, or creates some kind of error? We don't have a frontend to control what the user submits even a little bit. Exceptions can really help us out here. Let's create a custom exception that displays an error message, and an HTTP status code when a user tries to submit something incorrectly. In your src/main/java package, create a new package called exceptions. Exceptions aren't mystical beings from faraway lands - they are actually just normal, Plain Jane, Average Joe, boring old Java objects. Let's write one! Create a new Java class and call it ApiException. Add the following code: package exceptions; public class ApiException extends RuntimeException{ private final int statusCode; public ApiException (int statusCode, String msg){ super(msg); this.statusCode = statusCode; } public int getStatusCode() { return statusCode; } } This is a short file - it doesn't need to be long, because it is, as you can see, inheriting functionality from the RuntimeExeption class. You are probably wondering what this line is about: super(msg); What is happening here is called calling super or calling to super or even super call. What this means, in plain English, is that we are calling the superclass' (i.e. the object class we are inheriting from, the parent class) constructor in order to instantiate an unnamed object at this location. The next line, this.statusCode = statusCode; is affecting that object that the call to super created. Because we don't really need to have the exception object here, the call to super() is more concise and carries less overhead. If you are curious about learning more about what this means, the Java Docs for once have a fairly straightforward explanation you can read (here)[] Now it's time to implement our custom exception. Let's throw a new error if a user tries to access a route with an id that doesn't exist. Change your route handler for a single restaurant as follows: get("/restaurants/:id", "application/json", (req, res) -> { int restaurantId = Integer.parseInt(req.params("id")); Restaurant restaurantToFind = restaurantDao.findById(restaurantId); if (restaurantToFind == null){ throw new ApiException(404, String.format("No restaurant with the id: \"%s\" exists", req.params("id"))); } return gson.toJson(restaurantToFind); }); If we boot up postman and fire a request at localhost:4567/restaurants/132, we will see the following in our terminal window: [qtp1736140021-16] ERROR spark.http.matching.GeneralError - exceptions.ApiException: No restaurant with the id: "132" exists at App.lambda$main$3(App.java:85) Right on! So this is where these errors come from. But, we are still seeing an ugly 500 server error in our Postman window, which is not ideal, it would be better if we displayed a custom message here too that the user can see when they are using the API. Let's work on that next. We can use a filter, similar to our after() filter to improve how we handle our errors. Open up your App.java, and add the following code above your after filter: exception(ApiException.class, (exc, req, res) -> { ApiException err = (ApiException) exc; Map<String, Object> jsonMap = new HashMap<>(); jsonMap.put("status", err.getStatusCode()); jsonMap.put("errorMessage", err.getMessage()); res.type("application/json"); //after does not run in case of an exception. res.status(err.getStatusCode()); //set the status res.body(gson.toJson(jsonMap)); //set the output. }); Similar to how a filter can run before or after every route, and a route runs whenever a URL is requested, this exception rule runs whenever an exception is generated by the server. If you implement the code above and and fire a request at localhost:4567/restaurants/132 again, you'll see our ugly 500 server error has been replaced by a new output (in JSON!) in Postman. This is much better. Let's walk through exactly what happens in this exception handler. exception(ApiException.class, (exception, req, res) -> { When an exception happens, generated an exception object. ApiException err = (ApiException) exception; Cast the generic exception object as a specific ApiException Map Cool. Now you have yet another tool in your toolbox that make your apps communicate well with your user, and handle issues gracefully. You can use you knowledge of custom exceptions to improve your Blog or To Do List by building custom 404 or error pages, redirecting users when they hit 404's, and much, much more.
https://www.learnhowtoprogram.com/java/api-development-extended-topics/defining-and-using-custom-exceptions
CC-MAIN-2019-04
refinedweb
806
55.44
Easy ways to make your python code more Concise and efficient! I just wanted to share some tricks to make code more concise and more efficient that are really simple to implement F-Strings F-Strings come in really handy when you need a clean way to use strings with variables, and to format them! If you want to make your strings and prints look much cleaner, and perform better, F-strings are by far the easiest way, lets say we have some variables: name='bob' address='123 Park Place' Hours=(9,17) A lot of people would print it like this: import random print('Hello '+name+',\nwe have a package to deliver to you at '+address+', If it is alright with you, we will be dropping your package off at '+str(random.choice(list(set(range(24))-set(range(*Hours)))))+' oclock') While that approach certainly works, It looks quite a bit cleaner to use F-Strings An F string is declared by putting an f before quotations like f'' or f"", all you have to do to get your variables in your string, is to wrap them in brackets within those quotations, I.E f'Hello, {name}', If we were to do what we did above with F-strings this is how it would appear: print(f'Hello {name},\nwe have a package to deliver to you at {address}, if it is alright with you, we will be dropping youre package off at {random.choice(list(set(range(24))-set(range(*Hours))))} oclock') And lets say we are just making a program that says hi, name=input() print('hello, '+name+' how are you doing') print(f'hello, {name} how are you doing') The f-string looks quite a bit cleaner, and also allows you to use formatting to do things like this: print(f'{"hello":->10} {name},\n how are you doing') The :->10 just means, ( : means this is going to format) ( - Can be any character you want to be used as padding (or none at all)) ( > implies which direction, you can also use ^ or <) ( 10 is how much to pad it by, doing a higher number would move it further) Lambdas, Maps, Listcomps, Zips, and Filter, and Enumerate (tools for iteration) Listcomps and maps One of the easiest ways to make your code more precise, and potentially faster is with the use of These handy functions and loop syntax, Lets say that we are trying to remove all odd numbers from a list of numbers that have been converted to strings, The long way to accomplish this would probably be something along the lines of this list1 = ['0', '7', '21', '3', '0', '8', '13', '10', '21', '8', '19', '17', '3', '5', '2', '18', '18', '2', '15', '13', '20', '1', '22'] #list for the converted strings list2=[] for num in list1: #check if even if int(num)%2==0: #add to final list list2.append(int(num)) That method is definitely Great, but it can be vastly improved upon with something called a list comprehension, a list comprehension is essentially the concise version of the code above. A list comprehension is always enclosed in brackets, always iterates over something and always returns a list, here are a few examples of the syntax in different scenarios: If you want an if else, this is the syntax: newlist = [<output> if <condition> else <output> for item in iterable] #an example of this in use evenodd=['even' if number%2==0 else 'odd' for number in range(30)] What is happening here is that it checks the conditional every time it iterates over an item, and the output is what will appear in the position it was on in the old list. The next syntax style is when you only want items if a certain condition is true, I.E We only want to keep the item if it has a 'b' in it l=['bad','bass','oops','fish','salt','bin'] onlybs=[text for text in l if 'b' in text] #^the code above basically goes through these steps #for every text in the list #if there is the letter b in text #add that text to the new list #That listcomp is equivelant to this: newl=[] for text in l: if 'b' in text: newl.append(text) Now that we now how to use listcomps,our adjusted odd remover would like this: list2=[x for x in list1 if int(x)%2==0] In that specific case, converting to an int before checking if it was even was fine, but in some situations, youl end up needing to convert the whole list before you iterate, which is where maps come in. ap is a function that needs a function, and an iterable, and all it does is apply that function to every iterable. There is one caveat to maps though, the first one being that it doesnt actually store all the values until converted to a list, or iterated over, and the second one being that due to its unique implementation, it is quite a bit faster then listcomps when given a builtin method I.E #map(function,iterable) strin=map(str,range(50)) print(strin) print(list(strin)) what we did there was apply str() to every item in range(50), which is a concise and easy way to convert large amounts of items types. zip, enumerate and filter enumerate A lot of times when iterating, knowing which index you are on is useful, and thats exactly what enumerate is for. Simply all it does is simplify counting which index you are on. c=0 for x in range(50): print(x) c+=1 #this can be done with enumerate like this for index,item in enumerate(range(50)): print(f'{item} is at {index}th index') What enumerate does is take an iterable, and create a new one that is a list of tuples where the first item is the index, and the second item is the actual part of the itarable. when iterating over an enumerate, it is best to unpack it into two variables. Her is a real world use case for enumerate: def findall(iterable,target): res=[] for index,item in enumerate(iterable): if item==target: res.append(index) return res in listcomp form: def findall(iterable,target): return [count for count,item in enumerate(iterable) if item==target] filter lets say we want to remove all digits from a string, this could be done with a listcomp or a normal loop, But filter can do it much more concisely lol='eufu803fnhanus830j' rmdigit=''.join([x for x in lol if x.isdigit()==False]) rd=''.join(filter(str.isalpha,lol)) Just like maps, filter is only better than a listcomp performance-wise if it is using a builtin function Zips lets say we have a list of cars, a list of their colors,a list of their model numbers, and a list of their years, and we want to have all of them together, the best way to do this is with the zip function, instad of iterating over all the lists. name=['corola','tokoma','aventador','focus'] numb=[334,556,7778,3321] colors=['green','beige','grey','pink'] years=[2020,2019,2021,2013] cars=zip(name,numb,colors,years) #as apposed to cars2=[] for x in range(len(name)-1): cars2.append(name[x],numb[x],colors[x],years[x]) Although the zip doesn't seem much shorter, than the naive method, it does a lot more in cases where lists could be uneven. It automatically accounts for the lengths of the iterates which means it wont error if one is longer than the other. Another great use of zips is in portioning out an iterable i.e: foo=[1,4,5,7,83,3,13,56,6] #we want to divid it into sections of 3 bar=zip(foo[::3],foo[1::3],foo[2::3]) Lambdas Though lambdas dont really have anything to do with iterables, they are quite often used within maps filters and other things. Basically a lambda is a mini one line function Here is how you define one: =lambda : I.E printdown=lambda s:print('ew') if 'q' in s else print('\n'.join(list(s))) Using sets for better SPEED, and more concise functions sets are a unique type of data structure, they cant have duplicates, dont have an order, are slow to iterate over but are huge speed boosts in certain scenarios. Detecting duplicates One of the most useful things that sets can help you with is detecting/removing duplicates, because a set conversion removes duplicates, it will be shorter than what it was converting from which allows you to use stuff like isdupe=lambda i:len(set(i))==len(i) instead of def isdupe2(i): for item in i: if i.count(item)!=1: return False return True That second example actually brings us to another efficiency trick with set Making count more effecient This one is pretty simple, you dont to need more counts of items than there are items so reducing a list to just unique items makes getting the count of a given item much faster def isdupe2(i): for item in set(i): if i.count(item)!=1: return False return True or the very barebones count dictionary (using a dictcomp :D) frequency = lambda t:{l:t.count(l) for l in set(t)} Super set,Subset,difference Finding wheather all items in one thing are countained in another can best be done by converting iterables to sets,and then using the built issubset and issuperset methods subset =lambda x,y:set(x).issubset(y) In speed One of the most important thing about sets is that checking if an item is contained is significantly faster with a set than with a list or a tuple i.e l=[7,33,3282801,83] g=set(l) s=7 in l u=7 in g Removing duplicates, set and dict.fromkeys If you want to remove all duplicates from a list and dont care about order, its as simple as this: nodupes=lambda t:list(set(t)) If you care about order: nodupes2=lambda r:list(dict.fromkeys(r)) more info shorthand if-else One of the easiest ways to make code look cleaner is to use shorthand if else if else i.e: def iseven(i): return True if i%2==0 else False #^shorthand def iseven2(i): if i%2==0: return True else: return False Dataclasses This one is pretty simple, The example on their docs pretty much explains why its so great from dataclasses import dataclass @dataclass class InventoryItem: name: str unit_price: float quantity_on_hand: int = 0 #looks a lot cleaner than class item2: def __init__(self, name: str, unit_price: float, quantity_on_hand: int=0): self.name = name self.unit_price = unit_price self.quantity_on_hand = quantity_on_hand it makes classes meant for storing data a lot a easier Conclusion I hope this was helpful and that you had fun reading, and learned a lot! :) p.s if youre wondering about the performcance differences between map and listcomp, here ya go The essay typer includes a number of customer testimonials from satisfied customers who have achieved their goals using this writing service. The consumer experience with this service is full of good reviews that ensure sterling market reputation and popularity. As a writer, I'm working on an article that can describe this easily, currently I'm working in vector art company near me, with highly professional designer nad writers of the USA. Nice tutorial! Just a tip, add the pyextention at the end of the three ticks to get syntax highlighting, like this:
https://replit.com/talk/learn/Easy-ways-to-make-your-python-code-more-Concise-and-efficient/133554
CC-MAIN-2022-33
refinedweb
1,945
50.74
protocol-buffers: Parse Google Protocol Buffer specifications [ bsd3, library, text ] [ Propose Tags ] Parse proto files and generate Haskell code. Modules - Text - Text.ProtocolBuffers Downloads - protocol-buffers-1.4.0.tar.gz [browse] (Cabal source package) - Package description (as included in the package) Maintainer's Corner For package maintainers and hackage trustees Readme for protocol-buffers-1.4.0[back to package description] This the README file for protocol-buffers, protocol-buffers-descriptors, and hprotoc. These are three interdependent Haskell packages by Chris Kuklewicz. This README was updated most recently to reflect version 1.4.0 Questions and answers: What is this for? What does it do? Why?. How well does this Haskell package duplicate Google's project? This provides non-mutable messages that ought to be wire-compatible with Google. These messages support extensions. These messages support unknown fields if hprotoc is passed the proper flag (-u or --unknown_fields). This does not generate anything for Services/Methods. Adding support for services has not been considered. I think that Google's code checks for some policy violations that are not well documented enough for me to reverse engineer. Some (all?) of Google's APIs include the possibility of mutable messages. I suspect that my message reflection is not as useful at runtime as in some of Google's APIs. What is protocol-buffers? The protocol-buffers part is the main library which has two faces:... The hprotoc part is a executable program which reads ".proto" files and uses the protocol-buffers package to produce a tree of Haskell source files. The program is called "hprotoc". Usage is given by the program itself, the options themselves are processed in order. It can take several input search paths, and allow an additional module prefix, a selectable output directory, and ends with a list of of proto file to generate from. The output has to be a tree of modules since each message is given its own namespace, and a module is the only partitioning of namespace in Haskell. The keys for extension fields are defined alongside the message whose namespace they share. Since message names are both a data type and a namespace the filename and the message name match (aside from the .hs file extension). And what are the examples and tests sub-directories? The examples sub-directory is for duplicating the addressbook.proto example that Google has with its code. The ABF and ABF2 file are included as binary addressbooks. These can be read by the C++ examples from Google, and vice-versa. The tests sub-directory is where I have written some test code to drive the UnittestProto code generated from Google's unittest.proto (and unittest_import.proto) files. The 'patchBoot' file has the needed file patches to fix up the recursive imports (no longer needed!). What do I need to compile the code? I use ghc (version 6.10.1, previously version 6.8.3) and cabal (version 1.6.0.1, previously version 1.2.4.0). The dependencies are listed in the .cabal files, and these currently require you to go to hackage.haskell.org and get packages "binary" (I use version 0.4.4, previously version 0.4.2) and "utf8-string" (I use version 0.3.3, previously version 0.3.1.1) and hprotoc needs haskell-src-exts (exactly the "latest" version 4.8.0, upgrades tend to break compilation until (easily) patched). The hprotoc Lexer.hs is produced from Lexer.x by the alex program (I use version 2 will has been tested for interoperability against Google's read/write code with addressbook.proto. hprotoc generates and uses the Text.DescriptorProtos tree from Google "descriptor.proto" file. hprotoc has generated code from Google/protobuf/unittest.proto and Google/protobuf.unittest_import. These compile after adding hs-boot files TestAllExtensions.hs-boot, TestFieldOrderings.hs-boot, and TestMutualRecursionA.hs-boot to resolve mutual recursion. The TestEnumWithDupValue has duplicated values which cause a compilation warning. There has been QuickCheck tests done for UnittestProto/TestAllType.hs and UnittestProto/TestAllExtensions.hs in the tests subdirectory. These pass as of 2008-09-19 for version 0.2.7 (which has been tagged right after writing this). These test that random messages can be roundtripped to the wire format without changing — with the caveat that the new extension keys are read back as raw bytes but compare equal because of the parsing done by (==). Mutual recursion is a problem? Not using ghc. The haskell-src-exts let me generate code with {-# SOURCE #-} annotated imports. And hprotoc generates the needed hs-boot files for ghc. And key import cycles are broken by creating 'Key.hs files, which users can ignore. How stable is the API? This is the first working release of the code. I do not promise to keep any of the API but I am lazy so most things will not change. The reflection capabilities may get improved/altered. Stricter warnings and error detection may be added. Code will move between protocol-buffers and hprotoc projects. The internals of reading from the wire may be improved. Where is the API documentation? These file should be able to have cabal run the haddock generation. I am using Haddock version 2.2.2 at the moment. The imports of Text.ProtocolBuffers are the public API. The generated code's API is Text.ProtocolBuffers.Header. The only usage examples are in the examples sub-directory and the tests sub-directory. Since the messages are simply Haskell data types most of the manipulation should be easy. The main thing that is weird is that messages with extension ranges get an ExtField record field that holds ... an internal data structure. This is currently a Map from field number to a rather complicated existential + GADT combination that should really only be touched by the ExtKey and MessageAPI type class methods. The ExtField data constructor is not hidden, though it could be and probably ought to be. Note that extension fields are inherently slower, especially in ghci (though ghc's -O2 helps quite a bit). The entire proto file is stored in the top level module in wire-encoded form and can be accessed as a FileDescriptorProto. The Haskell code also defines its own reflection data types, with one stored in each generated module and also in a master data type in the top level module (via Show and Read). Who reads this far? I suspect no one ever will. Why define your own Haskell reflection types in addition to FileDescriptorProto's types? This allows for the protocol-buffers library package to not depend on a single thing defined in the protocol-buffers-descriptor package. This lack of recursion made for much simpler bootstrapping and allows the descriptor.proto generated files to be build separately. While descriptor.proto files are a great fit as output from parsing a proto file they are not as good a fit for code generation. They mix fields and extension keys, they have all optional fields even though some things (especially names) are compulsory. They obscure which descriptors are groups. They have a nested structure which is useful when resolving the names but not for iterating over for code generation. What are the pieces of protocol-buffers doing? Basic.hs re-exports what is needed for the user API. What are the pieces of hprotoc doing? alex uses Lexer.x to generated Lexer.hs which slices up the ".proto" file into tokens. The ".proto" layout is well designed, quite unambiguous, and easy to tokenize. The lexer also does the jobs of decoding the backslash escape codes in quotes strings, and interpreting floating point numbers. Errors and unexpected input are inserted into the token list, with at least line number level precision. The Parser.hs file has a Parsec parser which are really used as nested parsers (allowing for the type of the user state to change). The ".proto" grammar is well designed and the system never needs to backtrack over tokens. The default values and options' values parsed according to the expected type, and string default are check for valid utf8 encoding. (This also import the Instances.hs file) The Resolve.hs has code to resolve all the names to a fully qualified form, including name mangling where necessary. This includes code to load and parse all the imported ".proto" files, reusing parses for efficiency, and detecting import loops. The context built from each imported file is combined to change the FileDescriptorProto into a modified FileDescriptorProto. This stage also determines that extension keys are in a valid extensions range declaration, and enum default values exists. The MakeReflections.hs file converts the nested FileDescriptorProto into a flatter Haskell reflection data structure. This includes parsing the default value stored in the FileDescriptorProto. The BreakRecursion.hs file builds graphs describing the imports and works out whether and how to create hs-boot and 'Key.hs files to allow allow for warning-free compilation with ghc (as of 6.10.1). The Gen.hs file takes a Haskell data structure from MakeReflections and builds a module syntax data structure. The syntax data is quite verbose and several helper functions are used to help with the composition. The result is easy to print as a string to a file. The ProtoCompile.hs file is the Main module which defines the command line program 'hprotoc'. This manages most of the interaction with the file system (aside from import loading in Resolve). Everything that is needed is collected into the Options data type which is passed to "run". The output style can be tweaked by changing "style" and "myMode".
http://hackage.haskell.org/package/protocol-buffers-1.4.0
CC-MAIN-2019-51
refinedweb
1,600
60.61
Janet Daly and Benjamin stopped by to say hi. RESOLVED: to close XMLProfile-29, withdrawing the request on XML Core WG The following agenda were not discussed The meeting is in two facilities at the MIT Stata Center: room 262 Monday all day and Tuesday morning, and room 346 Tuesday afternoon. Telephone access is through the Zakim bridge (+1.617.761.6200) using the normal TAG pass code, 0824#. irc notes by NM et. al. start 14:40:26 NW convened the meeting with all TAG members more or less present (SW by video etc.; see participants above). We reviewed the proposed agenda and moved a few things around. We started to review the record of the 22 Nov 2004 teleconference but postponed approval until more of us had reviewed it. We RESOLVED: to meet again 6 Dec, 13 Dec 20 Dec, and 3 Jan. NW gave regrets for 6 Dec. For 13 December, PC gave regrets and CL gave regrets due to vacation. CL noted a travel risk for 20 December. We RESOLVED: to cancel the 27 Dec 2004 teleconference. ACTION CL: to cause draft of press release to happen. CONTINUES from 2004-10-07 irc notes by NM et. al. start 14:53Z We considered reviewing the list off issues we postponed until after webarch V1 and considering cleanup of webarch V1, but opted rather to discuss futre work at a high level, without constraining the discussion by either of those. PC noted the possible interaction between likely topics of work and votes for TAG membership. TBL was positive about looking at RDF/SemanticWeb and said that doing Web Services would be appropriate, but expressed concern about the defacto architecture that that's coming, e.g. from corporate sources. NM said that those working on Web Services would benefit from the right kind of guidance on how to better leverage the Web Architecture and build scaleable systems; on the Semantic Web side, he said the TAG should not get too far out ahead of what widespread deployment validates. DC ask that the TAG "go to school", reading up on topics like information theory and discussing them with experts... having the Web Servcies Choreography WG deliver a presentation aimed exactly at the TAG audience. CL suggested the interaction domain topics merit more attention than they have gotten. PC said the mobile web would be a really good area for attention, noting enthusiasm at the recent mobile web workshop in Barcelona. CL noted there are already some estimates that the mobile web user base is bigger than the desktop web user base. TBL suggested different appoaches in different areas: PC and DC discussed the border between quality-of-implementation issues and architectural issues in the emerging mobile platform. CL noted the importance of tuning presentation of old architectural issues and solutions to new industry groups, e.g. to prevent telcos from making the "latin-1 is good enough" mistake that the web community has since learned from; CL noted there there is a standards body that does web standards for China and Korea and encouraged the TAG to get involved, suggesting that ignoring a single Chinese telco standard could seriously fragment the web. SKW said the telcos and mobiles have a major incentive to grow beyond flattening voice revenues. PC reported from that community an increasing focus on device independence. TBL suggested a more direct liaison to the OMA than the TAG currently has. We went on to discuss the balance between talking and listening, writing, travelling, etc. PC noted that the TAG did less outreach in the recent six months than the previous six months, though our focus on review comments explains that to some extent. PC noted the possibility of meeting with each new W3C WG, perhaps at the W3C technical plenary. NM suggested disconnected, peer to peer, and other non-traditional-web models as a focus of future TAG work; in particular A list of topics emerged from the conversation, as well as ways of working on them: norm: Ian says that if we get stuff together by today, we get them in the packet Above links are (1) slide proposal and (2) proposed written summary, both from Paul. Now discussing written summary at paul: I have some nervousness about what's stated about outreach. Is it OK?> chris: yes paul: this is more or less in the same form as previous reports norm: looks fine to me <DanC_lap> (bummer the Basel record wasn't clear enough; I thought we empowered PaulC to deliver the summary directly to the AC meeting materials without the TAG in the critical path) stuart: as I said on list, would favor a bit on future directions norm: seems more appropriate for presentation than summary <timbl> "The stuart: will go with flow <timbl> TAG also moved the following issues to a deferred state since they are <timbl> awaiting action from another group" tim: I'm nervous about pending vs. pending someone: that's from the minutei think we should substitute :-) <Chris> origin of these terms is the exit software that Ian was using <DanC_lap> it's clear enough to me: "TAG also moved the following issues to a deferred state since they are <DanC_lap> awaiting action from another group" Note to those reading the pretty printed copy, the above quote from Tim should read: "The TAG also moved the following issues to a deferred state since they are <timbl> awaiting action from another group" <Chris> and that propogates into the issues list <scribe> ACTION CL: to turn proposed summary into HTML Now discussing proposed slides from Paul at: <DanC_lap> slides seem good. themes: people (membership, election coming...), proposed REC noah: we should mention correct nomination deadline of 12/14 tim: AWWW is generally not an appropriate normative reference, because we provide very little in terms of prescribed grammar, protocol, etc. paul: I'd push back. Things like i18n are also very horizontal and are referenced normatively., <Norm> zakim wouldn't let me "ack" you DanC_lap paul: how will community know where AWWW stops and specific specs start? It's a recommendation. It talks about principles. Looks & feels like QA framework. tim: how does that affect the way you quote it? paul: I'll answer with respect to something you might discuss tomorrow. QA framework mandates in each spec a section on XXXX. Can't you make a normative reference to QA framework for that. Roy Fielding arrives. paul: I'm saying people will be confused? tim: are you saying quoting normatively or obeyed? <Zakim> DanC_lap, you wanted to say (a) yes, we're happy for folks to cite webarch normatively if they find it useful (b) we're happy for folks to cite it in comments on other specs but paul: let me restate your position, Tim. AWWW would often influence other specs, but not by making normative reference. tim: only place I could see that happening is when we define a term like information resource ... still, our glossary remains a bit informal for that purpose chris: I disagree with Tim. I think other specs can/should refer to AWWW normatively. E.g. statement in SVG that "we believe that nothing here conflicts with AWWW". tim: hmm chris: make sense? tim: maybe? <Zakim> Chris, you wanted to speak about normative ref to webarch <Norm> a? noah: points to his email "A correction becomes normative -- of equal status as the text in the published Recommendation -- through one of the processes described below." <Chris> in other words, we were asking for people to point out conflicts if they found them. In that sense it was normative <Zakim> noah, you wanted to discuss my note on normativity <timbl> Noah: [defines normative in the sense "Love me, love their awww"] <Zakim> Stuart, you wanted to say that the value of putting AWWW on Rec track is the concensus process. noah: so, I think we could go either way. Some preference for allowing normative references. stuart: the value of putting AWWW on Rec track is the concensus process ... in our charter we have the term "Architectural Recommendation". Is there a class of docs in the W3C that is more like arch documents, and should the process document say more about them? <Zakim> DanC_lap, you wanted to say (a) yes, we're happy for folks to cite webarch normatively if they find it useful (b) we're happy for folks to cite it in comments on other specs but <Stuart> +1 to Dan <Norm> +1 to DanC dan: we crossed the bridge a long time ago in allowing normative reference to things that are not testable. We should allow normative references, but there's no institutionalized enforcement of webarch-conformance, other than normal peer comment processes (where the TAG may occasionally play the role of peer) noah: fine with me tim: doesn't say one way or the other that tag as a group has a certain organizational role and influence norm: right, we don't need to say that there ... I think the worry was not "can you make a normative reference" but rather "do you have to obey it"? tim: typically, "no you don't have to obey it, but you have to have a good reason" <DanC_lap> i.e. "you have to answer comments" paul: Some scepticism. For past two years we've struggled to speak in lower case letters. Nobody believes it. <timbl> DanC: well, it is the consensus of a large group of people now, and soon the AC too, so it deserves the respect of a W3C rec when a w3c rec. <timbl> Noah: I don't see how anyone can read a statement as we ofetne say a la "we thing this is a good idea" as being an absolute requirement. <timbl> Noah .... It is a loittle weird totlk about obeying the document where it doesn't even insist on anything <timbl> Norm: People still feel sometim,es that "you should consider..." is too much of a constraint. <timbl> Noah: "MUST" isi used sparingly in awww. tim: diminishing returns on this? chris: I think I have what I need. paul: Is Chris generating slides? <timbl> (Danc, there was push back on our not presenting anything from Ian and SteveB) <DanC_lap> (yes, I saw the pushback. what I didn't see was our getting convinced) <scribe> ACTION CL: to prepare slide HTML based on input from Paul dan: you asked what will come up in our session. I think XLink will. paul: what do you think will be the way it will be raised? norm: are you aware that core working group is "picking up" XLink to do an XLink 1.1 in order to fix a small bug. noah: it will retain the characteristics that have "bothered" some people? norm and chris: won't fix the concerns of the HTML working group, but will meet the expressed needs of SVG and DocBook stuart: any dialog with the hypertext coordination group on this? norm: this was given to a linking task force that Liam Quinn is leading, but core seems not to be waiting for this. <timbl> <timbl> noah: on versioning, should we ask David Ezell to dial in tomorrow to explain schema use cases? norm: probably not needed tomorrow, and our planned afternoon time doesn't line up with his availability in any case. <Chris> 16:30 - 17:15 Technical Architecture Group <Chris> [Discussion] [slides] noah: I'll tell him no thank you. norm: anything else on AC prep <Chris> Since the May 2004 AC <Chris> meeting the TAG has: <Chris> The TAG has spent most of the time since the May 2004 AC <Chris> meeting dealing with Last Call issues, however the TAG has: <Chris> agreed? ok agreed Lunch Break... <Chris> Action completed: htmlize AC summary <Chris> W3C Technical Architecture Group Summary <Chris> Scribe: Chris <Chris> Paul: Suggest TAG should ask for a slot, decide what it wants topresent. past sessions well received <Chris> Norm: Yes, in past this worked well, we should do that again <Chris> Roy: Timing is a little difficult, due to election churn.... <Chris> Paul: Plan is to have old and new people at the first meeting after an election <Chris> Paul: Could do a topic ex E&V where Tag, schema, etc were the pannelists <Chris> Norm: What are our future plans, TP might influence that direction <Chris> TimBL; Should not be afraid to argue technical things , open cans of worms <Chris> DanC: A play depicting the history of HTTP range-14 <Chris> Norm: drama might be good :) <Chris> DanC: prefer topic based slots to group based ones <Chris> TimBL; interested in mor egeneral topics also <Chris> Noah: Is there the usual planning committee? <timbl> (see ) <Chris> Noah: Any more good, deep parts of WebArch that we could talk about? <Chris> Paul: There is already a WS-Addressing meeting at TP <Chris> Noah: Email preceeding email is good to get folks up to speed <Chris> Paul: Perhaps discuss with DO tomorrow <Chris> Noah: also depends what they are naming at what granularity, eg a port number .... <Chris> DanC: Looked at spec, some parts are opaque <Chris> TimBL: have had offline discussions to explain a bit <Chris> Paul: So, should I reply to Steve sayingyes, an E&V slot was endorsed by TAG <timbl> ^ Slide overveiw of the WS adderssing endpoint issue <Chris> DanC: so, what is the TAG position on E&V, what is the elevator speech/ Can we narrow the focus? <Chris> Paul: Can narrow to the Schema aspect, Schema 1.0 and 1.1 <Chris> Noah: this should really be discussed tomorrow in the agenda slot allocated to it <Chris> DanC: OK with the topic, details need to be worked out <Chris> Norm: happy with that response too <Chris> Norm: any objections? <Chris> (none) <Chris> (Paul sense email responding to Steve) <Chris> Dan: is this editorial <Chris> Norm: agree its editorial, approve <Chris> Roy: me too <Chris> Dan: fine by me <Chris> Approved to make this change to WebArch <Chris> is it editorial? <Chris> Norm: next thing - changes to glossary. is it editorial? <Chris> Dan: commentor seems to think not <Chris> Norm: Not substantially different <Chris> Norm: discussed at last weeks telcon. if a ns uri identifies an IR then that Resource is a namespace document <Chris> Chris: That change makes if clearer, for me? <Chris> ACTION NDW: Norm respond to commentor re defn "namespace document" in glossary <Chris> Noah: Generally agree but tricky in one respect <Chris> Noah: so if its not an IR there is nothing to retrieve? (some nods) but we don't actually say that <Chris> Noah: so in other cases, all bets are off. <Chris> DanC: We say it should be an IR <Chris> Noah: so people miht be tempted to have physical resources and ns uris <Chris> Noah: OK for us to be silent on that, but it is a point of confusion <Zakim> timbl, you wanted to ask whether TAG members have suggestions for general topics for TP <timbl> A "namespace document" is an Information Resource, whose URI is the same string as the namespace prefix, and whose content describes the namespace. <timbl> A "namespace document" is an Information Resource, whose URI is the same string as the namespace URI, and whose content describes the namespace. <Chris> Noah: prefix is wrong there <DanC_lap> "whose content" should be phrased using representationg <timbl> A "namespace document" is an Information Resource, whose URI is the same string as the namespace URI, and which describes the namespace. <timbl> You should have one. <Chris> Noah: this says if you don't provide an IR hten you can't call it a NS doc <Chris> Chris: ok, so a dog is not a ns doc. Good <Chris> Roy: we decided this last week <Chris> Norm: ok so anyone move to reopen? <Chris> (no-one) <Chris> Fine so we stick with original wording "If blah" as recorded in 22 Nov minutes <Zakim> DanC_lap, you wanted to lob the xlink grenade in this context cuz, hey, I'm completely insane and to note caldav with urn: namespaces <Chris> DanC: to clarify this is an agenda request, not something on crit path for PR <DanC_lap> (caldav stuff ) <timbl> <Norm> <Chris> thanks <timbl> "Locks are indispensible when multiple authors may modify or create the same resources" caldav caldav <Chris> summarises the current situation re RFC3023 <Chris> Roy: response should be yes, 3023 is wrong and this conflict is being resolved by editing 3023 <Chris> ACTION CL: respond to Eric about this <Chris> +1s from Norm and Dan <Chris> Roy: Tim Bray would have plus oned as well <Chris> Yuxiao Zhao <Chris> <Chris> Point one is evental need, immediate realisation of a future extensibility point <Chris> point 2: explain more <Chris> on point 4 do we not give examples of audio and video as not xml suitable? <Chris> "not universally applicable" <Chris> Point 1 we adress. Point 2 not sure. Point 3 not clear its a good reason and point 4 is covered already <Chris> on point 3, xml can encode a range of things from higly abstract to highly presentational <Chris> ACTION NDW: Norm respond to commentor <Chris> Norm; this is all the comments arising form PR <Chris> Dan: What is the plan for making a REC draft? <Chris> Chris: Whatare the other 2 talking points <Chris> The third one is the community review W3C process etc <Chris> Paul" documenting the pronciples on which thwe web has been built helps other people grow the web <Chris> Paul: Rude Q&A - what next? <DanC_lap> (I'm OK with Volume 1) <Chris> Tag intends to further develop ..... architectiral questions and issues that have broad impact on future of web <Chris> Dan: Insert general W3C message <Chris> Chris: the 'what changed, so what' question <Chris> Noah: Weeb bilt of small specs working together, what was folklore is not set down clearer, so specs will work together better <Chris> Paul: Jorney equally important as result - engaging web community <DanC_lap> yes... the consensus/journey aspects might merit 2 points, not just the usual 1 about w3c process <DanC_lap> (list workshops? lifesci? mobile?) <Chris> TimBL: Discussions involved many WGs and helped them work together in compatible ways. Mobile Web is particularly worth mentioning <Chris> Paul: check workshops form last few months - MWeb, CompDoc etc <Chris> Noah: reaches a wider, non priesthood audience <Chris> Noah: consciousness raising, wider audience <Chris> Paul: WebArch plus more stuff from findings would make a nice softcover book <Chris> Paul: some folk wil not read the webarch as is, rewriting in a more accessible way <Chris> DanC: Interesting <Chris> Norm: for a later Agenda item <Chris> Paul: Are we expecting testimonials from everyones company? <Chris> Yes <Chris> WBS form has the 'promotion' part <Chris> Chris: Roy, are you putting something in on your own behalf? <Chris> Roy: yes <timbl> <Chris> Dan: request froma publisher,more than a year ago. <Chris> time pressures, plus my view vs tag view <Chris> Chis: i was also contacted, prefered to wait till it was done <Chris> Roy: group project is a big time sink <Chris> Separate chapters by different people isn't that great an idea..... <Chris> Paul; My standard response is "no" its less than minimum wage so the benefits are purely getting ones name on the cover <Chris> Paul: OK to do if being paid, but enough other things to do <Chris> Noah: A few authors make money, often the book just sinks though. Concern about auto adding new people to author list. <Chris> Noah: Some specs look impemetrableso the authors get to write a book <Chris> Noah: OK with other people doing their own take on the wen arch <Chris> Noah: Annotated xml spec added good value <Chris> Roy: There are http books that are 70% copied out spec <Chris> Roy: A W3C Recs book might be interesting, just republished <Chris> TimBL: Re-launch W3J? <Chris> TimBL: MIT press publication of W3C workshops? <Chris> Roy: XML Recs - the complete set. Colectors special edition <Chris> Dan: Some communitoies we don't reach because they read paper and we don't do paper <Chris> DanC: what about interviews? <Chris> TimBL: unhappy to see personal comment mixed with webarch <Chris> Noah: can direct people to www-tag and answer questions there <Chris> Dan: more interested in outreach rather than personal comment <Chris> Paul: Giving a 1.5 hour talk and print 250 copies of Webarch at university. getting it into the curriculum <Chris> ... in the context of a distributed systems course <Roy> Ric Holt at U Waterloo: <Chris> (tag discussed further outreach opportunities) <Chris> Paul: deep linking and linking into resources <Chris> Chris: deep linking vs 'bandwidth theft' <DanC_lap> (see earlier request for discussion of p2p influence on http) <Chris> Chris: also the deep linking transclusion issue - copyright etc <paulc> <Chris> Athens olympic site <Chris> Noah; we can't chase all the people that did not read the finding. On the other hand, high profile cases can help ensure uptake <Chris> (tag discusses lack of cute baby seals to power a hall of fame approach) <Chris> We se no new evidence here and the finding still stands <Chris> referer tactic and bandwidth hteft - consistent with 'technical not legal' approach of the finding <Chris> ACTIION: Chris produce draft revised finding on deep linking <Chris> ACTION CL: produce draft revised finding on deep linking <DanC_lap> (pointer to where the SMIL WG says "don't address into media files"???) <paulc> <Chris> part of the issue is use of fragment identifiers and adressing sub resources <DanC_lap> (I'm having trouble relating what Chris is saying to what Concolato is saying in 0046) <Chris> part of it is, what is the nature of the resource <Chris> quote "My suggestion is to specify that the audio element should handle audio <Chris> streams only, and the video element video streams only. This means <Chris> removing (or deprecating) the audio related attributes from the video <Chris> element. It also means that the xlink:href attribute of those elements <Chris> have to be precise enough to identify one stream (in a container file, <Chris> on a server, ...) maybe using fragment identifiers. The mimeType <Chris> attribute in this case would describe the type of media stream and not <Chris> the type of container." <Chris> responses <Chris> <Chris> and <Chris> <timbl> (I Just added Athens Olympics to my own personal hall of flame) <Chris> Norm: Email from Jack says that SMIL WG knows they need to get to this but have not yet. <Chris> Summary TAG does not see a big architectural issue here and suggests further coordination with SYMM WG <DanC_lap> (topic: Benjamin) <paulc> <Norm> <Roy> <paulc> <Roy> <paulc> and XML Schema Libraries and Versioning: The UBL Case at <paulc> <Chris> (discussion on URNs and resolvability) <Chris> Noah: An ordinary person can't really get from our finding why using a URN is not a good idea. In particular, subtleties of whether an http URI is always resolved by HTTP <Chris> DanC: they do actually track these URNs to ensure non overlap and the tracking device is a Web page so in effect there is an http URI for these <Chris> TimBL: HTTP is a namespace, we have the power to change the protocols <Chris> (TAG was visited by a representation of ) <Chris> Noah: prefered result is to look at p2p in the http namespace. one answer is roys, use HTTP and upgrade later. HTTP namespace is not restricted to the HTTP protocol <Chris> Noah: If Tim is right and we can deploty a range of protocols in this namespace, my comfort level goes way up about telling people to se http namespace <Chris> Noah: If people choose URN to avoid being tied to a given protocol, then telling them this now would help <Chris> * Stuart's TAG Issues grouped by theme <Chris> o Which items go on the bottom? <Chris> o Review of issues list/future planning: <Chris> + first batch, second batch(es) <Chris> o Review of draft findings: <Chris> + xmlProfiles-29 <Chris> + Authoritative Metadata resolves putMediaType-38? <Chris> + mediaTypeManagement-45 <Chris> + Other draft findings... <Chris> Paul: Issues list is not maintained <Chris> Norm: take issues.html and rip out the database parts, flatten, and date it <Chris> Paul: its very misleading currently <Chris> DanC: not sure where it is incorrect right now <Chris> (some consulting of the issues list) <Chris> Paul: links to some findings are not to latest one <Chris> Norm: not maintainable without more staff resource (general agreement) so change the page, put a last mod date <Chris> Chris: perhaps put each issue in a separate page, all linked from one summary table <Chris> oops but fragments can't be redirected to separate pages <Chris> Paul: still not clear what we are actually doing <Chris> DanC: if a finding was discussed, add to the page <Chris> Chris: perhaps start with <Chris> Paul: happy to scale back the level of information, as long as what is there is up to date and accurate and reliable <Chris> Noah: issues might get more important now AWWW is out the door and shapes the next phase of work <DanC_lap> (I hope to add a link from each issue to a search for that issue in the archive, ala) <Chris> tes back <DanC_lap> <Norm> <Chris> status is: nothing is happening <Chris> TimBL: overtaken by events <Chris> Norm: was one approach to http-range14 <paulc> <Chris> Timbl: "RDF documents use URIs as identifiers for things including for <Chris> relations. An RDF statement "S P O" means that a given binary relation <Chris> identified by P holds between to things identified by S and O. (S, P <Chris> and O are URIs)" <Chris> Disagreeing with dereferencing P means don't use P <Chris> DanC: Meanwhile W3C has 2 different definitions of rdfs class <Chris> TimBL: not sure they are inconsistent <Chris> TimBL: Suggest leaving on back burner, interesting, leave for now. ArchDoc contributed a lot but does not really address RDF yet <Chris> TimBL: expect to see movement on this for WebArch 2, is a prerequisite <Chris> paul: Disagree. TAG should not do this, why isn;t this something that the SW WGs fix themselves? <Chris> Paul: Why not do this for other areas, eg Web Services <Chris> DanC: (gives WSDL example ... some discussion) <Chris> Noah: early bound vs late bound checking. <Chris> Paul: (DOS attack from malformed SOAP messages) <Chris> Norm: meaning of URIs in RDF is RDFs problem. OTOH, if RDF says one thing and WS says another, everyone looses <Chris> TimBL: Job of TAG is to glue things together, ensure things work together <Zakim> timbl, you wanted to say why we need to stitch OWL to URI <Chris> TimBL: owl people saw no vale in dereferencing, need to explain this to them. <Chris> Paul: AWWW says to use XML instead of binary, but instead of 'no binary' we spun it off to another group to study in depth. Same here, surely? <Chris> Paul: TAG wrote a taxonomy to describe the problem space <timbl> OWL people didn't sign up to the 'meaning' of a URI being connected to what you get when you dereference it. <Chris> Noah: sort of with you here, but maybe we should pass it back to them. if they think their use of URIs is disconnected from everyone elses, ask them to justify that <Zakim> DanC_lap, you wanted to agree that as stated, it's about URIs in RDF; either we should hand it to the SemWeb Best Practices WG, or we should re-state it to apply to abstract <Chris> Norm: Clear we can't close this now. Is it closable short term or long term. <Chris> DanC: its stated as an RDF problem. If its only an RDF problem, not a TAG issue. If its also a WS problem then there is TAG relevance <Chris> Noah: Or we could keep it open and pending, we want them to track <Chris> DabC: No, I was proposing to close it not move it to pending <Chris> ACTION DC: ask SWBP to take the issue <Chris> Norm: If they will take the issue, then we can decide what to do <Chris> DanC: so three options, will they take it if the tag keep s it open, would that take it and allow TAG to close it, or will they not take it <Chris> Paul: Looks like we need a requirements document for the AWWW. We can't design on the fly without aplan in place. <DanC_lap> (the part of W3C that developed requirements process is... not something timbl encouraged, I think) <Chris> Norm: not comig to closure here. Prefer to see the outcome of Dans action <Chris> Paul: Ask them to rewrite in current AWWW terminology <Chris> TimBL: when we started TAG, we took issues and once we had some, broke them into ones to solve now and one to defer. That outline view was the requirements document <Chris> TimBL: needs a f2f meeting and a whiteboard <DanC_lap> (as an agenda request, I concur) <Chris> Noah: good way to prioritize <timbl> <DanC_lap> ACTION DC: make sure issue raising message is linked from issue 39 <Chris> That is the original statement of thre issue that Dan needs to point to <Chris> DanC: Remember I presented GRDDL before <Chris> DanC: at last TP, Mark Birbeck presented another RDF syntax from the HTML WG <DanC_lap> slides <Chris> DanC: David Wood presented at XML 2004 <Chris> in particular <Chris> DanC: so, the TAG finding needs to be updated to reflect this <Chris> Oh - there is no actual finding. Maybe don't need one here. Already solved. Question went away <Chris> Chris: So is this only for HTML (because of the 'ignore unknown tags'/'hide in attributes' design' <Chris> Norm: why not just say rdf|* { display: none} <timbl> rdf/eh <timbl> t-15? <Chris> # If a user agent encounters an element it does not recognize, it must process the element's content. <Chris> <Chris> 3.2. User Agent Conformance <Chris> as opposed to..... <DanC_lap> ah, indeed, chris. interesting. <DanC_lap> sigh. how did that get thru CR? <Chris> still looking for the HTML4 part <Chris> but it was non normative <Norm> XML 2.0: <Chris> and is not in the HTML 4 conformance section <Chris> I have pointed this out at least twice before <Roy> RDF/A uses qnames in content == evil <DanC_lap> sorry for being dense, chris. <Chris> sorry for getting annoyed. just felt i had explained it all before <timbl> "If a user agent encounters an element it does not recognize, it must process the element's content." <DanC_lap> I'm pretty familiar with what HTML 4 said, and yes, it was non-normative. <timbl> Should we raiseit as a problem with XHTML1 extensibility? <Chris> Norm: Concernover loss of namespace declarations <Chris> DanC: if there are XML Queries that will break here .... <Chris> TimBL: doesn't it need a schema? <Chris> Noah: no, could be declared in the query <Chris> Noah: queries will not preserve namespace prefixes it was not aware it needed. But then qnames will break <Chris> TimBL: and we are back to magic prefixes <Chris> Paul: So we need a liaison here between Query and HTML <Chris> Norm: Should this wait for last call? <Chris> Dan, TimBL: No! <Chris> QName in Context finding could show that qnames have the following problems, and avoiding it means you don't hit the problems. At the expense of big URIs <Norm> ACTION NDW: Norm to update QNames in content finding to contrast XSLT and XQuery support for namespace delcarations that used by qnames in content <DanC_lap> ACTION DC: comment on qnames in content in RDF/A, based on updated finding <Chris> Noah: finding may need to point out the range of ugliness of qnames <Chris> TimBL: mapping was not defined in the XML Namespaces spec <Chris> Paul: its already published as final, so we need to republish <Chris> Norm: yes <Chris> tag groans <Chris> Norm: TAG asked Core to do a Rec track document here <Chris> Norm; Core feels it could do a WG note but not a rec track document - not enough value <Chris> Norm: could be XML except a DOCTYPE. <Chris> replaces one production. <Chris> Noah: current processors can't be used, too heavy .... <Chris> Norm: people want a soap like subset, Core feels this doesn't make a lot of sense <Chris> Norm: But TAG asked for a Rec track document, do we really need one Clarification: Noah was saying we need to understand why you would want a profile defined, and was speculating that core was assuming that a suitable profile would encourage common use of processors for the subset. <Chris> DanC: Sympathetic, no urgent need for it. No architectural reason for it to exist <Chris> DanC: is a readable subset, XML slimmed down with all the DTD cruft taken out FWIW: I am less convinced than many others that the requirements are common across users. I'm also unconvinced that optimized SOAP processors, for example, will necessarily share parsers with other applications. <Chris> Paul: parameter entities double implementation time <Roy> Chris: would the content be marked as somehow different from "real" XML <Chris> Chris: The content would not be distinguished in any way, how does a processor know it conforms and not to add forbidden doctype eg an entity declaration <Chris> Norm: It would not <Zakim> noah, you wanted to say use of common processors is overrated in high-perf situations <Chris> Noah: Assumption is that the soap subset is valuable. maybe it is, maybe not. <Chris> Noah: high performance soap processors are highly tuned. Not clear that for oither uses there is enough commonality. Needs to be demonstrated, not assumed <Chris> Noah: currently you know that it conforms to the soap subset because its a soap envelope.... <Chris> Norm: if an outcome is that TAG reconsiders whether to ask Core to do this I would be really pleased Noah says +1 to Norm on that. <paulc> <Chris> Norm; i was concerned about fractionally different subsets, but that does not seem to be happening <DanC_lap> (in this case, paul's spelunking is already reflected in the issues list) <Chris> Paul: Future of XML workshop in next 6 months or so might conclude that a larger spec (including XML namespaces etc)... also XBC will report then. Uptake of XML 1.1. <Chris> Paul: TAG might find an audience for a broader answer <Chris> Norm: So it would make sense for the Core WG to sit on that for a while <Chris> Paul: If the workshop does take place on schedule <Chris> Dan: has anyone promised this? <Chris> Chris: not until XBC is done <Chris> Dan: will people get on a plane for such a workshop? <paulc> Member only: <paulc> Member only on XML 1.1: <Chris> Chris: people actually doing non-ascii element names *will* run into this problem <Chris> Straw poll: do we close xmlprofiles-29 and send message to Core saying its no longer required <Chris> Yes: norm, roy, noah, paul, chris, tim, dan <Chris> unanimous <Chris> Paul: Any objections to closing XMLProfile29 and witdrawing the request on Core <Chris> No objections RESOLVED: to close XMLProfile-29, withdrawing the request on XML Core WG <Chris> Noah: Are we going to pick this up tomorrow? <Chris> yes <Chris> </meeting> <paulc> TAG issues grouped by theme: <DanC_lap> PaulC: which issues shall we discuss next? Stuart: I'd like to discuss metadataInURIs 31 <DanC_lap> <DanC_lap> PaulC observes that 31 is in the "2.2.2 URI and Fragment Issues" cluster of <DanC_lap> PaulC observes requests jive with <DanC_lap> PC: outstanding actions here? <DanC_lap> CL: just the long-standing action to edit the revision <paulc> <DanC_lap> subject: 3023 update (was Re: Agenda TAG Telcon: 8th Nov 2004) <DanC_lap> CL: there's pushback on the use of XPointer <DanC_lap> ACTION CL: explain how just using the RECcomended parts of XPointer isn't too much of a burden in RFC3023 <DanC_lap> CL: ... charset... <DanC_lap> SW: I found Martin's distinction between use in registration docs and use in exchanged documents useful <DanC_lap> CL: I don't see how it helps to require documenting and implementing it but not using it <paulc> <Chris> see also <Chris> In general, a representation provider SHOULD NOT specify the character encoding for XML data in protocol headers since the <Chris> ation provider SHOULD NOT specify the character encoding for XML data in protocol headers since the data is self-describing <DanC_lap> " In general, a representation provider SHOULD NOT specify the character encoding for XML data in protocol headers since the data is self-describing." <Chris> Roy: Existing 3023 says must, so we went to SHOULD NOT; if it had not, we would have said MUST NOT <DanC_lap> RF: if not for the "providers MUST..." in RFC3023, we'd have said "MUST NOT". <DanC_lap> CL: revising the finding along those lines seems like a good next step <DanC_lap> PC: there may be knock-on effects on webarch <DanC_lap> DC: I don't see an opportunity to do that before REC <DanC_lap> PC: no, but eventually <DanC_lap> CL: ... charset... local disk... xml processor... <DanC_lap> ... transcoding proxy <DanC_lap> ... +xml <DanC_lap> PC: in sum, CL is continuing to negotiate changes to 3023 ... <noah> NM: I asked whether we were going so far as to encourage transcoding of XML into different encodings, while revising the XML declaration appropriately. <Stuart> Two questions: 1) Is text/*+xml allowed (discouraged but allowed) 2) Should charset be used with an instance of text/*+xml if it occurs however discouraged? <noah> NM: The answer I got was: "IF you choose to transcode, THEN you must keep the decl in sync" <Chris> Chris: not encouraging, but recognising that it happens and also, that the +xml convention has value here for unrecognized media types <DanC_lap> PC: I subscribed to ietf-xml-mime. how many others? CL <noah> NM: I agree with that, but raised another point: "We should note that such transcoding has costs for other reasons: there are situations in which I depend on my XML files being byte-for-byte unmodified (e.g. CVS diffs)." <DanC_lap> Re: MIME Type Review Request: image/svg+xml <DanC_lap> CL: yes, I'll let the TAG know when the RFC3023 revision merits TAG review or discussion <Chris> <DanC_lap> Fw: XML media types, charset, TAG findings Fri, 08 Oct 2004 10:48:00 +0900 <Chris> That summarizes what I have been saying <DanC_lap> CL: oh yeah... I had an action here... <Chris> pointer to the list Paul is projecting? <DanC_lap> <Chris> ACTION CL: explain why resources that have further server side processing (includes, php, asp etc) might want to have different media type when placed on server and when retrieved from it <DanC_lap> DC: likely for monday's telcon? CL: maybe. 1/2hr email, provided I find the 1/2hr... <Chris> its a half hour email thing, well try to do in next few daya <DanC_lap> reviewing ACTION CL: draft finding on 45 <Chris> text from minutes <DanC_lap> NM asks about raising issues; several encourage him to raise them in www-tag <Chris> Chris: note to self, also discuss impossibility of media types for combinations of different document formats (xhtml+matml+svg+etc) <DanC_lap> NM: things like application/soap+xml seem to be stretching MIME... people want this mix-in... <Chris> Chris: there was asuggestion to do a three way hierarchy, like application/foo/xml or another suggestion was xml/image/foo <DanC_lap> ... but if I really want to say "this is a SOAP purchase order" the 2-level system doesn't accomodate it well <Chris> +xml precludes adding a +somethingelse <DanC_lap> NM: decisions like "don't use ..." seem to be made on-the-margin <DanC_lap> PC asks about the number of +'s allowed <DanC_lap> PC: does RFC3023 restrict it to just one + ? <DanC_lap> NM: I think so <DanC_lap> NM: is it better not to raise an issue until there's a constructive solution in sight? I wonder, sometimes. <DanC_lap> RF: there's a lot of aspects of media types that suggest "let's redesign the whole system..." <DanC_lap> ... [missed] <DanC_lap> PC: why isn't that[?] a comment on RFC3023? <DanC_lap> RF: ... image/* is a whole bunch of unrelated formats; media types are a processing declaration more than a format declaration. <DanC_lap> ... every text/xml thing is also a text/plain thing, but the difference is how you process it suggested global issue: RethinkingMediaTypes? <Norm> I sometimes worry about the whole media type/fragment identifier tangle of issues <DanC_lap> CL: there was discussion of application/soap/xml , which is hierarchical, but... <Chris> ... but if you extend it fiurther, is application/spap/cml/signed the same as application/soap/signed/xml ? <DanC_lap> DC: this seems like issue 45, to me <DanC_lap> [this = NM's questions] <DanC_lap> NM: I'm glad to work with Chris on this <DanC_lap> SW: how does this relate to compound documents? <DanC_lap> CL: yes, quite... the html/svg/mathml 2^N stuff... <Chris> PC: the 2^n+1 problem <noah> FWIW, I think DC has been proven write. 3023 speaks of a suffix of +xml but does not outright prohibit additional "+" signs by my reading. Hang on, I'll copy some pertinent text., <DanC_lap> SW: the compound documents WG seems relevant here <Chris> <noah> When a new media type is introduced for an XML-based format, the name <noah> of the media type SHOULD end with '+xml'. This convention will allow <noah> applications that can process XML generically to detect that the MIME <noah> entity is supposed to be an XML document, verify this assumption by <noah> invoking some XML processor, and then process the XML document <noah> accordingly. Applications may match for types that represent XML <noah> MIME entities by comparing the subtype to the pattern '*/*+xml'. (Of <noah> course, 4 of the 5 media types defined in this document -- text/xml, <noah> application/xml, text/xml-external-parsed-entity, and <noah> application/xml-external-parsed-entity -- also represent XML MIME <noah> entities while not conforming to the '*/*+xml' pattern.) ." <noah> Also: "This document recommends the use of a naming convention (a suffix of <noah> '+xml') for identifying XML-based MIME media types, whatever their <noah> particular content may represent. This allows the use of generic XML <noah> processors and technologies on a wide variety of different XML <noah> document types at a minimum cost, using existing frameworks for media <noah> type registration. <noah> Although the use of a suffix was not considered as part of the <noah> original MIME architecture, this choice is considered to provide the <noah> most functionality with the least potential for interoperability <noah> problems or lack of future extensibility. The alternatives to the ' <noah> +xml' suffix and the reason for its selection are described in <noah> Appendix A." DC: I think it will be nifty if CDF presented their requirement document to TAG <DanC_lap> DC: "compund documents" is a huge design space. I'm surprised W3C chartered a WG with a problem that big. I'm interested to have them present their requirements doc to us <noah> See above...they really mean use of multiple namespaces that are designed to be mixed and matched. > <noah> There's good reason to debate the pros and cons of W3C having a working group in that area, but it's a very narrow slide of what I consider compound documents. <paulc> 1. grounded in good practise 2. make it short <paulc> Dan C suggested a finding on mediaType Management-45 should do the above two points <DanC_lap> short ~= 5 pages <DanC_lap> ACTION SW: coordinate with CDF WG. e.g. requirements presentation plenary week [note paulc comments were actually paul minuting DC comments] <DanC_lap> of <DanC_lap> reviewing ACTION DC: with Norm, develop a finding on httpRange-14 starting with the HashSlashDuality text <DanC_lap> NDW: I've done a little work on that <DanC_lap> ... since preempted by webarch work <DanC_lap> NDW/DC: delivery to tag late Jan is our best guess. <DanC_lap> ACTION DC: update 14 in the issues list to put in on an agenda in late jan 2005 <Chris> Proof that the split we asked for happened <Chris> <Chris> <DanC_lap> RF: I suggest that this should be marked pending IETF completing IRI spec [something] which is almost done <Chris> Under normative references <Chris> I-D IRI <Chris> Martin Dürst, Michel Suignard, Internationalized Resource Identifiers (IRIs), Internet-Draft, September 2004. (See.) [NOTE: This reference will be updated once the IRI draft is available as an RFC.] <Chris> PC: Schema has anyURI that is defined in terms of XLink <DanC_lap> PC: current state is anyURI type in XML Schema... <Chris> PC: XLink defines some of IRI <Chris> PC: Quaery 1.0 xslt 2.0 xpath 2.0 (the qt specs) all inherit this <Chris> CL: so there is already a support of a subset of IRI <Zakim> DanC_lap, you wanted to note TimBL's questions <DanC_lap> DC: to summarize: are there 2 spaces, or one space with 2 encodings? <DanC_lap> SW: I've asked MD and found his answers somewhat unsatisfying... he seems to say "both" <DanC_lap> CL: both seem to be useful... <DanC_lap> RF: I'm unlikely to read the IRI spec again until the IESG approves it. it changed just yesterday <DanC_lap> ... and the IESG is all but decided. <Chris> RF: Once exists it will be approved <DanC_lap> timbl, do you want the action on this? <DanC_lap> that is... <Chris> all URIs are IRIs, so there is only one identifier space <DanC_lap> ACTION RF: notify the TAG when IESG has decided on IRI spec and suggest answers to timbl's questions <DanC_lap> swapping in <DanC_lap> reviewing Action CL: Write up a summary of the resolution. <DanC_lap> CL: I've been hesitant to draft that since the relevant terminology in webarch was changing <Chris> but now its stable <Chris> 'secondary resource' and so on <Chris> So I can do this, eta one week <DanC_lap> ACTION CL: Write up a summary of the resolution. on fragmentInXML-28 continues. <Stuart> <DanC_lap> reviewing ACTION Stuart revise finding <DanC_lap> ^30 Nov summary of feedback <DanC_lap> ACTION Stuart: revise finding on metadataInURI-31 <DanC_lap> reviewing PC's action to find out about DO's action <DanC_lap> DC suggests withdraw <DanC_lap> action PC WITHDRAWN. <DanC_lap> SW: ETA xmas <DanC_lap> SW: I'm willing to work on this until it's finished, regardless of my term <DanC_lap> ---- <DanC_lap> break 'till 10:50 <Chris> I have just updated to take into account the xml:id last call <Norm> Ugh. Chris did you send the HTML to Ian? <DanC_lap> ---- resuming from break <DanC_lap> reviewing ACTION2003-01-12 <DanC_lap> reviewing ACTION2003-01-12 DC Propose example of a site description. <DanC_lap> PC finds "Action TB: Beef up use cases in draft finding." <DanC_lap> CL: people use "web site" in 2 senses... <Chris> scribe: Chris <Chris> NW: confused by what chris said, didn't seem to be about sitedata 36 <Chris> DC: Its derived from robots.txt and p3p and things that saw of parts of the namespace <Stuart> little wormholes in URI space <Chris> DC: TBray wanted to have a doc that said 'this is a website' and I saifd 'no, its a website description' hence my action <Chris> NW: Interested to see a finding in this area <Chris> PC: Should we ask TBray about this? <DanC_lap> DC: does the XQuery spec specify how to take the FO namespace URI and a name like concat and make a URI out of it? <DanC_lap> PC: no <DanC_lap> DC: webarch says you MUST <DanC_lap> NDW: I have a proposal that I haven't yet made... to add fragids <DanC_lap> PC: let's add this to our todo list, Norm. IR1 thingy. <Chris> Hi David <Chris> We are meeting in a different room this afternoon <DanC_lap> David, 346 is the room for after lunch. <Chris> joining us for lunch? <DanC_lap> (a room number wasn't sufficient for me; I wandered around the building for 10 minutes before somebody held my hand...) <DanC_lap> in basel () we made a nearby decision, but not one to close this issue <DanC_lap> PC: is this referenced in webarch? <DanC_lap> DC: yes, in 4.5.2 <Chris> ---- <Chris> # xlinkScope-23 : What is the scope of using XLink? <Chris> NW: Propose to wait for XLink 1.1 to see what happens <Chris> SKW: Waiting for Liam to cause task force to meet <DanC_lap> task force charter/genesis... <Chris> first real message <Chris> <DanC_lap> DC: no duration. not optimal. <DanC_lap> (also no public accountability) <Chris> i agree, it needs to have an actual charter, milestones and deliverables <Chris> To quote Ian Hikson "I don't really have a good solution though, not even for this very small <Chris> problem set ("identify links and classify them as either hyperlinks or <Chris> source links, without using external files, and without making it a pain <Chris> to use for authors"). D'oh." <Stuart> Bye <noah> scribenick: Roy <roy_scribe> scribenick: roy_scribe <paulc> <roy_scribe> <roy_scribe> Dave Orchard in attendance <roy_scribe> PC chair for this afternoon <roy_scribe> PC: I saw DO's presentation at the conference, a high-level summary -- shall we go through it? <roy_scribe> DO's slides are not on-line <roy_scribe> PC: DO has 45 minutes <roy_scribe> PC: to present <DanC_lap> Updated rough draft finding on extensibility and versioning for F2F <DanC_lap> DC: I like the producer/consumer diagram. I wonder why 3 arcs and not 2/4 <DanC_lap> +1 discussion of substitution rules. I don't care for "ignore" terminology <DanC_lap> nice diagram: <DanC_lap> hmm... there's a question of _whether_ to use a schema language, not just which one, yes? . <DanC_lap> "Others substitution mechanisms exist, such as the fallback model in XSLT." is news to me. a specific section link into the XSLT spec would be nice <paulc> Plan for issue 41: <DanC_lap> while there are various ways documents can produced/consumed, but the web architecture has one main one, I think. <paulc> Aug F2F discussion of issue 41: <paulc> Oct F2F discussion of issue 41: <DanC_lap> (checking to see if "xml 1.1 is not a compatible change" is noted in the Nov 2004 draft...) <DanC_lap> yup... "A good example of an incompatible changed identified as a minor change is XML 1.1." <DanC_lap> a citation link would be nice. <timbl> XML1.1 is back but nor forwards compatible with 1.0, I assume <timbl> Oh, no ... not compatible, when you include the <?xml ele <DanC_lap> yes, I don't know why "incompatible" rather than "not forward compatible" was used <DanC_lap> (I have an action from another forum to work on a persistence ontology... quite relevant to "version identification" slide) <DanC_lap> (work in progress: ) <timbl> Eg the "Decision" on this slide connects to the transformation rules on anotehr slide to give f/b compatability results. <timbl> UBM the example for a change every time anything changes. <paulc> QA Spec Guidelines interaction with Issue 41: <DanC_lap> "for each compatible version" -- forward or backward? (seems odd to introduce the fwd/back terminology and then not use it) <DanC_lap> hmm... less fuzzy examples might be better, yes. <timbl> Universal Business Language <DanC_lap> UBL <DanC_lap> DO: I sent a comment on UBL asking for [something] <DanC_lap> (I wonder what became of that comment; I gather UBL is done-and-dusted) <DanC_lap> DO: UBL don't intend to support distributed extensibility <roy_scribe> timbl: what happens when entire namespace changes is that people begin programming to specific exceptions (i.e., if the parts I use have not changed, just internally ignore the namespace) -- that has a negative effect on third-party processing <DanC_lap> NM: I'd like to discuss this at length; going over examples like UBL might be as important as the solution <DanC_lap> ... or solutions <timbl> timbl: What happens when UBL comes out with a different version is that the application engineers and lawyers look at the specs and contracts and decide whether one can for them be tretawed like the other. <DanC_lap> "this is the most common" ... hmm... <DanC_lap> NM: are people happy with the ns2 approach? DO: no, prolly not <DanC_lap> did I miss a slide about "or use a different schema language"? <DanC_lap> or "don't use a schema language"? <DanC_lap> "swap trick" .. I don't grok. would have liked a slide on thqat <DanC_lap> re slide "CVI Strategy #3..." <roy_scribe> slide: Extension Element <roy_scribe> slide: Schema V2 <DanC_lap> (noodling on a set of slides on how RDF addresses these issues: sacrifices handy XML syntax for stuff like order and containership; establishes the "erasure" substituion rule...) <roy_scribe> NM: have concerns about focus on existing Schema limitations, rather than on the way forward on general issues <DanC_lap> ("the current schema language" bugs me. Relax-NG is current. RDF is a W3C REC and addresses many of these issues.) <roy_scribe> slide: #5 Incompatibe Extensions <DanC_lap> TBL relates "#5..." slide to issue xmlFunctions-NN <DanC_lap> xmlfunctions-34 <DanC_lap> xmlFunctions-34 <timbl> The RSS problem of which David speaks in my view follows from defining processing as oppossed to meaning <DanC_lap> yes, but defining meaning is only a solution if it answers the processing questions, right, timbl? <timbl> Yes, which is does in this case. <DanC_lap> DO: I didn't get around to elaborating on RelaxNG and OWL/RDF for these slides, but I wrote a blog entry <DanC_lap> PC: the "versioning activities" list is missing QA WG work <roy_scribe> DO: plan to do more reference to the QA work in the finding <roy_scribe> PC: what QA is proposing is that it would be better if the SOAP spec laid out the specific extensibility points in one section <roy_scribe> DO: end of presentation .... questions? <paulc> Plan for issue 41: <paulc> Which of these items are done? <Zakim> tim2, you wanted to say: Missing concept -- damage involved. Compat can be quntitative, eg when middle name is removed? <Zakim> timbl, you wanted to ask for a more formal treatment in some places. For example, operation of interpreting a doc in language x as if it were in namespace y. XML version numbers <paulc> I did not ack him twice - I think someone else did. <paulc> IRC indicates that timbl himself did the "ack tim" <DanC_lap> ah <roy_scribe> timbl: the may-ignore extensions are not really ignored -- they are just not processed (kept in some reserve, perhaps) <roy_scribe> timbl: you could write down some math that reflect the extension rules using substitutions <roy_scribe> timbl: [discussion of other ways to describe forward and backward compatibilty by phrasing it in terms of substution rules] <roy_scribe> DC: there are no documents that are both xml 1.0 and 1.1 <roy_scribe> DC: because the version is labeled with the document [??] <Zakim> noah, you wanted to make a number of comments queued up on Dave's presentation <roy_scribe> noah: there are many shades of gray. <timbl> XML 1.1 specifies a language we can call XMLP1.1 whcih is the set of documents an XML 1.1 processor is supposed tro be able to receive, and is union of XML1.0 and XML1.1. <timbl> XML1.0 and XML1.1 are incompatable. <timbl> XMLP1.1 is backward compatible with XMLp1.0 = XML1.0 <roy_scribe> noah: my view is that rather than saying there is a binary backwards and forwards-compatibility, they should state the ways in which they will process content <Zakim> DanC_lap, you wanted to ask if there are any non-hypothetical examples of "must understand" mechanisms (does new HTTP verbs count? non-WF XML?) and to say I like do's diagram, and <roy_scribe> noah: there is a bit of a trap in treating it as a binary condition, maybe looking at it as shades of gray would free up the text <noah> suggest s/treating it/treating compatibility (e.g. forward compatibility or backward compatibility)/ <roy_scribe> DC: I like the diagram -- it oversimplifies in ways that are consistent with web architecture <roy_scribe> PC: had some high-level questions about the docs sent in e-mail on 26 Nov <roy_scribe> <roy_scribe> PC: missing response to the work plan... how much is done? <paulc> Plan for issue 41: <roy_scribe> DO: I have not done the protocol extensibility and service compatibility (from the work plan message) <paulc> Items on the plan not done: <paulc> - add protocol extensibility, <paulc> - Add material on issue about service compatibility <roy_scribe> DO: looking at what can be done to describe compatible/incompatible flags to operation extensions <DanC_lap> (glad to know DO is noodling on all this stuff, and that my impression that XML Schema problems was the whole story was mistaken) <timbl> Compatible services: ? <paulc> Only first item in Part 2 was done: <paulc> - insert original xml schema material <roy_scribe> noah: first, I think there is a lot of good work here... trying to figure out what is appropriate for a TAG finding <roy_scribe> noah: there are a set of idioms ... would be stronger if the finding strarted by emphasizing the principle <roy_scribe> noah: up front <roy_scribe> <paulc> Above link is member-only. <DanC_lap> (bummer V-F1, VF-2 is member-only; pls send to www-tag, noah) <roy_scribe> noah: look for the principles, list the use cases, and treat the issues at a high level before getting into the details of idioms <Zakim> noah, you wanted to talk about more of the things I had queued up during dave's talk and to mention input from XML Schema WG <Zakim> DanC3, you wanted to comment on the barrier to entry to the XML Schema WG <roy_scribe> noah: would like DO to enter the schema group and see the (non-public) scenarios <timbl> <roy_scribe> DC: sympathetic to barrrier to entry in schema WG -- it is natural effect from a wg with 7 years of history <roy_scribe> DC: it needs to be made public <Zakim> DanC_lap, you wanted to ask if there are any non-hypothetical examples of "must understand" mechanisms (does new HTTP verbs count? non-WF XML?) <roy_scribe> DC: are there any examples to provide that show must-understand in practice? <DanC_lap> NM: SOAP <roy_scribe> Roy: is that SOAP in practice, or just theory? <roy_scribe> DO: bulk of use is not in distributed extensibility (planning for other folks extensions) <roy_scribe> DO: version identifiers often mean capability rather than format of this message <roy_scribe> NM: XML 1.1 is a countter-example (very rare) <roy_scribe> NM: flexibility is often in conflict with interoperability <DanC_lap> RF: yes, HTTP spec says new verbs are "must understand". except for proxies <DanC_lap> DO had a sort of "good question" look. timbl said yes. <roy_scribe> NM: M-PUT extensibility mechanism is an example, but not widely deployed <roy_scribe> PC: we extracted some text for webarch -- does the updated finding mean that we should change the text in webarch? <roy_scribe> DO: no, it is augmentation so far <roy_scribe> PC: I think NM was saying that the material in webarch was not high-level enough? <roy_scribe> NM: I think there are lots of principles between the levels of webarch and the current content of the draft finding <roy_scribe> DO: some are implicit <roy_scribe> NM: they should be explicit -- they are the main event when it comes to teaching others how to do extensibility and versioning <paulc> Norm: are you still there? <roy_scribe> DO: trade-off of breadth vers brevity <Zakim> DanC2, you wanted to noodle on writing the E+V book breadth-first or depth first, and to lean toward "write about what you know" <roy_scribe> ACTION NM: to work with DO to come up with improved principles and background assumptions that motivate versioning finding <roy_scribe> DC: glad to see chapter 2, see noah asking for chapter 1, but I'd like to see more discussion of the rest of the problem space beyond issues with schema 1 <DanC_lap> ACTION DC: review blog entry on RDF versioning [pointer?] <roy_scribe> ACTION noah: work with DO to come up with improved principles and background assumptions that motivate versioning finding <roy_scribe> DC: title is much broader than the topics being discussed in the finding -- what about RELAX NG, OWL/RDF, ... <roy_scribe> timbl: can we change the title of the first draft to better reflect the content? ) <roy_scribe> PC: xml-binary is an example where we wrote a problem statement and then asked others to form a group -- we could do the same here <roy_scribe> timbl: TAG work has been half vertical and half horizontal (finding depth and webarch breadth) <roy_scribe> NM: the schema WG work and TAG's work (through DO) seem to be taking place on different planets, which is unhealthy <Zakim> timbl, you wanted to say there is some connection you could write up between message format extension and protocol extension. <Zakim> noah, you wanted to ask about review process for what Dave has written <roy_scribe> DO: there are limitations to what a single volunteer has time to cover <roy_scribe> NM: is now the time to focus on this (process wise)? <Norm> I'm here <roy_scribe> DO: TAG in general has not said what the next step should be (i.e., indicated approval of the outline so far) <roy_scribe> NM: how about placing that on the agenda for a specific meeting in January? <DanC_lap> +1 the ball is with the readers, not the writers, at the point <Norm> I think collecting some solid review would be good <roy_scribe> DC: wants this stuff to be public first <roy_scribe> NM: will need to check for permission first (no objections likely) <roy_scribe> PC: why not just pass the work? <roy_scribe> DC: we are talking about joint work because we (Dave, Norm) have invested a lot of work and have (so far) been unable to interface with Schema due to legacy barrier <roy_scribe> DC: has Schema done a public working draft? <roy_scribe> DO: fails to mention widcarding stuff <paulc> What we have to decide: <Zakim> timbl, you wanted to say there is some connection you could write up between message format extension and protocol extension. <paulc> a) when we will review XV Part 1 and when will we discuss the feedback <paulc> b) what we need to finish today <Norm> Still on break? <roy_scribe> yes <Norm> thx <roy_scribe> back from break, returning to discussion on extensibility and version <roy_scribe> ing <Chris> Daves slides <Chris> <roy_scribe> Norm: I'd like to see feedback from the TAG first <roy_scribe> PC: happy to read it on the flight back home <DanC_lap> ACTION PC, DC: review nov part 1, 2 of E+V draft finding <DanC_lap> ... for 10 Jan <roy_scribe> ACTION PC: to review parts 1 and 2 of extensibility and versioning editorial draft finding prior to discussion for 10 Jan. ACTION DC: paulc to review parts 1 and 2 of extensibility and versioning editorial draft finding prior to discussion for 10 Jan <roy_scribe> PC: regarding tech plenary, our discussion earlier suggested that a session on this topic would be good <Chris> take two: DO's slides <Chris> This is the presentation that Dave Orchard gave at todays TAG meeting. <Chris> <roy_scribe> ACTION PC: paulc to inform QA and Schema WGs of the new version of the e&v draft <noah> ACTION NM: to explore means of getting current and future Schema WG work on versioning into public spaces <DanC_lap> (I gather a certain amount of duplication is inevitable, but yes, let's mitigate it to some extent) <noah> Agreed...just some discomfort with the spin that as long as there are no patent issues, anything goes <timbl> Danc, DaveO may have "substitution groups" mentioned in the existing part2. <Zakim> noah, you wanted to say we should still watch out for duplicating requirements effort on XSD-specific requirements <timbl> So he may have covered this method of doing extensions. <DanC_lap> 3.3 Substitution Groups <DanC_lap> (did you just mean to clear the agenda?) <DanC_lap> (the meeting gets to timbls' agenda request...) <Zakim> DanC_lap, you wanted to offer some work on CDF and XML Schema mixing <roy_scribe> DC: was trying to see if composition of data formats is possible using schema <Norm> What file are we looking at? <Chris> <Chris> mathml-renamed.xsd <Norm> ty <Chris> np <roy_scribe> so is the scribe <DanC_lap> (yes, Dave, I think it gets down to fine details about when "compatible" assumes access to a schema) <roy_scribe> PC: let's return to open issues <roy_scribe> <roy_scribe> PC: was pointed out that the document is not yet polished <roy_scribe> DC: start with Stuart's null hypothesis: if WSDL has done something and is happy, do we need any further action? <Chris> This was the () considered harmful in fragments..... <roy_scribe> DO: Roy has an action URIGoodPractice-40 <DanC_lap> (I and a few others have been discussing good URI construction practice in a wiki. GoodURIs) <roy_scribe> DO: so, this issue is done unless it needs to be revisited after URIGoodPractice-40 <Zakim> Chris, you wanted to talk about what HTML did <Zakim> noah, you wanted to say that schema is chugging along too, if that matters <noah> FYI: July 2004 Working Draft of XML Schema:Component Designators is at ) <Norm> ping? <DanC_lap> (yeah! the relevant WGs are talking about it already!) <paulc> SW use of SCUDs: <paulc> <DanC_lap> (yay! dan't can't spell yay!) <DanC_lap> RF's offer to write on 40 on Jan stands, but there's some questions about the relationship to 37... <DanC_lap> TBL: let's change 37 to get rid of the ()'s [?] <DanC_lap> DO: let's start a finding on 40 that argues against ()s <DanC_lap> CL: ISO MPEG is building a thing of indexing into video based on XPointer-like syntax, using ()s <Chris> WebCGM also uses nested parens in fragments <DanC_lap> RF: meanwhile, RFC3023 is headed toward endorsing XPointer ()s for all +xml media types. <DanC_lap> TBL: I don't follow the argument that LR syntax in fragids is bad <Chris> <Chris> "Pictures and objects (application structures) within a WebCGM are addressed using the mechanism of the URI fragment. These WebCGM rules are derived from and are consistent with the Web protocols defined in RFC-2396." <Chris> (BNF follows) <Chris> WebCGM 1.0 Second Release <Chris> W3C Recommendation, 17 December 2001 <DanC_lap> RF: URIs ala lsdjflkj#abc(../foo) get parsed wrong; consumers treat / as part of the path <DanC_lap> some consumers <roy_scribe> DerivedResources-43 <roy_scribe> DO: can't remember which of XInclude's use of fragments was the issue <roy_scribe> DO: brought this up because there was no normative material explaining why what they were doing was unsound <timbl> +1 <noah> FWIW, The AWWW PR says: <roy_scribe> NW: as a result of other feedback, XInclude changed its use of fragments and that we may not need to do anything further <noah> ...never mind... <noah> Found it. AWWW says "The Internet Media Type defines the syntax and semantics of the fragment identifier (introduced in Fragment Identifiers (§2.6)), if any, that may be used in conjunction with a representation." <Zakim> timbl, you wanted to say that the issue that, as he recalls, was about the way XInclude seemed to be abusing fragids, and that he agreed, and that XInclude was changed. <Zakim> DanC_lap, you wanted to ask timbl to say, kinda slowly, why he thinks the status quo is correct, and maybe we could RESOLVE that that's the answer to this issue <roy_scribe> DC: 1) if the WG was persuaded to change things, I don't mind writing down the argument <roy_scribe> DC: 2) if it was just a non-persuaded process decision, then there's no point in going there <DanC_lap> I think a TAG decision is worthwhile here. <roy_scribe> example: href="...chap3#xpointer(h2[3]) <roy_scribe> example: 200 response from action says the representation is "text/plain" <noah> Doesn't it matter if it's href="(h2[3]) <roy_scribe> Chris; provides example of math+xml and the desire to identify an SVG view of part of the rendered math <DanC_lap> DC asks CL to package that mathml/SVG example up, mail it to the CDF WG and ask them if they're going to solve it or not <DanC_lap> DC also pointed out that if the mathml/SVG example had 200 content-type: mathml, then the mathml media type spec would have to specify how the XSLt transformation to SVG interacts with fragid syntax <roy_scribe> timbl: use of a URI in a retrieval action has a single meaning that cannot be overridden by something like XInclude just because it appears as an identifier during an inclusion action <DanC_lap> ACTION CL: to package that mathml/SVG example up, mail it to the CDF WG and ask them if they're going to solve it or not <DanC_lap> next agendum <paulc> <paulc> Paul suggests that meeting record makes clear what issues we did not do at this meeting. <DanC_lap> RESOLVED to thank the host! thanks, Amy! <paulc> TAG thanks Amy and W3C for hosting our meeting. Minutes formatted by David Booth's scribe.perl 1.81 (CVS log) $Log: 29-30-tag.html,v $ Revision 1.19 2004/12/09 18:44:46 connolly moved photo/participants up Revision 1.18 2004/12/09 18:42:56 connolly more actions/agenda stuff photo Revision 1.17 2004/12/09 18:20:17 connolly condensed action summary (items after siteData-36 still need review of actions) Revision 1.16 2004/12/09 17:56:50 connolly found more actions Revision 1.15 2004/12/09 17:51:23 connolly agenda/summary includes a decision on profile29 found some more actions Revision 1.14 2004/12/09 17:39:24 connolly refined proposed rec comment items promoted rec planning to an agendum found action re 39 fixed a few in-your-face URIs Revision 1.13 2004/12/09 17:13:19 connolly integrated action summary into agenda Revision 1.12 2004/12/09 16:13:42 connolly reviewing action summary; connecting them to agenda Revision 1.11 2004/12/09 15:53:09 connolly moved agenda stuff Revision 1.10 2004/12/09 15:49:26 connolly integrating 2 IRC sections into one set of minutes Revision 1.9 2004/12/09 15:40:24 connolly moved agenda topics for from day 1 IRC section to top Revision 1.8 2004/12/09 15:37:59 connolly removed irc-log versions of topics I did manage to summarize Revision 1.7 2004/12/09 15:34:07 connolly pasted in IRC logs processed by the dbooth scribe thingy Revision 1.6 2004/12/07 22:06:48 connolly "future ..." section done-ish Revision 1.5 2004/12/07 21:55:58 connolly nav tweaks Revision 1.4 2004/12/05 22:40:28 connolly - summarized monday admin - working on future directions
http://www.w3.org/2001/tag/2004/11/29-30-tag
CC-MAIN-2013-48
refinedweb
11,757
63.22
Download presentation Presentation is loading. Please wait. Published byBraydon Tease Modified about 1 year ago 1 XML: Extensible Markup Language 2 Slide Chapter Outline Introduction Structured, Semi structured, and Unstructured Data. XML Hierarchical (Tree) Data Model. XML Documents, DTD, and XML Schema. XML Documents and Databases. XML Querying. XPath XQuery 3 Slide Introduction Although HTML is widely used for formatting and structuring Web documents, it is not suitable for specifying structured data that is extracted from databases. A new language—namely XML (eXtended Markup Language) has emerged as the standard for structuring and exchanging data over the Web. XML can be used to provide more information about the structure and meaning of the data in the Web pages rather than just specifying how the Web pages are formatted for display on the screen. The formatting aspects are specified separately—for example, by using a formatting language such as XSL (eXtended Stylesheet Language). 4 Slide Structured, Semi Structured and Unstructured Data Three characterizations: Structured Data Semi-Structured Data Unstructured Data Structured Data: Information stored in databases is known as structured data because it is represented in a strict format. The DBMS then checks to ensure that all data follows the structures and constraints specified in the schema. 5 Slide Structured, Semi Structured and Unstructured Data (contd.) Semi-Structured Data: In some applications, data is collected in an ad-hoc manner before it is known how it will be stored and managed. This data may have a certain structure, but not all the information collected will have identical structure. This type of data is known as semi-structured data. In semi-structured data, the schema information is mixed in with the data values, since each data object can have different attributes that are not known in advance. Hence, this type of data is sometimes referred to as self-describing data. 6 Slide Structured, Semi Structured and Unstructured Data (contd.) Unstructured Data: A third category is known as unstructured data, because there is very limited indication of the type of data. A typical example would be a text document that contains information embedded within it. Web pages in HTML that contain some data are considered as unstructured data. 7 Slide Structured, Semi Structured and Unstructured Data (contd.) Semi-structured data may be displayed as a directed graph... The labels or tags on the directed edges represent the schema names—the names of attributes, object types (or entity types or classes), and relationships. The internal nodes represent individual objects or composite attributes. The leaf nodes represent actual data values of simple (atomic) attributes. 8 Slide FIGURE 27.1 Representing semistructured data as a graph. 9 Slide XML Hierarchical (Tree) Data Model FIGURE 27.3 A complex XML element called 10 Slide XML Hierarchical (Tree) Data Model (contd.) The basic object is XML is the XML document. There are two main structuring concepts that are used to construct an XML document: Elements Attributes Attributes in XML provide additional information that describe elements. 11 Slide XML Hierarchical (Tree) Data Model (contd.) As in HTML, elements are identified in a document by their start tag and end tag. The tag names are enclosed between angled brackets, and end tags are further identified by a backslash. Complex elements are constructed from other elements hierarchically, whereas simple elements contain data values. It is straightforward to see the correspondence between the XML textual representation and the tree structure. In the tree representation, internal nodes represent complex elements, whereas leaf nodes represent simple elements. That is why the XML model is called a tree model or a hierarchical model. 12 Slide XML Hierarchical (Tree) Data Model (contd.) It is possible to characterize three main types of XML documents: 1.Data-centric XML documents These documents have many small data items that follow a specific structure, and hence may be extracted from a structured database. They are formatted as XML documents in order to exchange them or display them over the Web. 2.Document-centric XML documents: These are documents with large amounts of text, such as news articles or books. There is little or no structured data elements in these documents. 3.Hybrid XML documents: These documents may have parts that contains structured data and other parts that are predominantly textual or unstructured. 13 Slide XML Documents, DTD, and XML Schema Two types of XML Well-Formed XML Valid XML 14 Slide XML Documents, DTD, and XML Schema Well-Formed XML It must start with an XML declaration to indicate the version of XML being used—as well as any other relevant attributes. It must follow the syntactic guidelines of the tree model. This means that there should be a single root element, and every element must include a matching pair of start tag and end tag within the start and end tags of the parent element. 15 Slide XML Documents, DTD, and XML Schema Well-Formed XML (contd.) A well-formed XML document is syntactically correct This allows it to be processed by generic processors that traverse the document and create an internal tree representation. DOM (Document Object Model) - Allows programs to manipulate the resulting tree representation corresponding to a well-formed XML document. The whole document must be parsed beforehand when using dom. SAX - Allows processing of XML documents on the fly by notifying the processing program whenever a start or end tag is encountered. 16 Slide XML Documents, DTD, and XML Schema Valid XML A stronger criterion is for an XML document to be valid. In this case, the document must be well-formed, and in addition the element names used in the start and end tag pairs must follow the structure specified in a separate XML DTD (Document Type Definition) file or XML schema file. 17 Slide XML Documents, DTD, and XML Schema (contd.) FIGURE 27.4 An XML DTD file called projects 18 Slide XML Documents, DTD, and XML Schema (contd.) XML DTD Notation A * following the element name means that the element can be repeated zero or more times in the document. This can be called an optional multivalued (repeating) element. A + following the element name means that the element can be repeated one or more times in the document. This can be called a required multivalued (repeating) element. A ? following the element name means that the element can be repeated zero or one times. This can be called an optional single-valued (non-repeating) element. An element appearing without any of the preceding three symbols must appear exactly once in the document. This can be called an required single-valued (non-repeating) element. 19 Slide XML Documents, DTD, and XML Schema (contd.) XML DTD Notation (contd.) The type of the element is specified via parentheses following the element. If the parentheses include names of other elements, these would be the children of the element in the tree structure. If the parentheses include the keyword #PCDATA or one of the other data types available in XML DTD, the element is a leaf node. PCDATA stands for parsed character data, which is roughly similar to a string data type. Parentheses can be nested when specifying elements. A bar symbol ( e1 | e2 ) specifies that either e1 or e2 can appear in the document. 20 Slide XML Documents, DTD, and XML Schema (contd.) Limitations of XML DTD First, the data types in DTD are not very general. Second, DTD has its own special syntax and so it requires specialized processors. It would be advantageous to specify XML schema documents using the syntax rules of XML itself so that the same processors for XML documents can process XML schema descriptions. Third, all DTD elements are always forced to follow the specified ordering the document so unordered elements are not permitted. 21 Slide XML Documents, DTD, and XML Schema (contd.) FIGURE 27.5 An XML schema file called company 22 Slide XML Documents, DTD, and XML Schema (contd.) FIGURE 27.5 An XML schema file called company (contd.) 23 Slide XML Documents, DTD, and XML Schema (contd.) FIGURE 27.5 An XML schema file called company (contd.) 24 Slide XML Documents, DTD, and XML Schema (contd.) FIGURE 27.5 An XML schema file called company (contd.) 25 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema Schema Descriptions and XML Namespaces It is necessary to identify the specific set of XML schema language elements (tags) by a file stored at a Web site location. The second line in our example specifies the file used in this example, which is: "". Each such definition is called an XML namespace. The file name is assigned to the variable xsd using the attribute xmlns (XML namespace), and this variable is used as a prefix to all XML schema tags. 26 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) Annotations, documentation, and language used: The xsd:annotation and xsd:documentation are used for providing comments and other descriptions in the XML document. The attribute XML:lang of the xsd:documentation element specifies the language being used. E.g., “en” 27 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) Elements and types: We specify the root element of our XML schema. In XML schema, the name attribute of the xsd:element tag specifies the element name, which is called company for the root element in our example. The structure of the company root element is a xsd:complexType. 28 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) First-level elements in the company database: These elements are named employee, department, and project, and each is specified in an xsd:element tag. If a tag has only attributes and no further sub- elements or data within it, it can be ended with the back slash symbol (/>) and termed Empty Element. 29 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) Specifying element type and minimum and maximum occurrences: If we specify a type attribute in an xsd:element, this means that the structure of the element will be described separately, typically using the xsd:complexType element. The minOccurs and maxOccurs tags are used for specifying lower and upper bounds on the number of occurrences of an element. The default is exactly one occurrence. 30 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) Specifying Keys: For specifying primary keys, the tag xsd:key is used. For specifying foreign keys, the tag xsd:keyref is used. When specifying a foreign key, the attribute refer of the xsd:keyref tag specifies the referenced primary key whereas the tags xsd:selector and xsd:field specify the referencing element type and foreign key. 31 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) Specifying the structures of complex elements via complex types: Complex elements in our example are Department, Employee, Project, and Dependent, which use the tag xsd:complexType. We specify each of these as a sequence of subelements corresponding to the database attributes of each entity type by using the xsd:sequence and xsd:element tags of XML schema. Each element is given a name and type via the attributes name and type of xsd:element. We can also specify minOccurs and maxOccurs attributes if we need to change the default of exactly one occurrence. For (optional) database attributes where null is allowed, we need to specify minOccurs = 0, whereas for multivalued database attributes we need to specify maxOccurs = “unbounded” on the corresponding element. 32 Slide XML Documents, DTD, and XML Schema (contd.) XML Schema (contd.) Composite (compound) attributes: Composite attributes from ER Schema are also specified as complex types in the XML schema, as illustrated by the Address, Name, Worker, and WorksOn complex types. These could have been directly embedded within their parent elements. 33 Slide XML Documents and Databases. Approaches to Storing XML Documents Using a DBMS to store the documents as text: We can use a relational or object DBMS to store whole XML documents as text fields within the DBMS records or objects. This approach can be used if the DBMS has a special module for document processing, and would work for storing schemaless and document-centric XML documents. Using a DBMS to store the document contents as data elements: This approach would work for storing a collection of documents that follow a specific XML DTD or XML schema. Since all the documents have the same structure, we can design a relational (or object) database to store the leaf-level data elements within the XML documents. 34 Slide XML Documents and Databases. Approaches to Storing XML Documents (contd.) Designing a specialized system for storing native XML data: A new type of database system based on the hierarchical (tree) model would be designed and implemented. The system would include specialized indexing and querying techniques, and would work for all types of XML documents. Creating or publishing customized XML documents from pre-existing relational databases: Because there are enormous amounts of data already stored in relational databases, parts of these data may need to be formatted as documents for exchanging or displaying over the Web. 35 Slide XML Documents, DTD, and XML Schema (contd.) Extracting XML Documents from Relational Databases. Suppose that an application needs to extract XML documents for student, course, and grade information from the university database. The data needed for these documents is contained in the database attributes of the entity types course, section, and student as shown below (part of the main ER), and the relationships s-s and c-s between them. 36 Slide Subset of the UNIVERSITY database schema FIGURE 27.7 Subset of the UNIVERSITY database schema needed for XML document extraction. 37 Slide XML Documents, DTD, and XML Schema (contd.) Extracting XML Documents from Relational Databases One of the possible hierarchies that can be extracted from the database subset could choose COURSE as the root. 38 Slide Hierarchical (tree) view with COURSE as the root FIGURE 27.8 Hierarchical (tree) view with COURSE as the root. 39 Slide XML schema document with COURSE as the root FIGURE 27.9 40 Slide XML Documents, DTD, and XML Schema (contd.) Breaking Cycles To Convert Graphs into Trees It is possible to have a more complex subset with one or more cycles, indicating multiple relationships among the entities. Suppose that we need the information in all the entity types and relationships in figure below for a particular XML document, with student as the root element. 41 Slide An ER schema diagram for a simplified UNIVERSITY database. FIGURE 27.6 42 Slide XML Documents, DTD, and XML Schema (contd.) Breaking Cycles To convert Graphs into Trees One way to break the cycles is to replicate the entity types involved in cycles. First, we replicate INSTRUCTOR as shown in part (2) of Figure, calling the replica to the right INSTRUCTOR1. The INSTRUCTOR replica on the left represents the relationship between instructors and the sections they teach, whereas the INSTRUCTOR1 replica on the right represents the relationship between instructors and the department each works in. We still have the cycle involving COURSE, so we can replicate COURSE in a similar manner, leading to the hierarchy shown in part (3). The COURSE1 replica to the left represents the relationship between courses and their sections, whereas the COURSE replica to the right represents the relationship between courses and the department that offers each course. 43 Slide Converting a graph with cycles into a hierarchical (tree) structure FIGURE 27.13 44 Slide XML Querying XPath An XPath expression returns a collection of element nodes that satisfy certain patterns specified in the expression. The names in the XPath expression are node names in the XML document tree that are either tag (element) names or attribute names, possibly with additional qualifier conditions to further restrict the nodes that satisfy the pattern. 45 Slide XML Querying XPath (contd.) There are two main separators. It is customary to include the file name in any XPath query allowing us to specify any local file name or path name that specifies the path. doc()/company => COMPANY XML doc 46 Slide XML Querying 1.Returns the COMPANY root node and all its descendant nodes, which means that it returns the whole XML document. 2.Returns all department nodes (elements) and their descendant subtrees. 3.Returns all employeeName nodes that are direct children of an employee node, such that the employee node has another child element employeeSalary whose value is greater than This returns the same result as the previous one except that we specified the full path name in this example. 5.This returns all projectWorker nodes and their descendant nodes that are children under a path /company/project and that have a child node hours with value greater than 20.0 hours. 47 Slide Some examples of XPath expressions FIGURE Some examples of XPath expressions on XML documents that follow the XML schema file COMPANY in FIGURE 27.5. 48 Slide XML Querying XQuery XQuery uses XPath expressions, but has additional constructs. XQuery permits the specification of more general queries on one or more XML documents. The typical form of a query in XQuery is known as a FLWR expression, which stands for the four main clauses of XQuery and has the following form: FOR LET WHERE RETURN 49 Slide XML Querying 1.This query retrieves the first and last names of employees who earn more than The variable $x is bound to each employeeName element that is a child of an employee element, but only for employee elements that satisfy the qualifier that their employeeSalary is greater that This is an alternative way of retrieving the same elements retrieved by the first query. 3.This query illustrates how a join operation can be performed by having more than one variable. Here, the $x variable is bound to each projectWorker element that is a child of project number 5, whereas the $y variable is bound to each employee element. The join condition matches SSN values in order to retrieve the employee names. 50 Slide Some Examples of XQuery Queries Some examples of XQuery queries on XML documents that follow the XML schema file COMPANY in FIGURE 27.5. 51 Slide Recap Introduction Structured, Semi structured, and Unstructured Data. XML Hierarchical (Tree) Data Model. XML Documents, DTD, and XML Schema. XML Documents and Databases. XML Querying. XPath XQuery Similar presentations © 2016 SlidePlayer.com Inc.
http://slideplayer.com/slide/4219221/
CC-MAIN-2016-50
refinedweb
3,064
54.73
Assignment: Average Rainfall Write a program. After all iterations, the program should display the number of months, the total inches of rainfall, and the average rainfall per month for the entire period. Input validation: Do not accept a number less than 1 for the number of years. Do not accept negative numbers for the monthly rainfall. /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package averagerainfall; import java.util.Scanner; /** * * @author uuuuuuu */ public class Main { /** * @param args the command line arguments */ public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); int years; int months; double rainfall = 0; double totalRainfall = 0.0; System.out.println("Enter the number of years:"); years = keyboard.nextInt(); for(int count = 1; count<= years;count++) { for(months = 1; months<=12;months++) { System.out.println("Enter inches of rainfall for month " + months); rainfall = keyboard.nextDouble(); totalRainfall += rainfall;; } System.out.println("Number of months: " + (months)); System.out.println("Total inches of rainfall: " + totalRainfall); System.out.println("Average rainfall per month: " + (totalRainfall/(years * months))); } } } In the code, where do I put [rainfall>0]? When I enter [1] for years, the output for number of months is [13], why isn't it [12], how do I fix it? When I put [2] for years, why doesn't the number of months accumulate; it should be [24] months not [12] and [12] (in my case, it is [13] and [13] because of coding problems.' I'll fix it the output decimal format into (##0.00) afterward.
http://www.javaprogrammingforums.com/whats-wrong-my-code/5708-help-nested-loops.html
CC-MAIN-2016-07
refinedweb
254
50.63
My question was unclear... Of course the A* algorythme will be only implement in C++... The problem is : 1) we have (will have) 100 functions available for the scripts (ex: castOf(who), isPlayer(who)) 2) we have 10 or 20 incoming type of message (ex: on_hitBy(who), on_near(who),) Some NPC must be able to be pre-programmed to execute a succesion of commands : that's the problem, I have no idea of doing a such thing and I don't know of to avoid TONS and TONS of "if then else"... I think (but I'm perhaps wrong), that having some simple knowledge in AI would help, but I can't find any thing on the web to help me (I've red many many articles but nothing help) Any idea or am I always unclear ? Thanks -----Message d'origine----- De: Sean Thomas Middleditch [SMTP:sean.middleditch@iname.com] Date: samedi 25 mars 2000 01:22 A: Multiple recipients of list Objet: Re: LUA and AI Nicholas Hesketh wrote: > That would depend on the sort of AI you're trying to do, and the number of entities you need to do > it for. > > For the mid to high level portion of rpg character AI it should be ideal as the scripting > flexibility outweighs computational overhead, and you can always migrate the expensive stuff to > C/C++ as the game develops. > > Using it for adaptive pathfinding of several hundred units in a strategy game is probably not a good > idea though ;-) > > It's a case of flexibility verses performance, but you'll probably find scripting useful for at > least a portion of your game logic. > > Nick Hesketh. > -----Original Message----- > From: Christophe Gimenez <chris@kandji.com> > To: Multiple recipients of list <lua-l@tecgraf.puc-rio.br> > Date: 24 March 2000 19:51 > Subject: LUA and AI > > > > >Okay that seems a strange question... > > > >But here is my problem : as I would (and will) use LUA as the scripting > >langage for a game project I've started to find information about AI in > >games (and of course I don't know a word about AI). > > > >Thus, do you think that implementing basic AI principles could be done with > >LUA ? > > > >If yes, we to start from ? > > > >I've spent many many many hours and collected many many many links, pdf, > >doc, html files but for the moment I could'nt learn some basic that I could > >use in a game. > > > >thanks > > > >[ if there is an AI-GOD in the mailing list, please send me a mail ;-) ] > > > > AI is fun. The best AI model I've seen was implemented in Java for a roguelike game... The object-oriented nature rocked. For an AI, write the unmutable stuff (like pathfinding, logic, etc.) in your C/C++ code. Then write the control in script. Something like (in PSEUDO code) if (see_enemy) then enter_battle () endif if (is_dying) then find_path (ESCAPE) follow_path () endif etc. That's a damn poor example, but I'm tired, so I have an excuse. ;-) I think for controlling actions (like go north 3, west 4, get item, east 6, say "My, it is raining frogs.", attack duck, south 3, west 2, say "Oh no! I lost my magic Blunt Stick of Sharpness!!!") a specialized language would do best... something simple like MOVE east 3 SAY "I'm lost" is best. Sean Middleditch
http://lua-users.org/lists/lua-l/2000-03/msg00095.html
crawl-001
refinedweb
559
69.52
By Randal L. Schwartz, Tom Phoenix Cover | Table of Contents | Colophon find one near you. Most of the time, you can also simply visit http:// COUNTRYCODE.cpan.org/ where COUNTRYCODEis your two-letter official country code (like on the end of your national domain names). Or,.) #!/usr/bin/perl @lines = `perldoc -u -f atan2`; foreach (@lines) { s/\w<([^>]+)>/\U$1/g; print; } #!line, as we saw before. You might need to change that line for your system, as we discussed earlier. ` `"). (The backquote key is often found next to the number 1 on full-sized American keyboards. Be sure not to confuse the backquote with the single quote, " '".) The command we're using is perldoc -u -f atan2; try typing that at your command line to see what its output looks like. The perldoc command is used on most systems to read and display the documentation for Perl and its associated extensions and utilities, so it should normally be available. This command tells you something about the trigonometric function atan2; we're using it here just as an example of an external command whose output we wish to process. @lines. The next line of code starts a loop that will process each one of those lines. Inside the loop, the statements are indented. Although Perl doesn't require this, good programmers do. s/\w<([^>]+)>/\U$1/g;. Without going into too much detail, we'll just say that this can change any line that has a special marker made with angle brackets ( ex1-1, for simplicity, since it's exercise 1 in Chapter 1.) helloor the Gettysburg Address). Although you may think of numbers and strings as very different things, Perl uses them nearly interchangeably. helloor the Gettysburg Address). Although you may think of numbers and strings as very different things, Perl uses them nearly interchangeably. 1.25 255.000 255.0 - the E may be uppercase 0 2001 -40 255 61298040283768 61_298_040_283_768 hello). Strings may contain any combination of any characters. 'fred' # those four characters: f, r, e, and d 'barney' # those six characters '' # the null string (no characters) 'Don\'t let an apostrophe end this string prematurely!' 'the last character of this string is a backslash: \\' 'hello\n' # hello followed by backslash followed by n 'hello there' # hello, newline, there (11 characters total) '\'\\' # single quote followed by backslash -woption on the command line: $ perl -w my_program #!line: #!/usr/bin/perl -w #!perl -w '12fred34'as if it were a number: Argument "12fred34" isn't numeric -wswitch could. See the perllexwarn manpage for more information on these warnings. $Fredis a different variable from $fred. And all of the letters, digits, and underscores are significant, so: $a_very_long_variable_that_ends_in_1 $a_very_long_variable_that_ends_in_2 $. In the shell, you use $to get the value, but leave the $off to assign a new value. In awk or C, you leave the $off entirely. If you bounce back and forth a lot, you'll find yourself typing the wrong things occasionally. This is expected. (Most Perl programmers would recommend that you stop writing shell, awk, and C programs, but that may not work for you.) $ris probably not very descriptive but $line_lengthis. A variable used for only two or three lines close together may be called something simple, like $n, but a variable used throughout a program should probably have a more descriptive name. $super_bowlis a better name than $superbowl, since that last one might look like $superb_owl print( )operator makes this possible. It takes a scalar argument and puts it out without any embellishment onto standard output. Unless you've done something odd, this will be your terminal display. For example: print "hello world\n"; # say hello world, followed by a newline print "The answer is "; print 6 * 7; print ".\n"; print "The answer is ", 6 * 7, ".\n"; $meal = "brontosaurus steak"; $barney = "fred ate a $meal"; # $barney is now "fred ate a brontosaurus steak" $barney = 'fred ate a ' . $meal; # another way to write that $barney = "fred ate a $meat"; # $barney is now "fred ate a " print "$fred"; # unneeded quote marks print $fred; # better style ifcontrol structure: if ($name gt 'fred') { print "'$name' comes after 'fred' in sorted order.\n"; } elsekeyword provides that as well: if ($name gt 'fred') { print "'$name' comes after 'fred' in sorted order.\n"; } else { print "'$name' does not come after 'fred'.\n"; print "Maybe it's the same string, in fact.\n"; } ifcontrol structure. That's handy if you want to store a true or false value into a variable, like this: $is_bigger = $name gt 'fred'; if ($is_bigger) { ... } undefis false. (We'll see this a little later in this section.) '') is false; all other strings are normally true. '0', has the same value as its numeric form: false. undef, 0, '', or '0', it's false. All other scalars are true—including all of the types of scalars that we haven't told you about yet. <STDIN>. Each time you use <STDIN>in a place where a scalar value is expected, Perl reads the next complete text line from standard input (up to the first newline), and uses that string as the value of <STDIN>. Standard input can mean many things, but unless you do something uncommon, it means the keyboard of the user who invoked your program (probably you). If there's nothing waiting to be read (typically the case, unless you type ahead a complete line), the Perl program will stop and wait for you to enter some characters followed by a newline (return). <STDIN>typically has a newline character on the end of it. So you could do something like this: $line = <STDIN>; if ($line eq "\n") { print "That was just a blank line!\n"; } else { print "That line of input was: $line"; } chompoperator. chompoperator, it seems terribly overspecialized. It works on a variable. The variable has to hold a string. And if the string ends in a newline character, chompcan get rid of the newline. That's (nearly) all it does. For example: $text = "a line of text\n"; # Or the same thing from <STDIN> chomp($text); # Gets rid of the newline character chomp, because of a simple rule: any time that you need a variable in Perl, you can use an assignment instead. First, Perl does the assignment. Then it uses the variable in whatever way you requested. So the most common use of chomplooks like this: chomp($text = <STDIN>); # Read the text, without the newline character $text = <STDIN>; # Do the same thing... chomp($text); # ...but in two steps chompmay not seem to be the easy way, especially if it seems more complex! If you think of it as two operations—read a line, then chompit—then it's more natural to write it as two statements. But if you think of it as one operation—read just the text, not the newline—it's more natural to write the one statement. And since most other Perl programmers are going to write it that way, you may as well get used to it now. chompis actually a function. As a function, it has a return value, which is the number of characters removed. This number is hardly ever useful: $food = <STDIN>; $betty = chomp $food; # gets the value 1 - but we knew that! chompwith or without the parentheses. This is another general rule in Perl: except in cases where it changes the meaning to remove them, parentheses are always optional. chompremoves only one. If there's no newline, it does nothing, and returns zero. whileloop repeats a block of code as long as a condition is true: $count = 0; while ($count < 10) { $count += 1; print "count is now $count\n"; # Gives values from 1 to 10 } iftest. Also like the ifcontrol structure, the block curly braces are required. The conditional expression is evaluated before the first iteration, so the loop may be skipped completely, if the condition is initially false. undefvalueis neither a number nor a string; it's an entirely separate kind of scalar value. undefautomatically acts like zero when used as a number, it's easy to make an numeric accumulator that starts out empty: # Add up some odd numbers $n = 1; while ($n < 10) { $sum += $n; $n += 2; # On to the next odd number } print "The total was $sum.\n"; $sumwas undefbefore the loop started. The first time through the loop, $nis one, so the first line inside the loop adds one to $sum. That's like adding one to a variable that already holds zero (because we're using undefas if it were a number). So now it has the value 1. After that, since it's been initialized, adding works in the traditional way. $string .= "more text\n"; $stringis undef, this will act as if it already held the empty string, putting "more text\n"into that variable. But if it already holds a string, the new text is simply appended. undefwhenis the line-input operator, <STDIN>. Normally, it will return a line of text. But if there is no more input, such as at end-of-file, it returns undefto signal this. To tell whether a value is undefand not the empty string, use the definedfunction, which returns false for undef, and true for everything else: $madonna = <STDIN>; if ( defined($madonna) ) { print "The input was $madonna"; } else { print "No input available!\n"; } undefvalues, you can use the obscurely named undefoperator: $madonna = undef; # As if it had never been touched undefvalues, or any mixture of different scalar values. Nevertheless, it's most common to have all elements of the same type, such as a list of book titles (all strings) or a list of cosines (all numbers). $fred[0] = "yabba"; $fred[1] = "dabba"; $fred[2] = "doo"; $fred[0] = "yabba"; $fred[1] = "dabba"; $fred[2] = "doo"; "fred") is from a completely separate namespace than scalars use; you could have a scalar variable named $fredin the same program, and Perl will treat them as different things, and wouldn't be confused. (Your maintenance programmer might be confused, though, so don't capriciously make all of your variable names the same!) $fred[2]in every place where you could use any other scalar variable like $fred. For example, you can get the value from an array element or change that value by the same sorts of expressions we used in the previous chapter: print $fred[0]; $fred[2] = "diddley"; $fred[1] .= "whatsis"; $number = 2.71828; print $fred[$number - 1]; # Same as printing $fred[1] undef. This is just as with ordinary scalars; if you've never stored a value into the variable, it's undef. $blank = $fred[ 142_857 ]; # unused array element gives undef $blanc = $mel; # unused scalar $mel also gives undef undefvalues. $rocks[0] = 'bedrock'; # One element... $rocks[1] = 'slate'; # another... $rocks[2] = 'lava'; # and another... $rocks[3] = 'crushed rock'; # and another... $rocks[99] = 'schist'; # now there are 95 undef elements rocksthat we've just been using, the last element index is $#rocks. That's not the same as the number of elements, though, because there's an element number zero. As seen in the code snippet below, it's actually possible to assign to this value to change the size of the array, although this is rare in practice. $end = $#rocks; # 99, which is the last element's index $number_of_rocks = $end + 1; # okay, but we'll see a better way later $#rocks = 2; # Forget all rocks after 'lava' $#rocks = 99; # add 97 undef elements (the forgotten rocks are # gone forever) $rocks[ $#rocks ] = 'hard rock'; # the last rock $#namevalue as an index, like that last example, happens often enough that Larry has provided a shortcut: negative array indices count from the end of the array. But don't get the idea that these indices "wrap around." If you've got three elements in the array, the valid negative indices are -1(the last element), -2(the middle element), and -3(the first element). In the real world, nobody seems to use any of these except -1, though. $rocks[ -1 ] = 'hard rock'; # easier way to do that last example above $dead_rock = $rocks[-100]; # gets 'bedrock' $rocks[ -200 ] = 'crystal'; # fatal error! (1, 2, 3) # list of three values 1, 2, and 3 (1, 2, 3,) # the same three values (the trailing comma is ignored) ("fred", 4.5) # two values, "fred" and 4.5 ( ) # empty list - zero elements (1..100) # list of 100 integers (1..5) # same as (1, 2, 3, 4, 5) (1.7..5.7) # same thing - both values are truncated (5..1) # empty list - .. only counts "uphill" (0, 2..6, 10, 12) # same as (0, 2, 3, 4, 5, 6, 10, 12) ($a..$b) # range determined by current values of $a and $b (0..$#rocks) # the indices of the rocks array from the previous section ($a, 17) # two values: the current value of $a, and 17 ($b+$c, $d+$e) # two values ("fred", "barney", "betty", "wilma", "dino") qwshortcut makes it easy to generate them without typing a lot of extra quote marks: qw/ fred barney betty wilma dino / # same as above, but less typing qwstands for "quoted words" or "quoted by whitespace," depending upon whom you ask. Either way, Perl treats it like a single-quoted string (so, you can't use \nor $fredinside a qwlist as you would in a double-quoted string). The whitespace (characters like spaces, tabs, and newlines) will be discarded, and whatever is left becomes the list of items. Since whitespace is discarded, here's another (but unusual) way to write that same list: ($fred, $barney, $dino) = ("flintstone", "rubble", undef); ($fred, $barney) = ($barney, $fred); # swap those values ($betty[0], $betty[1]) = ($betty[1], $betty[0]); undef. ($fred, $barney) = qw< flintstone rubble slate granite >; # two ignored items ($wilma, $dino) = qw[flintstone]; # $dino gets undef ($rocks[0], $rocks[1], $rocks[2], $rocks[3]) = qw/talc mica feldspar quartz/; @) before the name of the array (and no index brackets after it) to refer to the entire array at once. You can read this as "all of the," so @rocksis "all of the rocks." This works on either side of the assignment operator: @rocks = qw/ bedrock slate lava /; @tiny = ( ); # the empty list @giant = 1..1e5; # a list with 100,000 elements @stuff = (@giant, undef, @giant); # a list with 200,001 elements $dino = "granite"; @quarry = (@rocks, "crushed rock", @tiny, $dino); @quarrythe five-element list (bedrock, slate, lava, crushed rock, granite) @rocks = qw{ flintstone slate rubble }; print "quartz @rocks limestone\n"; # prints five rocks separated by spaces print "Three rocks are: @rocks.\n"; print "There's nothing in the parens (@empty) here.\n"; $email = "fred@bedrock.edu"; # WRONG! Tries to interpolate @bedrock $email = "fred\@bedrock.edu"; # Correct $email = 'fred@bedrock.edu'; # Another way to do that @fred = qw(hello dolly); $y = 2; $x = "This is $fred[1]'s place"; # "This is dolly's place" $x = "This is $fred[$y-1]'s place"; # same thing $ycontains the string "2*4", we're still talking about element 1, not element 7, because "2*4"as a number (the value of $yused in a numeric expression) is just plain 2. @fred = qw(eating rocks is wrong); $fred = "right"; # we are trying to say "this is right[3]" print "this is $fred[3]\n"; # prints "wrong" using $fred[3] print "this is ${fred}[3]\n"; # prints "right" (protected by braces) print "this is $fred"."[3]\n"; # right again (different string) print "this is $fred\[3]\n"; # right again (backslash hides it) foreachloop steps through a list of values, executing one iteration (time through the loop) for each value: foreach $rock (qw/ bedrock slate lava /) { print "One rock is $rock.\n"; # Prints names of three rocks } $rockin that example) takes on a new value from the list for each iteration. The first time through the loop, it's "bedrock"; the third time, it's "lava". @rocks = qw/ bedrock slate lava /; foreach $rock (@rocks) { $rock = "\t$rock"; # put a tab in front of each element of @rocks $rock .= "\n"; # put a newline on the end of each } print "The rocks are:\n", @rocks; # Each one is indented, on its own line foreachloop is automatically saved and restored by Perl. While the loop is running, there's no way to access or alter that saved value. So after the loop is done, the variable has the value it had before the loop, or undefif it hadn't had a value. That means that if you want to name your loop control variable " $rock", you don't have to worry that maybe you've already used that name for another variable. foreachloop, Perl uses its favorite default variable, $_. This is (mostly) just like any other scalar variable, except for its unusual name. For example: foreach (1..10) { # Uses $_ by default print "I can count to $_!\n"; } $_when you don't tell it to use some other variable or value, thereby saving the programmer from the heavy labor of having to think up and type a new variable name. So as not to keep you in suspense, one of those cases is $_if given no other argument: $_ = "Yabba dabba doo\n"; print; # prints $_ by default reverseoperator takes a list of values (which may come from an array) and returns the list in the opposite order. So if you were disappointed that the range operator, .., only counts upwards, this is the way to fix it: @fred = 6..10; @barney = reverse(@fred); # gets 10, 9, 8, 7, 6 @wilma = reverse 6..10; # gets the same thing, without the other array @fred = reverse @fred; # puts the result back into the original array @fredtwice. Perl always calculates the value being assigned (on the right) before it begins the actual assignment. reversereturns the reversed list; it doesn't affect its ar
http://www.oreilly.com/catalog/9780596001322/toc.html
crawl-001
refinedweb
2,942
70.94
Public Interface TestBench Function Test(ByVal log As List(Of String)) As TestResult End Interface Public Enum TestResult Fail = 0 Fail_Data_Error Fail_Exception Fail_Timeout Pass = 20 Pass_No_Test_Needed End EnumTheAs you can see, all I have done is add the interface and hit enter, which automatically puts the function into your class. I then added a single entry to the log, and return the desired result type. If you have the function in the class, but without the implements statement, the test app will not pick it up, as it looks for a class with the interface in it. NextLet, ReadAllBytes()function in the System.IO.Filenamespace. This allows you to load the DLL in without locking it. The replacement of the line above is: Dim assembly As Reflection.Assembly = Nothing assembly = Reflection.Assembly.Load(System.IO.File.ReadAllBytes(path))Next is the loop: For Each t As Type In assembly.GetTypes If t.IsClass AndAlso t.GetInterface("TestBench") IsNot Nothing Then '... End If NextThis loops through all of the Types in the assembly. There are a lot of these included which we have no interest in, such as TestITem.My.MyProject+MyWebServices. Dim r As TestBench.TestResult = NothingWe then use the Activator.CreateInstance()function to create an instance of our class, and use DirectCastto cast it to the interface, which allows us to execute anything contained in the interface, in this case, the Test()function. r = DirectCast(Activator.CreateInstance(t), TestBench.TestBench).Test(_log) Public Sub Main()of any sort in the class, it will work, but if you have a constructor with parameters , you must also have a blank one, even if it is only there for use with your Test Bench. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/vb/ReflectionTesting.aspx
crawl-002
refinedweb
289
56.96
Nov 09, 2009 04:22 AM|imarash|LINK If we create certain pages dynamically by reading from some database rows, how do those pages (which do not exist as a file in our web directory) get indexed by search engines? For example I have a ViewProfile page which is empty in design mode and all its controls and values get generated from database when requested. My question is do those pages still get indexed in search engines? Thanks in advance for the answers! Nov 09, 2009 05:33 AM|paul.vencill|LINK whether it exists on disk or not is completely irrelevant to search engines, they don't access your pages that way. They look at content returned at a URL. So as long as your URLs are consistent, and unique on your site, they can fetch it just fine. Member 20 Points Nov 09, 2009 02:27 PM|guillermoguerini|LINK Exactly! I would even recomend you to read more about xml sitemaps. This is a XML file the contains all the "navigation" or better, the sitemap, of your website. It helps the search engines go to the right places to "read" and index your site. And yes, it will index the pages even if the content is dynamic and loaded from your database. I hope it helps! G Nov 09, 2009 05:47 PM|paul.vencill|LINK It means that anything that a link on your site provides as output will get indexed. As the other poster mentioned, using a sitemap is one way to ensure that you create a link to everything on your site that you want indexed, but of course a good navigation structure (and appropraite on-page links) helps, and is (imo) even more important b/c it keeps your app user-focused. Again: where your data resides / how your pages are generated has *nothing* to do with it (flat file, database, web service call, whatever). What matters is what is actually sent to a user when a hyperlink is followed. Note that Ajax-based pages and flash-based content works a little differently, I'm talking here about just regular hyperlinks & URLs and the html content returned by them. Nov 09, 2009 06:19 PM|imarash|LINK Thanks a lot for the explanation! Lets say I have a page for user profiles (pageViewProfile.aspx), now this is only one page but depending on different users, will generate different outcome: pageViewProfile.aspx?uid=user-id I want to know 1-Will different users contents (of their personal profile on pageViewProfile) get indexed as well? 2-How about if pageViewProfile needs user authentication? Sorry my questions might sound dumb but I really need to know this. Thanks a lot for your previous answers as well :) Nov 10, 2009 03:07 PM|paul.vencill|LINK 1) depends on the search engine. I don't know all the differences, but I've heard that some ignore querystring params (e.g. ?uid=user-id) and pay attention only to the domain and path. Google does pick up querystrin gparams, but generally one of the "best practices" in the industry right now is to use params just for things like sorting and filtering, but use the path (everything after the domain, eg. /my-page/something.aspx) as the way of identifying hte resource itself. Also, one of the strengths of the flexible routing offered by the new System.Web.Routing namespace si the ability to (if you have IIS configured right) drop the file extension (.aspx) off completely. Makes your URLs more user friendly, and lets you name your resources in ways more meaningful to searches as well. 2) Search engines don't have an identity on your site, so they will not index stuff that requires authentication to access. 6 replies Last post Nov 10, 2009 03:07 PM by paul.vencill
http://forums.asp.net/t/1490362.aspx
CC-MAIN-2014-52
refinedweb
643
72.46
04 April 2007 08:39 [Source: ICIS news] SINGAPORE (ICIS news)--Formosa Petrochemical has targeted a 45% rise in ethylene production this year to reach 2.758m tonnes after posting higher 2006 olefins operating profit, a company official said on Wednesday. ?xml:namespace> Operating profit at its olefins segment rose 4% to New Taiwan dollar (NT$) 18.1bn ($546.5m) from a year ago on higher volume and prices. The Taiwanese refining and chemicals major operated its crackers at more than 100% last year and ethylene production reached 1.902m tonnes. It will start up its new 1.2m tonne/year cracker in Mailiao in the second quarter. Other segments in the company did not perform well. Its refining operating profit fell 19% to NT$29.5bn even though revenue rose 19% to NT$424.7bn. Operating profit at another segment, which covers other products such as butadiene, liquefied petroleum gas (LPG) and methyl tertiary butyl ether (MTBE), fell 46% to NT$200m. At group level, its operating profit fell 13% to NT$53.4bn while revenue rose 19% to NT$529.5bn. In 2007, ?xml:namespace> Capacity of the refinery will reach 540,000 bbl/day in the second quarter of 2008. The company will also complete its 10,000 bbl/day base oil project in Mailiao at the same time. ($1 = NT$33.12)($1 = NT$33
http://www.icis.com/Articles/2007/04/04/9018427/formosa-petchem-2007-ethylene-output-to-rise-45.html
CC-MAIN-2015-22
refinedweb
229
66.23
Please confirm that you want to add Complete Guide for Custom Inspectors & Windows in Unity! to your Wishlist. Exposing properties to the editor is useful, but it can't be used to create complex things. By creating custom inspectors and editors, you'll have full control over how Unity looks and exposes itself to Game Designers and people who are there to balance the game and play around with your scripts. Your job is to make their job easier, and this can be a good start point! This course expects some proficiency in programming, so you should be comfortable with C# for Unity before you get started. This unique content will get you through the whole process of creating nice looking custom inspectors for your scripts, windows and custom properties, but it will also cover quick ways to change the appearance of a script's inspector by using variable and method attributes. Let's learn what attributes are before we actually jump in the action! Learn about the Range attribute and how you can improve your code's inspector with it. Multiline makes the string field even better! These attributes are made to make your inspector shine by calling the designer's attention to specific areas. Context menus have been a part of Unity since forever now, but there's not many people that actually know about it nor how to use it. You got lucky! Let's start off by coding a script on top of which we'll be working. This lecture goes through the process of defining a custom inspector for the script we've just created and explains a few key aspects on custom inspector creation. This lecture will take you through the process of adding elements to the inspector and also how to modify your element's Transform via the inspector itself. The simulate tab is a quick way to preview the interpolation you've just made. Also, you'll learn how to add colors to the inspector to call the attention to desired areas. Custom inspectors are not constrained to the inspector tab. From this point on, you'll start learning how to add 3D elements to the scene tab! More on how you can add elements to the scene view. This time, how to add handlers (Those things that control the Transform of your Game Object in the scene). The scene view can also be used to draw 2D elements like button or whatever else you want! Learn how in this lecture. The act of building a game should be the smoothest process during the creation, but sometimes errors may occur. In this lecture you'll learn how to avoid errors due to the namespace UnityEditor. This lecture goes back to the scripting part and doesn't directly deal with the inspector itself. You'll learn how to create a coroutine that takes your Game Object through the path you've created. Before we get in depth on custom Editor Windows, let's learn how they work1 Let's start by creating the Item base class, which we'll be editing in the editor later on! Our list of Items must be managed by a, well... Manager class. This is what we're doing in this lecture. Here's another fun bit! In this lecture you'll learn how to proceed when creating a custom Editor. Starting our Editor customization, we'll build a toolbar to choose between two ways of editing our list of Items. In this lecture you'll learn how to access the manager class from an EditorWindow. Now we're going to draw the Create tab and make the user able to add new Items to the database. In this lecture we'll check for taken IDs before we add a new item to the database plus we'll solve the non-serializable problem that affects user-created classes. This basically corrects the problem of the instances of the class not being saved by Unity so whenever you press Play or quit the program, your information is no longer there. Re-usability of code makes it way easier to draw the Edit tab. We will use the same function we've used in the last lecture to draw all the elements we have so far. After drawing all the elements we've noticed that we need a Scrollbar to let us view them all. Let's do this! In this introductory video, you'll learn about how do custom attributes (property drawers) work. Some more information on what a Property Drawer is, since you may have forgotten. Let's define our own PropertyDrawer class in this lecture. Here's everything you need to know when designing yours. I don't particularly like the way a bool displays. I'd rather have a button that turns green or red depending on its current state. Well, why won't we do this? Remember how the Range attribute receives a couple of parameters? In this lecture we'll go through the process of doing this for our own PropertyDrawer!.
https://www.udemy.com/unity-custom-inspectors-guide/
CC-MAIN-2017-39
refinedweb
847
72.36
>> JSON field in Django models Beyond Basic Programming - Intermediate Python 36 Lectures 3 hours Practical Machine Learning using Python 91 Lectures 23.5 hours Practical Data Science using Python 22 Lectures 6 hours In this article, we will see how to add JSON fields to our Django models. JSON is a simple format to store data in key and value format. It is written in curly braces. Many a time, on developer website, we need to add developer data and JSON fields are useful in such cases. First create a Django project and an app. Please do all the basic things, like adding app in INSTALLED_APPS and setting up urls, making a basic model and render its form in an HTML file. Example Install the django-jsonfield package − pip install django-jsonfield Now, let's create a model in models.py, for example − import jsonfield from django.db import models # Create your models here. class StudentData(models.Model): name=models.CharField(max_length=100) standard=models.CharField(max_length=100) section=models.CharField(max_length=100) the_json = jsonfield.JSONField() In admin.py, add the following lines − from django.contrib import admin from .models import StudentData admin.site.register(StudentData) We created a model here which has four fields, one of it is our thirdparty JSON field. Now, run these commands − python manage.py makemigrations python manage.py migrate python manage.py createsuperuser These commands will create the table and the last command will create an admin user for you. Now, you are all done. Output Go to and go to your model admin, then add an instance, you will see a field like this − - Related Questions & Answers - Making a Pickle field in Django models - Exporting models data in Django - Implementing models reversion in Django - Importing data into models in Django - Adding a DeleteView in Django - Adding translation to a model instance in Django - Add the slug field inside Django Model - How to add a Money field in Django? - How to make a Country field in Django? - Django model data to JSON data in 2 lines - How to add a text editor field in Django? - Adding dash between spaces in field name in MySQL? - How to get a JSON field in a nested JSON using Rest Assured? - MongoDB Aggregate JSON array field for the matching field of other collection? - How to get a JSON array field in a nested JSON using Rest Assured?
https://www.tutorialspoint.com/adding-json-field-in-django-models
CC-MAIN-2022-40
refinedweb
399
58.08
How to use speech.say() in different voice? - uncompleted If you check the setting in iOS you will find that voice can be spoken in different people: For example: en-US: the default is Samantha(female), there is Fred(male), and you also can download Allison, Ava, Nick... as well as Siri How can we use them in speech.say()? The propose is because that the quality of default voice of zh-cnis terrible which is read by Tian-Tian, but I check others like Ting-Ting or female Siri, the quality is much better. Does anyone know how to change the setting? Many thanks I just tested it and it's ok but I changed a little bit to find the wanted voice voices=AVSpeechSynthesisVoice.speechVoices() for i in range(0,len(voices)): print(i,voices[i]) ... #voice=AVSpeechSynthesisVoice.voiceWithLanguage_('fr-FR') voice = voices[25] - uncompleted Awesome!!!!!!!! ~~~~ Thanks a lot!!! @uncompleted Awesome = @JonB 😀 I only did a Google search "Pythonista speech voice" Remark that speech.get_languages() gives 53 elements like AVSpeechSynthesisVoice.speechVoices() but speech module does not allow, I think, to select particular voice for the same language... @omz the script here-after shows that the speech module 'knows' all the 53 languages offered by the Apple module, it could be possible to get a wanted voice by passing to the say function the language and, for instance, the voice name, isn'it? import speech from objc_util import * AVSpeechSynthesisVoice=ObjCClass('AVSpeechSynthesisVoice') l1 = speech.get_synthesis_languages() l2 = AVSpeechSynthesisVoice.speechVoices() for i in range(0,len(l1)): l = str(l2[i].description()) j = l.find('Language: ') k = l.find(', Quality:') # [AVSpeechSynthesisVoice 0x1c0a15f40] Language: ar-SA, Name: Maged, Quality: Default [com.apple.ttsbundle.Maged-compact] print('speech:',l1[i],'Objective-c:',l[j+10:k])``` - Darth Friese I can get it to change languages, and apparently Klingon is an option in pythonista. Not sure what it accomplishes, but it is fun. By default it will use the language set on the phone but you can lookup other BCP 47 language identifier. They usually look like "en-US" for American English or "de" for German. def count_down(): num = input("Number to countdown from: ") for i in range(int(num), -1, -1): speech.say(str(i), "i-klingon") print(i) time.sleep(1.0) speech.say("SoHDaq destruct ghuS", "i-klingon") if __name__ == "__main__": count_down()
https://forum.omz-software.com/topic/4706/how-to-use-speech-say-in-different-voice
CC-MAIN-2019-04
refinedweb
390
59.4
1.1 anton 1: \ Etags support for GNU Forth. 2: 1.9 anton 3: \ Copyright (C) 1995,1998: 21: 1.1 anton: 1.11 ! dvdkhlng: 1.8 pazsan 45: require search.fs 46: require extend.fs 1.7 pazsan 47: 1.1 anton 1.4 anton 79: sourcefilename last-loadfilename 2@ d<> 1.1 anton 80: if 81: #ff r@ emit-file throw 82: #lf r@ emit-file throw 1.4 anton 83: sourcefilename 2dup 1.1 anton 1.5 anton 94: current @ locals-list <> and \ not a local name 1.1 anton 95: last @ 0<> and \ not an anonymous (i.e. noname) header 96: if 97: tags-file-id >r 98: r@ put-load-file-name 99: source drop >in @ r@ write-file throw 100: 127 r@ emit-file throw 1.11 ! dvdkhlng 101: \ bl r@ emit-file throw 1.1 anton 102: last @ name>string r@ write-file throw 1.11 ! dvdkhlng 103: \ bl r@ emit-file throw 1.1 anton 104: 1 r@ emit-file throw 1.4 anton 105: base @ decimal sourceline# 0 <# #s #> r@ write-file throw base ! 1.1 anton 106: s" ,0" r@ write-line throw 107: \ the character position in the file; not strictly necessary AFAIK 108: \ instead of using 0, we could use file-position and subtract 109: \ the line length 110: rdrop 1.5 anton 111: endif ; 1.1 anton 112: 113: : (tags-header) ( -- ) 114: defers header 115: put-tags-entry ; 116: 117: ' (tags-header) IS header
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/etags.fs?annotate=1.11;sortby=log;f=h;only_with_tag=MAIN
CC-MAIN-2019-39
refinedweb
247
79.26
(For more resources related to this topic, see here.) Understanding DQL DQL is the acronym of Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details you can visit this website:. Learn more about domain-specific languages at: To better understand what it means, let's run our first DQL query. Doctrine command-line tools are as genuine as a Swiss Army knife. They include a command called orm:run-dql that runs the DQL query and displays it's result. Use it to retrieve title and all the comments of the post with 1 as an identifier: php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.bodyFROM Blog\Entity\Post p JOIN p.comments c WHERE p.id=1" It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects: - A fully qualified entity class name is used in the FROM clause as the root of the query - All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause. Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language to retrieve object graphs. Internally, Doctrine parses the DQL queries, generates and executes them through Database Abstraction Layer (DBAL) corresponding to the SQL queries, and hydrates the data structures with results. Until now, we only used Doctrine to retrieve the PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated. As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation on the following website: Using the entity repositories Entity repositories are classes responsible for accessing and managing entities. Just like entities are related to the database rows, entity repositories are related to the database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. It hides the ORM from other components of the application and makes it easier to re-use, refactor, and optimize the queries. Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit the following website: A base repository, available for every entity, provides useful methods for managing the entities in the following manner: - find($id): It returns the entity with $id as an identifier or null It is used internally by the find() method of the Entity Managers. - findAll(): It retrieves an array that contains all the entities in this repository - findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): It retrieves an array that contains entities matching all the criteria passed in the first parameter and ordered by the second parameter - findOneBy(['property1' => 'value', 'property2' => 1]): It is similar to findBy() but retrieves only the first entity or null if none of the entities match the criteria Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow this pattern: findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']). This feature uses the magical __call() PHP method. For more details visit the following website: In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them from the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and Lazy loading slows down the detailed view. A solution to this would be to create a custom repository with extra methods for executing our own queries. We will write a custom method that collates comments in the detailed view. Creating custom entity repositories Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to receive custom methods that run the DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation. Kindly perform the following steps to create a custom entity repository: - Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation like the following line of code: @Entity(repositoryClass="PostRepository") - Doctrine command-line tools also provide an entity repository generator. Type the following command to use it: php vendor/bin/doctrine.php orm:generate:repositories src/ - Open this new empty custom repository, which we just generated in the PostRepository.phpPostRepository.php file, at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments: /** * Finds a post with its comments * * @param int $id * @return Post */ public function findWithComments($id) { return $this ->createQueryBuilder('p') ->addSelect('c') ->leftJoin('p.comments', 'c') ->where('p.id = :id') ->orderBy('c.publicationDate', 'ASC') ->setParameter('id', $id) ->getQuery() ->getOneOrNullResult() ; } Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available. Getting started with Query Builder QueryBuilder is an object designed to help build the DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL queries through the getDql() method (useful for debugging) or directly use the Query object (provided by Doctrine). To increase performance, QueryBuilder caches the generated DQL queries and manages an internal state. The full API and states of the DQL query are documented on the following website: We will give an in-depth explanation of the findWithComments() method that we created in the PostRepository class. Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository. The QueryBuilder instance takes a string as a parameter. This string will be used as an alias of the main entity class. By default, all the fields of the main entity class are selected and no other clauses except SELECT and FROM are populated. The leftJoin() call creates a JOIN clause that retrieves comments associated with the posts. Its first argument is the property to join and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class). Unless the SQL JOIN clause is used, the DQL query automatically fetches the entities associated with the main entity. There is no need for keywords like ON or USING. Doctrine automatically knows whether a join table or a foreign-key column must be used. The addSelect() call appends comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName. You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements. They are more efficient than the standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL Injection attacks are automatically avoided. SQL Injection Attacks are a way to execute malicious SQL queries using user inputs that have not escaped. Let's take the following example of a bad DQL query to check if a user has a specific role: $query = $entityManager->createQuery('SELECT ur FROMUserRole ur WHERE ur.username = "' . $username . '" ANDur.role = "' . $role . '"'); $hasRole = count($query->getResult()); This DQL query will be translated into SQL by Doctrine. If someone types the following username: " OR "a"="a the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area. The proper way should be to use the following code: $query = $entityManager->createQuery("SELECT ur FROMUserRole WHERE username = :username and role = :role"); $query->setParameters([ 'username' => $username, 'role' => $role ]); $hasRole = count($query->getResult()); Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected. The orderBy() call generates an ORDER BY clause that orders results as per the publication date of the comments, older first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be made using DQL; they generally have the same name. The getQuery() call tells the Query Builder to generate the DQL query (if needed, it will get the query from its cache if possible), to instantiate a Doctrine Query object, and to populate it with the generated DQL query. This generated DQL query will be as follows: SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHEREp.id = :id ORDER BY c.publicationDate ASC The Query object exposes another useful method for the purpose of debugging: getSql(). As its name implies, getSql() returns the SQL query corresponding to the DQL query, which Doctrine will run on DBMS. For our DQL query, the underlying SQL query is as follows: SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2,p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.bodyAS body5, c1_.publicationDate AS publicationDate6, c1_.post_id ASpost_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id =c1_.post_id WHERE p0_.id = ? ORDER BY c1_.publicationDate ASC The getOneOrNullResult() method executes it, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary. Performance is something to be very careful about while using Doctrine. When set in production mode, ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and results of the queries. ORM must be configured to use one of the blazing, fast, supported systems (APC, Memcache, XCache, or Redis) as shown on the following website: We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/location, where you will find the following code snippet: $post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']); Replace the preceding line of code with the following code snippet: $post = $entityManager->getRepository('Blog\Entity\Post')-> findWithComments($_GET['id']); Filtering by tag To discover a more advanced use of the QueryBuilder and DQL, we will create a list of posts having one or more tags. Tag filtering is good for Search Engine Optimization and allows the readers to easily find the content they are interested in. We will build a system that is able to list posts that have several tags in common; for example, all the posts tagged with Doctrine and Symfony. To filter our posts using tags kindly perform the following steps: - Add another method to our custom PostRepository class (src/Blog/Entity/PostRepository.php) using the following code: /** * Finds posts having tags * * @param string[] $tagNames * @return Post[] */ public function findHavingTags(array $tagNames) { return $queryBuilder = $this ->createQueryBuilder('p') ->addSelect('t') ->join('p.tags', 't') ->where('t.name IN (:tagNames)') ->groupBy('p.id') ->having('COUNT(t.name) >= :numberOfTags') ->setParameter('tagNames', $tagNames) ->setParameter('numberOfTags',count($tagNames)) ->getQuery() ->getResult() ; } This method is a bit more complex. It takes in a parameter as an array of tag names and returns an array of posts that has all these tags. The query deserves some explanation, which is as follows: - The main entity class (automatically set by the inherited createQueryBuilder() method) is Post and its alias is the letter p. - We join the associated tags through a JOIN clause; the Tag class is aliased by t. - Thanks to where() being called, we retrieve only the posts tagged by one of the tags passed in the parameter. We use an awesome feature of Doctrine that allows us to directly use an array as a query parameter. - Results of where() are grouped by id with the call to groupBy(). - We use the aggregate function COUNT() in the HAVING clause to filter the posts that are tagged by some tags of the $tagNames array, but not all of them. - Edit the index.php file in web/ to use our new method. Here, you will find the following code: /** @var $posts \Blog\Entity\Post[] Retrieve the list ofall blog posts */ $posts = $entityManager->getRepository('Blog\Entity\Post')->findAll(); And replace the preceding code with the next code snippet: $repository = $entityManager->getRepository('Blog\Entity\Post'); /** @var $posts \Blog\Entity\Post[] Retrieve the list ofall blog posts */ $posts = isset($_GET['tags']) ? $repository-> findHavingTags($_GET['tags']) : $repository->findAll(); Now, when a GET parameter called tags exists in the URL, it is used to filter posts. Better, if several comma-separated tags are passed in, only posts with all these tags will be displayed. - Type in your favorite browser. Thanks to the fixtures we have created, posts 5 and 10 should be listed. - In the same file, find the following code: <p> <?=nl2br(htmlspecialchars($post->getBody()))?> </p> And add the list of tags as follows: <ul> <?php foreach ($post->getTags() as $tag): ?> <li> <a href="index.php?tags=<?=urlencode($tag)?>">< ?=htmlspecialchars($tag)?></a> </li> <?php endforeach ?> </ul> A smart list of tags with links to the tag page is displayed. You can copy this code and then paste it in the view-post.php file in the web/ location; or better, don't repeat yourself: create a small helper function to display the tags. Counting comments We still need to make some cosmetic changes. Posts with a lot of comments interest many readers. It would be better if the number of comments for each post was available directly from the list page. Doctrine can populate an array containing the result of the call to an aggregate function as the first row and hydrated entities as the second. Add the following method, for retrieving posts with the associated comments, to the PostRepository class: /** * Finds posts with comment count * * @return array */ public function findWithCommentCount() { return $this ->createQueryBuilder('p') ->leftJoin('p.comments', 'c') ->addSelect('COUNT(c.id)') ->groupBy('p.id') ->getQuery() ->getResult() ; } Thanks to the GROUP BY clause and the call to addSelect(), this method will return a two-dimensional array instead of an array of the Post entities. Arrays in the returned array contain two values, which are as follows: - Our Post entity at the first index - The result of the COUNT() function of DQL (the number of comments) at the second index In the index.php file at the web/ location, find the following code: $posts = $repository->findHavingTags(explode(',',$_GET['tags'])); } else { $posts = $repository->findAll(); } And replace the preceding code with the following code to use our new method: $results = $repository->findHavingTags(explode(',',$_GET['tags'])); } else { $results = $repository->findWithCommentCount(); } To match the new structure returned by findWithCommentCount(), find the following code: <?php foreach ($posts as $post): ?> And replace the preceding code with the next code snippet: <?php foreach ($results as $result): $post = $result[0]; $commentCount = $result[1]; ?> As seen previously, the use of a custom hydrator is a better practice while handling such cases. You should also take a look at Custom AST Walker as shown on the following website: Find the following code snippet: <?php if (empty($posts)): ?> And replace the preceding code with the next code snippet: <?php if (empty($results)): ?> It's time to display the number of comments. Insert the following code after the tag list: <?php if ($commentCount == 0): ?> Be the first to comment this post. <?php elseif ($commentCount == 1): ?> One comment <?php else: ?> <?= $commentCount ?> comments <?php endif ?> As the index.php file at the web/location also uses the findHavingTags() method to display the list of tagged articles, we need to update this method too. This is done using the following code: // … ->addSelect('t') ->addSelect('COUNT(c.id)') ->leftJoin('p.comments', 'c') // … Summary In this article, we have learned about DQL, its differences from SQL, and its Query Builder. We also learned about the concept of entity repositories and how to create custom ones. Even if there is a lot more to learn from these topics and from Doctrine in general, our knowledge should be sufficient to start developing complete and complex applications using Doctrine as a persistent system. Resources for Article: Further resources on this subject: - Introduction to Kohana PHP Framework [Article] - Developing an Application in Symfony 1.3 (Part 1) [Article] - FuelPHP [Article]
https://www.packtpub.com/books/content/building-queries
CC-MAIN-2017-09
refinedweb
2,901
54.52
Create be placed in your command or view directly but in a more reusable place. In a plugin we can create services, this is the place where we have to add our reusable logic. These service can be consumed from view and commands. You could see it as the core (or brain) of the feature. Example As an example, in my feature “Export to Plunker” I created three services: - Message.js - This service is used to show a notification that the plunk is generated and will open a new window - PLunker.js - This service will handle all the communication with Plunker - Project.js - In this service I collect all the files of the selected plugin You can find the full code of this project on github: Create a service This is tutorial is based on the previous two: - Create command: - Create View: There is already a service generated by the wizard. To understand all the steps, we’re going to create a new service. We’ll consume this service from the controller of our UI5 view. It’s important that you’ve already followed the blog Start creating two files, a js file and json file: In the “js” file, we can add our own logic. For example, just create a function that concatenates a value to a string and returns a simple string: In the “json” file we have to define the functions that we want to expose, they won’t be accessible if they are not defined. We can also create functions in the service for internal use only, then you don’t have to define these functions in the “json” file. The name of the service exists out of the name of the project + folder name, in this case “service” + the name of the JS file “MyFirstService” = “myfirstplugin.service.MyFirstService”. Configure all the functions with their incoming params and the return values. Configure the service in the plugin.json In the plugin.json we have to define our created service. This is required to access the service from the context object. We provide a name for the service, implementation and module: - Implements: contains the namespace to the service - Module: this is the path to the service We also need to define an interface. This will map the namespace with the path. Use the service I extended the view from my previous blog with a title and an input field: The value of the input field is connected to a property of a JSON Model which I’ve created in the controller. In the eventhandler of the button I added the following: - Get the context which gives me access to my service - Get the value from the input field using the model - Call my service passing the value from the input field. To fetch the result of the function you’ll have to use the “then” function. This is because the SAP Web IDE SDK is using promises. In the “then” function I call the messagebox to show the result. Result Fill in a name in the input field and click on the button The button will call our service and show the following text in a popup. You can find the full code of the demo feature on github: You now have used the three key components of a plugin, command, view and service. Best regards, Wouter
https://blogs.sap.com/2017/07/17/create-a-service-in-you-sap-web-ide-feature/
CC-MAIN-2021-25
refinedweb
560
69.31
class Solution(object): def canWinNim(self, n): if n % 4 is not 0: return True else: return False Or in one line: def canWinNim(self, n): return n % 4 != 0 Also your use of "is" is probably not intended. You aren't checking for identity between the integer objects 0 and n % 4, but the equality of their values. This will work sometimes (I believe) thanks to caching, but is not 100% guaranteed to work always. You phrased it wrong, they are checking for identity and not for equality. Likely it works for -5 to 256, but I don't know about a guarantee, either. And it's definitely a bad idea unless it's used on purpose and for a good reason, which I doubt here as well :-) Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/27374/python-four-line-solution
CC-MAIN-2017-47
refinedweb
145
68.4
Forum – Code of Conduct # frozen_string_literal: true # This file is used by Rack-based servers to start the application. require_relative 'config/environment' require 'rack/common_logger' console = ActiveSupport::Logger.new($stdout) console.formatter = Rails.logger.formatter console.level = Rails.logger.level #Hanami::Logger.extend(ActiveSupport::Logger.broadcast(console)) Rails.logger.extend(ActiveSupport::Logger.broadcast(console)) run Rails.application Thanks @kaikuchn I'm using named routes. This is it get '/clients', to: 'clients#index', as: :clients_index Can you think of anything else to check? It's more a theoretical question though as I think it should redirect using GET but I won't use it as redirection from 'destroy' action make sense only if a form is used for calling the deletion. Using a form for this seems weird and I ended up with AJAX hence don't need to redirect now. But that raised a question about a proper way of sending DELETE in Hanami. <form>, AJAX do we we have anything else which would look as nice as link_to for 'edit' and 'new' ? flash[:error_m] = params.error_messages. puts flash[:error_m]works and I see all messages in the server output. The template has simply this code <%= flash[:error_m] %>and nothing renders. Warning! Rack::Session::Cookie data size exceeds 4K. Warning! Rack::Session::Cookie failed to save session. Content dropped. @kaikuchn no, just tested, seems it's definitely Hanami which is trying to do a redirect keeping the original HTTP method, i.e. DELETE in case of my destroy action. This is the action code below, see I disabled halt and enabled redirect_to. Did not change anything else since I moved to AJAX for this, but the below is still OK for this test as only checking the log. clients_index_path route does work - tested separately with link_to def call(params) ClientRepository.new.delete(params[:id]) #halt 200 redirect_to routes.clients_index_path end Result: HTTP/1.1 DELETE 302 127.0.0.2 /admin/clients/8 5 {"id"=>"8"} 0.015793 HTTP/1.1 DELETE 405 127.0.0.2 /admin/clients - {} 0.007453 HTTP/1.1 GET 200 127.0.0.2 /admin/clients 1856 {} 0.016227 The last line is my test of the route with GET. So definitely in case of HTTP DELETE, route_to tries to do a redirect using the same DELETE method. Obviously it is not supported by my routes and hence 405. Question - is it a bug or a feature? Anyway seems reasonable if redirect_to could always use GET or had an option to specify the method. This is how to read those logs: So you are sending two delete requests, why ever you are doing this. I'm pretty sure that it's not something Hanami is doing, like 99%. The third request you send is then the get request I'd expect after Hanami had send your client a redirect response. The 2nd request is weird. self.body = XYZonly works in controller modules! Object#inspectas an equivalent to the var_dumpof php..? No clue what var_dumpdoes though. appfolder but that without somehow embedding one or a few of these bundlers into hanami asset management capabilities ... I don't know, I really don't get the logic! I think this was put there because at one point people wanted to work like that.. But I completely agree with you, I was never a fan of these repackaged js libraries that'd be always outdated and you didn't even get to use any of the goodness like google closure to reduce asset size. For that matter I really don't like the asset pipeline approach at all, and I am very happy that the Ruby (on Rails) community has or is moving towards letting JS tooling handle JS. Personally I only use Hanami for APIs (public or private, i.e., for frontends), so I have no clue what people do who go the classic route.
https://gitter.im/hanami/chat?at=5de3fbc01659720ca8de464e
CC-MAIN-2021-25
refinedweb
647
67.15
This is "string concatenation," and it is a bad practice: // bad practice, don't reuse! String text = "Hello, " + name + "!"; Why? Some may say that it is slow, mostly because parts of the resulting string are copied multiple times. Indeed, on every + operator, String class allocates a new block in memory and copies everything it has into it; plus a suffix being concatenated. This is true, but this is not the point here. Actually, I don't think performance in this case is a big issue. Moreover, there were multiple experiments showing that concatenation is not that slow when compared to other string building methods and sometimes is even faster. Some say that concatenated strings are not localizable because in different languages text blocks in a phrase may be positioned in a different order. The example above can't be translated to, say, Russian, where we would want to put a name in front of "привет." We will need to localize the entire block of code, instead of just translating a phrase. However, my point here is different. I strongly recommend avoiding string concatenation because it is less readable than other methods of joining texts together. Let's see these alternative methods. I'd recommend three of them (in order of preference): String.format(), Apache StringUtils and Guava Joiner. There is also a StringBuilder, but I don't find it as attractive as StringUtils. It is a useful builder of strings, but not a proper replacer or string concatenation tool when readability is important. String.format() String.format() is my favorite option. It makes text phrases easy to understand and modify. It is a static utility method that mirrors sprintf() from C. It allows you to build a string using a pattern and substitutors: String text = String.format("Hello, %s!", name); When the text is longer, the advantages of the formatter become much more obvious. Look at this ugly code: String msg = "Dear " + customer.name() + ", your order #" + order.number() + " has been shipped at " + shipment.date() + "!"; This one looks much more beautiful doesn't it: String msg = String.format( "Dear %1$s, your order #%2$d has been shipped at %3$tR!", customer.name(), order.number(), shipment.date() ); Please note that I'm using argument indexes in order to make the pattern even more localizable. Let's say, I want to translate it to Greek. This is how will it look: Αγαπητέ %1$s, στις %3$tR στείλαμε την παραγγελία σου με αριθμό #%2$d! I'm changing the order of substitutions in the pattern, but not in the actual list of methods arguments. Apache StringUtils.join() When the text is rather long (longer than your screen width), I would recommend that you use the utility class StringUtils from Apache commons-lang3: import org.apache.commons.lang3.StringUtils; String xml = StringUtils.join( "<?xml version='1.0'?>", "<html><body>", "<p>This is a test XHTML document,", " which would look ugly,", " if we would use a single line," " or string concatenation or String format().</p>" "</body></html>" ); The need to include an additional JAR dependency to your classpath may be considered a downside with this method (get its latest versions in Maven Central): <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> </dependency> Guava Joiner Similar functionality is provided by Joiner from Google Guava: import com.google.common.base.Joiner; String text = Joiner.on('').join( "WE HAVE BUNNY.\n", "GATHER ONE MILLION DOLLARS IN UNMARKED ", "NON-CONSECUTIVE TWENTIES.\n", "AWAIT INSTRUCTIONS.\n", "NO FUNNY STUFF" ); It is a bit less convenient than StringUtils since you always have to provide a joiner (character or a string placed between text blocks). Again, a dependency is required in this case: <dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> </dependency> Yes, in most cases, all of these methods work slower than a plain simple concatenation. However, I strongly believe that computers are cheaper than people. What I mean is that the time spent by programmers understanding and modifying ugly code is much more expensive than a cost of an additional server that will make beautifully written code work faster. If you know any other methods of avoiding string concatenation, please comment below.
http://www.yegor256.com/2014/06/19/avoid-string-concatenation.html
CC-MAIN-2017-51
refinedweb
696
57.57
This action might not be possible to undo. Are you sure you want to continue? Design Pattern Types of Design Patterns Creational Patterns Structural Patterns Behavioral Patterns Benefit “Design pattern is a general repeatable solution to a commonlyoccurring problem in software design.” “Design patterns are recurring solutions to design problems.” Studying design patterns is a way of studying how the “experts” do design. “Design patterns constitute a set of rules describing how to accomplish certain tasks in the realm of software development.” Problem Forces Solution Benefits Consequences Related Patterns Learning the design patterns is a multiple step process: ◦ Acceptance ◦ Recognition ◦ Internalization . Structural Patterns ◦ Structural patterns help you compose groups of objects into larger structures. . rather than having you instantiate objects directly. such as complex user interfaces or accounting data. Creational Patterns ◦ Creational patterns are ones that create objects for you. Behavioral Patterns ◦ Behavioral patterns help you define the communication between objects in your system and how the flow is controlled in a complex program. Factory Pattern Abstract Factory Pattern Singleton Pattern Builder Pattern . In simple words. you can use factory pattern where you have to create an object of any one of sub-classes depending on the data provided. if we have a super class and n sub-classes. Factory of classes. When a class does not know which class of objects it must create. and based on data provided. When to use a Factory Pattern? 1. we have to return the object of one of the sub-classes. 3. we use a factory pattern. . A class specifies its sub-classes to specify which objects to create. In programmer’s language. 2. } } . } public String getGender() { return gender. private String gender. public String getName() { return name.Sample: public class Person { public String name. out.out. Sample: public class Male extends Person { public Male(String fullName) { System.println("Hello Ms. } } public class Female extends Person { public Female(String fullNname) { System.println("Hello Mr. "+fullNname). } } . "+fullName). getPerson(args[0]. Sample: public class Factory { public static void main(String args[]) { Factory factory = new Factory(). else if(gender.equals("M")) return new Male(name). factory. String gender) { if (gender. } } . args[1]).equals("F")) return new Female(name). else return null. } public Person getPerson(String name. The names of actual implementing classes are not needed to be known at the client side. you can change the implementation from one factory to another. Because of the isolation. This means that the abstract factory returns the factory of classes. this returns such factory which later will return one of the sub-classes. . This pattern is one level of abstraction higher than factory pattern. Like Factory pattern returned one of the several sub-classes. When to use Abstract Factory Pattern? One of the main advantages of Abstract Factory Pattern is that it isolates the concrete classes that are generated. public abstract Parts getMonitor(). Sample: public abstract class Computer { public abstract Parts getRAM(). public abstract Parts getProcessor(). } . specification = specification. } } . Sample: public class Parts { public Parts(String specification) { this. } public String getSpecification() { return specification. } public Parts getProcessor() { return new Parts("Celeron"). Sample: public class PC extends Computer { public Parts getRAM() { return new Parts("512 MB"). } } . } public Parts getMonitor() { return new Parts("15 inches"). } public Parts getMonitor() { return new Parts("19 inches"). } public Parts getProcessor() { return new Parts("Intel P 3"). Sample: public class Workstation extends Computer { public Parts getRAM() { return new Parts("1 GB"). } } . } public Parts getProcessor() { return new Parts("Intel P 4"). } public Parts getMonitor() { return new Parts("17 inches"). Sample: public class Server extends Computer{ public Parts getRAM() { return new Parts("4 GB"). } } . return comp. System.out.println("RAM: "+computer.println("Processor: "+computer.getSpecification()). System.equals("Workstation")) comp = new Workstation(). Computer computer = type. else if(computerType. else if(computerType. System.equals("PC")) comp = new PC().getComputer("Server").getMonitor(). } public Computer getComputer(String computerType) { if (computerType. public static void main(String[] args) { ComputerType type = new ComputerType(). } } .getProcessor().getSpecification()).equals("Server")) comp = new Server().println("Monitor: "+computer.getRAM().out.out.getSpecification()). Sample: public class ComputerType { private Computer comp. The advantage of this static approach is that it’s easier to use. you will have to do a lot of recoding. The disadvantage of course is that if in future you do not want the class to be static anymore. This is one of the most commonly used patterns. There are some instances in the application where we have to use just one instance of a particular class. . Clients may only use the accessor function to manipulate the Singleton. Define a public static accessor function in the class.Steps: Define a private static attribute in the “single instance” class. Define all constructors to be protected or private. Do “lazy initialization” (creation on first use) in the accessor function. . } return instance. Sample: public class ClassicSingleton { private static ClassicSingleton instance = null. } } . private ClassicSingleton() { } public static ClassicSingleton getInstance() { if(instance == null) { instance = new ClassicSingleton(). getInstance().. Sample: public class Singleton { public Singleton() { ClassicSingleton instance = ClassicSingleton.//only right way ClassicSingleton anotherInstance =new ClassicSingleton().. } //wrong way } . . Builder. It separates the construction of complex objects from their representation. . as the name suggests builds complex objects from simple ones step-by-step. Sample: public interface Item { public Packing pack(). public int price(). } . } public abstract int price(). Sample: public abstract class Burger implements Item{ public Packing pack() { return new Wrapper(). } . Sample: public class VegBurger extends Burger { public int price() { return 39. } } . Sample: public class Fries implements Item { public Packing pack() { return new Envelop(). } } . } public int price() { return 25. return totalPrice. new Cola().price() + new Fries().price() + new Doll().addItems(items). Sample: public class MealBuilder { public Packing additems() { Item[] items = {new VegBurger(). new Fries(). } } .price() + new Cola(). new Doll()} return new MealBox().price(). } public int calculatePrice() { int totalPrice = new VegBurger(). Adapter Bridge Composite Decorator Facade Proxy . When one interface cannot be changed and has to be suited to the another cannot-be-changed client. The Adapter pattern is used so that two unrelated interfaces can work together. The joining between them is called an Adapter. Inheritance ◦ 2. We do that using an Adapter. This is something like we convert interface of one class into interface expected by the client. Adapter Pattern can be implemented by ◦ 1. an adapter is used so that both the interfaces can work together. Composition. . So. that the 5 Amp plug which here is the client can fit in and also the server which here is the 15 Amp socket can give the output. Sample: The Adapter is something like this. . It will be having the plug of suitable for 15 Amp socket and a socket suitable for a 5 Amp plug. The Bridge Pattern is used to separate out the interface from its implementation. Doing this gives the flexibility so that both can vary independently. . } public void switchOff() { System.println("BULB Switched ON").println("BULB Switched OFF"). } public void switchOff() { System.println("FAN Switched OFF").out.out. public void switchOff().println("FAN Switched ON"). } } .out.out. } public class Fan implements Switch { public void switchOn() { System. Sample: public interface Switch { public void switchOn(). } } public class Bulb implements Switch { public void switchOn() { System. In this pattern. you can develop tree structures for representing part-whole hierarchies. . Composite pattern can represent both the conditions. we come across components which are individual objects and also can be collection of objects. In developing applications. the Composite pattern allows you to create a tree like structure for simple and complex objects so they appear the same to the client. } } . subordinates = new Vector(). private double salary.addElement(e). setSalary(sal). } public void remove(Employee e) { subordinates. Sample: public class Employee { private String name. } public void add(Employee e) { subordinates. private Vector subordinates. double sal) { setName(name). public Employee(String name.remove(e). add(accountant2). North Zone". now we can get the tree for any employee and find out whether that employee has subordinates . 12000). 9000). Sample: private void addEmployeesToTree() { CFO = new Employee("CFO". Employee accountant4 = new Employee("Accountant4".add(accountant4). 11000). Employee headFinance1 = new Employee("Head Finance.add(headFinance1). 10000). Employee accountant2 = new Employee("Accountant2". headFinance2. West Zone". CFO. 22000). Employee accountant1 = new Employee("Accountant1". } Once we have filled the tree up.add(accountant1). 30000).add(accountant3). Employee accountant3 = new Employee("Accountant3". headFinance2.add(headFinance2). headFinance1. Employee headFinance2 = new Employee("Head Finance. CFO. headFinance1. 20000). The disadvantage is that the code maintenance can be a problem as it provides the system with a lot of similar looking small objects (each decorator). we can do this with the help of a decorator. Java Design Patterns suggest that Decorators should be abstract classes and the concrete implementation should be derived from them. . Suppose we have some 6 objects and 2 of them need a special behavior. This is also called “Wrapper”. There is however disadvantage of using decorator. The decorator pattern helps to add behavior or responsibilities to an object. Sample: public abstract class Decorator { public abstract void place(Branch branch). } } . } public class ChristmasTree { private Branch branch. public Branch getBranch() { return branch. getBranch(). Sample: public class BallDecorator extends Decorator { public BallDecorator(ChristmasTree tree) { Branch branch = tree. } public void place(Branch branch) { branch. place(branch). } } .put("ball"). the intent is to add behavior and functionality to some of the objects. In case of decorator. In case of composite objects. The decorator pattern provides functionality to objects in a more flexible way rather than inheriting from them. whether it is a simple or complex object (nodes). The decorator and adapter patterns are similar. The intent of using adapter is to convert the interface of one or more classes to suit the interface of the client program. not all the objects or adding different functionalities to each of the objects. Adapters also seem to decorate the classes. the client program treats the objects similarly. . We as users or clients create connection using the “java. the wiring. The people walking past the road can only see this glass face of the building. This is how facade pattern is used. The face hides all the complexities of the building and displays a friendly face. the implementation of which we are not concerned about.sql. the interface JDBC can be called a facade. In Java. the pipes and other complexities. Facade as the name suggests means the face of the building. It hides the complexities of the system and provides an interface to the client from where the client can access the system. They do not know anything about it.Connection” interface. . The implementation is left to the vendor of driver. This store has a store keeper. packing material. You just have access to store keeper who knows his store well. Let’s consider a store. You. as client want access to different goods. In the storage.g. as he hides the complexities of the system Store. you tell the store keeper and he takes it out of store and hands it over to you on showing him the credentials. . You do not know where the different materials are stored. Whatever you want. raw material and finished goods. Here. the store keeper acts as the facade. there are a lot of things stored e. Sample: Let’s try and understand the facade pattern better using a simple example. All that matters to subscribers is that a dial tone is provided. A subscriber is unaware of how many resources are in the pool when he or she lifts the handset to make a call. ringing generators. . The public switched telephone network is an example of a Flyweight. Sample: The Flyweight uses sharing to support large numbers of objects efficiently. There are several resources such as dial tone generators. and the call is completed. and digit receivers that must be shared between all subscribers. digits are received. This simple object is called the “Proxy” for the complex object . a simple object can represent it. If creation of object is expensive. its creation can be postponed till the very need arises and till then. The proxy pattern is used when you need to represent a complex with a simpler one. In this way. go to bank. In old days when ATMs and cheques were not available. The way we will do it is. stand in a queue and withdraw money. get withdrawal form there. Sample: Let’ say we need to withdraw money to make some purchase. or purchase straight with a cheque. . we can say that ATM or cheque in modern times act as proxies to the Bank. Then go to the shop where you want to make the purchase. get your passbook. go to an ATM and get the money. what used to be the way? Well. Chain of responsibility Command . it gets caught at the correct level. the request rises in hierarchy till some object takes responsibility to handle this request . Once get caught by the exception. we get an “Exception is unreachable” message when we try to add a catch block with the exception below the parent exception catch block. this exception is because of some bug in coding and so. it will then not look for any other exception. It will find for an Exception class and will be caught by that as both the application specific exceptions and the ArrayIndexOutOfBoundsException are sub-classes of the class Exception. So. This is precisely the reason why. Now. we have an application specific exception in the catch block. This will not be caught by that. Suppose. Sample: Suppose the code written throws an ArrayIndexOutOfBoundsException. in short. which is the base class. a module is invoked. The client invokes a particular module using a command. The client passes a request. in the earlier one. According to the command. This pattern is different from the Chain of Responsibility in a way that. This is another of the data-driven pattern. The command request maps to particular modules. the request passes through each of the classes before finding an object that can take the responsibility. this request gets propagated as a command. . The command pattern however finds the particular object according to the command and invokes only that one. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/53070104/dp
CC-MAIN-2016-50
refinedweb
2,376
53.58
Swift Package Manager just went through a massive refactoring and adds support for testing using XCTest on OSX and Linux. It is not yet available on the latest snapshot but we can always try it out by building swiftpm. Get the latest swift pm Install the latest snapshot and run the commands below to build swiftpm, should take about a minute. $ git clone $ cd swift-package-manager $ Utilities/bootstrap $ alias sb=$PWD/.build/debug/swift-build $ alias st=$PWD/.build/debug/swift-test The last two steps will create aliases to swift-build and swift-test (Yes, swiftpm outputs two exectuables now). This is not required but is convienent than using the entire path. Set up tests structure I’ll be using the package I created earlier which is a simple GET client. - Create a folder Testsin root of your swift package - Inside Testscreate another folder which will be a test-module, you can create as many test-module (ie folders) you need in case your package contains more than one target - According to the proposal it would be possible to directly write test if you have only one target - Create a .swift file to write your tests inside that test-module dir $ git clone $ cd SimpleGetClient $ mkdir Tests && cd Tests $ mkdir SimpleGetClient & SimpleGetClient $ touch SimpleGetTests.swift Write Test Cases - Import the package you want to test and XCTest - Subclass XCTestCase - The method name should begin with “test” - Here is an example : @testable import SimpleGetClient import XCTest class SimpleGetTests: XCTestCase { let client = GetClient() func testGetRequestStatusCode() { let result = client.fetch("") XCTAssertEqual(result.responseCode, "419", "Incorrect value received from server") } } Now run swift build and swift test to build the package and run the tests. Use the aliases created above to use the swift-build and swift-test we built above. $ sb --clean && sb && st Compiling Swift Module 'SimpleGetClient' (1 sources) Compiling Swift Module 'SimpleGetClienttest' (1 sources) Linking Package.xctest Test Suite 'All tests' started at 2016-02-17 20:37:56.947 Test Suite 'Package.xctest' started at 2016-02-17 20:37:56.948 Test Suite 'SimpleGetTests' started at 2016-02-17 20:37:56.948 Test Case '-[SimpleGetClienttest.SimpleGetTests testGetRequestStatusCode]' started. Test Case '-[SimpleGetClienttest.SimpleGetTests testGetRequestStatusCode]' passed (1.836 seconds). Test Suite 'SimpleGetTests' passed at 2016-02-17 20:37:58.784. Executed 1 test, with 0 failures (0 unexpected) in 1.836 (1.836) seconds Test Suite 'Package.xctest' passed at 2016-02-17 20:37:58.784. Executed 1 test, with 0 failures (0 unexpected) in 1.836 (1.836) seconds Test Suite 'All tests' passed at 2016-02-17 20:37:58.784. Executed 1 test, with 0 failures (0 unexpected) in 1.836 (1.837) seconds Tests on Linux For Linux users, theres a little more wiring up to do. First create an extension to your test case conforming to XCTestCaseProvider and return all the test methods. #if os(Linux) extension SimpleGetTests: XCTestCaseProvider { var allTests : [(String, () throws -> Void)] { return [ ("testGetRequestWithOneArg", testGetRequestWithOneArg), ("testGetRequestStatusCode", testGetRequestStatusCode), ] } } #endif Now go to the Tests directory and create a file named LinuxMain.swift and write the following : import XCTest @testable import SimpleGetClienttest XCTMain([ SimpleGetTests(), ]) ie import all test-modules by writing <test-module-dirname>test and call the constructor to all your XCTestCase subclasses inside XCTMain method. Now swift-test should work for linux too. Travis! I was able to run the test cases on Travis-CI’s Ubuntu distro using this .travis.yml : sudo: required dist: trusty before_install: - wget -q -O - | gpg --import - - wget - tar xzf swift-DEVELOPMENT-SNAPSHOT-2016-02-08-a-ubuntu14.04.tar.gz - export PATH=${PWD}/swift-DEVELOPMENT-SNAPSHOT-2016-02-08-a-ubuntu14.04/usr/bin:"${PATH}" script: - git clone - cd swift-package-manager && git checkout 151a973 && cd ../ - swift-package-manager/Utilities/bootstrap - ${PWD}/swift-package-manager/.build/debug/swift-build - ${PWD}/swift-package-manager/.build/debug/swift-test Since swift-test is not currently available in the toolchain, I built swiftpm on travis itself. (Hacks :>) The package I used is available here : PS : It was probably a little too early to write this post 😂 but things shouldn’t change a lot.
http://ankit.im/swift/2016/02/17/swift-package-manager-testing-preview/
CC-MAIN-2016-26
refinedweb
687
57.16
> > (c) does not require devfs. most distros ship without it afaik, and> > switching to it is not an overnight process, and requires devfsd to be> > useful in the real world.> > > > It does, however, not manage permissions, nor does it provide for a sane> namespace (it exposes too many internal implementation details in the> interface -- in particular, the driver becomes part of the namespace, and> devices move around between drivers regularly.)It is also very hard to tar that device file.As to devfsd well Al Viro was reporting races in it long ago that I don't believe Richard has had time to fix nor has anyone else fixed. What is the state on devfs there ?-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2001/5/14/114
CC-MAIN-2014-41
refinedweb
144
62.58
I been asked to spec an ‘Advanced Java’ training course. The list (below) contains a couple of ideas of what should be on such a course but ‘Advanced Java’ means different things to different people. Over the many years you’ve spent with Java, you’ve specialised - partly out of what interests you, and partly out of the work that’s been available. So the question is: What do you think an Advanced Java training course should contain?. I’ll narrow it down it bit - it’s got to be stuff that’s mainstream or almost mainstream (however much I’d like to cover JavaSpaces). It’s not dogmatic - i.e. it’s not the ‘Enterprise Java and nothing else’ that was common a couple of years back. But it does have to be of interest / of a level for people coding Java for 3 years or more , and who want to push their skills to the next level. (I also suspect that I’m going to learn as much from them on this course). Just as important, why do you think [insert topic here] should be included on the course? Leave a comment. Current Menu of Topics (Full Course description here) (this list is an early draft , so I’m prepared for some glaring omissions!) - What am I missing? What should I delete? Let me know. Paul , Technology in Plain English I think if you're focus isn't enterprise Java - why is it almost only enterprise products/APIs in the list?. At the level of core language features, the following are rarely adequately covered: 1) Implementing generics 2) The relationship between class loaders and namespaces as described in Liang and Bracha's paper 3) Annotations I tend to agree with Mr. Anderson here. You look at some of these topics, and they certainly are not relevant in the context of a language discussion, many even exclude programmers who are not familiar or not interested in some of these technologies.. I agree with the other posters. It is most important to focus a class on Advanced Java on attributes that are relevant to the language itself, its evolution and how to take best advantage of its features instead of concentrating on the API or technology du jour (How many competing ways can you think of to do competent O/R mapping in Java? Which one will prevail?).. Hi, I would add Performance. Which tool to use for which type of performance problem is something that a lot of developers are not that familiar with. Writing high performance code can be difficult. Regards, Markus You mention other JVM scripting languages but no mention of Groovy and Grails specifically. I think that these two technologies deserve a mention and they can lead in to a discussion of Spring and Hibernate which they use under the covers. I also think that you might want to mention continuous integration tools like Cruise Control. I think you should either change the title of the course, or create a different list of topics. For "Advanced Java" I'd expect topics about the Java language and the Java platform, like: Classloaders Concurrency (threading, locking, deadlocks, java.util.concurrent package, memory model) Performace (Profiling) Debugging/Monitoring Object Serialization in depth (many implications here that many developers ignore) Object identity (hashcode, equals, practical implications on collections and ORM) Java Security Bytecode generation (code "instrumentation") ... You should cover dynamic proxies and reflection. They are the underpinnings of many of the topics you have listed and truly are "advanced Java" topics.. Fernando is right on. Most of the stuff mentioned in your laundry list has nothing to do with Java programming. It's like comparing math and applied math. Your list is applied Java. It is not advanced Java. Fernando's list is much better. Throw in reflection and introspection while you're at it. I'd also throw a deep dig into the java.util package since most people don't really understand the datastructures in there and misuse them terribly. I agree with Fernado's list and would also add reflection to it. Trond mentioned Joshua Bloch's book Effective Java, this has to be the best Java book I have ever bought. I would also do a refresher on inheritance and other basic Object Orientated concepts. I have worked with people in the past that can write Java code but don't understand the basics concepts of overriding methods and inheritance. Paul, I would add Proxies and Java byte code enhancement with libraries like Javassist. You also need to talk about testing, including mock objects.. swing apa framework ? swing: No introduction - assume that if people don't already know roughly what AOP does then they smart enough to pick it up as you go along. A 15 minute hard-core introduction to the theory: present a single good example of a cross-cutting concern, design an aspect-oriented solution with UML diagrams of how aspects and classes relate. A 15 minute lightning tour of the Eclipse AspectJ Development Tools as the instructor writes the classes and aspects described in the introduction and runs the finished example. The remaining half hour doing a programming assignment, with the instructor available for Q&A. Any required files or eclipse projects should already be loaded on the machines in the classroom.. I agree with most of the above that you should keep it vanilla, stuff like Hibernate Spring EJB are covered in other seminars. It shouldn't be a free for all scratch the surface frenzy.) Paul, how dare you list Ruby and JRuby you should be ashamed of yourself! I could read the "Ruby Agenda" all over this post, you are shameless. :-). Paul; I agree with most of the posts here, there should be a focus on the advance topics such as Patterns, Generics, class loading to name a few. After many interviews, many of the "senior" developers can code but they do not undersand why they write a class certain way. I think java developers need to understand why stringBuffer is better than using ""+"". Thanks Rocky Thanks to everybody for their comments - you've given me a lot to think of. Several people left comments on similar topics , so here's my thoughts (and take it that I could still be wrong). Advanced Java v Applied Java apologies for the confusion. The course should be more correctly called 'Applied Java' with the focus on getting things done. That said , some of the more elegant / advanced features mentioned (e.g. Generics, Javadoc, Classloaders, Object Serialisation and Identity) are worthy of a mention. Swing (or your UI toolkit of choice) I would see either as part of a basic Java course (e.g. just enough to get some visual samples running when learning Java features) or worthy of a 5 day course in itself. Concentrate on the Core as commented, there is too much stuff here, with most likely areas to drop being IDE specific stuff (better for a mentoring session) and other scripting langages (how do you agree which one to use? do we have enough time to cover?). Final call will probably be with the people attending the course. Course Format two good , but contradictory suggestions were (i) go in at the deep end , assume that students can keep up and (ii) teach the advanced basics (e.g. Inheritence) that people should know, but too often don't. All depends on the group of people on the day , I suppose. The phrase Laundry List was used , and it's probably accurate. Taking all the suggestions of libraries to cover (Spring , Hibernate, AspectJ , Web Services, JMS etc etc), some hard decisions are going to have to be made about what is included. At least I have feedback from the 'experts' (that's you the OnJava reader) so the final course reflects more than just my own (faulty) opinions Once again , thanks , and I think the course will be better for your contributions. I would delete things like Hibernate, Web Services, JSF, Ruby, Struts, Eclipse, JDeveloper, GWT, etc, etc. I would hope to see topics like concurrency, memory usage, serialization, debugging with multiple threads, using generics, understanding custom class loaders and bytecode generation and especially being able to profile an existing application for performance. I think the list is not focus on the topic of "advanced Java". Um, how many weeks long is this course going to take to deliver? It would take months to cover that content in enough detail to be worthwhile! I agree with Fernando I agree with Fernando but I think concurrency is too far down his list - should be at the top :) Chris - you're right about the duration of the course. Advanced (applied Java) will have areas of interest to different people - the intention is to offer a menu and concentrate of what people want. Paul How about Ant? Alternatives to JSP? (e.g. Velocity). Code generation techniques are useful. Also, how about good coding techniques as opposed to advanced technologies; anything from Joshua Bloch's 'Effective Java' would be a good starting place. I believe that an 'Advanced Java' course should include the Concurrency API's. To live up to their potential the new multi-core processors are going to require threading on a scale that most 'experienced' Java professionals have never encountered before. Threads. Seriously! You'd be surprised at how many seasoned programmers get by with only a vague grasps of writing thread safe code, inter-thread communication, and the new Concurrency library. And it's not as if threads are a peripheral topic in Java -- they are right at the core of the language and most of its APIs. Eclipse and JDeveloper but not Netbeans.Hibernate but not JDO. The list seems to have more to do with current trends, fashions and third party tools/API than real Java language features. To me advanced means Generics, Threading, Unit Testing, Classloader etc. - Java, Swing and Vista. How should a developer build a quality vista interface? What controls should be used? I would also include a section (or subsection) on monitoring, management, and JMX. But maybe that enters in your section on "What's new in Java 5 and Java 6" ? It's easy to tell that the author has written many articles on Java but probably never much code. All the keywords that should go on a resume are here, but what is missing is the issues that senior engineers face everyday w.r.t memory optimization, performance, scalability and reliability.. @anonymous - 'sigh'. You could at least checked my profile before making an uninformed comment. Everybody else (who were brave enough to leave your name) - thanks for your suggestions. eliminate everything that's dependent on products that aren't part of the Sun standard distribution. All those things are not "advanced Java", they're applications of programming techniques. Showing them as examples might be nice, but not as part of the curiculum.. Add: Swing Add: Profiling Add: Writing simple and efficient code I see it like Trond Andersen. You can write a book for each of your posted topics on its own. Take topics like these from the books "Harcore Java" or "Effective Java" plus a little Refactoring (less for Refactor then for good design in the first place). I think the confusion here is the word "Advanced Java" and your proposed list. The list you provided had nothing to do with a Java programmer who wants to know advanced topics in the language and how the java binary is being configured to run. AOP #1? This is nothing but a workaround of Java's statically compiled nature. If you believe you need AOP you're programming in the wrong language. With Lua for instance, where OO is not built-in but is easy enough to implement, it is also trivial to implement the functionality that AOP gives you. I don't see any value in spending time on Ajax. It seems hugely off-topic for a Java course - any Java course, not just this one. I'd hate to think it was included just because it's a current buzzword. web server hosting Advanced Java seems to be complexed. Well all said, argued and flattered, can we have an updated list of ADVANCED JAVA CONCEPTS. Also i would like to know what is the possible age/No of years required to get to know all the listed topics for the same. I hope i am able to learn all of Java Stuff before i lie down in my grave :)
http://www.oreillynet.com/onjava/blog/2007/03/advanced_java_whats_your_opini.html
crawl-002
refinedweb
2,098
63.29
#include <BcpsSubTree.h> Inheritance diagram for BcpsSubTree: The biggest addition to the fields that already exist withink ALPS is the storage for the global list of objects that are active within that subtree. Initally, this will be implemeted as a std::set, but later on should be changed to something more efficient such as a hash table or something like that. Definition at line 42 of file BcpsSubTree.h. Definition at line 49 of file BcpsSubTree.h. Definition at line 54 of file BcpsSubTree.h. References constraintPool_, and variablePool_. Definition at line 60 of file BcpsSubTree.h. References constraintPool_. Definition at line 65 of file BcpsSubTree.h. References variablePool_. This is the list of objects that exist in the subtree. Definition at line 45 of file BcpsSubTree.h. Referenced by getConstraintPool(), and ~BcpsSubTree(). Definition at line 46 of file BcpsSubTree.h. Referenced by getVariablePool(), and ~BcpsSubTree().
http://www.coin-or.org/Doxygen/CoinAll/class_bcps_sub_tree.html
crawl-003
refinedweb
146
60.82
! My favorite is the ability to extend the Java proxy classes with Ruby specific behaviors. For example: # Extend a proxied class implementing the given java interface name # by including the given Ruby module def self.extend_proxy_with(java_interface, mod) JavaUtilities.extend_proxy java_interface do |c| c.include mod end end Calling this method like so: extend_proxy_with 'org.omg.uml.foundation.core.Classifier', ClassifierExtensionsModule will mix the ClassifierExtensionsModule into any class that implements the UML Classifier interface. Of course, these extensions only work within the Ruby world, but they sure make life easier when it come to adapting some rigid Java class hierarchy for some other purpose. The program where I used this pattern was an MDA tool that works similar to Andromda but can do without the metafacade decorator layer by just extending the Java classes directly. So not only does JRuby make it easy to use Java classes. It makes it easy to extend them at runtime as well. thing.name += "_title" Duh! Thanks Terry, I updated the code with your change. Sneaking Ruby in via the JVM is certainly good, but you are right that the real benefit of JRuby is it's access to all that Java code. There's tons of "legacy" stuff out there that you can access from JRuby; code that's too gnarly to re-write. Now it's easy to hook your JRuby front-end into all that stuff. Bonus: it all gets packaged into your .jar/.war/.ear and deploys on the same old app server. Thanks so much. Your comments help me understand what is JRuby. @Terry: I love that feature too, but the code to do it seems to have changed in the past year. With JRuby 1.1RC1, Calling include on the block's parameter fails, but it works fine without the parameter: def self.extend_proxy_with(java_interface, mod) JavaUtilities.extend_proxy java_interface do include mod end end (Thanks to The jRuby Cookbook) But you are say, that this idead is bad?, Of course, but what do you think about that?, I want to show you best movies:, Try to look here and may be you find what do you want:, Most Interested facts about that you can read here:, Most Interested facts about that you can read here:, I want to show you best movies:,
http://www.oreillynet.com/ruby/blog/2006/11/jrubys_killer_feature.html
crawl-002
refinedweb
384
65.83
442A - Borya and Hanabi Solution: Not an easy problem for me.. The first and easiest thing to say is there are 10 type of hints and hence we can do a complete search on each \(2^{10}\) combination of them. Now the hardest part is that, given a combination of hints, check whether they will allow us to differentiate amongst all the cards. For me this is not an obvious task.. One observation is that if a type of cards occurs more than once, we can consider them as one. This is because given a hint (color or value) in which the type belongs to, we will open up all cards belonging to that type, which can be considered as one set. Each type of cards belong to only one set at most, and these sets are disjoint, hence the observation is valid. So it suffices to keep track on the type of cards present. The next observation, that I can't discover myself, is that given a combination of hint, the positions of all the cards are only determinable if and only if for each pair of card, we can distinguish one from the other. Two cards are distinguishable if we have a hint that allows us to point to only one of them. If all the cards are distinguishable, then naturally we can place them to their correct order by elimination. Very clever indeed. #include <iostream> #include <cstdio> #include <algorithm> using namespace std; int mark[5][5]; int tot[10]; int N; int main(){ string s; scanf("%d",&N); int cnt = 0; for(int i=0;i<N;++i){ cin >> s; int x; if(s[0] == 'G') x = 0; if(s[0] == 'B') x = 1; if(s[0] == 'R') x = 2; if(s[0] == 'Y') x = 3; if(s[0] == 'W') x = 4; mark[x][s[1]-'1'] = 1; } int mask = (1<<10) - 1; int ans = 11; while(mask>=0) { int k = 0; for(int i=0;i<10;++i){ if(mask&(1<<i))++k; } bool ok = true; for(int a=0;a<5;++a){ for(int b=0;b<5;++b){ for(int c=0;c<5;++c){ for(int d=0;d<5;++d){ if(a==c && b==d)continue; if(mark[a][b] && mark[c][d]){ if(a==c){ if(!(mask&(1<<(b+5)) || mask&(1<<(d+5)))){ ok = false; } } else { if(b==d){ if(!((mask&(1<<a)) || (mask&(1<<c)))){ ok = false; } }else{ if(!((mask&(1<<a)) || (mask&(1<<c)) || (mask&(1<<(b+5))) || (mask&(1<<(d+5))))){ ok = false; } } } } if(!ok)break; } } } } if(ok){ ans = min(ans,k); } --mask; } printf("%d\n",ans); return 0; }
https://abitofcs.blogspot.com/2015/01/codeforces-442a-borya-and-hanabi.html
CC-MAIN-2018-13
refinedweb
442
76.56
If. Recently I also saw this on Twitter: In #React-land: is it legit to have a component that only *does* stuff, but isn't visible? i.e. for setting cookie from a dispatched redux action, or kick off a background task, etc.— @rem (@rem) November 30, 2017 The idea is interesting so I decided to experiment and see the pros and cons. Imagine how we add/compose functionality with markup only. Instead of doing it in a JavaScript function we just drop a tag. But let’s do couple of examples and see how it looks like. No matter what we use for our React applications we always have that mapping between logic layer and rendering layer. In the Redux land this is the so called connect function where we say map this portion of the state to props or map this actions to this props. function Greeting({ isChristmas }) { return ( <p> { this.props.isChristmas ? 'Merry Christmas' : 'Hello' } dear user! </p> ); } const mapStateToProps = state => ({ isChristmas: state.calendar.isChristmas }); export default connect(mapStateToProps)(Greeting); isChristmas is just a boolean for Greeting. The component doesn’t know where this boolean is coming from. We may easily extract the function into an external file which will make it completely blind for Redux and friends. That is fine and it works well. But what if we have the following: import IsChristmas from './IsChristmas.jsx'; export default function Greeting() { return ( <div> <IsChristmas> { answer => answer ? 'Merry Christmas dear user!' : 'Hello dear user!' } </IsChristmas> </div> ); } Now Greeting does not accept any properties but still does the same job. It is the IsChristmas component having the wiring and fetching the knowledge from the state. Then we have the render props pattern to make the decision what string to render. // IsChristmas.jsx const IsChristmas = ({ isChristmas, children }) => children(isChristmas); export default connect( state => ({ isChristmas: state.calendar.isChristmas }) )(IsChristmas); Using this technique we are shifting the dependency of the state to an external component. Greeting becomes a composition layer with zero knowledge of the application state. This example is a simple one and looks pointless. Let’s go with a more complicated scenario: function UserProfile() { return ( <UserDataProvider>{ user => ( <ActionsProvider>{ actions => ( <section> Hello, { user.fullName }, please <a onClick={ actions.purchase }>order</a> here. </section> ) }</ActionsProvider> ) }</UserDataProvider> ) } We have two providers the role of which is to deliver (a) some data for the current user and (b) a redux action creator purchase so we can fire it when the user click on the order link. These providers are nothing more then functions that use the children prop as a regular function: // UserDataProvider.jsx function UserDataProvider({ children }) { return children({ fullName: 'Jon Snow'}); } connect(state => ({ user: state.user })) (UserDataProvider); // ActionsProvider.jsx function ActionsProvider({ children }) { return children({ purchase: () => alert('Woo') }); } connect(null, dispatch => ({ purchase: () => dispatch(purchaseActionCreator()) })) (UserDataProvider); This idea shifts the dependencies resolution into JSX syntax which to be honest I really like. We don’t have to know about wiring and on a later stage we may completely swap the provider by just re-implementing the component. For example in the code above if we say that the user’s data comes from the cookie and not from a Redux’s store we may just change the body of UserDataProvider. Of course I do see some problems with this approach. First, testing wise we still need the same setup to make our main component testable. UserProfile still needs the Redux stuff because its internal components are using them. While if we do the wiring directly to UserProfile we will get user and purchase as props and we could mock them. Second, the code looks a little bit ugly if we need to use the render props pattern. Overall, I don’t know :) The idea seems interesting but as with most of the patterns can not be applied to every case. Let’s see how it evolves and I will post an update soon.
http://outset.ws/blog/article/react-markup-as-function
CC-MAIN-2019-04
refinedweb
648
56.96
import itertools def overlap(a, b, min_length=3): """ Return length of longest suffix of 'a' matching a prefix of 'b' that is at least 'min_length' characters long. If no such overlap exists, return 0. """ start = 0 # start all the way at the left while True: start = a.find(b[:min_length], start) # look for b's suffx in a if start == -1: # no more occurrences to right return 0 # found occurrence; check for full suffix/prefix match if b.startswith(a[start:]): return len(a)-start start += 1 # move just past previous match def scs(ss): """ Returns shortest common superstring of given strings, which must be the same length """ shortest_sup = None for ssperm in itertools.permutations(ss): sup = ssperm[0] # superstring starts as first string for i in range(len(ss)-1): # overlap adjacent strings A and B in the permutation olen = overlap(ssperm[i], ssperm[i+1], min_length=1) # add non-overlapping portion of B to superstring sup += ssperm[i+1][olen:] if shortest_sup is None or len(sup) < len(shortest_sup): shortest_sup = sup # found shorter superstring return shortest_sup # return shortest scs(['BAA', 'AAB', 'BBA', 'ABA', 'ABB', 'BBB', 'AAA', 'BAB']) 'BAAABABBBA' scs(['ABCD', 'CDBC', 'BCDA']) 'ABCDBCDA'
http://nbviewer.jupyter.org/github/BenLangmead/comp-genomics-class/blob/master/notebooks/CG_SCS.ipynb
CC-MAIN-2018-51
refinedweb
195
52.33
SpreadJS Hotfix 11.2.3 is now available for download in the DevChannel and on the download page. Bugs fixed in 11.2.3 - 261277: Fixed a bug where pressing the backspace key with a floating object selected makes the active cell editable. - 261743: Fixed an issue where importing picture borders is different from Excel. - 261672: Fixed a bug where the “Filter by Color” UI doesn’t show correctly with JP culture. - 261784: Fixed an issue where clicking “Check all” or “Uncheck all” will cause the browser to stop responding. - 261622: Fixed a bug that makes conditional formatting borders visible after exporting. - 261559: Fixed an issue where SpreadJS is unable to import Combo Charts from Excel. - 261560: Fixed a bug that makes Excel files unable to load when namespaces are defined in XML nodes. - 261429: Fixed an issue where importing an Excel file looks different in SpreadJS.
https://www.grapecity.com/blogs/spreadjs-hotfix-11-2-3
CC-MAIN-2020-10
refinedweb
147
75.3
Royal & Sun Alliance Savings Related Name of scheme: Share Option Scheme (now including the 2009 RSA Sharesave Plan) Period of From: 1 July To: 31 December return: 2013 2013 Balance of unallotted securities under scheme(s) from previous 12,559,829 shares of 27.5p each return: Plus: The amount by which the block scheme(s) has been increased since Nil the date of the last return (if any increase has been applied for): Less: Number of securities issued/allotted under scheme(s) 2,526,303 shares of 27.5p each during period (see LR3.5.7G): Equals: Balance under scheme(s) not yet issued/allotted at end of 10,033,526 shares of 27.5p each.
http://www.bloomberg.com/article/2014-01-02/aFVJPy.zbHyE.html
CC-MAIN-2015-35
refinedweb
117
54.56
2014-04-25 13:31:47 8 Comments I am writing a program that accepts an input from the user. #note: Python 2.7 users should use `raw_input`, the equivalent of 3.X's `input` age = int(input("Please enter your age: ")) if age >= 18: print("You are able to vote in the United States!") else: print("You are not able to vote in the United States.") The program works as expected as long as the the user enters meaningful data. C:\Python\Projects> canyouvote.py Please enter your age: 23 You are able to vote in the United States! But it fails if the user enters invalid data: C:\Python\Projects> canyouvote.py Please enter your age: dickety six Traceback (most recent call last): File "canyouvote.py", line 1, in <module> age = int(input("Please enter your age: ")) ValueError: invalid literal for int() with base 10: 'dickety six' Instead of crashing, I would like the program to ask for the input again. Like this: C:\Python\Projects> canyouvote.py Please enter your age: dickety six Sorry, I didn't understand that. Please enter your age: 26 You are able to vote in the United States! How can I make the program ask for valid inputs instead of crashing when non-sensical data is entered? How can I reject values like -1, which is a valid int, but nonsensical in this context? Related Questions Sponsored Content 7 Answered Questions [SOLVED] How to have user true/false input in python? 17 Answered Questions [SOLVED] How can I sanitize user input with PHP? - 2008-09-24 20:20:39 - Brent - 513945 View - 1107 Score - 17 Answer - Tags: php security xss sql-injection user-input 1 Answered Questions [SOLVED] Write a program which gives an input from the user by asking 'Give me an input:' - 2019-04-10 07:36:43 - Eric - 53 View - 0 Score - 1 Answer - Tags: python-3.x 1 Answered Questions Validating user input program syntax eror - 2018-02-12 15:31:55 - jercai - 28 View - 1 Score - 1 Answer - Tags: python-3.x 2 Answered Questions [SOLVED] Validating user's input for only int or double values - 2018-01-28 16:40:00 - Java Programer - 57 View - 0 Score - 2 Answer - Tags: java user-input 2 Answered Questions [SOLVED] Continuously ask for a user input - 2017-10-05 16:55:37 - Meme Boi - 324 View - 0 Score - 2 Answer - Tags: java string loops if-statement while-loop 4 Answered Questions [SOLVED] user input check int only - 2012-09-19 02:24:10 - BubbleTree - 10840 View - 2 Score - 4 Answer - Tags: java user-input @Rohail 2019-12-24 03:33:56 The simple solution would be: Explanation of above code: In order for a valid age,it should be positive and should not be more than normal physical age,say for example maximum age is 120. Then we can ask user for age and if age input is negative or more than 120,we consider it invalid input and ask the user to try again. Once the valid input is entered, we perform a check (using nested if-else statement) whether the age is >=18 or vice versa and print a message whether the user is eligible to vote @Kevin 2014-04-25 13:31:47 The simplest way to accomplish this would be to put the inputmethod in a while loop. Use continuewhen you get bad input, and breakout of the loop when you're satisfied. When Your Input Might Raise an Exception Use tryand exceptto detect when the user enters data that can't be parsed. Implementing Your Own Validation Rules If you want to reject values that Python can successfully parse, you can add your own validation logic. Combining Exception Handling and Custom Validation Both of the above techniques can be combined into one loop. Encapsulating it All in a Function If you need to ask your user for a lot of different values, it might be useful to put this code in a function, so you don't have to retype it every time. Putting It All Together You can extend this idea to make a very generic input function: With usage such as: Common Pitfalls, and Why you Should Avoid Them The Redundant Use of Redundant inputStatements This method works but is generally considered poor style: It might look attractive initially because it's shorter than the while Truemethod, but it violates the Don't Repeat Yourself principle of software development. This increases the likelihood of bugs in your system. What if you want to backport to 2.7 by changing inputto raw_input, but accidentally change only the first inputabove? It's a SyntaxErrorjust waiting to happen. Recursion Will Blow Your Stack If you've just learned about recursion, you might be tempted to use it in get_non_negative_intso you can dispose of the while loop. This appears to work fine most of the time, but if the user enters invalid data enough times, the script will terminate with a RuntimeError: maximum recursion depth exceeded. You may think "no fool would make 1000 mistakes in a row", but you're underestimating the ingenuity of fools! @vpibano 2017-01-03 02:02:47 Its fun reading it with many examples, kudos. Underrated lesson: "Don't underestimate the ingenuity of fools!" @erekalper 2018-02-02 15:58:58 Not only would I have upvoted both the Q&A anyway, as they're great, but you sealed the deal with "dickety six". Well done, @Kevin. @Solomon Ucko 2019-04-28 02:53:37 Don't estimate the ingenuity of fools... and clever attackers. A DOS attack would be easiest for this sort of thing, but others may be possible. @Georgy 2019-05-10 20:17:22 Using Click: Click is a library for command-line interfaces and it provides functionality for asking a valid response from a user. Simple example: Note how it converted the string value to a float automatically. Checking if a value is within a range: There are different custom types provided. To get a number in a specific range we can use IntRange: We can also specify just one of the limits, minor max: Membership testing: Using click.Choicetype. By default this check is case-sensitive. Working with paths and files: Using a click.Pathtype we can check for existing paths and also resolve them: Reading and writing files can be done by click.File: Other examples: Password confirmation: Default values: In this case, simply pressing Enter (or whatever key you use) without entering a value, will give you a default one: @Georgy 2019-05-10 16:47:47 Functional approach or "look mum no loops!": or if you want to have a "bad input" message separated from an input prompt as in other answers: How does it work? itertools.chainand itertools.repeatwill create an iterator which will yield strings "Enter a number: "once, and "Not a number! Try again: "an infinite number of times: replies = map(input, prompts)- here mapwill apply all the promptsstrings from the previous step to the inputfunction. E.g.: filterand str.isdigitto filter out those strings that contain only digits: And to get only the first digits-only string we use Other validation rules: String methods: Of course you can use other string methods like str.isalphato get only alphabetic strings, or str.isupperto get only uppercase. See docs for the full list. Membership testing: There are several different ways to perform it. One of them is by using __contains__method: Numbers comparison: There are useful comparison methods which we can use here. For example, for __lt__( <): Or, if you don't like using dunder methods (dunder = double-underscore), you can always define your own function, or use the ones from the operatormodule. Path existance: Here one can use pathliblibrary and its Path.existsmethod: Limiting number of tries: If you don't want to torture a user by asking him something an infinite number of times, you can specify a limit in a call of itertools.repeat. This can be combined with providing a default value to the nextfunction: Preprocessing input data: Sometimes we don't want to reject an input if the user accidentally supplied it IN CAPS or with a space in the beginning or an end of the string. To take these simple mistakes into account we can preprocess the input data by applying str.lowerand str.stripmethods. For example, for the case of membership testing the code will look like this: In the case when you have many functions to use for preprocessing, it might be easier to use a function performing a function composition. For example, using the one from here: Combining validation rules: For a simple case, for example, when the program asks for age between 1 and 120, one can just add another filter: But in the case when there are many rules, it's better to implement a function performing a logical conjunction. In the following example I will use a ready one from here: Unfortunately, if someone needs a custom message for each failed case, then, I'm afraid, there is no pretty functional way. Or, at least, I couldn't find one. @Locane 2019-09-05 16:32:10 What a thorough and wonderful answer, the explanation breakdown was great. @Austin 2019-09-06 03:53:35 Using your style, how would one go about stripping whitespace and lower-casing the input for membership testing? I don't want to create a set that must include both upper and lowercase examples. I would also like to allow for whitespace input mistakes. @Georgy 2019-09-06 13:19:06 @Austin I added a new section on preprocessing. Take a look. @Ep1c1aN 2019-07-01 09:36:20 You can always apply simple if-else logic and add one more iflogic to your code along with a forloop. This will be an infinite loo and you would be asked to enter the age, indefinitely. @Georgy 2019-07-01 10:25:41 This doesn't really answer the question. The question was about getting a user input until they give a valid response, not indefinitely. @Roko C. Buljan 2019-04-15 00:05:49 Persistent user input using recursive function: String Integer and finally, the question requirement: @aaveg 2015-06-28 23:29:47 Though the accepted answer is amazing. I would also like to share a quick hack for this problem. (This takes care of the negative age problem as well.) P.S. This code is for python 3.x. @PM 2Ring 2016-01-31 08:12:08 Note that this code is recursive, but recursion isn't necessary here, and as Kevin said, it can blow your stack. @aaveg 2016-02-03 08:58:15 @PM2Ring - you are right. But my purpose here was just to show how "short circuiting" can minimise (beautify) long pieces of code. @GP89 2017-05-16 22:29:33 Why would you assign a lambda to a variable, just use definstead. def f(age):is far clearer than f = lambda age: @aaveg 2017-05-16 23:17:04 In some cases, you may need the age just once and then there is no use of that function. One may want to use a function and throw it away after the job is done. Also, this may not be the best way, but it definitely is a different way of doing it (which was the purpose of my solution). @Tytire Recubans 2019-07-04 20:04:31 @aaveg how would you turn this code to actually save the age provided by the user? @Tytire Recubans 2019-07-04 20:16:00 found it, just return the age: f=lambda age: (age.isdigit() and (int(age)>=18 and age)) or \ f(input("invalid input. Try again\nPlease enter your age: ")) print((input("Please enter your age: "))) @Siddharth Satpathy 2018-12-18 06:17:12 Good question! You can try the following code for this. =) This code uses ast.literal_eval() to find the data type of the input ( age). Then it follows the following algorithm: Here is the code. @João Manuel Rodrigues 2018-11-28 14:52:00 Building upon Daniel Q's and Patrick Artner's excellent suggestions, here is an even more generalized solution. I opted for explicit ifand raisestatements instead of an assert, because assertion checking may be turned off, whereas validation should always be on to provide robustness. This may be used to get different kinds of input, with different validation conditions. For example: Or, to answer the original question: @Daniel Q 2018-11-19 21:19:00 Here's a cleaner, more generalized solution that avoids repetitive if/else blocks: write a function that takes (Error, error prompt) pairs in a dictionary and do all your value-checking with assertions. Usage: @Patrick Artner 2018-11-08 11:53:31 One more solution for using input validation using a customized ValidationErrorand a (optional) range validation for integer inputs: Usage: Output: @Saeed Zahedian Abroodi 2017-10-24 06:28:58 Use "while" statement till user enter a true value and if the input value is not a number or it's a null value skip it and try to ask again and so on. In example I tried to answer truly your question. If we suppose that our age is between 1 and 150 then input value accepted, else it's a wrong value. For terminating program, the user can use 0 key and enter it as a value. @Steven Stip 2016-01-14 12:43:55 Why would you do a while Trueand then break out of this loop while you can also just put your requirements in the while statement since all you want is to stop once you have the age? This would result in the following: this will work since age will never have a value that will not make sense and the code follows the logic of your "business process" @user9142415 2018-01-03 00:59:37 You can make the input statement a while True loop so it repeatedly asks for the users input and then break that loop if the user enters the response you would like. And you can use try and except blocks to handle invalid responses. The var variable is just so that if the user enters a string instead of a integer the program wont return "You are not able to vote in the United States." @Pratik Anand 2017-04-30 09:29:28 Try this one:- @Mangu Singh Rajpurohit 2016-11-03 07:49:29 You can write more general logic to allow user to enter only specific number of times, as the same use-case arises in many real-world applications. @Hoai-Thu Vuong 2017-03-01 08:49:03 you forget to increase the iCount value after each loop @ojas mohril 2016-06-23 10:34:14 @2Cubed 2016-05-30 20:47:55 While a try/ exceptblock will work, a much faster and cleaner way to accomplish this task would be to use str.isdigit(). @cat 2016-01-31 03:47:08 So, I was messing around with something similar to this recently, and I came up with the following solution, which uses a way of getting input that rejects junk, before it's even checked in any logical way. read_single_keypress()courtesy You can find the complete module here. Example: Note that the nature of this implementation is it closes stdin as soon as something that isn't a digit is read. I didn't hit enter after a, but I needed to after the numbers. You could merge this with the thismany()function in the same module to only allow, say, three digits.
https://tutel.me/c/programming/questions/23294658/asking+the+user+for+input+until+they+give+a+valid+response
CC-MAIN-2020-16
refinedweb
2,623
59.33
A set of Tkinter widgets to display readonly text and code. Project Description A set of Tkinter widgets for displaying readonly text and code. Getting Started tkReadOnly can be installed from PyPI: pip install tkreadonly ReadOnlyText An extension of the ttk.Text widget that disables all user editing. The builtin ttk.Text widget doesn’t have a “readonly” mode. You can disable the widget, but this also disables selection and other mouse events, and it changes the color scheme of the text. This widget captures and discards all insertion and deletion events on the Text widget. This allows the widget to look and behave like a normal ttk.Text widget in all other regards. Arguments ReadOnlyText takes the same arguments as the base ttk.Text widget. Usage Usage of ReadOnlyText is the same as usage for the base ttk.Text widget. Example: from Tkinter import * from tkreadonly import ReadOnlyText # Create the main Tk window root = Tk() # Create a main frame main_frame = Frame(root) main_frame.grid(column=0, row=0, sticky=(N, S, E, W)) # Put a ReadOnlyText widget in the main frame read_only = ReadOnlyText(main_frame) read_only.grid(column=0, row=0, sticky=(N, S, E, W)) # Add text to the end of the widget. read_only.insert(END, 'Hello world') # Run the main loop root.mainloop() ReadOnlyCode A composite widget that lets you display line number-annotated code, with a vertical scrollbar. The syntax highlighting will be automatically guessed from the filename and/or file contents. Arguments style The Pygments style sheet to use. Default is monokai. Attributes filename The filename currently being displayed. If you set this attribute, the path you provide will be loaded into the code window. line The current line of the file. The current line will be highlighted. If you set this attribute, any existing current line will be cleared and the new line highlighted. Methods refresh() Force a reload of the current file. line_bind(sequence, func) Bind the func event handler to the given event sequence on a line number. line attribute that describes the line that generated the event. name_bind(sequence, func) Bind func event handler to the given event sequence on a token in the code. name attribute that describes the token that generated the event. Usage Example: from Tkinter import * import tkMessageBox from tkreadonly import ReadOnlyCode # Create the main Tk window root = Tk() # Create the main frame main_frame = Frame(root) main_frame.grid(column=0, row=0, sticky=(N, S, E, W)) # Create a ReadOnlyCode widget in the main frame read_only = ReadOnlyCode(main_frame) read_only.grid(column=0, row=0, sticky=(N, S, E, W)) # Show a particular file read_only.filename = '/path/to/file.py' # Highlight a particular line in the file read_only.line = 5 # Set up a handler for a double click on a line number def line_handler(event): tkMessageBox.showinfo(message='Click on line %s' % event.line) read_only.line_bind('<Double-1>', line_handler) # Set up a handler for a single click on a code variable def name_handler(event): tkMessageBox.showinfo(message='Click on token %s' % event.name) read_only.name_bind('<Button-1>', name_handler) # Run the main event loop root.mainloop() Known. Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/tkreadonly/
CC-MAIN-2018-17
refinedweb
538
59.6
When I enter 1 vowel the ans. is 1 vowel and 2 consonants. When I enter 1 vowel and 1 consonant the ans. is 1 v & 2 cons. When I enter 1 consonant the ans. is 0 v & 1 cons. If I enter the sentence: Welcome to Foothill. the ans. is 7 vowels & 8 consonants. The vowels are correct, but the consonants should be 10. I have tried several ways (for loop w/ if else & else and putting the control stat. within the braces, I have tried Strings & arrays but my knowledge is lacking, for ex. on the array used to count the chars my ans would give me a number value. I have reread my text book on loops & control statements several times however I must be missing the key concept to take care of my problem. Code java: import java.util.Scanner; public class NewFoothill { public static void main(String [] args) throws Exception { Scanner input = new Scanner (System.in); int count = 0; System.out.println(" Enter the String. "); String s1 = input.nextLine(); s1 = s1.toUpperCase(); System.out.println("======RESULT======" + s1); s1 = s1.toLowerCase(); System.out.println("======RESULT======" + s1); System.out.println(" String s1 "); for (int i = 0; i < s1.length(); i++) { char c = s1.charAt(i); if ( c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u' ) { count++; } } System.out.println(" There are " + " " + count + " " + " vowels. "); for (int i = 0; i < s1.length(); i++) { char c = s1.charAt(i); if ( c != 'a' || c != 'e' || c != 'i' || c != 'o' || c != 'u' ) { count++; break; } } System.out.println(" There are ' + " " + count + " " + " consonants. "); } }
http://www.javaprogrammingforums.com/%20loops-control-statements/15034-java-loops-count-vowels-consonants-usinf-logic-main-printingthethread.html
CC-MAIN-2015-32
refinedweb
260
80.48
Most of the widgets are organized in a big main window which is divided into four parts: the MainArea and three panels (which are organized by the PanelManager) containing one or more smaller widgets: MainArea PanelManager The panels primarily contain widgets interacting with the 3D window. The user interface of OpenStructure supports drag and drop events. Every file format that is supported by Openstructure can be opened by dragging and dropping it on the main window. When a Python script (ending with .py) is being dropped on the UI, the script will be executed. For any other file type (for example PDB files, images and density maps), OpenStructure will try to load the file and display it in the 3D window, in the data viewer for images or in the sequence viewer for sequences. Perspective The perspective manages the layout of the widgets inside the main window. It contains important classes which itself manages again a sub part of the whole layout. You can get the active perspective object from the gosty app: app = gui.GostyApp.Instance() perspective = app.perspective GetMainArea Returns the main area which is used in this perspective. GetMenu Get QMenu that corresponds to the given name. If it does not exist, it will be created and returned. QMenu GetMenuBar Returns the Menubar of the Application. Can be used to add some menupoints. QMenuBar GetPanels The PanelManager class organizes all the widgets in the side panels. Returns the PanelManager instance which is used in this perspective. HideAllBars Hides all side bars. Can be used if the MainArea should be expanded to full size. StatusMessage Set a status message. This method can also be called from Qt as slot. str The MainArea is a mdi (multi document interface). Therefore it’s possible to display multiple widgets in it. The following example demonstrates how to add a widget to the MDI area: from PyQt5 import QtWidgets app = gui.GostyApp.Instance() main_area = app.perspective.main_area label = QtWidgets.QLabel("Hello World") main_area.AddWidget("The beginning..", label) from PyQt5 import QtWidgets app = gui.GostyApp.Instance() main_area = app.perspective.main_area label = QtWidgets.QLabel("Hello World") main_area.AddWidget("The beginning..", label) It is implemented as a MDI (multi document interface). This allows you to add custom widgets to it. AddPersistentWidget Add a widget whose geometry is preserved across application relaunches. For widgets that are volatile, use AddWidget() If tabbed mode is enabled, the widget geometry is ignored. AddWidget() QWindowState QWidget Add a widget whose geometry is preserved across application relaunches For widgets that are volatile, use #AddWidget() If tabbed mode is enabled, the widget geometry is ignored. int AddWidget Add volatile widget. ShowSubWindow Display the given widget inside the main area. This method can be used to make a widget visible that has been added to the mdi area. This method should only be called if you are sure, that the widget has been added to the main area. Otherwise, there might be unexpected behavior! HideSubWindow Brief hides the given widget inside the main area. This method can be used to hide a widget that has been added to this mdi area. This method should only be called if you are sure, that the widget has been added to the main area. Otherwise, there might be unexpected behavior! EnableTabbedMode Brief switch between free window and tabbed window mode. bool Each side panel can have different view modes. View modes can display the widgets which are held by the Side Panel in a different style. Every panel has the splitter and the tabbed view mode. The view mode can be changed in the Window menu of the Menubar: The widgets which are held by a Side Panel can be moved to another position in the panel or even to another side panel. The widget can be moved by simply clicking on the border of the widget and drag and drop it to the desired position. The drag and drop feature is currently supported by the splitter as well as the tabbed view mode. The Left-, Bottom- and Right Panel are organized by the PanelManager. It is only possible to display a widget which is in the widget pool of the PanelManager class. Once a widget is in the pool all the methods of the PanelManager class can be used to display/hide the widget in any position of the panels. OpenStructure remembers the size and location of a Widget and thus OpenStructure should look the same after restarting it. The following example shows, how to add a PyQt Widget to the widget pool and finally display it in the right side bar:) Class which organizes all widgets which are in the side panels This class handles all side bar widgets. It can be used to display, hide or move a widget to a PanelBar. There are three Bars (left, bottom, right) which are organized by this class. Whenever a widget is being removed or added it checks first if the widget type is known and if there are available instances. PanelBar Display a Widget in a PanelBar. With this method you can add a widget to the given PanelBar. The widget which finally will be added to the gui will be created from the WidgetRegistry. If the WidgetPool does not know the class name of the given widget or if there are no instances left, nothing will happen. PanelPosition AddWidgetByName Display a Widget in a PanelBar Same as AddWidget() AddWidgetToPool Add a widget to the widget pool. The widget must already be in the WidgetRegistry. If you are not sure if the Widget is in the WidgetRegistry, use the other AddWidgetToPool Method instead. Add a widget to the widget pool. Same as AddWidgetToPool() AddWidgetToPool() Widget The GetMenu method returns a QMenu reference, which contains various actions. The action states will be updated automatically. Returns a reference to a QMenu which can be used for example in a QMenuBar. QObject GetQObject Get the SIP-QObject (QObject), learn more about Mixing PyQt and C++ Widgets. RemoveWidget Remove a Widget out of a PanelBar The widget will be removed if it is in a PanelBar This enum indicates the Position of the Panel It is really straightforward to add a custom menupoint. Since the menubar is exported to Python it is even easier to create such e menupoint. The following example describes how this is done within Python and PyQt: from PyQt5 import QtWidgets menu_bar = gui.GostyApp.Instance().perspective.GetMenuBar() test_action = QtWidgets.QAction('Test Menu Point', menu_bar) test = menu_bar.addMenu('&Test') test.addAction(test_action) The MenuBar class is a normal Qt QMenubar (see the Qt Documentation for more information about QMenubar). By getting the Menubar from the perspective, it is automatically converted to a SIP Object which can be used within python. With our Inspector Gadget it is straightforward to modify rendering and coloring options of scene objects without using the keyboard: The render and coloring options affect only the currently selected objects of the scene win. The Shortcut Ctrl+I toggles the visibility of the inspector. Enter search terms or a module, class or function name. gui – Graphical User Interface gui Mixing PyQt and C++ Widgets
https://openstructure.org/docs/2.1/gui/layout/
CC-MAIN-2021-10
refinedweb
1,189
65.83
Developers tend to divide along language boundaries. Once we know a programming language, we identify ourselves by it -- we're "a C++ programmer," "a Delphi developer," etc. One explanation for this tendency to affiliate is that old barrier to change that naturally exists between programming languages: familiarity. People who have spent time learning one set of rules are naturally reluctant to put them aside in favor of another set. As a wise, green little man once said, "You must unlearn what you have learned." The key is applicability. I prefer to think of each programming language as a specialized tool -- and I recognize that a hammer specialist does not make a good carpenter. I find C++, Delphi, and Java to all be useful languages, and I even apply a little VB when appropriate. The four rules of software modeling -- identity, interface, ownership, and dependency -- have helped me to understand and learn each new language quickly. The rule of identity, for instance, helps me understand the differences between the type systems of C++ and Delphi. C++ pointers and references C++ extends the type system of its ancestor language. C offers a set of built-in types, including integers, characters, and floating-point numbers. It also provides constructs, such as structures and unions, with which a developer can compose application-specific types. Declarations A variable declared as any built-in or composite type represents a unique identity; during compilation, the compiler automatically allocates space in the variable's name. During its life span, the variable represents only one object, the owner of which is defined by the variable's scope. For example: struct { int n; } s; /* The structure identified by s owns the integer identified by n. The integer exists only within one structure, and only so long as the structure exists. */ Pointers A pointer in C (and, by extension, C++) does not represent a unique identity. It instead refers to the identity of an object allocated independent of the pointer itself. Such an object could be allocated automatically, as in this example: int n; int *p = &n; The object could also be allocated manually: int *p = (int *)malloc( sizeof(int) ); free( p ); A pointer's scope, unlike a value's, does not confer ownership. int n; struct { int *p; } s1, s2; s1.p = &n; s2.p = &n; /* Neither s1 nor s2 owns n. */ The behavior of C's (and C++'s ) assignment and comparison operators depends on whether they're applied to values or pointers. The assignment operator (=) copies state when applied to values and identity when applied to pointers. Similarly, the comparison operators (== and !=) compare state when applied to values and identity when applied to pointers: = == != int n1, n2; int *p1, *p2; n1 = 5; /* Copy the value 5 into */ n2 = 5; /* two unique integers. */ p1 = &n1; /* Copy the identities of */ p2 = &n2; /* n1 and n2 into p1 and p2. */ _ASSERT( n1 == n2 ); /* Compare values. */ _ASSERT( p1 != p2 ); /* Compare identities. */ Furthermore, a pointer can refer to different objects during its life span: int n1, n2; int *p; p = &n1; /* p refers to n1. */ p = &n2; /* Now, p refers to n2. */ C uses a special syntax to obtain the value of a pointer. In this syntax, the assignment and comparison operators revert to their value semantics: int n1, n2; int *p1, *p2; p1 = &n1; /* Assign identities. */ p2 = &n2; *p1 = 5; /* Assign values. */ *p2 = 5; _ASSERT( p1 != p2 ); /* Compare identities. */ _ASSERT( *p1 == *p2 ); /* Compare values. */ References C++ introduced references to provide for the representation of identity without the need for a special syntax. A reference is similar to a pointer, insofar as it holds the identity of an object separate from itself and its scope does not define ownership: int n; struct s { int &m_r; s( int &r ): m_r( r ) {} } s1(n), s2(n); // Both s1 and s2 refer to n, yet // neither owns it. A reference differs from a pointer in that it cannot change identity during its life span and all operators use value semantics: int n1 = 5; int n2 = 5; int &r1 = n1; // Initialization is the only time int &r2 = n2; // that identity can be assigned. _ASSERT( r1 == r2 ); // Compare values. Delphi 'references' Let's compare C++'s type system with Delphi's. In Delphi, as in C++, variables of built-in and record types define values. Unlike C++, however, Delphi treats all class and interface variables as references. Don't let the name confuse you; a Delphi reference has more in common with a C++ pointer than with a C++ reference. Like a C++ pointer, a Delphi reference refers to the identity of an object separate from itself. Delphi assignment and comparison operators copy and compare identity, not value. Furthermore, a reference variable can refer to different objects during its life span: type TMyClass = class(TObject) public n: integer; end; procedure Test; var c1, c2: TMyClass; begin { c1 and c2 are initially nil. } c1 := TMyClass.Create; { Assign identity. } c2 := TMyClass.Create; c1.n := 5; { Assign values. } c2.n := 5; Assert( c1 <> c2 ); { Compare identities. } Assert( c1.n = c2.n ); { Compare values. } FreeAndNil( c1 ); FreeAndNil( c2 ); end; Stumbling blocks The differences between the type systems of C++ and Delphi are potential stumbling blocks. Because the languages implement identity in different ways, developers must be careful when commuting from one to the other. The following C++ code, for example, was ported from a Delphi program. The Delphi code worked just fine, but the C++ code does not. Can you spot the bug? // Base class of a real-valued function. class CRealFunction { public: virtual double GetValue( double x ) { return 0.0; } }; // A specific real-valued function that solves for // the exponent of a decline curve given two // points and the initial rate of decline. class CRealFunctionDeclineByN: public CRealFunction { public: CRealFunctionDeclineByN( double x0, double y0, double x1, double y1, double d0) : m_x0(x0), m_y0(y0), m_x1(x1), m_y1(y1), m_d0(d0) {} double GetValue( double x ) { double n = x; return m_y1 - m_y0/pow( 1 + m_d0*n*(m_x1-m_x0), 1.0/n ); } private: double m_x0; double m_y0; double m_x1; double m_y1; double m_d0; }; inline bool Opposite( double d1, double d2 ) { return (d1 < 0.0) && (d2 > 0.0) || (d1 > 0.0) && (d2 < 0.0); } inline bool IsZero( double d ) { const double eps = 1e-14; return (d > -eps) && (d < eps); } // Find a zero using the bisection method. double FindZero_Bisection( CRealFunction f, double xlow, double xhigh ) { double ylow; double yhigh; double xmid; double ymid; // Verify that a zero is bracketed. ylow = f.GetValue( xlow ); yhigh = f.GetValue( xhigh ); if (!Opposite( ylow, yhigh )) AfxThrowUserException(); do { // Bisect the brackets. xmid = ( xlow + xhigh )*0.5; ymid = f.GetValue( xmid ); // Keep the half that contains the zero. if (Opposite( ymid, ylow )) { yhigh = ymid; xhigh = xmid; } else { ylow = ymid; xlow = xmid; } } while ( !IsZero(ymid) && !IsZero(xlow-xhigh) ); return xmid; } // A simple unit test. void Test() { double n; CRealFunctionDeclineByN f( 0.0, 10.0, 1.0, 5.8, 1.0 ); char strMessage[512]; n = FindZero_Bisection( f, 1e-3, 10.0 ); sprintf( strMessage, "n = %g, y = %g", n, 10.0/pow( 1.0 + n, 1.0/n ) ); AfxMessageBox( strMessage, MB_OK, -1 ); } Next week, I'll reveal the bug and talk about what identity is good for in the first place.
http://www.itworld.com/AppDev/705/ITW1917/
crawl-001
refinedweb
1,195
57.57
Code. Collaborate. Organize. No Limits. Try it Today. etc) --> I want to address two topics in this article, one educational and the other practical. My educational reason is to show you how to do something non-trivial utilizing the XML and XSL .NET Framework classes. In doing this I'm going to show you one approach to solving the practical problem of updating version numbers in automated nightly builds; in particular of C++ and C# projects. Because of the use of XSL the solution is open ended, so you can use for any language or project, with only a little extra work. If you've been a Microsoft Windows developer long enough, you've had to solve the problem of updating your binaries version numbers after each major build of your product. Incrementing the version number after each build gives you an excellent way to localize problems to particular builds. Also, having good version numbers is essential for your installer to be able to install over or alongside existing copies of your product. If you practice good software development principles and try and do a full build automatically every day, there's a good chance you'll have figured out some automated way to do the updates. If not, or if you are not happy with your existing solution, today is your lucky day! The tool consists of a single executable called MKVER2.EXE. If you're looking at the source, the important code is all contained in MainClass. The data flow through the application looks like this: MKVER2.EXE MainClass An XML file with an extension PVD (for Product Version Data) contains all the relevant version information for all the binary files in your product. This file is the first input into MKVER2. You'll also specify the name of the output file, in the diagram it's AssemblyInfo.cs. The algorithm I have implemented is that we use the extension of the output file to search for a corresponding XSL transform file with the filename of version.extension.xsl. Hence, AssemblyInfo.cs causes us to use version.cs.xsl as the second input to MKVER2. AssemblyInfo.cs version.extension.xsl version.cs.xsl These templates are looked for in the following various locations in order (1) the directory specified on the command line, (2) the directory specified by the MKVER_TEMPLATES environment variable, (3) a Templates sub-directory of the current directory, (4) a Templates sub-directory of the directory where MKVER.EXE is located. MKVER_TEMPLATES Templates MKVER.EXE Take a look at the format of a PVD file and you'll see it's pretty straightforward. One thing that might take some explaining are the <VersionInfo> elements: <VersionInfo> <VersionInfo lang="1033" charset="1200"> ... ... </VersionInfo> The idea here is that all language specific pieces of version information, such as copyrights, descriptions, trademarks, etc. are contained within these elements. Using command line parameters you can determine which language strings end up in your version resources. Take a look at the first part of version.cs.xsl: <?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:output <xsl:param <xsl:param <xsl:param <xsl:template As you can see we make use of XSL parameters to pass this information into the transform. Later on in the transform we use these parameters as input to the XPath query that retrieves a specific VersionInfo element: VersionInfo <xsl:apply-templates The mode attribute is used because I have two FileVersion templates, one for the product VersionInfo and one for the files VersionInfo. The following code shows the guts of the MKVER2 algorithm in MainClass.cs: mode FileVersion VersionInfo MainClass.cs try { inputFile = Path.GetFullPath((string)inputFile); // load the version data document.Load(inputFile); // wrap it with an XmlNavigator navigator = document.CreateNavigator(); if (!IsCorrectRevision(1)) return; UpdateProductVersion(); // Add dynamically generated nodes AddNodes(); if (trace) { XmlTextWriter xw = new XmlTextWriter(BpConsole.Out); xw.Formatting = Formatting.Indented; document.WriteContentTo(xw); xw.Flush(); BpConsole.WriteLine(); } TransformDocument(); } catch (Exception e) { WriteMessage(MessageType.Error, e.Message); } First we grab the input document and load it into an XmlDocument object. A quick check for the expected revision attribute and we proceed to the UpdateProductVersion() function to increment/set the new version number. XmlDocument UpdateProductVersion() What is the function of AddNodes()? Well, sometimes in the output we are going to want some data that is derived from the input data, but is not in the correct format to make it easily accessible. For example, let's say your version number is currently 1.0.0.100. The 100 might refer to your current build number. Let's say you want to automatically send an e-mail stating "Build 100 Completed". How do you get at the "100" part of the version number? The solution I came up with was to have MKVER2 pre-calculate certain pieces of information and add them as new elements into the original XML. This makes things much easier in the XSL transformations. AddNodes() In order that someone who doesn't have access to the source code can easily see what these new elements are available, I added the -trace command line option, which dumps the new XML data to the screen. One downside of the approach of dynamically adding elements is that it creates a strong coupling between MKVER and the format of the input data. We look for certain elements to base our new nodes on. Hence the revision check mentioned earlier. -trace The final step is to call TransformDocument() to actually do the XSL transformation. This creates our output file. TransformDocument() How do you use the tool? Well you you've noticed MKVER2 uses MKVER2 to do it's own versioning! Included in the sample source code solution file are a couple of Makefile projects. The one called Version simply invokes a batch file to update the version of MKVER2 whenever a full Release configuration build is performed. The batch file is written using 4NT, and shows how I update the version number of MKVER2 and check the results into a local Perforce version control database. Unless you have 4NT and Perforce you won't be able to run it, but you'll be able to follow the basic steps easily enough. Version One thing I've discovered about C# projects that I'm not thrilled about is that there does not appear to be any way at all to do the equivalent of C++ includes. Unless you build from the command line you can't even have a C# project in VS.NET that includes files in directories other than the project directory. What this basically means is you'll have to go through each project and update the version information separately. Note also that this is why you have to specify a -name command line parameter, so that your C# output file only contains version information for one project. -name Yes, I know that .NET projects can automatically create version numbers for themselves, if you use an attribute like [assembly: AssemblyVersion("1.0.*")]. But this essentially generates a random number for the version number, which I feel is about as useful a poke in the eye with a sharp stick. [assembly: AssemblyVersion("1.0.*")] For C++ projects, you'll have things easier. You can create an #include file in a shared location and generate just that one file. You can set a preprocessor macro in each C++ project and select the version information that's relevant to that project. Hence, no need to specify a -name parameter to MKVER2 (unless you want to that is). Check out version.h.xsl for more information, or just run MKVER2 on the supplied version.pvd to see some example output. I've included a version.rc2 file that you can include in your C++ projects RC file to make use of the #define's generated in the header file. To add this RC2 to your project copy it into the project directory, open your RC file and select Edit Resource Includes... Then in the "Compile-time directives:" editor add the line #include "version.rc2". #include version.h.xsl version.pvd #include "version.rc2" You can also use MKVER2 to generate any other type of output you like containing the version information. I've include some XSL files for generating BAT and HTML files for your versioning pleasure. OK, if you read this article to learn how to program in XSL or .NET your probably a bit disappointed by now. I suggest you store this article away for future reference. However, if you have already mastered the basics of XML and XSL and were looking for a power sample to really show you some of the things you can usefully do with it, you should be happier. I'm blown away by how easy it is to do XML related stuff in .NET. I've written a fair amount of MSXML code using the COM interfaces in the past, and I'm never going back! As far as the version utility goes, I hope it's useful to you. If someone writes XSL transforms for VB.NET, Java/J# or any other versionable project and sends them to me, I'd be happy to add them to the MKVER2 download on this page. Finally, if you haven't found it yet and you are looking for a knock your socks off application of XSL with the .NET Framework check out N
http://www.codeproject.com/Articles/3090/Version-Resource-Tool-Using-XML-XSLT-and-NET?PageFlow=FixedWidth
CC-MAIN-2014-15
refinedweb
1,572
55.44
Improved filtfilt() for R Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. The filtfilt function in R is supposedly based on the Matlab one, but it does quite badly on endpoints. My goal here is to explore alternatives, with a focus on the Octave method. NOTE: I may also try scipy for hints.) The best method of dealing with endpoints is still an open question. Matlab does one thing, Octave does another. Both seem to work reasonably well, and the Octave license is more appropriate to the present task, so I am focussing on that. This is a page in progress. I am trying to make some of my R code work exactly as Octave does, since then I’ll know that I’ve successfully mimicked their method. This is proving to be difficult! I started this page weeks ago and have returned to it several times. If/when I figure it out, I will update the date of the blog posting and remove the “draft” designation. If any readers have ideas, please let me know! Octave implementation Full code (with initial comments removed for brevity) function y = filtfilt(b, a, x) if (nargin != 3) print_usage; endif rotate = (size(x,1)==1); if rotate, # a row vector x = x(:); # make it a column vector endif lx = size(x,1); a = a(:).'; b = b(:).'; lb = length(b); la = length(a); n = max(lb, la); lrefl = 3 * (n - 1); if la < n, a(n) = 0; endif if lb < n, b(n) = 0; endif ## Compute a the initial state taking inspiration from ## Likhterov & Kopeika, 2003. "Hardware-efficient technique for ## minimizing startup transients in Direct Form II digital filters" kdc = sum(b) / sum(a); if (abs(kdc) < inf) # neither NaN nor +/- Inf si = fliplr(cumsum(fliplr(b - kdc * a))); else si = zeros(size(a)); # fall back to zero initialization endif si(1) = []; for (c = 1:size(x,2)) # filter all columns, one by one v = [2*x(1,c)-x((lrefl+1):-1:2,c); x(:,c); 2*x(end,c)-x((end-1):-1:end-lrefl,c)]; # a column vector ## Do forward and reverse filtering v = filter(b,a,v,si*v(1)); # forward filter v = flipud(filter(b,a,flipud(v),si*v(end))); # reverse filter y(:,c) = v((lrefl+1):(lx+lrefl)); endfor if (rotate) # x was a row vector y = rot90(y); # rotate it back endif endfunction Test pkg load signal [b, a]=butter(3, 0.1); # 10 Hz low-pass filter t = 0:0.01:1.0; # 1 second sample load x.dat # created by the R code, to ensure both have same data y = filtfilt(b,a,x); z = filter(b,a,x); # apply filter plot(t,x,';data;',t,y,';filtfilt;',t,z,';filter;') R implementation ## Error: 'arg' must be NULL or a character vector Comparison between R and octave output The test codes a.m and a.R are the basis for the above. Some R values (and comparisons with octave output) are given below. si: 0.9971, -1.3857, 0.535 (expect 0.99710 -1.38569 0.53497) lrefl: 9 (expect 9) si_v1: -1.1293, 1.5694, -0.6059 (expect -1.12926 1.56935 -0.60588) si_vend: 0.8557, -1.1891, 0.4591 (expect 0.65164 -0.90559 0.34962) v_before_first_filter: -1.1325, -1.0238, -0.8117, -1.1579 (expect -1.132546 -1.023766 -0.811650 -1.157909 -1.370316 -0.858954 -0.717951 …) v_after_first_filter: -0.6099, -0.4232, -0.219, -0.0097 (expect -1.157285 -1.123425 -1.072315 -1.001433 -0.909024 -0.794494 -0.658707 …) Since v was ok before the filter, and wrong after, the problem seems to be in filter(). Finding the problem may be arduous, since (a) the R function stats::filter() is not well documented, regarding init, init.x and init.y, and (b) the Octave filter() spans 700 lines of C, so it is hard to reverse engineer to figure out what it’s doing. filter.cc I’ll start by assuming that this is trying to mimic the Matlab function of the same name, although the 4th arg is called si in Octave and zi in Matlab, so some caution is warranted. The docs on the matlab version are here 1. it takes 4 args, the last of which ( si) is perhaps worth investigating, since it’s different from the initialization in the R filter (which has 3 items). sigets renamed as psiin line 166 Resources - Matlab test code a.m - R signal::filter.R - R test code a.R - test data used by above two codes x.dat - matlab filtfilt.m - the octave script filtfilt.m can be acquired by typing pkg install -forge signalin an Octave window, and extracting file named ~/octave/signal-1.3.0/filtfilt.m - the octave C filtering code filter.cc is the file libinterp/corefcn/filter.ccin the Octave.
https://www.r-bloggers.com/2014/02/improved-filtfilt-for-r/
CC-MAIN-2021-21
refinedweb
823
66.44
With. Application development in XAML needs efforts from both UI Designer and Developer. In the development process, the UI Designer uses Blend to manage XAML design using design templates, styles, animations etc. Similarly the UI Developer uses Visual Studio to add functionality to the design. Many-a-times it is possible that the UI Designer and Developer work on the same XAML file. In this case, the file updated by the designer or developer should be reloaded by the IDE (Blend and Visual Studio) each time. Visual Studio 2015 contains new settings that allows the integration of the two IDEs to be seamless. Let’s explore this setting. This article is published from the DNC Magazine for .NET Developers and Architects. Download this magazine from here [Zip PDF] or Subscribe to this magazine for FREE and download all previous and current editions Step 1: Open Visual Studio 2015 and create a new WPF Application. In this project, open MainWindow.xaml and add a button to it. Here’s the XAML code: <Button Name="btn" Content="Click" Height="50" Width="230" Click="btn_Click" Background="Brown"></Button> To update the XAML in Blend, right-click on the MainWindow.xaml and select Design in Blend option as shown in the following image: This will open the file in Blend. This blend version is more similar to the Solution Explorer window in Visual Studio. Step 2: In Blend, change the Background property of the button to Red as shown here. (Observe the intellisense). Save the file. Step 3: Visit the project again in Visual Studio, and the following window will be displayed: The above window notifies that the file is updated externally. To reload the file with the external changes, we need to click on Yes or on the Yes to All button. Once clicked, you will observe that the updates made in Blend will be reflected in Visual Studio. In Visual Studio the Background property will be updated from Brown to Red. We can manage this reload with seamless integration with the following settings. In Visual Studio 2015, from Tools |Options |Environment |Documents select the checkbox as shown in the following image: The CheckBox Reload modified files unless there are unsaved changes configuration will load changes made in the XAML file outside Visual Studio editor. Similar changes can be configured in Blend using Tools |Options |Environment |Documents. With this new feature, we can implement seamless integration between Visual Studio and Blend for efficiently managing XAML updates by the Designer and Developer. In the process of DataBinding, new XAML elements may be added dynamically. To detect the UI elements added dynamically, we need a smart tool. In XAML based applications (e.g. WPF) the arrangement of XAML elements with its dependency properties is known as Visual Tree. In the VS 2015 release, a Live Visual Tree tool is provided. This tool helps to inspect the Visual Tree of the running WPF application and properties of element in the Visual Tree. The Live Visual tree can be used for the following purposes: Step 1: Open MainWindow.xaml and update the XAML as shown in the following code: <Grid Height="346" Width="520"> <Grid.RowDefinitions> <RowDefinition Height="300"></RowDefinition> <RowDefinition Height="40"></RowDefinition> </Grid.RowDefinitions> <DataGrid Name="dgemp" AutoGenerateColumns="True" Grid. <Button Name="btn" Content="Click" Height="30" Width="230" Click="btn_Click" Grid.</Button> </Grid> Step 2: In the Code behind, add the following C# Code: public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } private void btn_Click(object sender, RoutedEventArgs e) { //this.Background = new SolidColorBrush(Colors.RoyalBlue); dgemp.ItemsSource = new EmployeeList(); } } public class Employee { public int EmpNo { get; set; } public string EmpName { get; set; } } public class EmployeeList : List<Employee> { public EmployeeList() { Add(new Employee() { EmpNo = 1, EmpName="A"}); Add(new Employee() { EmpNo = 2, EmpName = "B" }); Add(new Employee() { EmpNo = 3, EmpName = "C" }); Add(new Employee() { EmpNo = 4, EmpName = "D" }); Add(new Employee() { EmpNo = 5, EmpName = "E" }); } } The above code defines the Employee class with Employee properties and EmployeeList class containing Employee Data. On the click event of the button, the EmployeeList is passed to ItemsSource property of the DataGrid. Step 3: Run the Application. A WPF window will be displayed with Live Visual Tree panel as shown in the following diagram: The Live Visual Tree has an Icon tool bar which provides options like Enable Selection in Running Application and Preview Selection. The above image shows the non-expanded Visual Tree. After expanding the Visual Tree, the panel gets displayed as shown in the following image: This shows DataGrid and a Button. Selecting the Button using the Mouse will select the Content property of the Button as shown in the following image: The ContentPresenter contains TextBlock containing Click text. Step 4: Click on the Enable Selection in running application. This will clear the Button selection in the Live Visual Tree. Click on the button and the DataGrid will show Employee Data. Click in the Enable Selection in Running application and Preview Selection from the Live Visual Tree toolbar and select a row in the DataGrid. The Dynamically generated Visual Tree will be displayed as shown in the following Image The above diagram shows the dynamically generated Visual Tree for the DataGrid. The ItsmsSource property of the DataGrid generates DataGridRow, which further contains DataGridCell, which in turn contains TextBlock. As we keep changing the selection in the DataGrid, the Live Visual Tree helps to debug the UI selection. Hence this new tool helps developers to inspect the UI elements generated. Along with the Live Visual Tree, we have a Live Property Explorer tool as well. This tool shows the property set applied on the selected UI element. This also allows us to change some of the property values of selected element, at run time. Select the button on the running application, the property explorer will be displayed as shown in the following image: The Click Text is selected for the button, this shows the Live Property explorer for the TextBlock. Here we can change the Text property of the TextBlock while the application is running, as shown in the following image: As you saw, the Live property explorer tool helps to update some properties while running the application. These changed properties can also be directly applied to the application. A new feature provided for XAML based application with this new release of Visual Studio 2015 is the Diagnostic Tools. This tool allows developers to check the Memory and CPU utilization for the XAML application. This tool can be enabled using Debug| Show Diagnostic Tool. Here we can select Memory Usage and CPU Usage. Run the application, and the CPU and Memory utilization can be seen as shown in the following diagram: In Visual Studio 2015, a new feature for developers called the XAML Peek Definition has been introduced. In the earlier versions of Visual Studio, we could peek into definition for classes, functions, etc. in C# and VB.NET language. In Visual Studio 2015 we can use peek definitions for XAML elements as well. In the following diagram, we can see that when we right-click on the x:Class attribute value e.g. NewApp.MainWIndow and select the Peek Definition option from the context menu, we can see the class definition. We can view the definition as shown in the following figure: The advantages of this feature is that we can also use this to show styles/DataTemplate implementation for selected XAML elements as shown in the following diagram: In the above diagram, the definition of the empTemplate can be shown: Now we can edit one of the TextBlock e.g. Background property of the Salary TextBlock as follows: The changes made in the Peek Definition window can be seen in the actual DataTemplate definition. This is a very cool feature that excites the developer in me and hopefully yours too. In Line-of-Business (LOB) application development in WPF, we implement databinding with XAML elements using the Binding class. We implement hierarchical DataBinding using DataContext property. In Visual Studio 2015 using the XAML extended feature, we can experience DataBinding debugging using Live Property Explorer. Consider the following C# class: Consider the following XAML with Databinding: The above XAML applies to the EmployeeList instantiated with Emps. Run the application and enable the Live Visual Tree and Live Property Explorer. We can see the DataContext property in the Live Property Window. We can now experience the DataContext values by selecting the Employee record from the ListBox as shown in the following figure: The above diagram shows the record selected from the ListBox (Red outline). On the right side of the diagram see the Live Property Explorer with the DataContext values. This is a cool feature for developers working on the LOB applications using XAML. While working with C# or VB.NET code in Visual Studio, we can make use of Region-EndRegion feature for defining groups of code for better manageability. In Visual Studio 2015, for long XAML markups, we can now define regions as shown in the following figure: The Region after collapse will be displayed as shown in the following diagram: Although this is a really simple feature, imagine the pain of the developer who works on long and complex XAML code for maintenance and could not group it until now. Conclusion: The new XAML tools provided in Visual Studio 2015 helps developers to effectively manage and work with XAML based applications for UI Debugging, Performance etc. Download the entire source code from GitHub at bit.ly/dncm20-xamlnewtools
http://www.dotnetcurry.com/visualstudio/1182/new-xaml-tools-visual-studio-2015
CC-MAIN-2017-26
refinedweb
1,584
53.71
13 min read Ruby on Rails (“Rails”) is a popular open source framework, based on the Ruby programming language that strives to simplify and streamline the web application development process. Rails is built on the principle of convention over configuration. Simply put, this means that, by default, Rails assumes that its expert developers will follow “standard” best practice. Accordingly, while Rails is easy to use, it is also not hard to misuse. This tutorial looks at 10 common Rails problems, including how to avoid them and the issues that they cause. Common Mistake #1: Putting too much logic in the controller Rails is based on an MVC architecture. In the Rails community, we’ve been talking about fat model, skinny controller for a while now, yet several recent Rails applications I’ve inherited violated this principle. It’s all too easy to move view logic (which is better housed in a helper), or domain/model logic, into the controller. The problem is that the controller object will start to violate the single responsibility principle making future changes to the code base difficult and error-prone. Generally, the only types of logic you should have in your controller are: - Session and cookie handling. This might also include authentication/authorization or any additional cookie processing you need to do. - Model selection. Logic for finding the right model object given the parameters passed in from the request. Ideally this should be a call to a single find method setting an instance variable to be used later to render the response. - Request parameter management. Gathering request parameters and calling an appropriate model method to persist them. - Rendering/redirecting. Rendering the result (html, xml, json, etc.) or redirecting, as appropriate. While this still pushes the limits of the single responsibility principle, it’s sort of the bare minimum that the Rails framework requires us to have in the controller. Common Mistake #2: Putting too much logic in the view The out-of-the-box Rails templating engine, ERB, is a great way to build pages with variable content. However, if you’re not careful, you can soon end up with a large file that is a mix of HTML and Ruby code that can be difficult to manage and maintain. This is also an area that can lead to lots of repetition, leading to violations of DRY (don’t repeat yourself) principles. This can manifest itself in a number of ways. One is overuse of conditional logic in views. As a simple example, consider a case where we have a current_user method available that returns the currently logged in user. Often, there will end up being conditional logic structures like this in view files: <h3> Welcome, <% if current_user %> <%= current_user.name %> <% else %> Guest <% end %> </h3> A better way to handle something like this is to make sure the object returned by current_user is always set, whether someone is logged in or not, and that it answers the methods used in the view in a reasonable way (sometimes referred to as a null object). For instance, you might define the current_user helper in app/controllers/application_controller like this: require 'ostruct' helper_method :current_user def current_user @current_user ||= User.find session[:user_id] if session[:user_id] if @current_user @current_user else OpenStruct.new(name: 'Guest') end end This would then enable you to replace the previous view code example with this one simple line of code: <h3>Welcome, <%= current_user.name -%></h3> A couple of additional recommended Rails best practices: - Use view layouts and partials appropriately to encapsulate things that are repeated on your pages. - Use presenters/decorators like the Draper gem to encapsulate view-building logic in a Ruby object.? Well, not quite. Many Rails developers actually make this mistake and end up sticking everything in their ActiveRecord model classes leading to mongo files that not only violate the single responsibility principle but are also a maintenance nightmare. Functionality such as generating email notifications, interfacing to external services, converting to other data formats and the like don’t have much to do with the core responsibility of an ActiveRecord model which should be doing little more than finding and persisting data in a database. So if the logic shouldn’t go in the views, and it shouldn’t go in the controllers, and it shouldn’t go in the models, well then, where should it go? Enter plain old Ruby objects (POROs). With a comprehensive framework like Rails, newer developers are often reluctant to create their own classes outside of the framework. However, moving logic out of the model into POROs is often just what the doctor ordered to avoid overly complex models. With POROs, you can encapsulate things like email notifications or API interactions into their own classes rather than sticking them into an ActiveRecord model. So with that in mind, generally speaking, the only logic that should remain in your model is: ActiveRecordconfiguration (i.e., relations and validations) - Simple mutation methods to encapsulate updating a handful of attributes and saving them in the database - Access wrappers to hide internal model information (e.g., a full_namemethod that combines first_nameand last_namefields in the database) - Sophisticated queries (i.e., that are more complex than a simple find); generally speaking, you should never use the wheremethod, or any other query-building methods like it, outside of the model class itself Common Mistake #4: Using generic helper classes as a dumping ground This mistake is really sort of a corollary to mistake #3 above. As discussed, the Rails framework places an emphasis on the named components (i.e., model, view, and controller) of an MVC framework. There are fairly good definitions of the kinds of things that belong in the classes of each of these components, but sometimes we might need methods that don’t seem to fit into any of the three. Rails generators conveniently build a helper directory and a new helper class to go with each new resource we create. It becomes all too tempting, though, to start stuffing any functionality that doesn’t formally fit into the model, view, or controller into these helper classes. While Rails is certainly MVC-centric, nothing prevents you from creating your own types of classes and adding appropriate directories to hold the code for those classes. When you have additional functionality, think about which methods group together and find good names for the classes that hold those methods. Using a comprehensive framework like Rails is not an excuse to let good object oriented design best practices go by the wayside. Common Mistake #5: Using too many gems Ruby and Rails are supported by a rich ecosystem of gems that collectively provide just about any capability a developer can think of. This is great for building up a complex application quickly, but I’ve also seen many bloated applications where the number of gems in the application’s Gemfile is disproportionately large when compared with the functionality provided. This causes several Rails problems. Excessive use of gems makes the size of a Rails process larger than it needs to be. This can slow down performance in production. In addition to user frustration, this can also result in the need for larger server memory configurations and increased operating costs. It also takes longer to start larger Rails applications, which makes development slower and makes automated tests take longer (and as a rule, slow tests simply don’t get run as often). Bear in mind that each gem you bring into your application may in turn have dependencies on other gems, and those may in turn have dependencies on other gems, and so on. Adding other gems can thus have a compounding effect. For instance, adding the rails_admin gem will bring in 11 more gems in total, over a 10% increase from the base Rails installation. As of this writing, a fresh Rails 4.1.0 install includes 43 gems in the Gemfile.lock file. This is obviously more than is included in Gemfile and represents all the gems that the handful of standard Rails gems bring in as dependencies. Carefully consider whether the extra overhead is worthwhile as you add each gem. As an example, developers will often casually add the rails_admin gem because it essentially provides a nice web front-end to the model structure, but it really isn’t much more than a fancy database browsing tool. Even if your application requires admin users with additional privileges, you probably don’t want to give them raw database access and you would be better served by developing your own more streamlined administration function than by adding this gem. Common Mistake #6: Ignoring your log files While most Rails developers are aware of the default log files available during development and in production, they often don’t pay enough attention to the information in those files. While many applications rely on log monitoring tools like Honeybadger or New Relic in production, it is also important to keep an eye on your log files throughout the process of developing and testing your application. As mentioned previously in this tutorial, the Rails framework does a lot of “magic” for you, especially in the models. Defining associations in your models makes it very easy to pull in relations and have everything available to your views. All the SQL needed to fill up your model objects is generated for you. That’s great. But how do you know that the SQL being generated is efficient? One example you will often run in to is called the N+1 query problem. While the problem is well understood, the only real way to observe it happening is to review the SQL queries in your log files. Say for instance you have the following query in a typical blog application where you will be displaying all of the comments for a select set of posts: def comments_for_top_three_posts posts = Post.limit(3) posts.flat_map do |post| post.comments.to_a end end When we look at the log file of a request that calls this method we’ll see something like the following, where a single query is made to get the three post objects then three more queries are made to get each of those objects’ comments: Started GET "/posts/some_comments" for 127.0.0.1 at 2014-05-20 20:05:13 -0700 Processing by PostsController#some_comments as HTML Post Load (0.4ms) SELECT "posts".* FROM "posts" LIMIT 3 Comment Load (5.6ms) ELECT "comments".* FROM "comments" WHERE "comments"."post_id" = ? [["post_id", 1]] Comment Load (0.4ms) SELECT "comments".* FROM "comments" WHERE "comments"."post_id" = ? [["post_id", 2]] Comment Load (1.5ms) SELECT "comments".* FROM "comments" WHERE "comments"."post_id" = ? [["post_id", 3]] Rendered posts/some_comments.html.erb within layouts/application (12.5ms) Completed 200 OK in 581ms (Views: 225.8ms | ActiveRecord: 10.0ms) ActiveRecord’s eager loading capability in Rails makes it possible to significantly reduce the number of queries by letting you specify in advance all the associations that are going to be loaded. This is done by calling the includes (or preload) method on the Arel ( ActiveRecord::Relation) object being built. With includes, ActiveRecord ensures that all of the specified associations are loaded using the minimum possible number of queries; e.g.: def comments_for_top_three_posts posts = Post.includes(:comments).limit(3) posts.flat_map do |post| post.comments.to_a end end When the above revised code is executed, we see in the log file that all of the comments were collected in a single query instead of three: Started GET "/posts/some_comments" for 127.0.0.1 at 2014-05-20 20:05:18 -0700 Processing by PostsController#some_comments as HTML Post Load (0.5ms) SELECT "posts".* FROM "posts" LIMIT 3 Comment Load (4.4ms) SELECT "comments".* FROM "comments" WHERE"comments "."post_id" IN (1, 2, 3) Rendered posts/some_comments.html.erb within layouts/application (12.2ms) Completed 200 OK in 560ms (Views: 219.3ms | ActiveRecord: 5.0ms) Much more efficient. This solution to the N+1 problem is really only meant as an example of the kind of inefficiencies that can exist “under-the-hood” in your application if you’re not paying adequate attention. The takeaway here is that you should be checking your development and test log files during development to check for (and address!) inefficiencies in the code that builds your responses. Reviewing log files is a great way to be tipped off to inefficiencies in your code and to correct them before your application goes into production. Otherwise, you may not be aware of a resulting Rails performance issue until your system goes live, since the dataset you work with in development and test is likely to be much smaller than in production. If you’re working on a new app, even your production dataset may start out small and your app will look like it’s running fine. However, as your production dataset grows, Rails problems like this will cause your application to run slower and slower. If you find that your log files are clogged up with a bunch of information you don’t need here are some things you can do to clean them up (the techniques there work for development as well as production logs). Common Mistake #7: Lack of automated tests Ruby and Rails provide powerful automated test capabilities by default. Many Rails developers write very sophisticated tests using TDD and BDD styles and make use of even more powerful test frameworks with gems like rspec and cucumber. Despite how easy it is to add automated testing to your Rails application, though, I have been very unpleasantly surprised by how many projects I’ve inherited or joined where there were literally no tests written (or at best, very few) by the prior development team.. Common Mistake #8: Blocking on calls to external services 3rd party providers of Rails services usually make it very easy to integrate their services into your application via gems that wrap their APIs.. You should also test your application without the external service (perhaps by removing the server your application is on from the network) to verify that it doesn’t result in any unanticipated consequences. Common Mistake #9: Getting married to existing database migrations Rails’ database migration mechanism allows you to create instructions to automatically add and remove database tables and rows. Since the files that contain these migrations are named in a sequential fashion, you can play them back from the beginning of time to bring an empty database to the same schema as production. This is therefore a great way to manage granular changes to your application’s database schema and avoid Rails problems. While this certainly works well at the beginning of your project, as time goes on, the database creation process can take quite a while and sometimes migrations get misplaced, inserted out of order, or introduced from other Rails applications using the same database server. Rails creates a representation of your current schema in a file called db/schema.rb (by default) which is usually updated when database migrations are run. The schema.rb file can even be generated when no migrations are present by running the rake db:schema:dump task. A common Rails mistake is to check a new migration into your source repo but not the correspondingly updated schema.rb file.. Common Mistake #10: Checking sensitive information into source code repositories The Rails framework makes it easy to create secure applications impervious to many types of attacks. Some of this is accomplished by using a secret token to secure a session with a browser. Even though this token is now stored in config/secrets.yml, and that file reads the token from an environment variable for production servers, past versions of Rails included the token in config/initializers/secret_token.rb. This file often mistakenly gets checked into the source code repository with the rest of your application and, when this happens, anyone with access to the repository can now easily compromise all users of your application. You should therefore make sure that your repository configuration file (e.g., .gitignore for git users) excludes the file with your token. Your production servers can then pick up their token from an environment variable or from a mechanism like the one that the dotenv gem provides. Tutorial Wrap-up Rails is a powerful framework that hides a lot of the ugly details necessary to build a robust web application.. It’s important to study the framework and make sure that you fully understand the architectural, design, and coding tradeoffs you’re making throughout the development process, to help ensure a high quality and high performance application.
https://www.toptal.com/ruby-on-rails/top-10-mistakes-that-rails-programmers-make
CC-MAIN-2018-47
refinedweb
2,767
52.39
Archive for the ‘sage interactions’ Category. Regular readers of Walking Randomly will know that I am a big fan of the Manipulate function in Mathematica. Manipulate allows you to easily create interactive mathematical demonstrations for teaching, research or just plain fun and is the basis of the incredibly popular Wolfram Demonstrations Project. Sage, probably the best open source mathematics software available right now, has a similar function called interact and I have been playing with it a bit recently (see here and here) along with some other math bloggers. The Sage team have done a fantastic job with the interact function but it is missing a major piece of functionality in my humble opinion – a Locator control. In Mathematica the default control for Manipulate is a slider: Manipulate[Plot[Sin[n x], {x, -Pi, Pi}], {n, 1, 10}] The slider is also the default control for Sage’s interact: @interact def _(n=(1,10)): plt=plot(sin(n*x),(x,-pi,pi)) show(plt) Both systems allow the user to use other controls such as text boxes, checkboxes and dropdown menus but Mathematica has a control called a Locator that Sage is missing. Locator controls allow you to directly interact with a plot or graphic. For example, the following Mathematica code (taken from its help system) draws a polygon and allows the user to click and drag the control points to change its shape. Manipulate[Graphics[Polygon[pts], PlotRange -> 1], {{pts, {{0, 0}, {.5, 0}, {0, .5}}}, Locator}] The Locator control has several useful options that allow you to customise your demonstration even further. For example, perhaps you want to allow the user to move the vertices of the polygon but you don’t want them to be able to actually see the control points. No problem, just add Appearance -> None to your code and you’ll get what you want. Manipulate[Graphics[Polygon[pts], PlotRange -> 1], {{pts, {{0, 0}, {.5, 0}, {0, .5}}}, Locator, Appearance -> None}] Another useful option is LocatorAutoCreate -> True which allows the user to create extra control points by holding down CTRL and ALT (or just ALT – it depends on your system it seems) as they click in the active area. Manipulate[Graphics[Polygon[pts], PlotRange -> 1], {{pts, {{0, 0}, {.5, 0}, {0, .5}}}, Locator, LocatorAutoCreate -> True}] When you add all of this functionality together you can do some very cool stuff with just a few lines of code. Theodore Gray’s curve fitting code on the Wolfram Demonstrations project is a perfect example. All of these features are demonstrated in the video below So, onto the bounty hunt. I am offering 25 pounds worth (about 40 American Dollars) of books from Amazon to anyone who writes the code to implement a Locator control for Sage’s interact function. To get the prize your code must fulfil the following spec - Your code must be accepted into the Sage codebase and become part of the standard install. This should ensure that it is of reasonable quality. - There should be an option like Mathematica’s LocatorAutoCreate -> True to allow the user to create new locator points interactively by Alt-clicking (or via some other suitable method). - There should be an option to alter the appearance of the Locator control (e.g. equivalent to Mathematica’s Appearance -> x option). As a minimum you should be able to do something like Appearance->None - You should provide demonstration code that implements everything shown in the video above. - I have to be happy with it! And the details of the prize: - I only have one prize – 25 pounds worth of books (about 40 American dollars) from Amazon. If more than one person claims it then the prize will be split. - The 25 pounds includes whatever it will cost for postage and packing. - I won’t send you the voucher – I will send you the books of your choice as a ‘gift’. This will mean that you’ll have to send me your postal address. Don’t enter if this bothers you for some reason. - I expect you to be sensible regarding the exact value of the prize. So if your books come to 24.50 then we’ll call it even. Similarly if they come to 25.50 then I won’t argue. - I am not doing this on behalf of any organisation. It’s my personal money. - My decision is final and I can withdraw this prize offer at any time without explanation. I hope you realise that I am just covering my back by saying this – I have every intention of giving the prize but whenever money is involved one always worries about the possibility of being ripped off. Good luck! Update (29th December 2009): The bounty hunt has only been going for a few days and the bounty has already doubled to 50 pounds which is around 80 American dollars. Thanks to David Jones for his generosity. There have been a couple of blog posts recently that have focused on creating interactive demonstrations for the discrete logistic equation – a highly simplified model of population growth that is often used as an example of how chaotic solutions can arise in simple systems. The first blog post over at Division by Zero used the free GeoGebra package to create the demonstration and a follow up post over at MathRecreation used a proprietary package called Fathom (which I have to confess I have never heard of – drop me a line if you have used it and like it). Finally, there is also a Mathematica demonstration of the discrete logistic equation over at the Wolfram Demonstrations project. I figured that one more demonstration wouldn’t hurt so I coded it up in SAGE – a free open source mathematical package that has a level of power on par with Mathematica or MATLAB. Here’s the code (click here if you’d prefer to download it as a file). def newpop(m,prevpop): return m*prevpop*(1-prevpop) def populationhistory(startpop,m,length): history = [startpop] for i in range(length): history.append( newpop(m,history[i]) ) return history @interact def _( m=slider(0.05,5,0.05,default=1.75,label='Malthus Factor') ): myplot=list_plot( populationhistory(0.1,m,20) ,plotjoined=True,marker='o',ymin=0,ymax=1) myplot.show() Here’s a screenshot for a Malthus Factor of 1.75 and here’s one for a Malthus Factor of 3.1 There are now so many different ways to easily make interactive mathematical demonstrations that there really is no excuse not to use them. Update (11th December 2009) As Harald Schilly points out, you can make a nice fractal out of this by plotting the limit points. My blogging software ruined his code when he placed it in the comments section so I reproduce it here (but he’s also uploaded it to sagenb) var('x malthus') step(x,malthus) = malthus * x * (1-x) stepfast = fast_callable(step, vars=[x, malthus], domain=RDF) def logistic(m, step): # filter cycles v = .5 for i in range(100): v = stepfast(v,m) points = [] for i in range(100): v = stepfast(v,m) points.append((m+step*random(),v)) return points points=[] step = 0.005 for m in sxrange(2.5,4,step): points += logistic(m, step) point(points,pointsize=1).show(dpi=150) _7<< If you are a user of SAGE then feel free to say hi in the comments section and let me know what you use it for.! The latest version of the powerful free, open-source maths package, SAGE, was released last week. Version 3.4.1 brings us a lot of new functionality compared to 3.4 and the SAGE team have prepared a detailed document showing us why the upgrade is worthwhile. For example, the new complex_plot function looks fantastic. From the documentation: The function complex_plot() takes a complex function f(z) of one variable and plots output of the function over the specified xrange and yrange. The magnitude of the output is indicated by the brightness (with zero being black and infinity being white), while the argument is represented by the hue with red being positive real and increasing through orange, yellow, etc. as the argument increases. sage: f(z) = z^5 + z - 1 + 1/z sage: complex_plot(f, (-3, 3), (-3, 3)) Sage aims to become a ‘viable free open source alternative to Magma, Maple, Mathematica and Matlab’ and I think it is well on the way. I become a little more impressed with it with every release and I am a hardcore Mathematica and MATLAB fan. One major problem with it (IMHO at least) is that there is no native windows version which prevents a lot of casual users from trying it out. Although you can get it working on Windows, it is far from ideal because you have to run a virtual machine image using VMWare player. The hard-core techno geeks among you might well be thinking ‘So what? Sounds easy enough.’ but it’s an extra level of complexity that casual users simply do not want to have to concern themselves with. Of course there is also the issue of performance – emulating an entire machine to run a single application is hardly a good use of compute resources. There is a good reason why SAGE doesn’t have a proper Windows version yet – it’s based upon a lot of component parts that don’t have Windows versions and someone has to port each and every one of them. It’s going to be a lot of work but I think it will be worth it. When the SAGE development team release a native Windows version of their software then I have no doubt that it will make a significant impact on the mathematical software scene – especially in education. There will be nothing preventing every school and university in the world from having access to a world-class computer algebra system. In an ideal world everyone would be running Linux but we don’t live in an ideal world so a Windows version of SAGE would be a step in the right direction. Update: It turns out that a Windows port is being developed and something should be ready soon - Thanks to mvngu in the comments section for pointing this out. I should have done my research better! David Joyner of the SAGE development team has come up with a couple of very nice mathematical Christmas greetings using a combination of SAGE (For the mathematics used to generate the images) and GIMP and Inkscape (for the text). The first one is based on a Barnsley Fractal and the sage source code is available here. David’s other creation is a Sierpinski gasket that has been coloured such that it resembles a Christmas tree. The SAGE source code is given below. def sierpinski_seasons_greetings(): """ Code by Marshall Hampton. Colors by David Joyner. General depth by Rob Beezer. creative commons, attribution share-alike. """ depth = 7 nsq = RR(3^(1/2))/2.0 tlist_old = [[[-1/2.0,0.0],[1/2.0,0.0],[0.0,nsq]]] tlist_new = [x for x in tlist_old] for ind in range(depth): for tri in tlist_old: for p in tri: new_tri = [[(p[0]+x[0])/2.0, (p[1]+x[1])/2.0] for x in tri] tlist_new.append(new_tri) tlist_old = [x for x in tlist_new] T = tlist_old N = 4^depth N1 = N - 3^depth q1 = sum([line(T[i]+[T[i][0]], rgbcolor = (0,1,0)) for i in range(N1)]) q2 = sum([line(T[i]+[T[i][0]], rgbcolor = (1,0,0)) for i in range(N1,N)]) show(q2+q1, figsize = [6,6*nsq], axes = False) It just goes to show that advanced mathematical software such as SAGE doesn’t just have to be used for teaching and research – it can be used for making mathematical Christmas cards too! SAGE is completely free and is available from the SAGE math website. Thanks to David for his work on this one.!
http://www.walkingrandomly.com/?cat=24
CC-MAIN-2013-20
refinedweb
2,001
61.97
CHAPTER 8 Class declarations define new reference types and describe how they are implemented (§8.1). The name of a class has as its scope all type declarations in the package in which the class is declared (§8.1.1). A class may be declared abstract (§8.1.2.1) and must be declared abstract if it is incompletely implemented; such a class cannot be instantiated, but can be extended by subclasses. A class may be declared final (§8.1.2.2), in which case it cannot have subclasses. If a class is declared public, then it can be referred to from other packages. Each class except Object is an extension of (that is, a subclass of) a single existing class (§8.1.3) and may implement interfaces (§8.1.4). The body of a class declares members (fields and methods), static initializers, and constructors (§8.1.5). The scope of the name of a member is the entire declaration of the class to which the member belongs. Field, method, and constructor declarations may include the access modifiers (§6.6) public, protected, or private. The members of a class include both declared and inherited members (§8.2). Newly declared fields can hide fields declared in a superclass or superinterface. Newly declared methods can hide, implement, or override methods declared in a superclass or superinterface. Field declarations (§8.3) describe class variables, which are incarnated once, and instance variables, which are freshly incarnated for each instance of the class. A field may be declared final (§8.3.1.2), in which case it cannot be assigned to except as part of its declaration. Any field declaration may include an initializer; the declaration of a final field must include an initializer. Method declarations (§8.4) describe code that may be invoked by method invocation expressions (§15.11). A class method is invoked relative to the class type; an instance method is invoked with respect to some particular object that is an instance of the class type. A method whose declaration does not indicate how it is implemented must be declared abstract. A method may be declared final (§8.4.3.3), in which case it cannot be hidden or overridden. A method may be implemented by platform-dependent native code (§8.4.3.4). A synchronized method (§8.4.3.5) automatically locks an object before executing its body and automatically unlocks the object on return, as if by use of a synchronized statement (§14.17), thus allowing its activities to be synchronized with those of other threads (§17). Method names may be overloaded (§8.4.7). Static initializers (§8.5) are blocks of executable code that may be used to help initialize a class when it is first loaded (§12.4). Constructors (§8.6) are similar to methods, but cannot be invoked directly by a method call; they are used to initialize new class instances. Like methods, they may be overloaded (§8.6.6). ClassDeclaration:If a class is declared in a named package (§7.4.1) with fully qualified name P (§6.7), then the class has the fully qualified name P ClassModifiersopt classIdentifier SuperoptSuperopt InterfacesoptInterfacesopt ClassBodyClassBody .Identifier. If the class is in an unnamed package (§7.4.2), then the class has the fully qualified name Identifier. In the example: class Point { int x, y; }the class Pointis declared in a compilation unit with no packagestatement, and thus Pointis its fully qualified name, whereas in the example: package vista; class Point { int x, y; }the fully qualified name of the class Pointis vista.Point. (The package name vistais suitable for local or personal use; if the package were intended to be widely distributed, it would be better to give it a unique package name (§7.7).) A compile-time error occurs if the Identifier naming a class appears as the name of any other class type or interface type declared in the same package (§7.6). A compile-time error occurs if the Identifier naming a class is also declared as a type by a single-type-import declaration (§7.5.1) in the compilation unit (§7.3) containing the class declaration. In the example: package test;the first compile-time error is caused by the duplicate declaration of the name import java.util.Vector; class Point { int x, y; } interface Point { // compile-time error #1 int getR(); int getTheta(); } class Vector { Point[] pts; } // compile-time error #2 Pointas both a classand an interfacein the same package. A second error detected at compile time is the attempt to declare the name Vectorboth by a class type declaration and by a single-type-import declaration. Note, however, that it is not an error for the Identifier that names a class also to name a type that otherwise might be imported by a type-import-on-demand declaration (§7.5.2) in the compilation unit (§7.3) containing the class declaration. In the example: package test;the declaration of the class import java.util.*; class Vector { Point[] pts; } // not a compile-time error Vectoris permitted even though there is also a class java.util.Vector. Within this compilation unit, the simple name Vectorrefers to the class test.Vector, not to java.util.Vector(which can still be referred to by code within the compilation unit, but only by its fully qualified name). package points; class Point { int x, y; // coordinates PointColor color; // color of this point Point next; // next point with this colordefines two classes that use each other in the declarations of their class members. Because the class type names static int nPoints; } class PointColor { Point first; // first point with this color PointColor(int color) { this.color = color; } private int color; // color components } Pointand PointColorhave the entire package points, including the entire current compilation unit, as their scope, this example compiles correctly-that is, forward reference is not a problem. ClassModifiers:The access modifier ClassModifier ClassModifiers ClassModifier ClassModifier: one ofClassModifier ClassModifier: one of public abstract final publicis discussed in §6.6. A compile-time error occurs if the same modifier appears more than once in a class declaration. If two or more class modifiers appear in a class declaration, then it is customary, though not required, that they appear in the order consistent with that shown above in the production for ClassModifier. abstractclass is a class that is incomplete, or to be considered incomplete. Only abstractclasses may have abstractmethods (§8.4.3.1, §9.4), that is, methods that are declared but not yet implemented. If a class that is not abstractcontains an abstractmethod, then a compile-time error occurs. A class has abstractmethods if any of the following is true: abstractmethod (§8.4.3). abstractmethod from its direct superclass (§8.1.3). abstract) and the class neither declares nor inherits a method that implements it. abstract class Point { int x = 1, y = 1; void move(int dx, int dy) { x += dx; y += dy; alert(); } abstract void alert(); }a class abstract class ColoredPoint extends Point { int color; } class SimplePoint extends Point { void alert() { } } Pointis declared that must be declared abstract, because it contains a declaration of an abstractmethod named alert. The subclass of Pointnamed ColoredPointinherits the abstractmethod alert, so it must also be declared abstract. On the other hand, the subclass of Pointnamed SimplePointprovides an implementation of alert, so it need not be abstract. A compile-time error occurs if an attempt is made to create an instance of an abstract class using a class instance creation expression (§15.8). An attempt to instantiate an abstract class using the newInstance method of class Class (§20.3.6) will cause an InstantiationException (§11.5.1) to be thrown. Thus, continuing the example just shown, the statement: Point p = new Point();would result in a compile-time error; the class Pointcannot be instantiated because it is abstract. However, a Pointvariable could correctly be initialized with a reference to any subclass of Point, and the class SimplePointis not abstract, so the statement: Point p = new SimplePoint();would be correct. A subclass of an abstract class that is not itself abstract may be instantiated, resulting in the execution of a constructor for the abstract class and, therefore, the execution of the field initializers for instance variables of that class. Thus, in the example just given, instantiation of a SimplePoint causes the default constructor and field initializers for x and y of Point to be executed. It is a compile-time error to declare an abstract class type such that it is not possible to create a subclass that implements all of its abstract methods. This situation can occur if the class would have as members two abstract methods that have the same method signature (§8.4.2) but different return types. As an example, the declarations: interface Colorable { void setColor(int color); } abstract class Colored implements Colorable { abstract int setColor(int color); }result in a compile-time error: it would be impossible for any subclass of class Coloredto provide an implementation of a method named setColor, taking one argument of type int, that can satisfy both abstractmethod specifications, because the one in interface Colorablerequires the same method to return no value, while the one in class Coloredrequires the same method to return a value of type int(§8.4). A class type should be declared abstract only if the intent is that subclasses can be created to complete the implementation. If the intent is simply to prevent instantiation of a class, the proper way to express this is to declare a constructor (§8.6.8) of no arguments, make it private, never invoke it, and declare no other constructors. A class of this form usually contains class methods and variables. The class java.lang.Math is an example of a class that cannot be instantiated; its declaration looks like this: public final class Math { private Math() { } // never instantiate this class . . . declarations of class variables and methods . . . } finalif its definition is complete and no subclasses are desired or required. A compile-time error occurs if the name of a finalclass appears in the extendsclause (§8.1.3) of another classdeclaration; this implies that a finalclass cannot have any subclasses. A compile-time error occurs if a class is declared both finaland abstract, because the implementation of such a class could never be completed (§8.1.2.1). Because a final class never has any subclasses, the methods of a final class are never overridden (§8.4.6.1). extendsclause in a class declaration specifies the direct superclass of the current class. A class is said to be a direct subclass of the class it extends. The direct superclass is the class from whose implementation the implementation of the current class is derived. The extendsclause must not appear in the definition of the class java.lang.Object(§20.1), because it is the primordial class and has no direct superclass. If the class declaration for any other class has no extendsclause, then the class has the class java.lang.Objectas its implicit direct superclass. Super:The following is repeated from §4.3 to make the presentation here clearer: extendsClassType ClassType:The ClassType must name an accessible (§6.6) class type, or a compile-time error occurs. All classes in the current package are accessible. Classes in other packages are accessible if the host system permits access to the package (§7.2) and the class is declared TypeName public. If the specified ClassType names a class that is final(§8.1.2.2), then a compile-time error occurs; finalclasses are not allowed to have subclasses. In the example: the relationships are as follows:the relationships are as follows: class Point { int x, y; } final class ColoredPoint extends Point { int color; } class Colored3DPoint extends ColoredPoint { int z; } // error Pointis a direct subclass of java.lang.Object. java.lang.Objectis the direct superclass of the class Point. ColoredPointis a direct subclass of class Point. Pointis the direct superclass of class ColoredPoint. Colored3dPointcauses a compile-time error because it attempts to extend the finalclass ColoredPoint. The subclass relationship is the transitive closure of the direct subclass relationship. A class A is a subclass of class C if either of the following is true: In the example: the relationships are as follows:the relationships are as follows: class Point { int x, y; } class ColoredPoint extends Point { int color; } final class Colored3dPoint extends ColoredPoint { int z; } Pointis a superclass of class ColoredPoint. Pointis a superclass of class Colored3dPoint. ColoredPointis a subclass of class Point. ColoredPointis a superclass of class Colored3dPoint. Colored3dPointis a subclass of class ColoredPoint. Colored3dPointis a subclass of class Point. causes a compile-time error. If circularly declared classes are detected at run time, as classes are loaded (§12.2), then acauses a compile-time error. If circularly declared classes are detected at run time, as classes are loaded (§12.2), then a class Point extends ColoredPoint { int x, y; } class ColoredPoint extends Point { int color; } ClassCircularityErroris thrown. implementsclause in a class declaration lists the names of interfaces that are direct superinterfaces of the class being declared: Interfaces:The following is repeated from §4.3 to make the presentation here clearer: implementsInterfaceTypeList InterfaceTypeList: InterfaceType InterfaceTypeList ,InterfaceType InterfaceType:Each InterfaceType must name an accessible (§6.6) interface type, or a compile- time error occurs. All interfaces in the current package are accessible. Interfaces in other packages are accessible if the host system permits access to the package (§7.4.4) and the interface is declared TypeName public. A compile-time error occurs if the same interface is mentioned two or more times in a single implements clause, even if the interface is named in different ways; for example, the code: class Redundant implements java.lang.Cloneable, Cloneable { int x; }results in a compile-time error because the names java.lang.Cloneableand Cloneablerefer to the same interface. An interface type I is a superinterface of class type C if any of the following is true: In the example: public interface Colorable { void setColor(int color); int getColor(); }the relationships are as follows: public interface Paintable extends Colorable { int MATTE = 0, GLOSSY = 1; void setFinish(int finish); int getFinish(); } class Point { int x, y; } class ColoredPoint extends Point implements Colorable { int color; public void setColor(int color) { this.color = color; } public int getColor() { return color; } } class PaintedPoint extends ColoredPoint implements Paintable { int finish; public void setFinish(int finish) { this.finish = finish; } public int getFinish() { return finish; } } Paintableis a superinterface of class PaintedPoint. Colorableis a superinterface of class ColoredPointand of class PaintedPoint. Paintableis a subinterface of the interface Colorable, and Colorableis a superinterface of Paintable, as defined in §9.1.3. PaintedPointhas Colorableas a superinterface both because it is a superinterface of ColoredPointand because it is a superinterface of Paintable. Unless the class being declared is abstract, the declarations of the methods defined in each direct superinterface must be implemented either by a declaration in this class or by an existing method declaration inherited from the direct superclass, because a class that is not abstract is not permitted to have abstract methods (§8.1.2.1). Thus, the example: interface Colorable { void setColor(int color); int getColor(); }causes a compile-time error, because class Point { int x, y; }; class ColoredPoint extends Point implements Colorable { int color; } ColoredPointis not an abstractclass but it fails to provide an implementation of methods setColorand getColorof the interface Colorable. It is permitted for a single method declaration in a class to implement methods of more than one superinterface. For example, in the code: interface Fish { int getNumberOfScales(); } interface Piano { int getNumberOfScales(); } class Tuna implements Fish, Piano { // You can tune a piano, but can you tuna fish? int getNumberOfScales() { return 91; } }the method getNumberOfScalesin class Tunahas a name, signature, and return type that matches the method declared in interface Fishand also matches the method declared in interface Piano; it is considered to implement both. On the other hand, in a situation such as this: interface Fish { int getNumberOfScales(); } interface StringBass { double getNumberOfScales(); } class Bass implements Fish, StringBass { // This declaration cannot be correct, no matter what type is used. public ??? getNumberOfScales() { return 91; } }it is impossible to declare a method named getNumberOfScaleswith the same signature and return type as those of both the methods declared in interface Fishand in interface StringBass, because a class can have only one method with a given signature (§8.4). Therefore, it is impossible for a single class to implement both interface Fishand interface StringBass(§8.4.6). ClassBody:The scope of the name of a member declared in or inherited by a class type is the entire body of the class type declaration. {ClassBodyDeclarationsopt }ClassBodyDeclarations: ClassBodyDeclaration ClassBodyDeclarations ClassBodyDeclaration ClassBodyDeclaration:ClassBodyDeclaration ClassBodyDeclaration: ClassMemberDeclaration StaticInitializer ConstructorDeclaration ClassMemberDeclaration: FieldDeclaration MethodDeclaration Object, which has no direct superclass privateare not inherited by subclasses of that class. Only members of a class that are declared protectedor publicare inherited by subclasses declared in a package other than the one in which the class is declared. Constructors and static initializers are not members and therefore are not inherited. The example: class Point { int x, y; private Point() { reset(); } Point(int x, int y) { this.x = x; this.y = y; } private void reset() { this.x = 0; this.y = 0; } }causes four compile-time errors: class ColoredPoint extends Point { int color; void clear() { reset(); } // error } class Test { public static void main(String[] args) { ColoredPoint c = new ColoredPoint(0, 0); // error c.reset(); // error } } ColoredPointhas no constructor declared with two integer parameters, as requested by the use in main. This illustrates the fact that ColoredPointdoes not inherit the constructors of its superclass Point. ColoredPointdeclares no constructors, and therefore a default constructor for it is automatically created (§8.6.7), and this default constructor is equivalent to: ColoredPoint() { super(); } ColoredPoint. The error is that the constructor for Pointthat takes no arguments is private, and therefore is not accessible outside the class Point, even through a superclass constructor invocation (§8.6.5). resetof class Pointis private, and therefore is not inherited by class ColoredPoint. The method invocations in method clearof class ColoredPointand in method mainof class Testare therefore not correct. pointspackage declares two compilation units: package points; public class Point { int x, y;and: public void move(int dx, int dy) { x += dx; y += dy; } } package points; public class Point3d extends Point { int z; public void move(int dx, int dy, int dz) { x += dx; y += dy; z += dz; } }and a third compilation unit, in another package, is: import points.Point3d; class Point4d extends Point3d { int w; public void move(int dx, int dy, int dz, int dw) { x += dx; y += dy; z += dz; w += dw; // compile-time errors } }Here both classes in the pointspackage compile. The class Point3dinherits the fields xand yof class Point, because it is in the same package as Point. The class Point4d, which is in a different package, does not inherit the fields xand yof class Pointor the field zof class Point3d, and so fails to compile. A better way to write the third compilation unit would be: import points.Point3d; class Point4d extends Point3d { int w; public void move(int dx, int dy, int dz, int dw) { super.move(dx, dy, dz); w += dw; } }using the movemethod of the superclass Point3dto process dx, dy, and dz. If Point4dis written in this way it will compile without errors. Point: package points; public class Point {the public int x, y; protected int useCount = 0; static protected int totalUseCount = 0; public void move(int dx, int dy) { x += dx; y += dy; useCount++; totalUseCount++; } } publicand protectedfields x, y, useCountand totalUseCountare inherited in all subclasses of Point. Therefore, this test program, in another package, can be compiled successfully: class Test extends points.Point { public void moveBack(int dx, int dy) { x -= dx; y -= dy; useCount++; totalUseCount++; } } class Point {the class variable totalMoves can be used only within the class int x, y; void move(int dx, int dy) { x += dx; y += dy; totalMoves++; } private static int totalMoves; void printMoves() { System.out.println(totalMoves); } } class Point3d extends Point { int z; void move(int dx, int dy, int dz) { super.move(dx, dy); z += dz; totalMoves++; } } Point; it is not inherited by the subclass Point3d. A compile-time error occurs at the point where method moveof class Point3dtries to increment totalMoves. public, instances of the class might be available at run time to code outside the package in which it is declared if it has a publicsuperclass or superinterface. An instance of the class can be assigned to a variable of such a publictype. An invocation of a publicmethod of the object referred to by such a variable may invoke a method of the class if it implements or overrides a method of the publicsuperclass or superinterface. (In this situation, the method is necessarily declared public, even though it is declared in a class that is not public.) Consider the compilation unit: package points; public class Point { public int x, y; public void move(int dx, int dy) { x += dx; y += dy; } }and another compilation unit of another package: package morePoints; class Point3d extends points.Point { public int z; public void move(int dx, int dy, int dz) { super.move(dx, dy); z += dz; } }An invocation public class OnePoint { static points.Point getOne() { return new Point3d(); } } morePoints.OnePoint.getOne()in yet a third package would return a Point3dthat can be used as a Point, even though the type Point3dis not available outside the package morePoints. The method movecould then be invoked for that object, which is permissible because method moveof Point3dis public(as it must be, for any method that overrides a publicmethod must itself be public, precisely so that situations such as this will work out correctly). The fields xand yof that object could also be accessed from such a third package. While the field z of class Point3d is public, it is not possible to access this field from code outside the package morePoints, given only a reference to an instance of class Point3d in a variable p of type Point. This is because the expression p.z is not correct, as p has type Point and class Point has no field named z; also, the expression ((Point3d)p).z is not correct, because the class type Point3d cannot be referred to outside package morePoints. The declaration of the field z as public is not useless, however. If there were to be, in package morePoints, a public subclass Point4d of the class Point3d: package morePoints; public class Point4d extends Point3d { public int w; public void move(int dx, int dy, int dz, int dw) { super.move(dx, dy, dz); w += dw; } }then class Point4dwould inherit the field z, which, being public, could then be accessed by code in packages other than morePoints, through variables and expressions of the publictype Point4d. FieldDeclaration:The FieldModifiers are described in §8.3.1. The Identifier in a FieldDeclarator may be used in a name to refer to the field. The name of a field has as its scope (§6.3) the entire body of the class declaration in which it is declared. More than one field may be declared in a single field declaration by using more than one declarator; the FieldModifiers and Type apply to all the declarators in the declaration. Variable declarations involving array types are discussed in §10.2. FieldModifiersopt TypeType VariableDeclaratorsVariableDeclarators ;VariableDeclarators: VariableDeclarator VariableDeclarators ,VariableDeclarator VariableDeclarator: VariableDeclaratorId VariableDeclaratorId =VariableInitializer VariableDeclaratorId: Identifier VariableDeclaratorId [ ]VariableInitializer: Expression ArrayInitializer It is a compile-time error for the body of a class declaration to contain declarations of two fields with the same name. Methods and fields may have the same name, since they are used in different contexts and are disambiguated by the different lookup procedures (§6.5). If the class declares a field with a certain name, then the declaration of that field is said to hide (§6.3.1) any and all accessible declarations of fields with the same name in the superclasses and superinterfaces of the class. If a field declaration hides the declaration of another field, the two fields need not have the same type. A class inherits from its direct superclass and direct superinterfaces all the fields of the superclass and superinterfaces that are both accessible to code in the class and not hidden by a declaration in the class. It is possible for a class to inherit more than one field with the same name (§8.3.3.3). Such a situation does not in itself cause a compile-time error. However, any attempt within the body of the class to refer to any such hidden field can be accessed by using a qualified name (if it is static) or by using a field access expression (§15.10) that contains the keyword super or a cast to a superclass type. See §15.10.2 for discussion and an example. FieldModifiers:The access modifiers FieldModifier FieldModifiers FieldModifier FieldModifier: one ofFieldModifier FieldModifier: one of public protected private final static transient volatile public, protected, and privateare discussed in §6.6. A compile-time error occurs if the same modifier appears more than once in a field declaration, or if a field declaration has more than one of the access modifiers public, protected, and private. If two or more (distinct) field modifiers appear in a field declaration, it is customary, though not required, that they appear in the order consistent with that shown above in the production for FieldModifier. static, there exists exactly one incarnation of the field, no matter how many instances (possibly zero) of the class may eventually be created. A staticfield, sometimes called a class variable, is incarnated when the class is initialized (§12.4).: class Point { int x, y, useCount; Point(int x, int y) { this.x = x; this.y = y; } final static Point origin = new Point(0, 0); }prints: class Test {); } } (2,2) 0 true 1showing that changing the fields x, y, and useCountof pdoes not affect the fields of q, because these fields are instance variables in distinct objects. In this example, the class variable originof the class Pointis referenced both using the class name as a qualifier, in Point.origin, and using variables of the class type in field access expressions (§15.10), as in p.originand q.origin. These two ways of accessing the originclass variable access the same object, evidenced by the fact that the value of the reference equality expression (§15.20.3): isis q.origin==Point.origin true. Further evidence is that the incrementation: p.origin.useCount++;causes the value of q.origin.useCount to be 1; this is so because p.originand q.originrefer to the same variable. final, in which case its declarator must include a variable initializer or a compile-time error occurs. Both class and instance variables ( staticand non- staticfields) may be declared final. Any attempt to assign to a final field results in a compile-time error. Therefore, once a final field has been initialized, it always contains the same value. If a final field holds a reference to an object, then the state of the object may be changed by operations on the object, but the field will always refer to the same object. This applies also to arrays, because arrays are objects; if a final field holds a reference to an array, then the components of the array may be changed by operations on the array, but the field will always refer to the same array. Declaring a field final can serve as useful documentation that its value will not change, can help to avoid programming errors, and can make it easier for a compiler to generate efficient code. In the example: class Point { int x, y; int useCount; Point(int x, int y) { this.x = x; this.y = y; } final static Point origin = new Point(0, 0); }the class. transientto indicate that they are not part of the persistent state of an object. If an instance of the class Point: class Point { int x, y; transient float rho, theta; }were saved to persistent storage by a system service, then only the fields xand ywould be saved. This specification does not yet specify details of such services; we intend to provide them in a future version of this specification. Java provides a second mechanism. If, in the following example, one thread repeatedly calls the method one (but no more than Integer.MAX_VALUE (§20.7.2) times in all), and another thread repeatedly calls the method two: class Test {then method static int i = 0, j = 0; static void one() { i++; j++; } static void two() { System.out.println("i=" + i + " j=" + j); } } twocould occasionally print a value for jthat is greater than the value of i, because the example includes no synchronization and, under the rules explained in §17, the shared values of iand jmight be updated out of order. One way to prevent this out-or-order behavior would be to declare methods one and two to be synchronized (§8.4.3.5): class Test {This prevents method static int i = 0, j = 0; static synchronized void one() { i++; j++; } static synchronized void two() { System.out.println("i=" + i + " j=" + j); } } and j to be volatile: class Test {This allows method static volatile int i = 0, j = 0; static void one() { i++; j++; } static void two() { System.out.println("i=" + i + " j=" + j); } } oneand method twoto be executed concurrently, but guarantees that accesses to the shared values for iand joccur exactly as many times, and in exactly the same order, as they appear to occur during execution of the program text by each thread. Therefore, method twonever observes a value for jgreaterfetches the value of j. See §17 for more discussion and examples. A compile-time error occurs if a final variable is also declared volatile. staticfield), then the variable initializer is evaluated and the assignment performed exactly once, when the class is initialized (§12.4). static), then the variable initializer is evaluated and the assignment performed each time an instance of the class is created (§12.5). class Point { int x = 1, y = 5; }produces the output: class Test { public static void main(String[] args) { Point p = new Point(); System.out.println(p.x + ", " + p.y); } } 1, 5because the assignments to xand yoccur whenever a new Pointis created. Variable initializers are also used in local variable declaration statements (§14.3), where the initializer is evaluated and the assignment performed each time the local variable declaration statement is executed. It is a compile-time error if the evaluation of a variable initializer for a field of a class (or interface) can complete abruptly with a checked exception (§11.2). class Test { static float f = j; // compile-time error: forward reference static int j = 1; static int k = k+1; // compile-time error: forward reference }causes two compile-time errors, because jis referred to in the initialization of fbefore jis declared and because the initialization of krefers to kitself. If a reference by simple name to any instance variable occurs in an initialization expression for a class variable, then a compile-time error occurs. If the keyword this (§15.7.2) or the keyword super (§15.10.2, §15.11) occurs in an initialization expression for a class variable, then a compile-time error occurs. (One subtlety here is that, at run time, static variables that are final and that are initialized with compile-time constant values are initialized first. This also applies to such fields in interfaces (§9.3.1). These variables are "constants" that will never be observed to have their default initial values (§4.5.4), even by devious programs. See §12.4.2 and §13.4.8 for more discussion.) class Test { float f = j; int j = 1; int k = k+1; }causes two compile-time errors, because jis referred to in the initialization of fbefore jis declared and because the initialization of krefers to kitself. Initialization expressions for instance variables may use the simple name of any static variable declared in or inherited by the class, even one whose declaration occurs textually later. Thus the example: class Test { float f = j; static int j = 1; }compiles without error; it initializes jto 1when class Testis initialized, and initializes fto the current value of jevery time an instance of class Testis created. Initialization expressions for instance variables are permitted to refer to the current object this (§15.7.2) and to use the keyword super (§15.10.2, §15.11). class Point { static int x = 2; }produces the output: class Test extends Point { static double x = 4.7; public static void main(String[] args) { new Test().printX(); } void printX() { System.out.println(x + " " + super.x); } } 4.7 2because the declaration of xin class Testhides the definition of xin class Point, so class Testdoes not inherit the field xfrom its superclass Point. Within the declaration of class Test, the simple name xrefers to the field declared within class Test. Code in class Testmay refer to the field xof class Pointas super.x(or, because xis static, as Point.x). If the declaration of Test.xis deleted: class Point { static int x = 2; }then the field class Test extends Point { public static void main(String[] args) { new Test().printX(); } void printX() { System.out.println(x + " " + super.x); } } xof class Pointis no longer hidden within class Test; instead, the simple name xnow refers to the field Point.x. Code in class Testmay still refer to that same field as super.x. Therefore, the output from this variant program is: 2 2 class Point { int x = 2; }produces the output: class Test extends Point { double x = 4.7; void printBoth() { System.out.println(x + " " + super.x); } public static void main(String[] args) { Test sample = new Test(); sample.printBoth(); System.out.println(sample.x + " " + ((Point)sample).x); } } 4.7 2 4.7 2because the declaration of xin class Testhides the definition of xin class Point, so class Testdoes not inherit the field xfrom its superclass Point. It must be noted, however, that while the field xof class Pointis not inherited by class Test, it is nevertheless implemented by instances of class Test. In other words, every instance of class Testcontains two fields, one of type intand one of type float. Both fields bear the name x, but within the declaration of class Test, the simple name xalways refers to the field declared within class Test. Code in instance methods of class Testmay refer to the instance variable xof class Pointas super.x. Code that uses a field access expression to access field x will access the field named x in the class indicated by the type of reference expression. Thus, the expression sample.x accesses a float value, the instance variable declared in class Test, because the type of the variable sample is Test, but the expression ((Point)sample).x accesses an int value, the instance variable declared in class Point, because of the cast to type Point. If the declaration of x is deleted from class Test, as in the program: class Point { static int x = 2; }then the field class Test extends Point { void printBoth() { System.out.println(x + " " + super.x); } public static void main(String[] args) { Test sample = new Test(); sample.printBoth(); System.out.println(sample.x + " " + ((Point)sample).x); } } xof class Pointis no longer hidden within class Test. Within instance methods in the declaration of class Test, the simple name xnow refers to the field declared within class Point. Code in class Testmay still refer to that same field as super.x. The expression sample.xstill refers to the field xwithin type Test, but that field is now an inherited field, and so refers to the field xdeclared in class Point. The output from this variant program is: 2 2 2 2 super(§15.10.2) may be used to access such fields unambiguously. In the example: interface Frob { float v = 2.0f; } class SuperTest { int v = 3; } class Test extends SuperTest implements Frob { public static void main(String[] args) { new Test().printV(); } void printV() { System.out.println(v); } }the class Testinherits two fields named v, one from its superclass SuperTestand one from its superinterface Frob. This in itself is permitted, but a compile-time error occurs because of the use of the simple name vin method printV: it cannot be determined which vis intended. The following variation uses the field access expression super.v to refer to the field named v declared in class SuperTest and uses the qualified name Frob.v to refer to the field named v declared in interface Frob: interface Frob { float v = 2.0f; } class SuperTest { int v = 3; } class Test extends SuperTest implements Frob { public static void main(String[] args) { new Test().printV(); } void printV() { System.out.println((super.v + Frob.v)/2); } }It compiles and prints: 2.5Even if two distinct inherited fields have the same type, the same value, and are both final, any reference to either field by simple name is considered ambiguous and results in a compile-time error. In the example: interface Color { int RED=0, GREEN=1, BLUE=2; } interface TrafficLight { int RED=0, YELLOW=1, GREEN=2; } class Test implements Color, TrafficLight { public static void main(String[] args) { System.out.println(GREEN); // compile-time error System.out.println(RED); // compile-time error } }it is not astonishing that the reference to GREENshould be considered ambiguous, because class Testinherits two different declarations for GREENwith different values. The point of this example is that the reference to REDis also considered ambiguous, because two distinct declarations are inherited. The fact that the two fields named REDhappen to have the same type and the same unchanging value does not affect this judgment. public interface Colorable { int RED = 0xff0000, GREEN = 0x00ff00, BLUE = 0x0000ff; }the fields public interface Paintable extends Colorable { int MATTE = 0, GLOSSY = 1; } class Point { int x, y; } class ColoredPoint extends Point implements Colorable { . . . } class PaintedPoint extends ColoredPoint implements Paintable { . . . RED. . . } RED, GREEN, and BLUEare inherited by the class PaintedPointboth through its direct superclass ColoredPointand through its direct superinterface Paintable. The simple names RED, GREEN, and BLUEmay nevertheless be used without ambiguity within the class PaintedPointto refer to the fields declared in interface Colorable. MethodDeclaration:The MethodModifiers are described in §8.4.3, the Throws clause in §8.4.4, and the MethodBody in §8.4.5. A method declaration either specifies the type of value that the method returns or uses the keyword MethodHeader MethodBody MethodHeader:MethodBody MethodHeader: MethodModifiersopt ResultTypeResultType MethodDeclaratorMethodDeclarator Throwsopt ResultType:Throwsopt ResultType: Type voidMethodDeclarator: Identifer (FormalParameterListopt ) voidto indicate that the method does not return a value. The Identifier in a MethodDeclarator may be used in a name to refer to the method. A class can declare a method with the same name as the class or a field of the class. For compatibility with older versions of Java, a declaration form for a method that returns an array is allowed to place (some or all of) the empty bracket pairs that form the declaration of the array type after the parameter list. This is supported by the obsolescent production: MethodDeclarator:but should not be used in new Java code. MethodDeclarator [ ] It is a compile-time error for the body of a class to have as members two methods with the same signature (§8.4.2) (name, number of parameters, and types of any parameters). Methods and fields may have the same name, since they are used in different contexts and are disambiguated by the different lookup procedures (§6.5). FormalParameterList:The following is repeated from §8.3 to make the presentation here clearer: FormalParameter FormalParameterList ,FormalParameter FormalParameter: Type VariableDeclaratorIdVariableDeclaratorId VariableDeclaratorId:If a method has no parameters, only an empty pair of parentheses appears in the method's declaration. Identifier VariableDeclaratorId [ ] If two formal parameters are declared to have the same name (that is, their declarations mention the same Identifier), then a compile-time error occurs. When the method is invoked (§15.11), the values of the actual argument expressions initialize newly created parameter variables, each of the declared Type, before execution of the body of the method. The Identifier that appears in the DeclaratorId may be used as a simple name in the body of the method to refer to the formal parameter. The scope of formal parameter names is the entire body of the method. These parameter names may not be redeclared as local variables or exception parameters within the method; that is, hiding the name of a parameter is not permitted. Formal parameters are referred to only using simple names, never by using qualified names (§6.6). class Point implements Move { int x, y; abstract void move(int dx, int dy); void move(int dx, int dy) { x += dx; y += dy; } }causes a compile-time error because it declares two movemethods with the same signature. This is an error even though one of the declarations is abstract. MethodModifiers:The access modifiers MethodModifier MethodModifiers MethodModifier MethodModifier: one ofMethodModifier MethodModifier: one of public protected private abstract static final synchronized native public, protected, and privateare discussed in §6.6. A compile-time error occurs if the same modifier appears more than once in a method declaration, or if a method declaration has more than one of the access modifiers public, protected, and private. A compile-time error occurs if a method declaration that contains the keyword abstractalso contains any one of the keywords private, static, final, native, or synchronized. If two or more method modifiers appear in a method declaration, it is customary, though not required, that they appear in the order consistent with that shown above in the production for MethodModifier. abstractmethod declaration introduces the method as a member, providing its signature (name and number and type of parameters), return type, and throwsclause (if any), but does not provide an implementation. The declaration of an abstractmethod m must appear within an abstractclass (call it A); otherwise a compile-time error results. Every subclass of A that is not abstractmust provide an implementation for m, or a compile-time error occurs. More precisely, for every subclass C of the abstractclass A, if C is not abstract, then there must be some class B such that all of the following are true: abstract, and this declaration is inherited by C, thereby providing an implementation of method m that is visible to C. It is a compile-time error for a private method to be declared abstract. It would be impossible for a subclass to implement a private abstract method, because private methods are not visible to subclasses; therefore such a method could never be used. It is a compile-time error for a static method to be declared abstract. It is a compile-time error for a final method to be declared abstract. An abstract class can override an abstract method by providing another abstract method declaration. This can provide a place to put a documentation comment (§18), or to declare that the set of checked exceptions (§11.2) that can be thrown by that method, when it is implemented by its subclasses, is to be more limited. For example, consider this code: class BufferEmpty extends Exception { BufferEmpty() { super(); } BufferEmpty(String s) { super(s); } }The overriding declaration of method class BufferError extends Exception { BufferError() { super(); } BufferError(String s) { super(s); } } public interface Buffer { char get() throws BufferEmpty, BufferError; } public abstract class InfiniteBuffer implements Buffer { abstract char get() throws BufferError; } getin class InfiniteBufferstates that method getin any subclass of InfiniteBuffernever throws a BufferEmptyexception, putatively because it generates the data in the buffer, and thus can never run out of data. An instance method that is not abstract can be overridden by an abstract method. For example, we can declare an abstract class Point that requires its subclasses to implement toString if they are to be complete, instantiable classes: abstract class Point { int x, y; public abstract String toString(); }This abstractdeclaration of toStringoverrides the non- abstract toStringmethod of class Object(§20.1.2). (Class Objectis the implicit direct superclass of class Point.) Adding the code: class ColoredPoint extends Point { int color; public String toString() { return super.toString() + ": color " + color; // error } }results in a compile-time error because the invocation super.toString()refers to method toStringin class Point, which is abstractand therefore cannot be invoked. Method toStringof class Objectcan be made available to class ColoredPointonly if class Pointexplicitly makes it available through some other method, as in: abstract class Point { int x, y; public abstract String toString(); protected String objString() { return super.toString(); } } class ColoredPoint extends Point { int color; public String toString() { return objString() + ": color " + color; // correct } } staticis called a class method. A class method is always invoked without reference to a particular object. An attempt to reference the current object using the keyword thisor the keyword superin the body of a class method results in a compile time error. It is a compile-time error for a staticmethod. finalto prevent subclasses from overriding or hiding it. It is a compile-time error to attempt to override or hide a finalmethod. A private method and all methods declared in a final class (§8.1.2.2) are implicitly final, because it is impossible to override them. It is permitted but not required for the declarations of such methods to redundantly include the final keyword. It is a compile-time error for a final method to be declared abstract. At run-time, a machine-code generator or optimizer can easily and safely "inline" the body of a final method, replacing an invocation of the method with the code in its body, as in the example: final class Point { int x, y; void move(int dx, int dy) { x += dx; y += dy; } }Here, inlining the method class Test { public static void main(String[] args) { Point[] p = new Point[100]; for (int i = 0; i < p.length; i++) { p[i] = new Point(); p[i].move(i, p.length-1-i); } } } moveof class Pointin method mainwould transform the forloop to the form: for (int i = 0; i < p.length; i++) { p[i] = new Point(); Point pi = p[i]; pi.x += i; pi.y += p.length-1-i; }The loop might then be subject to further optimizations. Such inlining cannot be done at compile time unless it can be guaranteed that Test and Point will always be recompiled together, so that whenever Point-and specifically its move method-changes, the code for Test.main will also be updated. nativeis implemented in platform-dependent code, typically written in another programming language such as C, C++, FORTRAN, or assembly language. The body of a nativemethod is given as a semicolon only, indicating that the implementation is omitted, instead of a block. A compile-time error occurs if a native method is declared abstract. For example, the class RandomAccessFile of the standard package java.io might declare the following native methods: package java.io; public class RandomAccessFile implements DataOutput, DataInput { . . . public native void open(String name, boolean writeable) throws IOException; public native int readBytes(byte[] b, int off, int len) throws IOException; public native void writeBytes(byte[] b, int off, int len) throws IOException; public native long getFilePointer() throws IOException; public native void seek(long pos) throws IOException; public native long length() throws IOException; public native void close() throws IOException; } synchronizedmethod acquires a lock (§17.1) before it executes. For a class ( static)method, the lock associated with the Classobject (§20.3) for the method's class is used. For an instance method, the lock associated with this(the object for which the method was invoked) is used. These are the same locks that can be used by the synchronizedstatement (§14.17); thus, the code: class Test { int count; synchronized void bump() { count++; } static int classCount; static synchronized void classBump() { classCount++; } }has exactly the same effect as: class BumpTest { int count; void bump() { synchronized (this) { count++; } } static int classCount; static void classBump() { try { synchronized (Class.forName("BumpTest")) { classCount++; } } catch (ClassNotFoundException e) { ... } } }The more elaborate example: public class Box {defines a class which is designed for concurrent use. Each instance of the class private Object boxContents; public synchronized Object get() { Object contents = boxContents; boxContents = null; return contents; } public synchronized boolean put(Object contents) { if (boxContents != null) return false; boxContents = contents; return true; } } Boxhas an instance variable contentsthat can hold a reference to any object. You can put an object in a Boxby invoking put, which returns falseif the box is already full. You can get something out of a Boxby invoking get, which returns a null reference if the boxis empty. If put and get were not synchronized, and two threads were executing methods for the same instance of Box at the same time, then the code could misbehave. It might, for example, lose track of an object because two invocations to put occurred at the same time. See §17 for more discussion of threads and locks. Throws:A compile-time error occurs if any ClassType mentioned in a throwsClassTypeList ClassTypeList: ClassType ClassTypeList ,ClassType throwsclause is not the class Throwable(§20.22) or a subclass of Throwable. It is permitted but not required to mention other (unchecked) exceptions in a throwsclause. For each checked exception that can result from execution of the body of a method or constructor, a compile-time error occurs unless that exception type or a superclass of that exception type is mentioned in a throws clause in the declaration of the method or constructor. The requirement to declare checked exceptions allows the compiler to ensure that code for handling such error conditions has been included. Methods or constructors that fail to handle exceptional conditions thrown as checked exceptions will normally result in a compile-time error because of the lack of a proper exception type in a throws clause. Java thus encourages a programming style where rare and otherwise truly exceptional conditions are documented in this way. The predefined exceptions that are not checked in this way are those for which declaring every possible occurrence would be unimaginably inconvenient: Error, for example OutOfMemoryError, are thrown due to a failure in or of the virtual machine. Many of these are the result of linkage failures and can occur at unpredictable points in the execution of a Java program. Sophisticated programs may yet wish to catch and attempt to recover from some of these conditions. RuntimeException, for example NullPointerException, result from runtime integrity checks and are thrown either directly from the Java program or in library routines. It is beyond the scope of the Java language, and perhaps beyond the state of the art, to include sufficient information in the program to reduce to a manageable number the places where these can be proven not to occur. abstractmethods. See §11 for more information about exceptions and a large example. abstract(§8.4.3.1) or native(§8.4.3.4). MethodBody:A compile-time error occurs if a method declaration is either Block ; abstractor nativeand has a block for its body. A compile-time error occurs if a method declaration is neither abstractnor nativeand has a semicolon for its body. If an implementation is to be provided for a method but the implementation requires no executable code, the method body should be written as a block that contains no statements: " { }". If a method is declared void, then its body must not contain any return statement (§14.15) that has an Expression. If a method is declared to have a return type, then every return statement (§14.15) in its body must have an Expression. A compile-time error occurs if the body of the method can complete normally (§14.1). In other words, a method with a return type must return only by using a return statement that provides a value return; it is not allowed to "drop off the end of its body." Note that it is possible for a method to have a declared return type and yet contain no return statements. Here is one example: class DizzyDean { int pitch() { throw new RuntimeException("90 mph?!"); } } abstractor not) of the superclass and superinterfaces that are accessible to code in the class and are neither overridden (§8.4.6.1) nor hidden (§8.4.6.2) by a declaration in the class. abstract, then the declaration of that method is said to implement any and all declarations of abstractmethods with the same signature in the superclasses and superinterfaces of the class that would otherwise be accessible to code in the class. A compile-time error occurs if an instance method overrides a static method. In this respect, overriding of methods differs from hiding of fields (§8.3), for it is permissible for an instance variable to hide a static variable. An overridden method can be accessed by using a method invocation expression (§15.11) that contains the keyword super. Note that a qualified name or a cast to a superclass type is not effective in attempting to access an overridden method; in this respect, overriding of methods differs from hiding of fields. See §15.11.4.10 for discussion and examples of this point. staticmethod, then the declaration of that method is said to hide any and all methods with the same signature in the superclasses and superinterfaces of the class that would otherwise be accessible to code in the class. A compile-time error occurs if a staticmethod hides an instance method. In this respect, hiding of methods differs from hiding of fields (§8.3), for it is permissible for a staticvariable to hide an instance variable. A hidden method can be accessed by using a qualified name or by using a method invocation expression (§15.11) that contains the keyword super or a cast to a superclass type. In this respect, hiding of methods is similar to hiding of fields. void. Moreover, a method declaration must not have a throwsclause that conflicts (§8.4.4) with that of any method that it overrides or hides; otherwise, a compile-time error occurs. In these respects, overriding of methods differs from hiding of fields (§8.3), for it is permissible for a field to hide a field of another type. The access modifier (§6.6) of an overriding or hiding method must provide at least as much access as the overridden or hidden method, or a compile-time error occurs. In more detail: public, then the overriding or hiding method must be public; otherwise, a compile-time error occurs. protected, then the overriding or hiding method must be protectedor public; otherwise, a compile-time error occurs. private; otherwise, a compile-time error occurs. privatemethod is never accessible to subclasses and so cannot be hidden or overridden in the technical sense of those terms. This means that a subclass can declare a method with the same signature as a privatemethod in one of its superclasses, and there is no requirement that the return type or throwsclause of such a method bear any relationship to those of the privatemethod in the superclass. abstract, then there are two subcases: abstractis static, a compile-time error occurs. abstractis considered to override, and therefore to implement, all the other methods on behalf of the class that inherits it. A compile-time error occurs if, comparing the method that is not abstractwith each of the other of the inherited methods, for any such pair, either they have different return types or one has a return type and the other is void. Moreover, a compile-time error occurs if the inherited method that is not abstracthas a throwsclause that conflicts (§8.4.4) with that of any other of the inherited methods. abstract, then the class is necessarily an abstractclass and is considered to inherit all the abstractmethods. A compile-time error occurs if, for any two such inherited methods, either they have different return types or one has a return type and the other is void. (The throwsclauses do not cause errors in this case.) abstract, because methods that are not abstractare inherited only from the direct superclass, not from superinterfaces. There might be several paths by which the same method declaration might be inherited from an interface. This fact causes no difficulty and never, of itself, results in a compile-time error. throwsclauses of two methods with the same name but different signatures. Methods are overridden on a signature-by-signature basis. If, for example, a class declares two public methods with the same name, and a subclass overrides one of them, the subclass still inherits the other method. In this respect, Java differs from C++. When a method is invoked (§15.11), the number of actual arguments and the compile-time types of the arguments are used, at compile time, to determine the signature of the method that will be invoked (§15.11.2). If the method that is to be invoked is an instance method, the actual method to be invoked will be determined at run time, using dynamic method lookup (§15.11.4). class Point {the class int x = 0, y = 0; void move(int dx, int dy) { x += dx; y += dy; } } class SlowPoint extends Point { int xLimit, yLimit; void move(int dx, int dy) { super.move(limit(dx, xLimit), limit(dy, yLimit)); } static int limit(int d, int limit) { return d > limit ? limit : d < -limit ? -limit : d; } } SlowPointoverrides the declarations of method moveof class Pointwith its own movemethod, which limits the distance that the point can move on each invocation of the method. When the movemethod is invoked for an instance of class SlowPoint, the overriding definition in class SlowPointwill always be called, even if the reference to the SlowPointobject is taken from a variable whose type is Point. class Point {the class int x = 0, y = 0; void move(int dx, int dy) { x += dx; y += dy; } int color; } class RealPoint extends Point { float x = 0.0f, y = 0.0f; void move(int dx, int dy) { move((float)dx, (float)dy); } void move(float dx, float dy) { x += dx; y += dy; } } RealPointhides the declarations of the intinstance variables xand yof class Pointwith its own floatinstance variables xand y, and overrides the method moveof class Pointwith its own movemethod. It also overloads the name movewith another method with a different signature (§8.4.2). In this example, the members of the class RealPoint include the instance variable color inherited from the class Point, the float instance variables x and y declared in RealPoint, and the two move methods declared in RealPoint. Which of these overloaded move methods of class RealPoint will be chosen for any particular method invocation will be determined at compile time by the overloading resolution procedure described in §15.11. class Point {Here the class; } } Pointprovides methods getXand getYthat return the values of its fields xand y; the class RealPointthen overrides these methods by declaring methods with the same signature. The result is two errors at compile time, one for each method, because the return types do not match; the methods in class Pointreturn values of type int, but the wanna-be overriding methods in class RealPointreturn values of type float. class Point {Here the overriding methods int x = 0, y = 0; void move(int dx, int dy) { x += dx; y += dy; } int getX() { return x; } int getY() { return y; } int color; } class RealPoint extends Point { float x = 0.0f, y = 0.0f; void move(int dx, int dy) { move((float)dx, (float)dy); } void move(float dx, float dy) { x += dx; y += dy; } int getX() { return (int)Math.floor(x); } int getY() { return (int)Math.floor(y); } } getXand getYin class RealPointhave the same return types as the methods of class Pointthat they override, so this code can be successfully compiled. Consider, then, this test program: class Test { public static void main(String[] args) { RealPoint rp = new RealPoint(); Point p = rp; rp.move(1.71828f, 4.14159f); p.move(1, -1); show(p.x, p.y); show(rp.x, rp.y); show(p.getX(), p.getY()); show(rp.getX(), rp.getY()); }The output from this program is: static void show(int x, int y) { System.out.println("(" + x + ", " + y + ")"); } static void show(float x, float y) { System.out.println("(" + x + ", " + y + ")"); } } (0, 0) (2.7182798, 3.14159) (2, 3) (2, 3)The first line of output illustrates the fact that an instance of RealPointactually contains the two integer fields declared in class Point; it is just that their names are hidden from code that occurs within the declaration of class RealPoint(and those of any subclasses it might have). When a reference to an instance of class RealPointin a variable of type Pointis used to access the field x, the integer field xdeclared in class Pointis accessed. The fact that its value is zero indicates that the method invocation p.move(1, -1)did not invoke the method moveof class Point; instead, it invoked the overriding method moveof. static) method can be invoked by using a reference whose type is the class that actually contains the declaration of the method. In this respect, hiding of static methods is different from overriding of instance methods. The example: class Super { static String greeting() { return "Goodnight"; } String name() { return "Richard"; } }produces the output: class Sub extends Super { static String greeting() { return "Hello"; } String name() { return "Dick"; } } class Test { public static void main(String[] args) { Super s = new Sub(); System.out.println(s.greeting() + ", " + s.name()); } } Goodnight, Dickbecause the invocation of greetinguses the type of s, namely Super, to figure out, at compile time, which class method to invoke, whereas the invocation of nameuses the class of s, namely Sub, to figure out, at run time, which instance method to invoke. import java.io.OutputStream; import java.io.IOException; class BufferOutput { private OutputStream o; BufferOutput(OutputStream o) { this.o = o; } protected byte[] buf = new byte[512]; protected int pos = 0; public void putchar(char c) throws IOException { if (pos == buf.length) flush(); buf[pos++] = (byte)c; } public void putstr(String s) throws IOException { for (int i = 0; i < s.length(); i++) putchar(s.charAt(i)); }This example produces the output: public void flush() throws IOException { o.write(buf, 0, pos); pos = 0; } } class LineBufferOutput extends BufferOutput { LineBufferOutput(OutputStream o) { super(o); } public void putchar(char c) throws IOException { super.putchar(c); if (c == '\n') flush(); } } class Test { public static void main(String[] args) throws IOException { LineBufferOutput lbo = new LineBufferOutput(System.out); lbo.putstr("lbo\nlbo"); System.out.print("print\n"); lbo.putstr("\n"); } } lbo print lboThe class BufferOutputimplements a very simple buffered version of an OutputStream, flushing the output when the buffer is full or flushis invoked. The subclass LineBufferOutputdeclares only a constructor and a single method putchar, which overrides the method putcharof BufferOutput. It inherits the methods putstrand flushfrom class Buffer. In the putchar method of a LineBufferOutput object, if the character argument is a newline, then it invokes the flush method. The critical point about overriding in this example is that the method putstr, which is declared in class BufferOutput, invokes the putchar method defined by the current object this, which is not necessarily the putchar method declared in class BufferOutput. Thus, when putstr is invoked in main using the LineBufferOutput object lbo, the invocation of putchar in the body of the putstr method is an invocation of the putchar of the object lbo, the overriding declaration of putchar that checks for a newline. This allows a subclass of BufferOutput to change the behavior of the putstr method without redefining it. Documentation for a class such as BufferOutput, which is designed to be extended, should clearly indicate what is the contract between the class and its subclasses, and should clearly indicate that subclasses may override the putchar method in this way. The implementor of the BufferOutput class would not, therefore, want to change the implementation of putstr in a future implementation of BufferOutput not to use the method putchar, because this would break the preexisting contract with subclasses. See the further discussion of binary compatibility in §13, especially §13.2. BadPointException: class BadPointException extends Exception { BadPointException() { super(); } BadPointException(String s) { super(s); } } class Point { int x, y; void move(int dx, int dy) { x += dx; y += dy; } }This example results in a compile-time error, because the override of method class CheckedPoint extends Point { void move(int dx, int dy) throws BadPointException { if ((x + dx) < 0 || (y + dy) < 0) throw new BadPointException(); x += dx; y += dy; } } movein class CheckedPointdeclares that it will throw a checked exception that the movein class Pointhas not declared. If this were not considered an error, an invoker of the method moveon a reference of type Pointcould find the contract between it and Pointbroken if this exception were thrown. Removing the throws clause does not help: class CheckedPoint extends Point { void move(int dx, int dy) { if ((x + dx) < 0 || (y + dy) < 0) throw new BadPointException(); x += dx; y += dy; } }A different compile-time error now occurs, because the body of the method movecannot throw a checked exception, namely BadPointException, that does not appear in the throwsclause for move. StaticInitializer:It is a compile-time error for a static initializer to be able to complete abruptly (§14.1, §15.5) with a checked exception (§11.2). staticBlock The static initializers and class variable initializers are executed in textual order and may not refer to class variables declared in the class whose declarations appear textually after the use, even though these class variables are in scope. This restriction is designed to catch, at compile time, circular or otherwise malformed initializations. Thus, both: class Z { static int i = j + 2; static int j = 4; }and: class Z { static { i = j + 2; } static int i, j; static { j = 4; } }result in compile-time errors. Accesses to class variables by methods are not checked in this way, so: class Z { static int peek() { return j; }produces the output: static int i = peek(); static int j = 1; } class Test { public static void main(String[] args) { System.out.println(Z.i); } } 0because the variable initializer for iuses the class method peekto access the value of the variable jbefore jhas been initialized by its variable initializer, at which point it still has its default value (§4.5.4). If a return statement (§14.15) appears anywhere within a static initializer, then a compile-time error occurs. If the keyword this (§15.7.2) or the keyword super (§15.10, §15.11) appears anywhere within a static initializer, then a compile-time error occurs. ConstructorDeclaration:The SimpleTypeName in the ConstructorDeclarator must be the simple name of the class that contains the constructor declaration; otherwise a compile-time error occurs. In all other respects, the constructor declaration looks just like a method declaration that has no result type. ConstructorModifiersopt ConstructorDeclaratorConstructorDeclarator Throwsopt ConstructorBody ConstructorDeclarator:ConstructorBody ConstructorDeclarator: SimpleTypeNameSimpleTypeName (FormalParameterListopt ) Here is a simple example: class Point { int x, y; Point(int x, int y) { this.x = x; this.y = y; } }Constructors are invoked by class instance creation expressions (§15.8), by the newInstancemethod of class Class(§20.3), by the conversions and concatenations caused by the string concatenation operator + (§15.17.1), and by explicit constructor invocations from other constructors (§8.6.5). Constructors are never invoked by method invocation expressions (§15.11). Access to constructors is governed by access modifiers (§6.6). This is useful, for example, in preventing instantiation by declaring an inaccessible constructor (§8.6.8). Constructor declarations are not members. They are never inherited and therefore are not subject to hiding or overriding. ConstructorModifiers:The access modifiers ConstructorModifier ConstructorModifiers ConstructorModifier ConstructorModifier: one ofConstructorModifier ConstructorModifier: one of public protected private public, protected, and privateare discussed in §6.6. A compile-time error occurs if the same modifier appears more than once in a constructor declaration, or if a constructor declaration has more than one of the access modifiers public, protected, and private. Unlike methods, a constructor cannot be abstract, static, final, native, or synchronized. A constructor is not inherited, so there is no need to declare it final and an abstract constructor could never be implemented. A constructor is always invoked with respect to an object, so it makes no sense for a constructor to be static. There is no practical need for a constructor to be synchronized, because it would lock the object under construction, which is normally not made available to other threads until all constructors for the object have completed their work. The lack of native constructors is an arbitrary language design choice that makes it easy for an implementation of the Java Virtual Machine to verify that superclass constructors are always properly invoked during object creation. throwsclause for a constructor is identical in structure and behavior to the throwsclause for a method (§8.4.4). thisfollowed by a parenthesized argument list, or an explicit invocation of a constructor of the direct superclass, written as superfollowed by a parenthesized argument list. ConstructorBody:It is a compile-time error for a constructor to directly or indirectly invoke itself through a series of one or more explicit constructor invocations involving {ExplicitConstructorInvocationopt BlockStatementsoptBlockStatementsopt }ExplicitConstructorInvocation: this (ArgumentListopt ) ;ArgumentListopt super ( ) ; this. its direct superclass that takes no arguments. Except for the possibility of explicit constructor invocations, the body of a constructor is like the body of a method (§8.4.5). A return statement (§14.15) may be used in the body of a constructor if it does not include an expression. In the example: class Point {the first constructor of int x, y; Point(int x, int y) { this.x = x; this.y = y; } } class ColoredPoint extends Point { static final int WHITE = 0, BLACK = 1; int color; ColoredPoint(int x, int y) { this(x, y, WHITE); } ColoredPoint(int x, int y, int color) { super(x, y); this.color = color; } } ColoredPointinvokes the second, providing an additional argument; the second constructor of ColoredPointinvokes the constructor of its superclass Point, passing along the coordinates. An explicit constructor invocation statement. An invocation of the constructor of the direct superclass, whether it actually appears as an explicit constructor invocation statement or is provided automatically (§8.6.7), performs an additional implicit action after a normal return of control from the constructor: all instance variables that have initializers are initialized at that time, in the textual order in which they appear in the class declaration. An invocation of another constructor in the same class using the keyword this does not perform this additional implicit action. §12.5 describes the creation and initialization of new class instances. Object, then the default constructor has an empty body. If the class is declared public, then the default constructor is implicitly given the access modifier public (§6.6); otherwise, the default constructor has the default access implied by no access modifier. Thus, the example: public class Point { int x, y; }is equivalent to the declaration: public class Point { int x, y; public Point() { super(); } }where the default constructor is publicbecause the class Pointis public. private. A publicclass can likewise prevent the creation of instances outside its package by declaring at least one constructor, to prevent creation of a default constructor with publicaccess, and declaring no constructor that is public. Thus, in the example: class ClassOnly { private ClassOnly() { } static String just = "only the lonely"; }the class ClassOnlycannot be instantiated, while in the example: package just; public class PackageOnly { PackageOnly() { } String[] justDesserts = { "cheesecake", "ice cream" }; }the class PackageOnlycan be instantiated only within the package just, in which it is declared. Contents | Prev | Next | Index Java Language Specification (HTML generated by Suzette Pelouch on February 24, 1998) Please send any comments or corrections to doug.kramer@sun.com Spec-Zone.ru - all specs in one place
http://spec-zone.ru/Java/JLS/1/8.doc.html
CC-MAIN-2018-13
refinedweb
11,910
52.7
Hi, I have a question regarding Solution Accelelator - Correspondence Management. I would like to know if there exists a way to “promote” from one environment to another one all the works done into Adobe Solution Accelerators Content Creator? Situation: Business users work into a "Test" environment and they must promote their work into "Prod" environement. I have tried to create by using LiveCycle Workbench an LCA having my “fragment” created in ContentCreator, but "categories" was not imported. Do we have to redo the entire job done in ContentCreator? There exist a way to automatism the process? Thanks in advance, Marieve Marieve, While you can effectively create an LCA to export from one LC server and import into another, there is an unfortunate bug in LC ES Update 1 (8.2.1) that prevents any custom properties set on any assets from being exported in the LCA. This means that when you import the LCA on the other server, you lose all of the "cm" namespace properties that were set by the CGRImport tool, as well as all of the tags set in Content Creator (as you have already determined with your testing). The CGRImport tool supports the use of .tags files. If a .tags file is found with the same name as some other file being imported, CGRImport will look in that file for any tags to set after importing the asset from the local drive. You can see an example if you look inside the //sa/building_blocks/cgr_1_0/dist/repository/samples/CustomCommunications/share/frag folder. There are XDPs in there that have sibling .tags files. Furthermore, CGRImport will properly set other custom properties (still in the "cm" namespace) that are used by Content Creator to differentiate one XDP from another (e.g. to know if it's a textual paragraph fragment vs an image fragment). I would recommend you drag & drop what you want from the Test LC server into a local folder, create .tags files accordingly, then use CGRImport to import that folder/file structure into the Prod LC server. Stefan Adobe Systems
https://forums.adobe.com/thread/450772
CC-MAIN-2018-22
refinedweb
345
61.26