| question_id (int64) | answer_id (int64) | title (string) | question (string) | answer (string) |
|---|---|---|---|---|
836,342
| 839,592
|
Mark class/method obsolete or deprecated in C++
|
Is there a way of marking methods/classes in C++ as obsolete?
In c# you can write:
[Obsolete("You shouldn't use this method anymore.")]
void foo() {}
I use the GNU toolchain/Eclipse CDT if that matters.
|
The easiest way is with a #define DEPRECATED. On GCC, it expands to __attribute__((deprecated)), on Visual C++ it expands to __declspec(deprecated), and on compilers that do not have something similar it expands to nothing.
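A minimal sketch of such a macro might look like this (the macro name and exact guards are illustrative, not quoted from a specific codebase):
#if defined(__GNUC__)
#define DEPRECATED __attribute__((deprecated))
#elif defined(_MSC_VER)
#define DEPRECATED __declspec(deprecated)
#else
#define DEPRECATED
#endif
DEPRECATED void foo(); // GCC and Visual C++ will warn at every call site of foo()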
|
836,469
| 836,485
|
Cross platform programming
|
I need to write a small program for university. The problem is, it has to be in C/C++ under Linux, and I've never used Linux, so I anticipate having a lot of problems with the IDE, compilation, and all that.
Is it possible to code it under Windows and then "copy/paste" the code and compile it under Linux? If it's at all possible, what are the limitations I should know about?
It will be a small program, typical client/server communication using sockets.
|
I think you should go ahead and do it under Linux (gcc?). This will teach you some stuff about 'old school' programming. Forget about using an IDE, use vim (if you already know it) or nedit (more like notepad).
Compile on the command line. Link it yourself. Write a make file to do this.
These are the basics. You need to understand them before using an IDE. Do this while you are still at university, because it's a pain and you will (and should) want to use an IDE for real work!
Also, a basic understanding of Unix is not hard to achieve (I have found my way around Solaris, Ubuntu and OS X, coming from a Windows background) - a few simple tutorials should get you up and running. For writing small school projects, there is not much you need to know: cd, ls, mkdir, make, gcc (be sure to use g++ for C++ projects - that has bitten me on my Mac before...). Stay close to your home directory (~).
Doing your project on the target system will help you get certain stuff right: When doing these simple sockets and pthreads examples, I found compiling and linking them to be non-platform portable. On certain systems, linking in the libraries needs to be done this way, on others that way.
BTW: If you really do want to do this under Windows, your best bet is to have a POSIX environment under Windows. POSIX sockets are different to the Windows networking model if I remember correctly.
Try either MinGW or Cygwin. Both should give you the *nix development environment under Windows. You can use your favorite text editor (a Windows port of vim?) and cmd.exe instead of bash for starting the compiler :)
EDIT: Sorry, if the tone is confrontational (according to comment). I will try to soften it a bit. It's just... I have seen quite a few people trying to learn C/C++ (or Java for that matter) with IDEs and have come to believe that they get in the way for starting off. Sure, you will need better tools for real life programs, but the overhead of project files etc. for school sample projects adds clutter. It also makes it harder to email your homework to your teacher - a zip with a bunch of .c and .h files and a makefile is really as simple as it gets...
|
836,511
| 6,153,500
|
What are some techniques for migrating a large MFC application to WPF/.NET?
|
I am currently working on a very large legacy MFC MDI application. It has a large number of UI elements - dockable toolbars, custom tree controls, context menus, etc. It is an image processing application so the main views render themselves using DirectX and OpenGL. The product is about 10 years old and one of the priorities here is to update the look and feel of it.
Knowing that Microsoft has done a good job of providing interoperability between C++/MFC and .NET I thought it would make sense to migrate the code base incrementally. What I'm struggling with now is where to start.
One approach is to rip out the MFC framework with WPF and reuse as much of the C++ code as we can. This will let us maximize the benefits of the WPF architecture but will mean a long development period until we're fully functional again.
Another approach is to replace MFC controls one at a time with their WPF counterparts. This will allow us to work incrementally. My concern with this approach is that it means there will be an awful lot of connection points between managed and unmanaged code and I'm not sure where to start with replacing things like the main menu and toolbars.
Or is there another option here I'm not seeing?
Any suggestions or links to information on this topic would be appreciated.
Update: DavidK raised some excellent questions so I'm adding the motivations behind this.
1) Future development of the product
This product is still being actively developed with new features getting added on a regular basis. I thought that it would make a lot of sense to try and slowly migrate towards C#/WPF. In my limited experience with C#/WPF I found the productivity gains to be amazing over working in C++/MFC.
The other big thing we're getting with WPF is the ability to take advantage of multi-head systems. MFC applications are limited to a single top level frame, making it very difficult to leverage multiple monitors.
2) Employee retention and recruitment
It's getting harder and harder to find developers who are willing to work on MFC. It's also important for the career development of the current developers to get exposure to newer technologies.
|
Revisiting this because I have successfully replaced our top level MFC UI (the main frame, windows, and toolbars) with WPF.
As it turns out, our core drawing code merely needs to be handed an HWND to render into. This made it really easy to reuse the bulk of our existing C++ codebase.
Here's a quick rundown on the key pieces of the approach I took:
Used the .NET HwndHost class to host an HWND for the C++ drawing code to render into
Created C++/CLI wrappers for any native C++ code that needed to be exposed to the WPF/C# UI code
Left most of the MFC dialogs as-is in the native C++ code. This minimizes the amount of work needed to finish the UI. The MFC dialogs can be migrated to WPF over time.
As a side note, we're using SandDock and SandRibbon from Divelements and have been very happy with them so far.
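As an illustration of the C++/CLI wrapper step above, a wrapper around a native class might look roughly like this (NativeRenderer and the method names are invented for the sketch, not taken from the original code):
// C++/CLI: wraps a native C++ class so the WPF/C# side can call it
public ref class RendererWrapper
{
public:
    RendererWrapper() : m_native(new NativeRenderer()) {}
    ~RendererWrapper() { this->!RendererWrapper(); }              // destructor
    !RendererWrapper() { delete m_native; m_native = nullptr; }   // finalizer
    void Render(System::IntPtr hwnd)                              // HWND supplied by HwndHost
    {
        m_native->Render(static_cast<HWND>(hwnd.ToPointer()));
    }
private:
    NativeRenderer* m_native;
};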
|
836,551
| 836,613
|
Forward declare a class's public typedef in c++
|
I'm trying to simplify a bunch of header file "include spaghetti" by using forward declarations and moving #includes into the implementation file. However, I keep coming upon the following scenario:
//Foo.h
#include "Bar.h"
class Foo
{
public:
void someMethod(Bar::someType_t &val);
};
//Bar.h
.
.
.
class Bar
{
public:
typedef std::vector<SomeClass> someType_t;
};
I want to remove #include "Bar.h" in as many cases as possible. I also see the situation where the typedef in Bar.h is listed outside of the Bar class. I'm assuming both situations can be addressed in the same manner.
Any ideas?
|
Unfortunately you don't have many choices and none is perfect.
First, the two obvious and unacceptable solutions:
You can forward declare the typedef which totally defeats the purpose of using a typedef.
You include the file which contains the typedef, which you want to avoid.
The more interesting solutions:
Have all related typedefs in the same include and include that file. This creates code coupling between the classes though. You ought to do that only with related classes, else you are going to end up with a god include file and this could lead to a lot of recompiling when you add a typedef to that file.
For each class, have a separate include with the typedefs in it. Kind of annoying, but it works.
Those last two are like doing forward declarations but with added typedefs. They reduce file interdependencies since you are rarely modifying the typedef file.
I'd say for most situations, the central include gives the best benefit-to-hassle ratio. Just be careful.
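A minimal sketch of the shared-typedef approach for the question's example (the file name and layout are just one possibility):
// BarTypes.h -- central typedef header
#ifndef BAR_TYPES_H
#define BAR_TYPES_H
#include <vector>
class SomeClass;                            // forward declaration is enough for a typedef
typedef std::vector<SomeClass> someType_t;  // nothing is instantiated here
#endif
// Foo.h no longer needs Bar.h
#include "BarTypes.h"
class Foo
{
public:
    void someMethod(someType_t &val);
};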
|
836,705
| 836,841
|
Why are all Java methods implicitly overridable?
|
In C++, I have to explicitly use the 'virtual' keyword to make a member function 'overridable', since making a member function overridable involves the overhead of virtual tables and vpointers (so every member function is implicitly non-overridable for performance reasons).
It also allows a member function to be hidden (if not overridden) when a subclass provides a separate implementation with the same name and signature.
The same technique is used in C# as well. I am wondering why Java moved away from this behavior, made every method overridable by default, and only provides the ability to disable overriding via explicit use of the 'final' keyword.
|
The better question might be "Why does C# have non-virtual methods?" Or at the very least, why aren't they virtual by default with the option to flag them as non-virtual?
In C++, there is the idea (as Brian so nicely pointed out) that if you don't want it, you don't pay for it. The problem is that if you do want it, this usually means you end up paying through the nose for it. In most Java implementations, they are designed explicitly for lots of virtual calls; the vtable implementations tend to be fast, scarcely more expensive than non-virtual calls, meaning the primary advantage of non-virtual functions is lost. Furthermore, JIT compilers can inline virtual functions at runtime. As such, for efficiency reasons, there is very little reason actually to use non-virtual functions.
Thus, it largely comes down to the principle of least surprise. It tells us that all methods should behave the same way, not half of them virtual and half of them non-virtual. Since we need at least some virtual methods to achieve polymorphism, it makes sense to have them all be virtual. Furthermore, having two methods with the same signature is just asking to shoot yourself in the foot.
Polymorphism also dictates that the object itself should have control over what it does. Its behavior should not depend on whether the client thinks it's a FooParent or a FooChild.
EDIT: So I'm being called on my assertions. This next paragraph is conjecture on my part, not a statement of fact.
An interesting side effect of all this is that Java programmers tend to use interfaces very heavily. Since the virtual method optimizations make the cost of interfaces essentially non-existent, they allow you to use a List (for example) instead of an ArrayList, and switch it out for a LinkedList at some later date with a simple one-line change and no additional penalty.
EDIT: I'll also pony up a couple sources. While not the original sources, they do come from Sun explaining some of the workings on HotSpot.
Inlining
VTable
|
837,073
| 844,430
|
Error C1047: Object file created with an older compiler than other objects
|
I have a project that I'm building in C++ in Release mode in Visual Studio 2008 SP1 on Windows 7 and when I build it I keep getting:
fatal error C1047: The object or
library file '.\Release\foobar.obj'
was created with an older compiler
than other objects; rebuild old
objects and libraries.
The error occurs while linking.
I've tried deleting the specific object file and rebuilding but that doesn't fix it. I've also tried blowing away the whole release build folder and rebuilding but that also didn't fix it. Any ideas?
|
I would suggest reinstalling VS 2008 SP1. Have you installed a different VS (e.g. VS Express) in the meantime? This is known to cause interference with an existing VS installation.
You could try checking the compiler and linker versions by running cl.exe and link.exe from the Visual Studio command prompt.
|
837,088
| 846,589
|
Problems with Static Initialization
|
I'm having some weird issues with static initialization. I'm using a code generator to generate structs and serialization code for a message passing system I wrote. In order to have a way of easily allocating a message based on its message id, I have my code generator output something similar to the following for each message type:
MessageAllocator s_InputPushUserControllerMessageAlloc(INPUT_PUSH_USER_CONTROLLER_MESSAGE_ID, (AllocateMessageFunc)Create_InputPushUserControllerMessage);
The MessageAllocator class basically looks like this:
MessageAllocator::MessageAllocator( uint32_t messageTypeID, AllocateMessageFunc func )
{
if (!s_map) s_map = new std::map<uint32_t, AllocateMessageFunc>();
if (s_map->insert(std::make_pair(messageTypeID, func)).second == false)
{
//duplicate key!
ASSERT(false, L"Nooooo!");
}
s_count++;
}
MessageAllocator::~MessageAllocator()
{
s_count--;
if (s_count == 0) delete s_map;
}
where s_map and s_count are static members of MessageAllocator. This works most of the time, but sometimes messages are not added to the map. For example, this particular message is not added unless I call Create_InputPushUserControllerMessage() somewhere in my startup code; other messages, however, work fine. I thought this might be something to do with the linker incorrectly thinking the type is unreferenced and removing it, so I disabled that using the /OPT:NOREF switch (I'm using Visual Studio 2008 SP1), but that had no effect.
I'm aware of the problem of the "static initialization order fiasco" but as far as I know the order in which these objects are created shouldn't alter the result so this seems ok to me.
Any insight here would be appreciated.
|
Turns out that the object files containing the static initializers were not included by the linker because nothing referenced any functions in them. To work around this I extern "C"-ed one of the generated functions so that it would have a predictable non-mangled name and then forced a reference to it using a pragma like this for each message
#pragma comment(linker, "/include:Create_GraphicsDynamicMeshCreationMessage")
which I put in the generated header file that is later included in all the other non-generated files. It's MSVC only and kind of a hack, but I assume I can do something similar on GCC once I eventually port it.
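On GCC/binutils the rough equivalent would be to force an undefined reference with the linker's -u option, which pulls the defining object out of a static library (an assumption based on standard ld behaviour, not something verified in the original answer; libmessages is a stand-in for the library holding the generated objects):
g++ main.o -Wl,-u,Create_GraphicsDynamicMeshCreationMessage -L. -lmessages -o app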
|
837,203
| 837,243
|
Using Hash Maps to represent an extremely large data source
|
I have a very large possible data set that I am trying to visualize at once. The set itself consists of hundreds of thousands of segments, each of which is mapped to an id.
I have received a second data source that gives more real-time information for each segment, but the id's do not correspond to the id's I have.
I have a 1:1 mapping of the data id's (9-character strings) to the current id's (long integers). The problem is that there are a lot of id's, and the data that is coming in is in no specific order.
The solution I came up with is to have a hash-map that maps the strings to the road id's. The problem is that I don't know if the hash-map will be efficient enough to have all 166k data entries.
Does anyone have any suggestions and/or hashing algorithms that I can use for this?
|
If you're only dealing with hundreds of thousands of datapoints, it will likely not be a problem to go with the naive way and just stick with a hash-map.
Even if you have 500,000 9-character strings and an equal number of longs, that's still only about 16 bytes per item, or 8,000,000 bytes total. Even if you double that for overhead, 16 MB is hardly too big to have in memory at one time.
Basically, try the easy way first, and only worry about it when your profiling tells you it's taking too long.
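For illustration, the naive approach really is only a few lines; std::map is used here for brevity, and a hash map such as std::unordered_map works the same way at this scale (names are made up for the sketch):
#include <map>
#include <string>
std::map<std::string, long> idLookup;   // 9-character data id -> internal segment id
// build once:   idLookup[dataId] = segmentId;
// per record:   std::map<std::string, long>::const_iterator it = idLookup.find(dataId);
//               if (it != idLookup.end()) { /* use it->second */ }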
|
837,265
| 837,446
|
Why is there no ^^ operator in C/C++?
|
& has &&. | has ||. Why doesn't ^ have ^^?
I understand that it wouldn't be short-circuiting, but it would have different semantics. In C, true is really any non-zero value. Bitwise XOR is not always the same thing as logical XOR:
int a=strcmp(str1,str2);// evaluates to 1, which is "true"
int b=strcmp(str1,str3);// evaluates to 2, which is also "true"
int c=a ^^ b; // this would be false, since true ^ true = false
int d=a ^ b; //oops, this is true again, it is 3 (^ is bitwise)
Since you can't always rely on a true value being 1 or -1, wouldn't a ^^ operator be very helpful? I often have to do strange things like this:
if(!!a ^ !!b) // looks strange
|
Dennis Ritchie answers
There are both historical and practical reasons why there is no ^^ operator.
The practical is: there's not much use for the operator. The main point of && and || is to take advantage of their short-circuit evaluation not only for efficiency reasons, but more often for expressiveness and correctness.
[...]
By contrast, an ^^ operator would always force evaluation of both arms of the expression, so there's no efficiency gain. Furthermore, situations in which ^^ is really called for are pretty rare, though examples can be created. These situations get rarer and stranger as you stack up the operator--
if (cond1() ^^ cond2() ^^ cond3() ^^ ...) ...
does the consequent exactly when an odd number of the condx()s are true. By contrast, the && and || analogs remain fairly plausible and useful.
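(Not part of Ritchie's answer, but a common workaround: normalizing both operands with ! gives logical XOR without the double negation used in the question.)
if (!a != !b)   // true exactly when one of a and b is truthy -- logical XOR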
|
837,267
| 837,291
|
Safe to store list::iterator for later use?
|
Suppose I have a list, in which no new nodes are added or deleted. However, the nodes may be shuffled around.
Is it safe to save an iterator, pointing to a node in the list, and access it at some arbitrarily later time?
Edit (followup question):
The documentation for list::splice() says that it removes elements from the argument list. Does this mean if I call splice, using the same list as arguments to the function, that existing iterators will be invalidated?
|
Yes, std::list iterators are just pointers to a node. You can insert, delete (other nodes), and rearrange nodes in the list and the iterator is not invalidated.
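To the follow-up: splicing within the same list only relinks nodes, so iterators keep referring to the same elements, now at their new positions. A small sketch (my example, not from the original answer):
#include <cassert>
#include <list>
int main()
{
    std::list<int> l;
    for (int i = 1; i <= 5; ++i) l.push_back(i);   // 1 2 3 4 5
    std::list<int>::iterator it = l.begin();
    ++it;                                          // refers to 2
    l.splice(l.begin(), l, it, l.end());           // move 2..5 to the front: 2 3 4 5 1
    assert(*it == 2);                              // iterator still valid, still refers to 2
    return 0;
}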
|
837,521
| 837,601
|
C++ STL's String equivalent for Binary Data
|
I am writing a C++ application and I was wondering what the conventional C++ way of storing a byte array in memory is.
Is there something like a string, except specifically made for binary data?
Right now I am using an unsigned char* array to store the data, but something more STL/C++-like would be better.
|
I'd use std::vector<unsigned char>. Most operations you need can be done using the STL with iterator ranges. Also, remember that if you really need the raw data, &v[0] is guaranteed to give a pointer to the underlying contiguous array (as long as the vector is not empty).
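For example, reading a whole binary file into such a buffer and handing it to a C-style API might look like this (some_c_api is a hypothetical function taking a pointer and a length):
#include <cstddef>
#include <fstream>
#include <iterator>
#include <vector>
void some_c_api(const unsigned char* data, std::size_t len);   // hypothetical
void load_and_send(const char* path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> buf((std::istreambuf_iterator<char>(in)),
                                   std::istreambuf_iterator<char>());
    if (!buf.empty())
        some_c_api(&buf[0], buf.size());   // &buf[0] points at the contiguous array
}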
|
837,671
| 837,696
|
Which libraries must be linked with protocol-buffers generated C++ code
|
I have the mytest.cc and mytest.h output from a mytest.proto file, but I can't find any reference on how to compile an object using g++ for this. (The .proto is fine, as I got it working with Python.)
g++ mytest.cc -l???????
what libraries to include?
|
I think you may need to link to libprotobuf
g++ mytest.cc -lprotobuf -o mytest
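If protobuf was installed with its pkg-config file, you can let pkg-config supply the flags instead of hard-coding them (assuming a standard installation):
g++ mytest.cc `pkg-config --cflags --libs protobuf` -o mytest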
|
837,772
| 840,468
|
How do I pass a list of objects from C++ to Lua?
|
I'm the lead dev for Bitfighter, and am adding user-scripted bots using Lua. I'm working with C++ and Lua using Lunar to glue them together.
I'm trying to do something that I think should be pretty simple: I have a C++ object in Lua (bot in the code below), and I call a method (findItems) on it which causes C++ to search the area around the robot and return a list of objects it finds (TestItems and others not shown here). My question is simply how do I assemble and return the list of found items in C++, and then iterate over them in Lua?
Basically, I want to fill in the <<<< Create list of items, return it to lua >>>> block below, and make any corrections I may need in the Lua code itself, included below that.
I've tried to keep the code simple but complete. Hope there's not too much here! Thanks!
C++ Header file
class TestItem : public LuaObject
{
public:
TestItem(); // C++ constructor
///// Lua Interface
TestItem(lua_State *L) { } ; // Lua constructor
static const char className[];
static Lunar<TestItem>::RegType methods[];
S32 getClassID(lua_State *L) { return returnInt(L, TestItemType); }
};
class LuaRobot : public Robot
{
LuaRobot(); // C++ constructor
///// Lua Interface
LuaRobot(lua_State *L) { } ; // Lua constructor
static const char className[];
static Lunar<LuaRobot>::RegType methods[];
S32 findItems(lua_State *L);
};
C++ .cpp file
const char LuaRobot::className[] = "Robot"; // Class name in Lua
// Define the methods we will expose to Lua
Lunar<LuaRobot>::RegType LuaRobot::methods[] =
{
method(LuaRobot, findItems),
{0,0} // End method list
};
S32 LuaRobot::findItems(lua_State *L)
{
range = getIntFromStack(L, 1); // Pop range from the stack
thisRobot->findObjects(fillVector, range); // Put items in fillVector
<<<< Create list of items, return it to lua >>>>
for(int i=0; i < fillVector.size(); i++)
do something(fillVector[i]); // Do... what, exactly?
return something;
}
/////
const char TestItem::className[] = "TestItem"; // Class name in Lua
// Define the methods we will expose to Lua
Lunar<TestItem>::RegType TestItem::methods[] =
{
// Standard gameItem methods
method(TestItem, getClassID),
{0,0} // End method list
};
Lua Code
bot = LuaRobot( Robot ) -- This is a reference to our bot
range = 10
items = bot:findItems( range )
for i, v in ipairs( items ) do
print( "Item Type: " .. v:getClassID() )
end
|
So you need to fill a vector and push that to Lua.
Some example code follows. Applications is a std::list.
typedef std::list<std::string> Applications;
I create a table and fill it with the data in my list.
int ReturnArray(lua_State* L) {
lua_createtable(L, applications.size(), 0);
int newTable = lua_gettop(L);
int index = 1;
Applications::const_iterator iter = applications.begin();
while(iter != applications.end()) {
lua_pushstring(L, (*iter).c_str());
lua_rawseti(L, newTable, index);
++iter;
++index;
}
return 1;
}
This leaves me with an array in the stack. If it were returned to Lua, then I could write the following:
for k,v in ipairs( ReturnArray() ) do
print(v)
end
Of course so far, this just gets me a Lua array of strings. To get an array of Lua objects we just tweak your method a bit:
S32 LuaRobot::findItems(lua_State *L)
{
range = getIntFromStack(L, 1); // Pop range from the stack
thisRobot->findObjects(fillVector, range); // Put items in fillVector
// <<<< Create list of items, return it to lua >>>>
lua_createtable(L, fillVector.size(), 0);
int newTable = lua_gettop(L);
for(int i=0; i < fillVector.size(); i++) {
TestItem* item = fillVector[i];
item->push(L); // put an object, not a string, in Lua array
lua_rawseti(L, newTable, i + 1);
}
return 1;
}
|
838,098
| 838,125
|
CListControl selection (MFC)
|
In report view in a CListCtrl in MFC, how do I detect if there is no current highlighted selection?
Using GetFirstSelectedItemPosition doesn't work: if an item was previously selected and I then click somewhere else on the list control, GetFirstSelectedItemPosition still reports the last selected position instead of NULL, even though that position is no longer highlighted.
|
Did you try CListCtrl::GetSelectedCount?
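A minimal check along those lines (assuming m_list is the CListCtrl member):
if (m_list.GetSelectedCount() == 0)
{
    // nothing is currently selected/highlighted in the control
}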
|
838,345
| 838,363
|
How to reduce code duplication on class with data members with same name but different type?
|
I have trouble when designing classes like this
class C1 {
public:
void foo();
};
class C2 {
public:
void foo();
};
C1 and C2 have the same method foo().
class Derived1 : public Base {
public:
void Update() {
member.foo();
}
private:
C1 member;
};
class Derived2 : public Base {
public:
void Update() {
member.foo();
}
private:
C2 member;
};
Update() in both derived classes is exactly the same, but the type of member is different.
So I have to copy the Update() implementation for every new derived class.
Is there a way to reduce this code duplication? The only solution I have come up with uses a macro.
I think there is a more elegant way to solve this with templates, but I cannot figure it out.
EDIT:
Thanks a lot guys, but I think I missed something:
1. I'm using C++.
2. In reality each derived class has about 5 members; they all provide the foo() method and are derived from the same base class. My situation is that I have already written a (very long) Update() method and it works for every derived class without any modification. So I just copy and paste this Update() into every new class's Update(), and this leads to terrible code duplication. I wonder if there is a way in which I don't need to rewrite Update() too much and can reduce the duplication.
Thanks again.
|
This is exactly the sort of application that class templates are designed for. They allow functions within a class to operate on different data types, without the need to copy algorithms and logic.
This Wikipedia page will give you a good overview of templates in programming.
Here's the basic idea to get you started:
template <class T>
class CTemplateBase
{
public:
void Update()
{
member.foo();
}
private:
T member; // Generic type
};
class CDerived1 : public CTemplateBase<C1>
{
// No common algorithms required here
};
class CDerived2 : public CTemplateBase<C2>
{
// No common algorithms required here
};
|
838,384
| 1,267,878
|
Reorder vector using a vector of indices
|
I'd like to reorder the items in a vector, using another vector to specify the order:
char A[] = { 'a', 'b', 'c' };
size_t ORDER[] = { 1, 0, 2 };
vector<char> vA(A, A + sizeof(A) / sizeof(*A));
vector<size_t> vOrder(ORDER, ORDER + sizeof(ORDER) / sizeof(*ORDER));
reorder_naive(vA, vOrder);
// A is now { 'b', 'a', 'c' }
The following is an inefficient implementation that requires copying the vector:
void reorder_naive(vector<char>& vA, const vector<size_t>& vOrder)
{
assert(vA.size() == vOrder.size());
vector<char> vCopy = vA; // Can we avoid this?
for(int i = 0; i < vOrder.size(); ++i)
vA[i] = vCopy[ vOrder[i] ];
}
Is there a more efficient way, for example, that uses swap()?
|
This algorithm is based on chmike's, but the vector of reorder indices is const. This function agrees with his for all 11! permutations of [0..10]. The complexity is O(N^2), taking N as the size of the input, or more precisely, the size of the largest orbit.
See below for an optimized O(N) solution which modifies the input.
template< class T >
void reorder(vector<T> &v, vector<size_t> const &order ) {
for ( int s = 1, d; s < order.size(); ++ s ) {
for ( d = order[s]; d < s; d = order[d] ) ;
if ( d == s ) while ( d = order[d], d != s ) swap( v[s], v[d] );
}
}
Here's an STL style version which I put a bit more effort into. It's about 47% faster (that is, almost twice as fast over [0..10]!) because it does all the swaps as early as possible and then returns. The reorder vector consists of a number of orbits, and each orbit is reordered upon reaching its first member. It's faster when the last few elements do not contain an orbit.
template< typename order_iterator, typename value_iterator >
void reorder( order_iterator order_begin, order_iterator order_end, value_iterator v ) {
typedef typename std::iterator_traits< value_iterator >::value_type value_t;
typedef typename std::iterator_traits< order_iterator >::value_type index_t;
typedef typename std::iterator_traits< order_iterator >::difference_type diff_t;
diff_t remaining = order_end - 1 - order_begin;
for ( index_t s = index_t(), d; remaining > 0; ++ s ) {
for ( d = order_begin[s]; d > s; d = order_begin[d] ) ;
if ( d == s ) {
-- remaining;
value_t temp = v[s];
while ( d = order_begin[d], d != s ) {
swap( temp, v[d] );
-- remaining;
}
v[s] = temp;
}
}
}
And finally, just to answer the question once and for all, a variant which does destroy the reorder vector (filling it with -1's). For permutations of [0..10], it's about 16% faster than the preceding version. Because overwriting the input enables dynamic programming, it is O(N), asymptotically faster for some cases with longer sequences.
template< typename order_iterator, typename value_iterator >
void reorder_destructive( order_iterator order_begin, order_iterator order_end, value_iterator v ) {
typedef typename std::iterator_traits< value_iterator >::value_type value_t;
typedef typename std::iterator_traits< order_iterator >::value_type index_t;
typedef typename std::iterator_traits< order_iterator >::difference_type diff_t;
diff_t remaining = order_end - 1 - order_begin;
for ( index_t s = index_t(); remaining > 0; ++ s ) {
index_t d = order_begin[s];
if ( d == (diff_t) -1 ) continue;
-- remaining;
value_t temp = v[s];
for ( index_t d2; d != s; d = d2 ) {
swap( temp, v[d] );
swap( order_begin[d], d2 = (diff_t) -1 );
-- remaining;
}
v[s] = temp;
}
}
|
838,460
| 838,468
|
Java Exception vs C++ Exceptions
|
Where are exceptions stored?
The stack or the heap?
How is memory allocated and deallocated for exceptions?
And if there is more than one exception that needs to be handled, are objects created for all of these exceptions?
|
I would assume that memory for exceptions is allocated the same way as for all other objects (on the heap).
This used to be a problem, because then you cannot allocate memory for an OutOfMemoryError,
which is why there was no stack trace until Java 1.6. Now they pre-allocate space for the stacktrace as well.
If you are wondering where the reference to the exception is stored while it is being thrown, the JVM keeps the reference internally while it unwinds the call stack to find the exception handler, who then gets the reference (on its stack frame, just like any other local variable).
There cannot be two exceptions being thrown at the same time (on the same thread). They can be nested, but then you have only one "active" exception with a reference to the nested exception.
When all references to the exception disappear (e.g. after the exception handler is finished), the exception gets garbage-collected like everything else.
|
838,639
| 838,720
|
What cast occurs when there is a signed/unsigned mismatch?
|
When a compiler finds a signed / unsigned mismatch, what action does it take? Is the signed number cast to an unsigned or vice versa? and why?
|
If the operands are integral and one of them is unsigned, then a conversion to unsigned is done. For example:
-1 > (unsigned int)1 // as -1 will be converted to 2^nbits-1
Conversion int->unsigned int is: n>=0 -> n; n<0 -> n (mod 2^nbits), for example -1 goes to 2^nbits-1
Conversion unsigned int->int is: n <= INT_MAX -> n; n > INT_MAX -> implementation defined
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
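A small example of the surprise this causes (my example; most compilers emit a signed/unsigned comparison warning here):
#include <iostream>
int main()
{
    int a = -1;
    unsigned int b = 1;
    // a is converted to unsigned; with 32-bit int that is 4294967295
    std::cout << (a > b) << std::endl;   // prints 1
    return 0;
}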
|
838,670
| 838,842
|
Is there a way to forbid casting to subclass that is non-const in C++?
|
Here is a complete example.
I want to forbid using A::set on objects cast from B to A, by allowing B to be cast only to const A.
How to do it?
(I can't use virtual functions)
#include <iostream>
#include <cassert>
using namespace std;
class A {
public:
int get() const { return i_; }
void set(int i) { i_ = i; }
protected:
int i_;
};
class B : public A {
public:
int ok() const { return A::get() == copy_i_; }
void set(int i) { A::set(i); copy_i_ = i; }
protected:
int copy_i_;
};
void test2() {
A a;
a.set(3); // ok here
cout << a.get() << endl;
B b;
b.set(5);
A& aa = b;
assert(b.ok());
aa.set(3); // not ok here
assert(b.ok()); // fail-here
}
int main() {
test2();
return 0;
}
|
You could make the inheritance private and provide a member function in B to use instead of casting.
const A& B::convert_to_A() const { return *this; }
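Applied to the question's code, B might then look like this (a sketch; only the inheritance and the added accessor change):
class B : private A {   // private inheritance: no implicit B& -> A& conversion for outsiders
public:
    int ok() const { return A::get() == copy_i_; }
    void set(int i) { A::set(i); copy_i_ = i; }
    const A& convert_to_A() const { return *this; }   // only const access to A escapes
protected:
    int copy_i_;
};
// In test2(), "A& aa = b;" no longer compiles; callers must write
// const A& aa = b.convert_to_A();  and then aa.set(3) is rejected because set() is non-const.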
|
838,674
| 838,785
|
Throwing Destructors, Memory Corruption?
|
We have a class whose semantic behaviour is like the following :-
struct Sample
{
~Sample() throw()
{
throw 0;
}
};
void f ()
{
try
{
delete new Sample;
}
catch (...){
}
}
I know that throwing exceptions in dtors is evil, but the relinquishment of a 3rd-party library resource throws an exception (yet the resource can be re-acquired immediately, which is strange!). There is also a pool of this resource, say an array/container of class Sample. So there are two cases to consider: destruction of a dynamically allocated object and destruction of a dynamically allocated array of objects.
Currently the application crashes randomly at different execution points, but only when the array version (pool) is used. We believe this is due to memory corruption, but then why does the unpooled version work?
What happens to the allocated memory? Is it undefined behaviour? What happens in the case of an array? Do the dtors (at least, if not the memory) of all the elements of an array get called (say if the dtor of the first element throws)?
Thanks in advance,
EDIT-1:
Well, we tracked it down to dtors of some array-elements not being called. But the allocated memory does not seem to have problems...
The following is section 5.3.5.7 of SC22-N-4411.pdf:
If the value of the operand of the delete-expression is not a null pointer value, the delete-expression will call a deallocation function (3.7.4.2). Otherwise, it is unspecified whether the deallocation function will be called. [ Note: The deallocation function is called regardless of whether the destructor for the object or some element of the array throws an exception. —end note ]
Looks like memory is always deallocated in such cases. Am I right in interpreting the standard?
|
There are two things that could happen in this situation:
terminate() is called
undefined behaviour
In neither case can dynamically allocated memory be guaranteed to be released (except that application termination will of course return all resources to the OS).
|
838,721
| 838,986
|
C++ iterators considered harmful?
|
At the Boost library conference today, Andrei Alexandrescu, author of the book Modern C++ Design and the Loki C++ library, gave a talk titled "Iterators Must Go" (video, slides) about why iterators are bad, and he had a better solution.
I tried to read the presentation slides, but I could not get much out of them.
Are iterators bad?
Is his replacement really better?
Will C++ implementers pick up his ideas?
|
First, to answer your questions:
No. In fact, I argued elsewhere that iterators are the most important/fundamental concept of computer science ever. I (unlike Andrei) also think that iterators are intuitive.
Yes, definitely but that shouldn't come as a surprise.
Hmm. Looking at Boost.Range and C++0x – haven't they already?
Andrei's big contribution here is just to say: drop the concept of iterators altogether, see ranges not just as a convenience wrapper but rather as a core construct. Other languages have already done this (much of Andrei's concepts just echo .NET's LINQ or Python's iterators) but they all only offer output ranges. Andrei argues for different types of ranges, much like the conventional iterator categories.
In that light, it's odd that he starts by mocking the arbitrariness of these iterator categories.
I also think that his examples are off, especially his file copying: yes, the iterator variant is a huge improvement over the 1975 code. It reduces a loop with complicated break condition down to one statement. What he's really taking issue with here is just the syntax. Well, excuse me: we're talking about C++ here – of course the syntax is ugly. And yes, using ranges here is an improvement – but only syntactically.
I also think that Andrei's find implementation is off. What he really defines there is the DropUntil operation (naming is hard!) from LINQ. The find operation should really return either one or zero elements (or an iterator!). Shunning iterators here isn't helpful in my opinion since we might want to modify the value directly instead of copying it. Returning a one-element range here only adds overhead without a benefit. Doing it Andrei's way is bad because then the name of the method is just wrong and misleading.
That said, I essentially agree with Andrei in almost all points. Iterators, while being my pet concept from computer science, are certainly a big syntactical burden and many ranges (especially infinite generators) can (and should) be implemented conveniently without them.
|
838,816
| 838,859
|
Call C++ code from a C# application or port it?
|
I've recently been wrestling with an algorithm which was badly implemented (i.e. the developer was pulled off onto another project and failed to adequately document what he'd done) in C#.
I've found an alternative (from numerical recipes) which works but is written in C++. So I'm thinking probably the safest way to get something working would be to wrap the C++ up in a DLL.
Bearing in mind that I'm still a bit green when it comes to C# and have never tried making a DLL from scratch, does this sound like a reasonable approach (and if so, has anyone tried this / got any advice)? Or should I go the whole hog and try and port the C++ routine into C#?
Edit - I'm not looking for anyone to make the decision for me, but if anyone has any experience of either route I'd be interested to hear their opinions and any nasty pitfalls that should be avoided. For example, how nasty is passing in lists of data from C# to a C++ STL vector?
|
I tried linking to C DLLs from C# code with quite good results, even though I had some problems sending data between the environments. Otherwise the procedure is quite straightforward. The more data you send back and forth (both in amount and frequency), the slower your program will run, but you have probably already figured this out on your own.
The main drawback was maintaining the c#-c glue code (interface code) each time something changed or someone found a bug.
Here is a bit of code to get you started:
using System.Runtime.InteropServices;
class myDllCaller {
//call to function in the dll returning an int
[DllImport("MyFavorite.dll")]
private static extern int dllFunction(/* list of parameters to the function */);
public static void Main() {
int result = dllFunction();
}
}
|
838,917
| 838,932
|
How does the compiler resolve infinite reference loops?
|
// edited by Neil Butterworth to conserve vertical space
#include <stdio.h>
struct A;
struct B;
A& GetAInstance();
B& GetBInstance();
struct A {
A() {
printf( "A\n" );
}
~A() {
printf( "~A\n" );
B& b = GetBInstance();
}
};
struct B {
B() {
printf( "B\n" );
}
~B() {
printf( "~B\n" );
A& a = GetAInstance();
}
};
A& GetAInstance() {
static A a;
return a;
}
B& GetBInstance() {
static B b;
return b;
}
int main( ) {
A a;
}
Consider the above scenario. I would expect this to result in an infinite reference loop resulting in the program being unable to exit from static de-initialization, but the program ran just fine with the following printout:
A
~A
B
~B
A
~A
Which was unexpected.
How does a compiler deal with this situation? What algorithms does it use to resolve the infinite recursion? Or have I misunderstood something fundamental? Is this, somewhere in the standard, defined as undefined?
|
The compiler effectively stores a bool with each static to remember whether it has been initialised.
This is the order:
Inside main:
Construct A
Destruct A
Construct static B
Clean-up of statics:
Destruct static B
Construct static A
Destruct static A
3.6.3/1 in the Standard specifies it should work this way, even when a static is constructed during clean-up as in this case.
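Roughly, a function-local static such as GetAInstance() behaves as if the compiler had rewritten it like this (a hand-written sketch of the mechanism, not actual compiler output; alignment of the raw buffer is glossed over):
#include <cstdlib>
#include <new>
static bool a_constructed = false;
static unsigned char a_storage[sizeof(A)];
static void destroy_a()
{
    a_constructed = false;
    reinterpret_cast<A*>(a_storage)->~A();
}
A& GetAInstance()
{
    if (!a_constructed)
    {
        new (a_storage) A();        // construct on first call, whenever that happens
        a_constructed = true;
        std::atexit(destroy_a);     // schedule destruction during static clean-up
    }
    return *reinterpret_cast<A*>(a_storage);
}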
|
839,257
| 864,834
|
How do I make a CMFCToolBar recognize image masks?
|
I have a CMFCToolBar-derived class, and an instance thereof is a member of a CDockablePane-derived class.
I looked at the VisualStudioDemo sample to see how it's done and have this so far:
int CMyPane::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
// Removed all "return -1 on error" code for better readability
CDockablePane::OnCreate(lpCreateStruct);
if(m_toolBar.Create(this, AFX_DEFAULT_TOOLBAR_STYLE, IDR_MY_TOOLBAR) &&
m_toolBar.LoadToolBar(IDR_MY_TOOLBAR, 0, 0, TRUE /* Is locked */))
{
if(theApp.m_bHiColorIcons) // Is true, i.e. following code is executed
{
m_toolBar.CleanUpLockedImages();
m_toolBar.LoadBitmap(IDB_MY_TOOLBAR_24, 0, 0, TRUE /*Locked*/);
}
m_toolBar.SetPaneStyle(m_toolBar.GetPaneStyle() | CBRS_TOOLTIPS | CBRS_FLYBY);
m_toolBar.SetPaneStyle(m_toolBar.GetPaneStyle() & ~(CBRS_GRIPPER | CBRS_SIZE_DYNAMIC | CBRS_BORDER_TOP | CBRS_BORDER_BOTTOM | CBRS_BORDER_LEFT | CBRS_BORDER_RIGHT));
m_toolBar.SetOwner(this);
// All commands will be routed via this control , not via the parent frame:
m_toolBar.SetRouteCommandsViaFrame(FALSE);
}
return 0;
}
The high-color image (24bit) is loaded but the magenta mask (R255 G0 B255) is visible. I don't see how I can tell the toolbar to recognize the mask.
Is this even possible?
|
I don't know if this works every time but I use RGB(192, 192, 192) as the mask color and it does get recognized.
(Seems like the CMFCToolBar control is prepared to use ::GetSysColor(COLOR_BTNFACE) as the transparent color...)
|
839,276
| 839,286
|
object returned after an exception?
|
int somefunction(bool a)
{
try
{
if(a)
throw Error("msg");
return 2;
}
catch (Error const & error)
{
//do i need to return anything here??
//return -1;
}
}
|
You need to either return something or re-throw the exception (or throw a new one). You can rethrow the same exception by just using the keyword throw in the catch block, with no exception object or arguments after it.
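So the original function could be completed in either of these ways (keeping the question's types):
int somefunction(bool a)
{
    try
    {
        if (a)
            throw Error("msg");
        return 2;
    }
    catch (Error const & error)
    {
        return -1;   // return an error value...
        // throw;    // ...or rethrow the same exception instead
    }
}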
|
839,455
| 839,531
|
Aggregating contributions from multiple donors
|
As I try to modernize my C++ skills, I keep encountering this situation where "the STL way" isn't obvious to me.
I have an object that wants to gather contributions from multiple sources into a container (typically a std::vector). Each source is an object, and each of those objects provides a method get_contributions() that returns any number of contributions (from 0 to many). The gatherer will call get_contributions() on each contributor and aggregate the results into a single collection.
The question is, what's the best signature for get_contributions()?
Option 1: std::vector<contribution> get_contributions() const
This is the most straightforward, but it leads to lots of copying as the gatherer copies each set of results into the master collection. And yes, performance matters here. For example, if the contributors were geometric models and getting contributions amounted to tessellating them into triangles for rendering, then speed would count and the number of contributions could be enormous.
Option 2: template <typename container> void get_contributions(container &target) const
This allows each contributor to add its contributions directly to the master container by calling target.push_back(foo). The drawback here is that we're exposing the container to other types of inspection and manipulation. I'd prefer to keep the interface as narrow as possible.
Option 3: template <typename out_it> void get_contributions(out_it &it) const
In this solution, the aggregator would pass a std::back_insert_iterator for the master collection, and the individual contributors would do *it++ = foo; for each contribution. This is the best I've come up with so far, but I'm left with the feeling that there must be a more elegant way. The back_insert_iterator feels like a kludge.
Is Option 3 the best, or is there a better approach? Does this gathering pattern have a name?
|
Option 3 is the most idiomatic way. Note that you don't have to use back_insert_iterator. If you know how many elements are going to be added, you can resize the vector, and then provide a regular vector iterator instead. It won't call push_back then (and potentially save you some copying)
back_insert_iterator's main advantage is that it expands the vector as needed.
It's not a kludge though. It's designed for this exact purpose.
One minor adjustment would be to pass the iterator by value, and then return it when the function returns.
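A sketch of that last adjustment (the contributions_ member and the loop body are placeholders, not from the original design):
template <typename OutIt>
OutIt get_contributions(OutIt out) const
{
    for (std::size_t i = 0; i < contributions_.size(); ++i)
        *out++ = contributions_[i];
    return out;   // the caller can chain the returned iterator into the next contributor
}
// Gatherer side:
//   std::back_insert_iterator<std::vector<contribution> > out = std::back_inserter(master);
//   out = contributor.get_contributions(out);   // repeat for each contributor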
|
839,468
| 839,520
|
Hints and tools for finding unmatched braces / preprocessor directives
|
This is one of my most dreaded C/C++ compiler errors:
file.cpp(3124) : fatal error C1004: unexpected end-of-file found
file.cpp includes almost a hundred header files, which in turn include other header files. It's over 3000 lines. The code should be modularized and structured, the source files smaller. We should refactor it. As a programmer there's always a wish list for improving things.
But right now, the code is a mess and the deadline is around the corner. Somewhere among all these lines—quite possibly in one of the included header files, not in the source file itself—there's apparently an unmatched brace, unmatched #ifdef or similar.
The problem is that when something is missing, the compiler can't really tell me where it is missing. It just knows that when it reached end of the file it wasn't in the right parser state.
Can you offer some tools or other hints / methodologies to help me find the cause for the error?
|
If the #includes are all in one place in the source file, you could try putting a stray closing brace in between the #includes. If you get an 'unmatched closing brace' error when you compile, you know it all balances up to that point. It's a slow method, but it might help you pinpoint the problem.
|
839,530
| 839,536
|
How to add search functionality to my application
|
I am writing a Windows application (with Borland C++ Builder) that stores a large number of text files. I want users to be able to search these files very fast, so I need an indexing and search library. I do not use a database, but my own file format for storing the documents (they are all in a single file).
Are there such libraries for Windows? It should add/remove documents to the index per request and find documents similar to a Google query ("car house -payment").
|
CLucene is a C++ port of the Lucene (Java) library.
I have only used the original Java version, but Lucene is able to do what you are asking for.
|
839,644
| 872,971
|
Get std::fstream failure error messages and/or exceptions
|
I'm using fstream. Is there any way to get the failure message/exception?
For example if I'm unable to open the file?
|
From checking it out, I found that errno (and on Windows also GetLastError()) is set to the last error, and checking it is quite helpful. To get the string message, use:
strerror(errno);
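For example (the standard does not guarantee that fstream sets errno, but common implementations do, as the answer suggests):
#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>
int main()
{
    std::ifstream in("does_not_exist.txt");
    if (!in)
        std::cerr << "open failed: " << std::strerror(errno) << std::endl;
    return 0;
}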
|
839,667
| 839,719
|
How much should I worry about the Intel C++ compiler emitting suboptimal code for AMD?
|
We've always been an Intel shop. All the developers use Intel machines, recommended platform for end users is Intel, and if end users want to run on AMD it's their lookout. Maybe the test department had an AMD machine somewhere to check we didn't ship anything completely broken, but that was about it.
Up until a few years ago we just used the MSVC compiler, and since it doesn't really offer a lot of processor tuning options beyond SSE level, no one worried too much about whether the code might favour one x86 vendor over another. However, more recently we've been using the Intel compiler a lot. Our stuff definitely gets some significant performance benefits from it (on our Intel hardware), and its vectorization capabilities mean less need to go to asm/intrinsics. However, people are starting to get a bit nervous about whether the Intel compiler may actually not be doing such a good job for AMD hardware. Certainly if you step into the Intel CRT or IPP libraries you see a lot of cpuid queries to apparently set up jump tables to optimised functions. It seems unlikely Intel go to much trouble to do anything good for AMD's chips, though.
Can anyone with any experience in this area comment on whether it's a big deal or not in practice? (We've yet to actually do any performance testing on AMD ourselves.)
Update 2010-01-04: Well the need to support AMD never became concrete enough for me to do any testing myself. There are some interesting reads on the issue here, here and here though.
Update 2010-08-09: It seems the Intel-FTC settlement has something to say about this issue - see "Compilers and Dirty Tricks" section of this article.
|
Buy an AMD box and run it on that. That seems like the only responsible thing to do, rather than trusting strangers on the internet ;)
Apart from that, I believe part of AMD's lawsuit against Intel is based on the claim that Intel's compiler specifically produces code that runs inefficiently on AMD processors. I don't know whether that's true or not, but AMD seems to believe so.
But even if they don't willfully do that, there's no doubt that Intel's compiler optimizes specifically for Intel processors and nothing else.
That said, I doubt it'd make a huge difference. AMD CPUs would still benefit from all the auto-vectorization and other clever features of the compiler.
|
839,705
| 839,770
|
qt trouble overriding paintEvent
|
I'm subclassing QProgressBar in a custom widget, and I overrode the paintEvent method with the following code:
void myProg::paintEvent(QPaintEvent *pe)
{
QProgressBar::paintEvent(pe);
QRect region = pe->rect();
QPainter *painter = new QPainter(this);
QPen *pen = new QPen;
painter->begin(this);
painter->setBrush(Qt::red);
int x = this->x();
int y = this->y();
pen->setWidth(10);
painter->setPen(*pen);
painter->drawLine(x,y,x+100,y);
painter->end();
}
I'm trying to display a red line, as a starting point, to see that I can add my own modifications to the widget. However, this isn't working. I only see the widget as a regular QProgressBar. Any ideas on what could be wrong ?
|
The coordinate system you need to use is relative to the top-left of the widget, but you're apparently using one relative to the widget's parent. (Widget's x and y coords are relative to their parent). So your line will be getting clipped.
Also, it's unnecessary to call QPainter::begin and QPainter::end when you construct it using a QWidget * parameter. And the painter in your code doesn't get deleted, either. It's not necessary to create a painter on the heap with new: I'd just create it on the stack.
Try:
void myProg::paintEvent(QPaintEvent *pe)
{
QProgressBar::paintEvent(pe);
QRect region = pe->rect();
QPainter painter(this);
QPen pen(Qt::red); //Note: set line colour like this
//(Brush line removed; not necessary when drawing a line)
int x = 0; //Note changed
int y = height() / 2; //Note changed
pen.setWidth(10);
painter.setPen(pen);
painter.drawLine(x,y,x+100,y);
}
This should draw a red horizontal line 100 pixels long starting from the middle-left of the widget.
|
839,739
| 1,004,869
|
With CDatabase, can I send SQL without using CRecordSet?
|
When using the MFC class CDatabase to connect to a data source, is there any way to execute SQL statements without having to open a CRecordSet object? I ask because CRecordSet::Open() appears to throw an exception when I use it to call stored procedures that don't return anything - and there's no reason to expect results from, say, sp_delete_row.
|
I use CDatabase::ExecuteSQL()
CDatabase database;
//database is connected somewhere
database.ExecuteSQL("Drop table [users]"); // SQL statement from little Johnny Drop Tables
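Note that ExecuteSQL throws a CDBException on failure rather than returning a status, so in practice a call is typically wrapped like this (sketch):
try
{
    database.ExecuteSQL(_T("DELETE FROM users WHERE id = 42"));
}
catch (CDBException* e)
{
    // e->m_strError holds the error text
    e->Delete();
}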
|
839,765
| 839,818
|
The right type for handles in C interfaces
|
I'm creating a C API that hides some functionality in a DLL file.
Since everything is C++ on the inside, most of the functions work with handles which map directly to this pointers inside the API.
To get some degree of type safety for those handles, I define them like this:
typedef struct MyType1* MyType1Handle;
typedef struct MyType2* MyType2Handle;
I don't actually define MyType1 or MyType2 anywhere, since I only use them as pointers and cast them to the actual pointer type inside the API.
My problem is that when using my library in a CLR project in Visual Studio I get this warning: unresolved typeref token (token) for 'type'; image may not run.
http://msdn.microsoft.com/en-us/library/h8027ys9(VS.80).aspx
It's no big deal since it works, but it looks unprofessional.
I don't like using void*:
typedef void* MyType1Handle;
typedef void* MyType2Handle;
This makes it possible to call a function wanting a MyType1Handle with a MyType2Handle since they are actually the same type.
Another approach I don't want to use is something like this
typedef int MyType1Handle;
typedef int MyType2Handle;
This would work fine as long as ints and pointers have the same size, but that's not always the case, and it seems like there is no foolproof way of getting a platform-specific pointer-sized integer. It has the same type safety problems as the void* approach as well.
Another approach I tried was to do it like this:
struct MyType1{};
typedef struct MyType1* MyType1Handle;
This didn't work in C, since an empty struct is invalid C code. I could of course extend my struct with a dummy member, but it seems like there should be a better way of doing this.
So my question boils down to:
How do you generally specify this kind of types in the most compatible way?
|
If you look at how Microsoft defines its WinAPI handles (winnt.h), it actually looks like this:
struct HWND__ { int unused; }; typedef struct HWND__ *HWND
in fact they have a macro for this:
#define DECLARE_HANDLE(name) struct name##__{int unused;}; typedef struct name##__ *name
So this seems to be a common practice.
Unfortunately I can't make another suggestion except this one, which you already mentioned, but I hope it helps you anyway.
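Applied to the handles in the question, that would look like:
DECLARE_HANDLE(MyType1Handle);
DECLARE_HANDLE(MyType2Handle);
/* The two handle types are now distinct pointer types, so passing one where the
   other is expected is a compile-time error in C++ (and at least a warning in C). */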
|
839,856
| 839,897
|
Using std::fstream, how to deny access (read and write) to the file
|
How can I deny access to a file I open with fstream? I want to prevent other access to the file while I'm reading/writing to it with fstream.
|
You cannot do it with the standard fstream, you'll have to use platform specific functions.
On Windows, you can use CreateFile() or LockFileEx(). On Linux, there is flock(), lockf(), and fcntl() (as the previous commenter said).
If you are using MSVC, you can pass a third parameter to fstream's constructor. See the documentation for Visual Studio 6 or newer versions. Of course it won't work with other compilers and platforms.
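For the MSVC-specific route, the non-standard third constructor argument takes one of the _SH_* sharing flags from <share.h> (non-portable, sketch only):
#include <fstream>
#include <share.h>   // _SH_DENYRW, Microsoft-specific
std::fstream f("data.txt", std::ios::in | std::ios::out, _SH_DENYRW);
// while f is open, other attempts to open the file for reading or writing are denied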
Why do you want to lock others out anyway? There might be a better solution...
|
839,958
| 840,055
|
Custom Iterator in C++
|
I have a class TContainer that aggregates several STL collections of pointers to a TItems class.
I need to create an iterator to traverse the elements in all the collections in my TContainer class, abstracting away the inner workings from the client.
What would be a good way to do this? Should I create a class that extends an iterator (if so, which iterator class should I extend), or should I create an iterator class that is an aggregate of iterators?
I only need a FORWARD_ONLY iterator.
I.e., if this is my container:
typedef std::vector <TItem*> ItemVector;
class TContainer {
std::vector <ItemVector *> m_Items;
};
What would be a good Iterator to traverse all the items contained in the vectors of the m_Items member variable.
|
When I did my own iterator (a while ago now) I inherited from std::iterator and specified the iterator category as the first template parameter. Hope that helps.
For forward iterators, use forward_iterator_tag rather than input_iterator_tag in the following code.
This class was originally taken from the istream_iterator class (and modified for my own use, so it may not resemble istream_iterator any more).
template<typename T>
class <PLOP>_iterator
:public std::iterator<std::input_iterator_tag, // type of iterator
T,ptrdiff_t,const T*,const T&> // Info about iterator
{
public:
const T& operator*() const;
const T* operator->() const;
<PLOP>_iterator& operator++();
<PLOP>_iterator operator++(int);
bool equal(<PLOP>_iterator const& rhs) const;
};
template<typename T>
inline bool operator==(<PLOP>_iterator<T> const& lhs, <PLOP>_iterator<T> const& rhs)
{
return lhs.equal(rhs);
}
Check this documentation on iterator tags:
http://www.sgi.com/tech/stl/iterator_tags.html
Having just re-read the information on iterators:
http://www.sgi.com/tech/stl/iterator_traits.html
This is the old way of doing things (iterator_tags) the more modern approach is to set up iterator_traits<> for your iterator to make it fully compatible with the STL.
|
839,975
| 840,245
|
How is C++ by-ref argument passing compiled in assembly?
|
In the late years of college, I had a course on Compilers. We created a compiler for a subset of C. I have always wondered how a pass-by-ref function call is compiled into assembly in C++.
For what I remember, a pass-by-val function call follows the following procedure:
Store the address of the PP
Push the arguments onto the stack
Perform the function call
In the function, pop from stack the parameters
What's different for pass-by-reference? (int void(int&);)
EDIT:
I may sound totally lost but, if you could help me I'd really appreciate it.
Everyone's answer is basically that it passes the address instead of the value. I understood that to be basically what passing a pointer is. So how come these two functions behave differently?
struct A {
int x;
A(int v){
x = v;
}
};
int byRef(A& v){
v = A(3);
return 0;
}
int byP (A* v){
v = &A(4); //OR new A(4)
return 0;
}
int _tmain(int argc, _TCHAR* argv[])
{
A a (1); A b (2);
byRef(a); byP (&b);
cout << a.x << " " << b.x;
system("pause");
return 0;
}
I know that in byP(A*), v is being passed by value, thus, it won't affect the caller's argument. Then, how would you implement byRef(A&) in terms of A*?
|
int byRef(A& v){
v = A(3);
return 0;
}
This invokes the assignment of the temporary object to the object passed by reference; the object used in the function call is modified. A shallow copy will be performed if no assignment operator is provided.
int byP (A* v){
v = &A(4); //OR new A(4)
return 0;
}
This copies a pointer to a temporary object into the passed-in pointer value. No assignment function is called. The value of 'v' is changed, but the object v pointed to (the object whose address was passed as an argument) is unchanged.
If you did this:
struct A {
int x;
A(int v){
x = v;
}
A &operator = (A &rhs){
cout << "assignment!";
}
};
then the "assignment" will be output in the byRef function but not the byP function.
Although the & is implemented using pointers 'under the hood', as others have said, the language treats the reference as the object that was passed to the function.
So, to implement the byRef using pointers:
int byRefUsingP (A *v)
{
*v = A(3);
// or you could do:
// v->operator = (A(3));
// if an operator = is defined (don't know if it will work without one defined)
return 0;
}
|
840,023
| 840,069
|
Efficient coding help
|
I am currently writing code in C++ to find all possible permutations of 6 integers and store the best permutation (i.e. the one whose total is closest to a given value).
I am trying to write this code as efficiently as possible and would appreciate any advice or examples.
I was considering storing the integers in an array and performing the permutations using pointers within a loop. Is this a good approach?
|
They already thought of this one for you:
#include <algorithm>
int ra[6] = { 1, 2, 3, 4, 5, 6 };
do {
// whatever
} while (std::next_permutation(ra, ra+6));
Note that the elements have to start in increasing order (by comparison via operator<), or else the loop will terminate before you've seen every permutation. See http://en.cppreference.com/w/cpp/algorithm/next_permutation for details.
It probably doesn't give the fastest runtime possible, because at each step it has to examine the array to work out what to change. But it's certainly the most efficient in programmer effort, and without knowing the compiler and platform it's not really possible to micro-optimise the nested loop approaches anyway.
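For example, sorting first and keeping the best candidate might look like the following; the score() function and its target value are made-up stand-ins for whatever "closest to a given value" means in your program:
#include <algorithm>
#include <cstdlib>
#include <iostream>

// Hypothetical scoring: weighted sum of the current ordering vs. a target.
// Replace with whatever "closest to a given value" means for you.
int score(const int* p) {
    const int target = 50;
    int sum = 0;
    for (int i = 0; i < 6; ++i) sum += (i + 1) * p[i];
    return std::abs(sum - target);
}

int main() {
    int ra[6] = { 6, 2, 4, 1, 5, 3 };
    int best[6];
    std::sort(ra, ra + 6);                    // next_permutation needs a sorted start
    std::copy(ra, ra + 6, best);
    int bestScore = score(ra);
    do {
        int s = score(ra);
        if (s < bestScore) { bestScore = s; std::copy(ra, ra + 6, best); }
    } while (std::next_permutation(ra, ra + 6));
    for (int i = 0; i < 6; ++i) std::cout << best[i] << ' ';
    std::cout << "(score " << bestScore << ")\n";
    return 0;
}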
|
840,081
| 840,389
|
What does floating point error -1.#J mean?
|
Recently, sometimes (rarely) when we export data from our application, the export log contains float values that look like "-1.#J". I haven't been able to reproduce it so I don't know what the float looks like in binary, or how Visual Studio displays it.
I tried looking at the source code for printf, but didn't find anything (not 100% sure I looked at the right version though...).
I've tried googling but google throws away any #, it seems. And I can't find any lists of float errors.
|
It can be either negative infinity or NaN (not a number). Due to the formatting applied to the field, printf does not differentiate between them.
I tried the following code in Visual Studio 2008:
double a = 0.0;
printf("%.3g\n", 1.0 / a); // +inf
printf("%.3g\n", -1.0 / a); // -inf
printf("%.3g\n", a / a); // NaN
which results in the following output:
1.#J
-1.#J
-1.#J
removing the .3 formatting specifier gives:
1.#INF
-1.#INF
-1.#IND
so it's clear 0/0 gives NaN and -1/0 gives negative infinity (NaN, -inf and +inf are the only "erroneous" floating point numbers, if I recall correctly)
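If the goal is to catch these values before they reach the export log, a small check helps; std::isnan/std::isinf are C99/C++11, and on older MSVC the rough equivalents are _isnan() and _finite() from <float.h>:
#include <cmath>
#include <cstdio>

void print_value(double v) {
    if (std::isnan(v))       std::printf("NaN\n");
    else if (std::isinf(v))  std::printf("%sinfinity\n", v < 0 ? "-" : "+");
    else                     std::printf("%.3g\n", v);
}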
|
840,321
| 840,363
|
How can I see the assembly code for a C++ program?
|
How can I see the assembly code for a C++ program?
What are the popular tools to do this?
|
Ask the compiler
If you are building the program yourself, you can ask your compiler to emit assembly source. For most UNIX compilers use the -S switch.
If you are using the GNU assembler, compiling with -g -Wa,-alh will give intermixed source and assembly on stdout (-Wa asks compiler driver to pass options to assembler, -al turns on assembly listing, and -ah adds "high-level source" listing):
g++ -g -c -Wa,-alh foo.cc
For Visual Studio, use /FAsc.
Peek into a binary
If you have a compiled binary,
use objdump -d a.out on UNIX (also works for cygwin),
dumpbin /DISASM foo.exe on Windows.
Use your debugger
Debuggers could also show disassembly.
Use disas command in GDB.
Use set disassembly-flavor intel if you prefer Intel syntax.
or the disassembly window of Visual Studio on Windows.
|
840,522
| 840,532
|
Given a pointer to a C++ object, what is the preferred way to call a static member function?
|
Say I have:
class A {
public:
static void DoStuff();
// ... more methods here ...
};
And later on I have a function that wants to call DoStuff:
B::SomeFunction(A* a_ptr) {
Is it better to say:
a_ptr->DoStuff();
}
Or is the following better even though I have an instance pointer:
A::DoStuff()
}
This is purely a matter of style, but I'd like to get some informed opinions before I make a decision.
|
I think I'd prefer "A::DoStuff()", as it's more clear that a static method is being called.
|
840,888
| 841,029
|
How does code written in one language get called from another language
|
This is a question that I've always wanted to know the answer, but never really asked.
How does code written by one language, particularly an interpreted language, get called by code written by a compiled language.
For example, say I'm writing a game in C++ and I outsource some of the AI behavior to be written in Scheme. How does the code written in Scheme get to a point that is usable by the compiled C++ code? How is it used by the C++ source code, and how is it used by the C++ compiled code? Is there a difference in the way it's used?
Related
How do multiple-languages interact in one project?
|
There is no single answer to the question that works everywhere. In general, the answer is that the two languages must agree on "something" -- a set of rules or a "calling protocol".
In a high level, any protocol needs to specify three things:
"discovery": how to find about each other.
"linking": How to make the connection (after they know about each other).
"Invocation": How to actually make requests to each other.
The details depend heavily on the protocol itself.
Sometimes the two languages conspire to work together. Sometimes the two languages agree to support some outside-defined protocol. These days, the OS or the "runtime environment" (.NET and Java) is often involved as well. Sometimes the ability only goes one way ("A" can call "B", but "B" cannot call "A").
Notice that this is the same problem that any language faces when communicating with the OS. The Linux kernel is not written in Scheme, you know!
Let's see some typical answers from the world of Windows:
C with C++: C++ uses a contorted ("mangled") variation of the "C protocol". C++ can call into C, and C can call into C++ (although the names can be quite messy sometimes and it might need external help translating the names). This is not just Windows; it's generally true in all platforms that support both. Most popular OS's use a "C protocol" as well.
VB6 vs. most languages: VB6's preferred method is the "COM protocol". Other languages must be able to write COM objects to be usable from VB6. VB6 can produce COM objects too (although not every possible variation of COM objects).
VB6 can also talk a very limited variation of the "C protocol", and then only to make calls outside: it cannot create objects that can be talked to directly via the "C protocol".
.NET languages: All .NET languages compile to the same low-level language (IL). The runtime manages the communication and, from that point of view, they all look like the same language.
VBScript vs. other languages: VBScript can only talk a subset of the COM protocol.
One more note: SOAP "Web Services" is really a "calling protocol" as well, like many other web-based protocols that are becoming popular. After all, it's all about talking to code written in a different language (and running in a different box at that!)
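As a small illustration of the "C protocol" point above, the C++ side typically disables name mangling with extern "C" so that C code (or any FFI that speaks the C protocol) can find the function by its plain name; ai_next_move is a made-up name:
// C++ side: exposed with C linkage, so the symbol name is not mangled.
extern "C" int ai_next_move(int game_state_id)
{
    // the real C++ implementation would sit behind this thin wrapper
    return game_state_id + 1;
}

/* C side (or any other runtime speaking the C protocol):
   int ai_next_move(int game_state_id);
   int move = ai_next_move(42);                             */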
|
840,960
| 840,971
|
What does it mean when "virtual" is in "class Foo : public virtual Bar" as opposed to "virtual void frob()"?
|
I understand virtual in the context of a member function, like virtual void frob(). But what does it mean in the context of the class declaration, like class Foo : public virtual Bar?
For a given method there are 8 cases stemming from the presence or absence of virtual in the following three locations: 1) a superclass's functions; 2) the inheritance chain for this class; 3) this class's functions.
I think I understand how 1) and 3) interact, but 2) seems redundant. Is it? What am I not understanding?
|
That's virtual inheritance; you do it when you know you'll be doing multiple inheritance. That page goes into way more detail.
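A short illustration of why it matters (the classic diamond): without virtual, D would contain two separate A subobjects and d.value would be ambiguous; with virtual inheritance there is exactly one shared A.
struct A { int value; };
struct B : public virtual A {};
struct C : public virtual A {};
struct D : public B, public C {};

int main() {
    D d;
    d.value = 42;   // unambiguous: B and C share a single A subobject
    return 0;
}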
|
841,038
| 841,046
|
Large buffers vs Large static buffers, is there an advantage?
|
Consider the following code.
Is DoSomething1() faster than DoSomething2() in 1000 consecutive executions?
I would assume that if I were to call DoSomething1() 1000 times, it would be faster than calling DoSomething2() 1000 times.
Is there any disadvantage to making all my large buffers static?
#define MAX_BUFFER_LENGTH 1024*5
void DoSomething1()
{
static char buf[MAX_BUFFER_LENGTH] ;
memset( buf, 0, MAX_BUFFER_LENGTH );
}
void DoSomething2()
{
char buf[MAX_BUFFER_LENGTH] ;
memset( buf, 0, MAX_BUFFER_LENGTH );
}
Thank you for your time.
|
Disadvantage of static buffers:
If you need to be thread safe then using static buffers probably isn't a good idea.
Memory won't be freed until the end of your program hence making your memory consumption higher.
Advantages of static buffers:
There are fewer allocations with static buffers; you don't need to allocate on the stack each time.
With a static buffer, there is less chance of a stack overflow from too large an allocation.
|
841,075
| 841,083
|
Best C++ Code Formatter/Beautifier
|
There are lots of source code formatting tools out there. Which ones work best for C++?
I'm interested in command-line tools or other things that can be automatically run when checking code in/out, preferably without needing to launch an editor or IDE.
(If you see the one you like already listed as an answer, vote it up. If it's not there, add it.)
|
AStyle can be customized in great detail for C++ and Java (and others too)
This is a source code formatting tool.
clang-format is a powerful command line tool bundled with the clang compiler which handles even the most obscure language constructs in a coherent way.
It can be integrated with Visual Studio, Emacs, Vim (and others) and can format just the selected lines (or with git/svn to format some diff).
It can be configured with a variety of options listed here.
When using config files (named .clang-format) styles can be per directory - the closest such file in parent directories shall be used for a particular file.
Styles can be inherited from a preset (say LLVM or Google) and can later override different options
It is used by Google and others and is production ready.
Also look at the project UniversalIndentGUI. You can experiment with several indenters using it: AStyle, Uncrustify, GreatCode, ... and select the best for you. Any of them can be run later from a command line.
Uncrustify has a lot of configurable options. You'll probably need Universal Indent GUI (in Konstantin's reply) as well to configure it.
|
841,434
| 2,690,211
|
How do you manually insert options into boost.Program_options?
|
I have an application that uses Boost.Program_options to store and manage its configuration options. We are currently moving away from configuration files and using database-loaded configuration instead. I've written an API that reads configuration options from the database by hostname and instance name. (cool!) However, as far as I can see there is no way to manually insert these options into the boost Program_options. Has anyone done this before, any ideas? The docs from boost seem to indicate the only way to get stuff in that map is by the store function, which either reads from the command line or a config file (not what I want). Basically I'm looking for a way to manually insert the DB-read values into the map.
|
My answer comes a little too late, but I spent some time trying to do something similar and found an annoyingly obvious solution (in case anyone else is looking for this)...
Recalling that boost::program_options::variables_map derives from std::map<std::string, boost::program_options::variable_value>, you can do perfectly legal STL map processing including an insert...
namespace po = boost::program_options;
po::variables_map vm;
vm.insert(std::make_pair("MyNewEmptyOption", po::variable_value());
vm.insert(std::make_pair("MyNewIntOption", po::variable_value(32, false));
po::notify(vm);
-Edmond-
|
841,471
| 853,791
|
wxwidgets // g++ Compiler error: no matching function for call to 'operator new(..'
|
At the moment I am trying to port a Visual C++ application to Linux. The code compiles without errors in Visual Studio, but I get many compiler errors under Linux. One of these errors is:
../src/wktools4.cpp:29: error: no matching function for
call to 'operator new(unsigned int, const char[40], int)'
More information:
IDE: kdevelop with G++
GUI API: wxWidgets
The error appears at the following line:
IMPLEMENT_APP(Cwktools4App)
and some other lines.
What am I missing?
|
I found the error:
#ifdef __WXDEBUG__
#define new WXDEBUG_NEW
#endif
When I remove these lines, I don't get the errors any more. The code was generated from a wxwidgets wizard for VisualStudio. I have no idea what it does...
Thank you all for your help! Now I have to fix the linker errors ;)
|
841,619
| 841,666
|
C++ constructor problem
|
#include <iostream>
using namespace std;
// This first class contains a vector and a scalar representing the size of the vector.
typedef class Structure1
{
int N;
double* vec;
public:
// Constructor and copy constructor:
Structure1(int Nin);
Structure1(const Structure1& structurein);
// Accessor functions:
int get_N() { return N; }
double* get_vec() { return vec; }
// Destructor:
~Structure1() { delete []vec; }
} Structure1;
Structure1::Structure1(int Nin)
{
N = Nin;
vec = new double [N];
for (int i = 0; i < N; i++)
{
vec[i] = i;
};
}
Structure1::Structure1(const Structure1& structurein)
{
vec = new double[structurein.N];
for(int i = 0; i < structurein.N; i++)
{
vec[i] = structurein.vec[i];
};
}
// This class just contains the first structure.
typedef class Structure2
{
Structure1 structure;
public:
// Constructor:
Structure2(const Structure1& structurein) { structure = structurein; }
The error occurs here:
Line Location main.cpp:47: error: no matching function for call to 'Structure1::Structure1()'
// Accessor Function:
Structure1 get_structure() { return structure; }
// Destructor:
~Structure2() {}
} Structure2;
int main (int argc, char * const argv[])
{
const int N = 100;
Structure1 structure1(N);
Structure2 structure2(structure1);
return 0;
}
If anyone knows what's wrong, your help would be greatly appreciated. Thanks!
|
By the way, I think you need to fix the copy constructor:
Structure1::Structure1(const Structure1& structurein)
{
///!!!! Initialize N
N = structurein.N;
vec = new double[structurein.N];
for(int i = 0; i < structurein.N; i++)
{
vec[i] = structurein.vec[i];
};
}
|
841,705
| 841,716
|
C++: Trying to import class as a class member into another class, does not copy properly
|
Thanks to all that responded to my previous thread. There is still a problem with this simple program that I would like to solve. I am trying to import one class into another as a member object. The output of this program is confusing, though. As a test of the code, I output the scalar contained within the first class object. This works. Then I output the scalar that is contained in the object contained as a member of the second class, and this outputs zero. I assume that something is wrong. Could someone shed some light on this problem? Thanks!
#include <iostream>
using namespace std;
typedef double* doublevec;
// This first class contains a vector and a scalar representing the size of the vector.
typedef class Structure1
{
int N;
doublevec vec;
public:
// Constructor and copy constructor.
Structure1(int Nin);
Structure1(const Structure1& structurein);
// Accessor functions:
int get_N() { return N; }
doublevec get_vec() { return vec; }
// Destructor:
~Structure1() { delete []vec; }
} Structure1;
Structure1::Structure1(int Nin)
{
N = Nin;
vec = new double [N];
for (int i = 0; i < N; i++)
{
vec[i] = i;
};
}
Structure1::Structure1(const Structure1& structurein)
{
vec = new double[structurein.N];
for(int i = 0; i < structurein.N; i++)
{
vec[i] = structurein.vec[i];
};
}
// This class just contains the first structure.
typedef class Structure2
{
Structure1 structure;
public:
// Constructor:
Structure2(const Structure1& structurein) : structure(structurein) {}
// Accessor Function:
Structure1 get_structure() { return structure; }
// Destructor:
~Structure2() {}
} Structure2;
int main (int argc, char * const argv[])
{
const int N = 100;
Structure1 structure1(N);
Structure2 structure2(structure1);
cout << structure1.get_N() << endl;
cout << structure2.get_structure().get_N();
return 0;
}
|
Looks like you don't set N in Structure1's copy constructor. You need:
Structure1::Structure1(const Structure1& structurein){
N = structurein.N;
vec = new double[structurein.N];
for(int i = 0; i < structurein.N; i++) {
vec[i] = structurein.vec[i];
}
}
(Also, you don't need a semicolon after a for-loop. But that's another matter.)
Edit: This was actually mentioned in an answer to your other question. ;)
|
841,718
| 848,157
|
Programmatically Add/Remove tabs on wxNotebook by PageText
|
I need to be able to programmatically add and remove tabs on a wxNotebook by the text/label that is displayed on each tab.
In windows, using a tab control and tab pages, I would be able to reference each tab by a key. The tab control has a map of tab pages keyed on the text of each tab.
|
Use the following helper method to convert from the tab label/text to the corresponding index of the wxNotebookPage. After you have the index of the wxNotebookPage, then you can use all of the wxNotebook's methods that expect the page index as an argument.
int TabTestFrame::GetIndexForPageName( wxString tabText)
{
int end = Notebook1->GetPageCount();
wxString selectedtabText = "";
for ( int i = 0; i < end; i++)
{
selectedtabText = Notebook1->GetPageText(i);
if (tabText == selectedtabText)
return i;
}
return -1;
}
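With that helper in place, adding and removing pages by label is just a matter of combining it with the notebook's index-based methods, for example (the "Settings" label is made up):
// Remove the tab whose label is "Settings", if it is present.
int idx = GetIndexForPageName(wxT("Settings"));
if (idx != -1)
    Notebook1->DeletePage(idx);

// Add a new page with that label (AddPage takes the window, the label text,
// and whether to select the new page).
Notebook1->AddPage(new wxPanel(Notebook1), wxT("Settings"), true);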
|
841,966
| 847,763
|
How can I disambiguate this template code?
|
I'm having trouble figuring out how to solve a compiler error I'm running into. I've reduced it down to this simplest-case representation:
enum EAtomId { EAtomId_Test };
int StringFormat(char* o_dest, size_t i_destSizeChars, const char* i_format, ...);
template <size_t SIZE>
int StringFormat(char (&o_dest)[SIZE], EAtomId i_format, ...);
void func()
{
char textBuffer[1000];
StringFormat(textBuffer, EAtomId_Test, "hi there");
}
The compiler error is:
repro.cpp(17) : error C2666: 'StringFormat' : 2 overloads have similar conversions
C:\Users\sbilas\Desktop\repro.cpp(9): could be 'int StringFormat(char *,size_t,const char *,...)'
while trying to match the argument list '(char [1000], EAtomId, const char [9])'
That's the full error btw. I'm a little surprised it isn't listing both available versions..
I have a couple problems with this error. First, I don't see why it's ambiguous. Shouldn't the compiler see the char(&)[] version as the obvious case to match? And second, how can I prevent that enum from being converted to a size_t when it's doing the lookup? Seems like I'm running into some very specific C++ rules here.
The easiest workaround for me is to put the size_t for the buffer size in front of the actual buffer. But that will break all of our conventions where we put buffer then size in our code. Is there another way to do this?
This is on VC++ 2005 btw but it reproduces on a couple other compilers I have (this is in a cross-platform game).
|
The problem with the above code is due to ordinary overload resolution rules, and is not specific to templates. The following code with two ordinary functions still shows the ambiguity:
enum EAtomId { EAtomId_Test };
const int SIZE=1000;
int StringFormat(char * s
, size_t i_destSizeChars
, const char* i_format
, ...);
int StringFormat(char (&a)[SIZE]
, EAtomId i_format
, ...);
void func()
{
char textBuffer[SIZE];
StringFormat (textBuffer, EAtomId_Test, "hi there");
}
The applicable rules for overload resolution work as follows (13.3.3):
For each set of arguments, work out the set of conversions that take place
The best function, is a function for which all conversions are as least as good as the conversions in all of the other overloads.
Taking each of these in turn we have:
Function 1:
textBuffer->s: Array to pointer (Exact match 13.3.3.1.1/3) - Identity
EAtomId_Test->i_destSizeChars: Possible promotion and Integral Conversion
string literal->const char*: Array to pointer
Function 2:
textBuffer->a: Identity Conversion
EAtomId_Test->EAtomId: Identity Conversion
string literal->ellipsis: Ellipsis Conversion
Comparing each of these conversions in turn you have:
Function 1 is same as Function 2
Function 2 better than Function 1
Function 1 better than Function 2
So, neither function 1 nor function 2 is at least as good for all conversions, and so the call is ambiguous.
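One possible way to sidestep the ambiguity without reordering the parameters is to supply an explicit template argument at the call site; a call written with explicit template arguments can only select function template specializations, so the non-template overload should drop out:
StringFormat<sizeof(textBuffer)>(textBuffer, EAtomId_Test, "hi there");
This relies on the rule that a template-id can only name function templates, so it is worth verifying on the compilers you target.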
|
842,268
| 842,278
|
Need help appending one wchar_t to another! C++
|
I have tried wcscat() but I get a runtime access violation.
wchar_t* a = L"aaa";
wchar_t* b = L"bbb";
wchar_t* c;
c = wcscat(a, b);
Can somebody tell me what is wrong here? Or another way to solve my problem? Thanks
|
wcscat doesn't create a new string - it simply appends b to a. So, if you want to make sure you don't cause a runtime access violation, you need to make sure there's space for b at the end of a. In the case above:
wchar_t a[7] = L"aaa";
wchar_t b[] = L"bbb";
wchar_t* c;
c = wcscat(a, b);
You can still get a return value from the function, but it will simply return a.
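If what is actually wanted is a new, third string rather than appending in place, std::wstring avoids the buffer-size bookkeeping entirely:
#include <string>

std::wstring a = L"aaa";
std::wstring b = L"bbb";
std::wstring c = a + b;          // c == L"aaabbb"
const wchar_t* p = c.c_str();    // if a raw pointer is needed afterwards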
|
842,273
| 842,311
|
How to determine whether a C++ template has specified method
|
Let's suppose I am writing a function template GCD which uses both operator/ and operator%. For some types, for example, complex numbers or polynomials both can be computed efficiently (i.e. when dividing polynomials you get remainder "for free"). So some of my class templates have divmod implemented, which returns a pair of quotient and remainder.
However some classes and notably built-in types, such as int, do not have divmod (or equivalent) implemented. If I was about to write one GCD which uses divmod I would discriminate against the other types (and vice versa). I haven't been able to come up with a template specialization which wouldn't require most of the code for GCD to be repeated.
What are my options here?
|
I would go for type traits and template specializations based on that. You can use metaprogramming to determine whether the type has divmod, and based on that provide specific parts of a general algorithm. That is, extract the common part of the algorithm into a generic piece of code that calls specialized functions for the differing parts based on the type traits.
Take a look at the boost::type_traits library to get started with the type traits implementation, and at policy-based design for ideas on how to inject different behaviors into a common code base by means of (policy) templates.
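Here is one rough shape that approach can take, as a C++03-style SFINAE sketch. It assumes divmod has exactly the signature std::pair<T,T> divmod(const T&) const returning (quotient, remainder), and that T can be constructed from 0 and compared; has_divmod, use_divmod and gcd_impl are made-up names, not part of any library:
#include <utility>  // std::pair

template <typename T>
class has_divmod
{
    typedef char yes[1];
    typedef char no[2];
    template <typename U, std::pair<U, U> (U::*)(const U&) const> struct check;
    template <typename U> static yes& test(check<U, &U::divmod>*);
    template <typename U> static no&  test(...);
public:
    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

template <bool UseDivmod> struct use_divmod {};

// Remainder comes "for free" from divmod(); assumes T(0) and operator== exist.
template <typename T>
T gcd_impl(T a, T b, use_divmod<true>)
{
    while (!(b == T(0)))
    {
        T r = a.divmod(b).second;
        a = b;
        b = r;
    }
    return a;
}

// Fallback for int and other types that only have operator%.
template <typename T>
T gcd_impl(T a, T b, use_divmod<false>)
{
    while (b != T(0))
    {
        T r = a % b;
        a = b;
        b = r;
    }
    return a;
}

// The common entry point dispatches at compile time.
template <typename T>
T gcd(T a, T b)
{
    return gcd_impl(a, b, use_divmod<has_divmod<T>::value>());
}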
|
842,387
| 842,427
|
how do I dynamically cast between vectors of pointers?
|
I have:
class T {};
class S: public T {};
vector<T*> v;
vector<S*> w;
transform(v.begin(), v.end(), dynamic_cast_iterator<S*>(w.begin()));
But, of course, dynamic_cast_iterator doesn't exist.
|
Here is one solution (using boost lambda):
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/casts.hpp>
#include <algorithm>
#include <iterator>
#include <vector>
#include <iostream>
namespace bll = boost::lambda;
struct A { virtual ~A() { } };
struct B : A { void f() { std::cout << "hello, world" << std::endl; } };
int main() {
std::vector<A*> a; a.push_back(new B);
std::vector<B*> b;
std::transform(a.begin(), a.end(), std::back_inserter(b),
bll::ll_dynamic_cast<B*>(bll::_1));
b[0]->f();
delete a[0];
}
|
842,407
| 842,453
|
How can I sort a singly linked list in constant space?
|
I have a singly linked list and I need to sort it in constant space due to memory limitations (in other words, no extra space should be used that is proportional to the number of items in the list).
The structure of the linked list is:
head.item = the payload you want to sort on; and
head.next = the next item.
The requirement for constant space discounts solutions where I build another list, I need to do it in-place.
How can I do that?
|
Sorting a linked list in constant space is easy: you just have to adjust the pointers. The easiest way to do this is to use a sort algorithm that only swaps adjacent elements. I'm going to provide a bubble sort, just because you've made no requirement for efficiency:
# Enter loop only if there are elements in list.
swapped = (head <> null)
while swapped:
# Only continue loop if a swap is made.
swapped = false
# Maintain pointers.
curr = head
next = curr.next
prev = null
# Cannot swap last element with its next.
while next <> null:
# Swap if items in wrong order.
if curr.item > next.item:
# Notify loop to do one more pass.
swapped = true
# Swap elements (swapping head is special case).
if curr == head:
head = next
temp = next.next
next.next = curr
curr.next = temp
curr = head
else:
prev.next = curr.next
curr.next = next.next
next.next = curr
curr = next
endif
endif
# Move to next element.
prev = curr
curr = curr.next
next = curr.next
endwhile
endwhile
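For reference, the same pointer-adjusting idea written out in C++, using a pointer-to-pointer so the head needs no special case (the Node layout mirrors the head.item/head.next description in the question):
struct Node { int item; Node* next; };

// In-place bubble sort that relinks nodes; O(1) extra space.
void bubble_sort(Node*& head)
{
    if (head == 0) return;
    bool swapped = true;
    while (swapped)
    {
        swapped = false;
        Node** link = &head;                 // address of the link we may rewrite
        while (*link != 0 && (*link)->next != 0)
        {
            Node* a = *link;
            Node* b = a->next;
            if (a->item > b->item)           // swap adjacent nodes a and b
            {
                a->next = b->next;
                b->next = a;
                *link = b;
                swapped = true;
            }
            link = &(*link)->next;           // advance to the next link
        }
    }
}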
|
842,563
| 842,682
|
LSB AppChecker: GCC links against unused libraries
|
I'm checking the portability of a shared object (.so) with the LSB AppChecker.
One of the problems it reports is that there is one external library (libm.so.6) that is not being used but is linked against anyway.
How can I prevent GCC from linking to this unneeded library?
EDIT:
The output of the ldd command against my shared object is:
linux-gate.so.1 => (0x009ff000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x003dc000)
libm.so.6 => /lib/libm.so.6 (0x00110000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00137000)
libc.so.6 => /lib/libc.so.6 (0x0021d000)
/lib/ld-linux.so.2 (0x0097f000)
|
Pass the -Wl,-as-needed argument as part of the linker command line. This will automatically drop any direct library dependencies you're not actually using symbols from.
$ g++ -o test test.cpp -lm; readelf -d test|grep '(NEEDED)'
0x0000000000000001 (NEEDED) Shared library: [libstdc++.so.6]
0x0000000000000001 (NEEDED) Shared library: [libm.so.6]
0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
$ g++ -o test test.cpp -lm -Wl,-as-needed; readelf -d test|grep '(NEEDED)'
0x0000000000000001 (NEEDED) Shared library: [libstdc++.so.6]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
|
842,753
| 842,771
|
Operator Overloading
|
I'm new to C++; this is my first week since the upgrade from Fortran. Sorry if this is a simple question, but could someone help me with operator overloading? I have written a program which has two classes. One object contains a vector and two scalars; the other class simply contains the first object. In a test implementation of this code I suspect the operator overloading to be at fault. The program tries to accomplish the following goals:
1) Initialize first structure.
2) Initialize a second structure containing the initialized first structure. After this is imported, the value vec0 = 10 is added to every element of the vector in the enclosing structure, structure2.structure1.
3) Output structure1 and structure2 variables to compare.
For this simple program my output is:
100
100
0
0
0 0 10
1 1 11
2 2 12
3 3 13
...
I was expecting:
100
100
0
10
0 0 10
1 1 11
2 2 12
3 3 13
...
Clearly my overloaded = operator is copying my vector properly, but not one of the scalars? Could someone help?
#include <iostream>
using namespace std;
typedef double* doublevec;
// This first class contains a vector, a scalar N representing the size of the vector, and another scalar used for intializing the vector.
typedef class Structure1
{
int N, vec0;
doublevec vec;
public:
// Constructor and copy constructor.
Structure1(int Nin, int vecin) : N(Nin), vec0(vecin) { vec = new double [N]; for(int i = 0; i < N; i++) { vec[i] = i + vec0; } }
Structure1(const Structure1& structurein);
// Accessor functions:
int get_vec0() { return vec0; }
int get_N() { return N; }
doublevec get_vec() { return vec; }
// Overide equivalence operator:
Structure1& operator=(const Structure1& right)
{
//Handle Self-Assignment
if (this == &right) return *this;
N = right.N;
vec0 = right.vec0;
for (int i = 0; i < N; i++)
{
vec[i] = right.vec[i];
}
return *this;
}
// Destructor:
~Structure1() { delete []vec; }
} Structure1;
Structure1::Structure1(const Structure1& structurein)
{
N = structurein.N;
vec = new double[N];
for(int i = 0; i < N; i++)
{
vec[i] = structurein.vec[i];
}
}
// This class just contains the first structure.
typedef class Structure2
{
Structure1 structure;
// Mutator Function:
void mutate_structure();
public:
// Constructor:
Structure2(const Structure1& structurein) : structure(structurein) { mutate_structure(); }
// Accessor Function:
Structure1 get_structure() { return structure; }
// Destructor:
~Structure2() {}
} Structure2;
void Structure2::mutate_structure()
{
int N = structure.get_N();
Structure1 tempstruct(N,10);
structure = tempstruct;
}
int main (int argc, char * const argv[])
{
const int N = 100;
Structure1 structure1(N,0);
Structure2 structure2(structure1);
cout << structure1.get_N() << endl;
cout << structure2.get_structure().get_N() << endl;
cout << structure1.get_vec0() << endl;
cout << structure2.get_structure().get_vec0() << endl;
for(int i = 0; i < N; i++)
{
cout << i << " " << structure1.get_vec()[i] << " " << structure2.get_structure().get_vec()[i] << endl;
}
return 0;
}
|
It looks like vec0 isn't initialized by your copy constructor...
Try modifying your copy constructor to:
Structure1::Structure1(const Structure1& structurein)
{
N = structurein.N;
vec = new double[N];
for (int i = 0; i < N; i++)
{
vec[i] = structurein.vec[i];
}
// ADD THIS LINE
vec0 = structurein.vec0;
}
|
842,843
| 843,920
|
Window Hooking Questions
|
I am using this:
SetWindowsHookEx(WH_CALLWNDPROC, ...);
I can see the messages I want to process, but I want to prevent those message from reaching the target window. So I tried this:
SetWindowsHookEx(WH_GETMESSAGE, ...);
When I do this I can modify the message, and prevent the target window from processing it, but this hook doesn't see the messages I need to process. I presume this is because it is being posted to the target window's queue, not sent? Is there a way around this issue? I have heard that window sub-classing might be able to accomplish this, but can I subclass a window in a different process? Is there a way to do this using hooks?
|
You can't subclass a window in another process, but the hook DLL should be able to subclass the window you're interested in. WH_GETMESSAGE and WH_CALLWNDPROC hooks run in the context of the process receiving the message, so at that point you have an "in" to subclass the target's window.
|
842,935
| 842,958
|
memcpy not doing as it is supposed to
|
I have this bit of code that is outputting the wrong results.
#include <stdio.h>
#include <string.h>
int main()
{
unsigned char bytes[4];
float flt=0;
bytes[0]=0xde;
bytes[1]=0xad;
bytes[2]=0xbe;
bytes[3]=0xef;
memcpy( &flt, bytes, 4);
printf("bytes 0x%x float %e\n", flt, flt);
return 0;
}
the output that I get is
bytes 0xc0000000 float -2.000001e+00
I am expecting to get
bytes 0xdeadbeef float -6.2598534e+18
edit #1
as was pointed out the endianness could be different which would result in the following
bytes 0xefbeadde float -1.1802469e+29
What I don't understand is the cast from float to unsigned int resulting in 0xc0000000 (the float in the same printf statement being -2.0000 I would attribute to compiler optimization).
this was working before on a different computer. It could be an architecture change.
|
This is not a problem with memcpy.
A float is always converted to double when passed through the ... (variadic part) of printf, so you just can't get its 4 raw bytes that way on most Intel architectures.
When you expect 0xdeadbeef from this code, you are assuming that your architecture is big-endian. There are many little-endian architectures, for example Intel x86.
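If the intent was to print the raw bit pattern alongside the float, copying the same bytes into an integer of matching size avoids both problems (this sketch assumes unsigned int is 32 bits):
#include <cstdio>
#include <cstring>

int main()
{
    unsigned char bytes[4] = { 0xde, 0xad, 0xbe, 0xef };
    float flt;
    unsigned int bits;                       // assumed 32 bits here
    std::memcpy(&flt, bytes, 4);
    std::memcpy(&bits, bytes, 4);            // same bytes, viewed as an integer
    std::printf("bytes 0x%x float %e\n", bits, flt);
    return 0;                                // prints 0xefbeadde on little-endian x86
}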
|
843,154
| 843,161
|
Fastest way to find the number of lines in a text (C++)
|
I need to read the number of lines in a file before doing some operations on that file. When I try to read the file and increment a line_count variable at each iteration until I reach EOF, it is not that fast in my case. I used both ifstream and fgets, and they were both slow. Is there a hacky way to do this which is also used by, for instance, BSD, the Linux kernel or Berkeley DB (maybe by using bitwise operations)?
The number of lines is in the millions in that file and it keeps getting larger, each line is about 40 or 50 characters. I'm using Linux.
Note:
I'm sure there will be people who might say use a DB idiot. But briefly in my case I can't use a db.
|
The only way to find the line count is to read the whole file and count the number of line-end characters. The fastest way to do this is probably to read the whole file into a large buffer with one read operation and then go through the buffer counting the '\n' characters.
As your current file size appears to be about 60Mb, this is not an attractive option. You can get some of the speed by not reading the whole file, but reading it in chunks, say of size 1Mb. You also say that a database is out of the question, but it really does look to be the best long-term solution.
Edit: I just ran a small benchmark on this and using the buffered approach (buffer size 1024K) seems to be a bit more than twice as fast as reading a line at a time with getline(). Here's the code - my tests were done with g++ using -O2 optimisation level:
#include <iostream>
#include <fstream>
#include <vector>
#include <ctime>
using namespace std;
unsigned int FileRead( istream & is, vector <char> & buff ) {
is.read( &buff[0], buff.size() );
return is.gcount();
}
unsigned int CountLines( const vector <char> & buff, int sz ) {
int newlines = 0;
const char * p = &buff[0];
for ( int i = 0; i < sz; i++ ) {
if ( p[i] == '\n' ) {
newlines++;
}
}
return newlines;
}
int main( int argc, char * argv[] ) {
time_t now = time(0);
if ( argc == 1 ) {
cout << "lines\n";
ifstream ifs( "lines.dat" );
int n = 0;
string s;
while( getline( ifs, s ) ) {
n++;
}
cout << n << endl;
}
else {
cout << "buffer\n";
const int SZ = 1024 * 1024;
std::vector <char> buff( SZ );
ifstream ifs( "lines.dat" );
int n = 0;
while( int cc = FileRead( ifs, buff ) ) {
n += CountLines( buff, cc );
}
cout << n << endl;
}
cout << time(0) - now << endl;
}
|
843,238
| 843,249
|
SQLite3 - Cannot Open Database
|
I have the following code:
#include <iostream>
#include <string>
#include "sqlite3.h"
int main()
{
sqlite3* db;
int rc = sqlite3_open("testing.db", &db);
std::cout << rc << std::endl;
std::cout << sqlite3_errmsg(db);
std::cin >> rc;
}
When I run it, the program outputs "21" and "library routine called out of sequence". What am I doing wrong? 21 is the code for SQLITE_MISUSE. See: http://www.sqlite.org/c3ref/c_abort.html
|
Call sqlite3_errmsg() to get the actual error message.
Edit:
When I run your code, it returns 0. Seems to work fine here.
Which system are you running the code on? How was your code compiled?
|
843,247
| 843,254
|
type/value mismatch in template C++ class declaration
|
I am trying to compile the following code on Linux using gcc 4.2:
#include <map>
#include <list>
template<typename T>
class A
{
...
private:
std::map<const T, std::list<std::pair<T, long int> >::iterator> lookup_map_;
std::list<std::pair<T, long int> > order_list_;
};
When I compile this class I receive the following message from gcc:
error: type/value mismatch at argument 2 in template parameter list for ‘template<class _Key, class _Tp, class _Compare, class _Alloc> class std::map’
error: expected a type, got ‘std::list<std::pair<const T, long int>,std::allocator<std::pair<const T, long int> > >::iterator’
error: template argument 4 is invalid
I have removed file names and line numbers, but they all refer to the line declaring the map.
When I replace the pair in these expressions with an int or some concrete type, it compiles fine. Can someone please explain to me what I am doing wrong?
|
You need to write typename before std::list<...>::iterator, because iterator is a nested type that depends on the template parameter and you're writing a template.
Edit: without the typename, GCC assumes (as the standard requires) that iterator is actually a static variable in list, rather than a type. Hence the "parameter type mismatch" error.
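Applied to the declaration in the question, that would be:
std::map<const T, typename std::list<std::pair<T, long int> >::iterator> lookup_map_;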
|
843,389
| 843,414
|
The Pimpl Idiom in practice
|
There have been a few questions on SO about the pimpl idiom, but I'm more curious about how often it is leveraged in practice.
I understand there are some trade-offs between performance and encapsulation, plus some debugging annoyances due to the extra redirection.
With that, is this something that should be adopted on a per-class, or an all-or-nothing basis? Is this a best-practice or personal preference?
I realize that's somewhat subjective, so let me list my top priorities:
Code clarity
Code maintainability
Performance
I always assume that I will need to expose my code as a library at some point, so that's also a consideration.
EDIT: Any other options to accomplish the same thing would be welcome suggestions.
|
I'd say that whether you do it per-class or on an all-or-nothing basis depends on why you go for the pimpl idiom in the first place. My reasons, when building a library, have been one of the following:
Wanted to hide implementation in order to avoid disclosing information (yes, it was not a FOSS project :)
Wanted to hide implementation in order to make client code less dependent. If you build a shared library (DLL), you can change your pimpl class without even recompiling the application.
Wanted to reduce the time it takes to compile the classes using the library.
Wanted to fix a namespace clash (or similar).
None of these reasons prompts for the all-or-nothing approach. In the first one, you only pimplize what you want to hide, whereas in the second case it's probably enough to do so for classes which you expect to change. Also for the third and fourth reason there's only benefit from hiding non-trivial members that in turn require extra headers (e.g., of a third-party library, or even STL).
In any case, my point is that I wouldn't typically find something like this too useful:
class Point {
public:
Point(double x, double y);
Point(const Point& src);
~Point();
Point& operator= (const Point& rhs);
void setX(double x);
void setY(double y);
double getX() const;
double getY() const;
private:
class PointImpl;
PointImpl* pimpl;
};
In this kind of a case, the tradeoff starts to hit you because the pointer needs to be dereferenced, and the methods cannot be inlined. However, if you do it only for non-trivial classes then the slight overhead can typically be tolerated without any problems.
|
843,506
| 843,521
|
Shell Icon Overlay (C#)
|
I need a method to create icon overlays for folders and files in Windows XP/Vista, using C# or C++. Any examples?
Thanks,
-Sean!
|
Tigris' TortoiseSVN product heavily uses icon overlays provided by a library shared by several Tortoise products; the overlays themselves are written in C++ rather than C#.
The documentation for the TortoiseOverlays project explains how they use it and the problems they have encountered (username: guest, empty password), and the GPL'ed sourcecode is in the Subversion repository (same username/password as above).
Snippet from the documentation:
TortoiseOverlays registers itself with the explorer to handle the nine
states mentioned above, i.e. it registers nine overlay handlers. The
explorer process initializes the TortoiseOverlays handler, calling its
IShellIconOverlayIdentifier::GetOverlayInfo(). TortoiseOverlays looks
for the registered overlay handlers under
HKLM\Software\TortoiseOverlays\Statusname and calls their
GetOverlayInfo() method so they can initialize too (Note that any
change to the icon name, index, ... your handler does are overwritten
later and won't be used - it's TortoiseOverlays that handles the icons
now). After the initialization, TortoiseOverlays relays every call to
its IShellIconOverlayIdentifier::IsMemberOf() method to the other
handlers. The first handler that returns S_OK determines whether the
icon is shown or not.
|
843,618
| 843,628
|
Failing to use stl containers in templated functions/classes
|
When I try and compile the following code...
#include <vector>
template <class T> void DoNothing()
{
std::vector<T>::iterator it;
}
int main(int argc, char**argv)
{
return 0;
}
g++ says:
test.cpp:5: error: expected `;' before
‘it’
And I don't understand why this is a problem. If I replace it with std::vector<int>::iterator, say, it works fine as expected.
As you can see I'm not instantiating the function, so g++ must have a problem with the template definition itself, but I can't see how it's invalid.
Thanks for any advice about what's going on.
NB I'm actually trying to write a templated class and having issues with a map rather than a vector, but this is the simplest test case for my problem.
|
You need to use the typename keyword because the std::vector<T>::iterator type is dependent on the template parameter:
template <class T> void DoNothing()
{
typename std::vector<T>::iterator it;
}
It can actually be confusing when you need to use typename and when you don't need it (or are even not permitted to use it). This article has a decent overview:
http://pages.cs.wisc.edu/~driscoll/typename.html
|
843,705
| 843,747
|
Large initial memory footprint for native app
|
I've noticed that the native C++ application I'm working on has quite a large memory footprint (20MB) even before it enters any of my code.
(I'm referring to the "private bytes" measure in Windows, which as I understand it is the most useful metric).
I've placed a break point on the first line of the "main()" function and sure enough, the footprint is at 20MB when it reaches that.
The size of the EXE is only a couple of meg so that doesn't account for it.
I also deliberately removed all of the DLLs just to prove they weren't the cause. As expected it gets a "Dll not found" message, but the footprint is still 20MB!
So then I wondered that maybe it was the statically initialised objects which were the cause.
So, I added breakpoints to both "new" and "malloc". At the first hit to those (for the first static initialiser), the memory is already 20MB.
Anyone got any ideas about how I can diagnose what's eating up this memory?
Because it seems to be memory outside of the usual new/malloc paradigm, I'm struggling to understand how to debug.
Cheers,
John
|
It might be that you're pulling in a lot of libraries with your app. Most of them get initialized before execution is handed over to your main(). Check for any non-standard libraries you're linking against.
EDIT: A very straightforward solution would be to create a new project and just link the libraries you're using one by one, checking memory usage each time. Even though it's an ugly approach, you should find the culprit this way.
There's probably a more elegant solution out there, so you might want to spare some time googling for (free) memory profiling solutions.
|
843,711
| 843,799
|
How do I intercept messages being sent to a window?
|
I want to intercept messages that are being sent to a window in a different process. What is the best way to do this? I can't see the messages when I use the WH_GETMESSAGE hook, and I'm not sure if I can subclass across processes? Any help would be much appreciated.
|
You need to inject your own code into the process that owns the windows you wish to intercept messages from. Fortunately, SetWindowsHookEx() makes this fairly easy, although you may have a bit of trouble at first if you've only used it for in-process hooking up to now.
I can recommend two excellent articles on the subject:
Joseph Newcomer's Hooks and DLLs
Robert Kuster's Three Ways to Inject Your Code into Another Process
|
843,714
| 843,850
|
Which one to use c++ stl container or the MFC container?
|
For every STL container there is an MFC container available in Visual C++. Which is better than the other, in what sense, and what do you use?
I always use the STL containers; is that wrong?
|
MFC collection classes do have some advantages if you are working within confines of MFC land. E.g. you get things like serialization (if your container elements inherit from CObject or similar) and some debugging support for "free". MSDN has a breakdown of how to choose between different MFC collection types [here](http://msdn.microsoft.com/en-us/library/y1z022s1(VS.80).aspx).
As a default though, I would lean towards the STL classes.
|
843,996
| 844,078
|
What's wrong with passing C++ iterator by reference?
|
I've written a few functions with a prototype like this:
template <typename input_iterator>
int parse_integer(input_iterator &begin, input_iterator end);
The idea is that the caller would provide a range of characters, and the function would interpret the characters as an integer value and return it, leaving begin at one past the last-used character. For example:
std::string sample_text("123 foo bar");
std::string::const_iterator p(sample_text.begin());
std::string::const_iterator end(sample_text.end());
int i = parse_integer(p, end);
This would leave i set to 123 and p "pointing" at the space before foo.
I've since been told (without explanation) that it's bad form to pass an iterator by reference. Is it bad form? If so, why?
|
There is nothing really wrong with it, but it will certainly limit the use of the template. You won't be able to just pass in an iterator returned by something else or generated like v.begin(), since those will be temporaries. You will always have to make a local copy first, which is boilerplate that is not really nice to have.
One way is to overload it:
int parse_integer(input_iterator begin, input_iterator end,
input_iterator &newbegin);
template<typename input_iterator>
int parse_integer(input_iterator begin, input_iterator end) {
return parse_integer(begin, end, begin);
}
Another option is to have an output iterator where the number will be written into:
template<typename input_iterator, typename output_iterator>
input_iterator parse_integer(input_iterator begin, input_iterator end,
output_iterator out);
You will have the return value to return the new input iterator, and you could then use an inserter iterator to put the parsed numbers into a vector, or a pointer to put them directly into an integer or an array thereof if you already know the number of values.
int i;
b = parse_integer(b, end, &i);
std::vector<int> numbers;
b = parse_integer(b, end, std::back_inserter(numbers));
|
844,169
| 844,172
|
AccessViolationException on calling the most basic c++ function I can think of from managed code
|
I'm trying to learn how to use managed/unmanaged code interop, but I've hit a wall that 4 hours of googling wasn't able to get over. I put together 2 projects in visual studio, one creating a win32 exe, and one creating a windows forms .NET application. After a bunch of mucking around I got the C# code to call into the c++ code correctly, but from here I started getting AccessViolationException everytime I got in there. Here is the code from the .cpp file:
extern "C" __declspec(dllexport) void QuickTest()
{
int iTest = 0;
int aTestArray[3] = {1,2,3};
return;
}
And here is the code from the C# windows forms app calling it:
[DllImport("UnmanagedEvaluation2.exe")]
static extern void QuickTest();
Pretty simple right? The call works, and I am able to step into the c++ code (I turned on unmanaged debugging for the project), but it dies on the array creating line every single time with AccessViolationException. The same code runs fine when I run the executable (the c++ code is in a console application project, I tried calling it from the _tmain function and no problems), but when calling into it from .NET it blows up every time.
There has to be something obvious I am missing here, but I haven't come up with anything useful from reading tutorials, and most of the issues posts about that exception are people having problems with complicated marshalings or GCHandles. Thanks in advance for any help.
Update:
You were right below, but it's weird. At first when I started this I assumed I wouldn't be able to do that (call into functions in executables), but when I tried it -- it actually did work, the call that is. It seems like it lets you call into a function into an executable, but as soon as you try to allocate any memory it dies. Any way, thanks for the advice, it seems to be working correctly now.
|
You can't call functions in executables from outside those executables. You need to compile your code into a DLL.
|
844,241
| 844,256
|
Why are C++0x rvalue reference not the default?
|
One of the cool new features of the upcoming C++ standard, C++0x, are "rvalue references." An rvalue reference is similar to an lvalue (normal) reference, except that it can be bound to a temporary value (normally, a temporary can only be bound to a const reference):
void FunctionWithLValueRef(int& a) {…}
void FunctionWithRValueRef(int&& a) {…}
int main() {
FunctionWithLValueRef(5); // error, 5 is a temporary
FunctionWithRValueRef(5); // okay
}
So, why did they invent a whole new type, instead of just removing the restrictions on normal references to allow them to be bound to temporaries?
|
It would be pointless. You would change the thing in the function, and the change would be lost immediately because the thing was actually a temporary.
The reason for the new type stems from the need to be able to decide what actually is an rvalue and what is not. Only then can you actually use them for the cool things they enable.
string toupper(string && s) { // for nonconst rvalues
for(char &c : s) make_uppercase(c);
return move(s); // move s into a returned string object
}
string toupper(string const& s) { // for the rest
// calls the rvalue reference version, by passing
// an rvalue copy.
return toupper(string(s));
}
Now, if you have some rvalue and pass it to toupper, the rvalue can be modified directly, because we know the temporary is a throw-away thing anyway, so we can just as well change it and don't need to copy it. The same observation is used for the things called move constructors and move assignment: the right hand side is not copied; its resources are just stolen away and moved into *this.
If you were to say that rvalues can bind to non-const lvalue references, then you would have no way to figure out whether the reference refers to an lvalue (named object) or an rvalue (temporary) in the end.
It's probably less well known, but useful anyway: you can put lvalue or rvalue ref-qualifiers on a member function. Here is an example, which naturally extends the existing semantics of rvalue references to the implicit object parameter:
struct string {
string& operator=(string const& other) & { /* ... */ }
};
Now, you can't anymore say
string() = "hello";
Which is confusing and is not really making sense most of the time. What the & above does is saying that the assignment operator can only be invoked on lvalues. The same can be done for rvalues, by putting &&.
|
844,602
| 844,613
|
C GUI, with a C++ backbone?
|
I have a simple (and also trivial) banking application that I wrote in C++. I'm on Ubuntu so I'm using GNOME (GTK+). I was wondering if I could write all my GUI in C/GTK+ and then somehow link it to my C++ code. Is this even possible?
Note: I don't want to use Qt or GTKmm, so please don't offer those as answers.
|
Yes, it's a very easy thing to do. All you have to do is expose some of the C++ functions as extern "C" so that the event handlers and callbacks in your UI code can call them.
In the case that you can't change the existing C++ source, no problem: just write a C++ shim for your UI, declare those shim functions extern "C", and call the backend functions from there.
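A minimal sketch of that shim (the deposit function is a made-up name standing in for your banking backend):
/* backend.h - the C-callable view of the C++ backend */
#ifdef __cplusplus
extern "C" {
#endif
int deposit(int account_id, double amount);
#ifdef __cplusplus
}
#endif

// backend.cpp - the C++ implementation behind the C interface
// #include "backend.h"
extern "C" int deposit(int account_id, double amount)
{
    // forward to your existing C++ classes here
    return 0;
}
The GTK+ callbacks in the C code then just include backend.h and call deposit() like any other C function.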
|
844,751
| 966,348
|
How to integrate native applications with eclipse?
|
I have a couple of native applications written in C++ and C#. These are legacy applications that require data sharing between them. Currently, data sharing is through import/export of text file in some proprietary format. We are currently looking at integrating these two applications using eclipse. My questions are:
How can we integrate native applications such as c++ and c# based applications into eclipse?
What kind of data integration methods does eclipse provide for native applications?
Is eclipse the best choice for such use?
Also, it will be very helpful if you can share your experiences about integrating native applications in eclipse.
I am specifically looking at integrating native applications into eclipse just the way we would integrate a eclipse plugin written in Java. For example, what does it take to write a wrapper plugin in Java which will wrap a native tool by using JNI calls that can be integrated into eclipse just as any other eclipse plugin? Is this is a preferred approach for integrating native applications or is it a good idea to rewrite my legacy native application in Java?
I am not looking at using eclipse as a launch pad for my native applications using the "External Tools" configuration.
|
If you can write a JNI wrapper around your C++/C# applications, then you can use them from an Eclipse plugin.
The simplest approach is to:
repackage your C++/C# applications as DLLs (if they aren't already)
wrap them with a JNI layer
place the DLLs in the root folder of your plugin
call System.loadLibrary() from a static initializer block in your JNI wrapper class to load required DLLs
You might find the discussion on the Eclipse newsgroup entitled Using DLL in an Eclipse plugin helpful.
|
844,768
| 844,779
|
How is STL iterator equality established?
|
I was wondering, how is equality (==) established for STL iterators?
Is it a simple pointer comparison (and thus based on addresses) or something more fancy?
If I have two iterators from two different list objects and I compare them, will the result always be false?
What about if I compare a valid value with one that's out of range? Is that always false?
|
Iterator classes can define overloaded == operators, if they want. So the result depends on the implementation of operator==.
You're not really supposed to compare iterators from different containers. I think some debug STL implementations will signal a warning if you do this, which will help you catch cases of this erroneous usage in your code.
|
844,914
| 844,927
|
How to get form data from HTML page using c++
|
How do I get form data from an HTML page using C++, as far as the basics of POST and GET?
EDIT: The CGI is using Apache 2 on Windows; I have C++ configured and tested with Apache already.
|
The easiest way to access form data from an HTTP request is via CGI. This involves reading environment variables which is done using the getenv function.
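A bare-bones sketch of reading the raw form data (GET arrives in QUERY_STRING, POST arrives on stdin with its length in CONTENT_LENGTH; URL-decoding of the name=value pairs is left out):
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>

int main()
{
    std::string data;
    const char* method = std::getenv("REQUEST_METHOD");
    if (method && std::strcmp(method, "POST") == 0)
    {
        const char* len = std::getenv("CONTENT_LENGTH");
        std::size_t n = len ? std::strtoul(len, 0, 10) : 0;
        if (n > 0)
        {
            data.resize(n);
            std::cin.read(&data[0], n);       // POST body comes in on stdin
        }
    }
    else
    {
        const char* qs = std::getenv("QUERY_STRING");
        if (qs) data = qs;                    // GET data comes in the environment
    }
    std::cout << "Content-Type: text/plain\r\n\r\n"
              << "form data: " << data << "\n";
    return 0;
}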
|
845,447
| 845,533
|
Winsock uncontrollably spawns several, persistent threads
|
I'm designing a networking framework which uses WSAEventSelect for asynchronous operations. I spawn one thread for every 64th socket due to the max 64 events per thread limitation, and everything works as expected except for one thing:
Threads keep getting spawned uncontrollably by Winsock during connect and disconnect, threads that won't go away.
With the current design of the framework, two threads should be running when only a few sockets are active. And as expected, two threads are running in total. However, when I connect with a few sockets (1-5 sockets), an additional 3 threads are spawned which persist until I close the application. Also, when I lose connection on any of the sockets, 2 more threads are spawned (also persisting until closure). That's 7 threads in total, 5 of which I have no idea what they are there for.
If they are required by Winsock for connecting or whatever and then disappeared, that would be fine. But it bothers me that they persist until I close my application.
Is there anyone who could shed some light on this? Possibly a solution to avoid these threads or force them to close when no connections are active?
(Application is written in C++ with Win32 and Winsock 2.2)
Information from Process Explorer:
Expected threads:
MyApp.exe!WinMainCRTStartup
MyApp.exe!Netfw::NetworkThread::ThreadProc
Unexpected threads:
ntdll.dll!RtlpUnWaitCriticalSection+0x2dc
mswsock.dll+0x7426
ntdll.dll!RtlGetCurrentPeb+0x155
ntdll.dll!RtlGetCurrentPeb+0x155
ntdll.dll!RtlGetCurrentPeb+0x155
All of the unexpected threads have call stacks with calls to functions such as ntkrnlpa.exe!IoSetCompletionRoutineEx+0x46e which probably means it is a part of the notification mechanism.
|
Download the sysinternals tool process explorer. Install the appropriate debugging tools for windows. In process explorer, set Options -> Symbols path to:
SRV*C:\Websymbols*http://msdl.microsoft.com/download/symbols
Where C:\Websymbols is just a place to store the symbol cache (I'd create a new empty directory for it.)
Now, you can inspect your program with process explorer. Double click the process, go to the threads tab, and it will show you where the threads started, how busy they are, and what their current callstack is.
That usually gives you a very good idea of what the threads are. If they're Winsock internal threads, I wouldn't worry about them, even if there are hundreds.
|
845,705
| 846,348
|
Memory Based Data Server for local IPC
|
I am going to be running an app(s) that require about 200MB of market data each time it runs.
This is a trivial amount of data to store in memory these days, so for speed that's what I want to do.
Over the course of a day's session I will probably run, re-run, re-write and re-run one or more applications over and over.
So, the question is: how do I hold the data in memory all day, such that even if the app crashes I do not have to reload it by opening the data file on disk again?
My initial idea is to write a data server app that does nothing more than read the data into shared memory so that it is available for use. If I do that I guess I could use memory mapping for the IPC by calling
CreateFile()
CreateFileMapping()
MapViewOfFile()
Is there a better IPC/approach?
|
If you have enough memory and nothing else asks for memory, that might reduce your startup time. To guarantee access to the memory, you probably want to have a memory mapped file in named shared memory, as described here. You can have a simple program create the share and manage it so you can guarantee it remains in memory.
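A bare-bones sketch of the server side of that idea (the mapping name and size here are placeholders):

#include <windows.h>
#include <cstdio>

int main()
{
    const DWORD size = 200 * 1024 * 1024; // ~200MB of market data
    HANDLE hMap = CreateFileMappingA(
        INVALID_HANDLE_VALUE,      // back the mapping with the pagefile, not a file
        NULL, PAGE_READWRITE,
        0, size,
        "Local\\MarketDataShare");  // hypothetical share name
    if (!hMap) { std::printf("CreateFileMapping failed: %lu\n", GetLastError()); return 1; }

    void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (!view) { std::printf("MapViewOfFile failed: %lu\n", GetLastError()); return 1; }

    // ...load the market data into 'view' once, then keep this process alive.
    // Client processes call OpenFileMapping/MapViewOfFile with the same name.

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}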
|
845,706
| 845,785
|
Is there a way to automatically make a library either static or dynamic?
|
I know this might be a long shot, but here it goes. I have several active projects and each has a sub-project library which gets compiled when the main project compiles. These libraries are dynamic ones, but recently an issue came up that might create a need for those libraries (most of them are shared between projects) to be static instead of dynamic.
Now, I am quite sure someone has devised a system where I could make a library which could be compiled either as static into the project or as a dynamic one, with something like a simple preprocessor directive. If not, I'll pipe dream away.
edit:
looks like CMake might be it; however, apart from building stuff I would also like to remove __declspec(dllimport) and __declspec(dllexport) from my code, so that I can automatically switch between static and dynamic. Although it is fairly easy to do with preprocessor macros, I thought maybe there might be some form of a system already in use by people?
|
I like to use CMake to avoid this type of problem.
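To address the __declspec part of the edit, here is a minimal sketch of the usual macro pattern; the MYLIB_* names are made up, the build system (e.g. CMake) would define them per configuration, and on non-Windows compilers the macro would typically expand to nothing:

#if defined(MYLIB_STATIC)
#  define MYLIB_API                       /* static build: no decoration */
#elif defined(MYLIB_BUILDING)
#  define MYLIB_API __declspec(dllexport) /* building the DLL itself */
#else
#  define MYLIB_API __declspec(dllimport) /* consuming the DLL */
#endif

class MYLIB_API Widget {                  /* the same header works for both flavours */
public:
    void Render();
};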
|
845,710
| 845,713
|
Is there a function that returns the ASCII value of a character? (C++)
|
I need a function that returns the ASCII value of a character, including spaces, tabs, newlines, etc...
On a similar note, what is the function that converts between hexadecimal, decimal, and binary numbers?
|
char c;
int ascii = (int) c;
s2.data[j]=(char)count;
A char is an integer, no need for conversion functions.
Maybe you are looking for functions that display integers as a string - using hex, binary or decimal representations?
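If so, a small sketch using iostream manipulators and std::bitset:

#include <bitset>
#include <iostream>

int main()
{
    char c = 'A';
    int value = c;   // the "ASCII value" is just the char used as an int
    std::cout << "dec: " << std::dec << value << "\n"
              << "hex: " << std::hex << value << "\n"
              << "bin: " << std::bitset<8>(value) << "\n";
    return 0;
}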
|
845,772
| 845,786
|
How to check if the binary representation of an integer is a palindrome?
|
How to check if the binary representation of an integer is a palindrome?
|
Since you haven't specified a language in which to do it, here's some C code (not the most efficient implementation, but it should illustrate the point):
#include <stdbool.h>

#define WORDSIZE (sizeof(unsigned int) * 8)

/* reverse the bits of n */
unsigned int flip(unsigned int n)
{
    unsigned int i, newInt = 0;
    for (i = 0; i < WORDSIZE; ++i)
    {
        newInt <<= 1;           /* make room first, so the last bit is not shifted out */
        newInt |= (n & 0x0001);
        n >>= 1;
    }
    return newInt;
}

bool isPalindrome(unsigned int n)
{
    unsigned int flipped = flip(n);
    /* shift to remove trailing zeroes (the original number's leading zeroes) */
    while (flipped && !(flipped & 0x0001))
        flipped >>= 1;
    return n == flipped;
}
EDIT fixed for your 10001 thing.
|
845,819
| 845,830
|
How can I return an array?
|
Is there any way to return an array from a function? More specifically, I've created this function:
char bin[8];
for(int i = 7; i >= 0; i--)
{
int ascii='a';
if(2^i-ascii >= 0)
{
bin[i]='1';
ascii=2^i-ascii;
}
else
{
bin[i]='0';
}
}
and I need a way to return bin[].
|
Your array is a local variable allocated on the stack. You should use new [] to allocate it on the heap. Then you can just say: return bin;. Beware that you will have to explicitly free it with delete [] when you are done with it.
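A minimal sketch of that approach (the function name is made up):

char* makeBits()
{
    char* bin = new char[8];   // heap allocation outlives the function call
    // ... fill bin[0..7] here ...
    return bin;                // the caller must delete[] the result
}

Returning a std::string or std::vector<char> by value avoids the manual delete[] and is usually the safer choice.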
|
845,912
| 845,917
|
What is the C++ function to raise a number to a power?
|
How do I raise a number to a power?
2^1
2^2
2^3
etc...
|
pow() in the cmath library. More info here.
Don't forget to put #include<cmath> at the top of the file.
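For example:

#include <cmath>
#include <iostream>

int main()
{
    std::cout << std::pow(2.0, 10) << "\n";   // prints 1024
    return 0;
}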
|
845,957
| 845,968
|
Question About & operator in C++
|
I am looking at the .h file of a Wrapper class. And the class contains one private member:
T* dataPtr;
(where T is as in template < class T > defined at the top of the .h file)
The class provides two "* overloading operator" methods:
T& operator*()
{
return *dataPtr;
}
const T& operator*() const
{
return *dataPtr;
}
Both simply return *dataPtr, but what does the notation "*dataPtr" actually return, in plain English? And how does it fit with the return type "T&"?
|
The return type T& states that you are returning a reference of an instance of a T object. dataPtr is a pointer, which you "dereference" (get the reference value/instance of a pointer) using *.
|
846,044
| 846,055
|
How to get the filename of a DLL?
|
I have a C++ Windows application myapp.exe which loads several plug-ins.
Plug-ins need to find the path to their DLLs. I can use GetModuleFileName for this, but it needs the handle of the plug-in DLL. I don't know where to get this handle. GetModuleHandle(NULL) returns the handle to the executable.
One option is to use GetModuleHandle (GetModuleHandle("myplugin.dll") ) , but this requires the name of the plugin to be hardcoded which I want to avoid.
Any help is appreciated.
Paul
|
I don't know where to get this handle
It's passed as a parameter to your DllMain() entry function.
If the plugin can't access its DllMain() entry function, it can use the VirtualQuery function on a piece of its own memory and use the AllocationBase field of the filled-in MEMORY_BASIC_INFORMATION structure as its HMODULE.
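A rough sketch of that VirtualQuery trick (the helper name GetThisModule is made up):

#include <windows.h>

// Query an address that lives inside this module (the function itself) and
// use the allocation base of that region as the module handle.
static HMODULE GetThisModule()
{
    MEMORY_BASIC_INFORMATION mbi = {0};
    VirtualQuery(reinterpret_cast<LPCVOID>(&GetThisModule), &mbi, sizeof(mbi));
    return reinterpret_cast<HMODULE>(mbi.AllocationBase);
}

// Usage inside the plug-in:
//   char path[MAX_PATH];
//   GetModuleFileNameA(GetThisModule(), path, MAX_PATH);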
|
846,121
| 846,170
|
C++ adding a carriage return at beginning of string when reading file
|
I have two questions:
1) Why is my code adding a carriage return at the beginning of the selected_line string?
2) Do you think the algorithm I'm using to return a random line from the file is good enough and won't cause any problems?
A sample file is:
line
number one
#
line number two
My code:
int main()
{
srand(time(0));
ifstream read("myfile.dat");
string line;
string selected_line;
int nlines = 0;
while(getline(read, line, '#')) {
if((rand() % ++nlines) == 0)
selected_line = line;
}
// this is adding a \n at the beginning of the string
cout << selected_line << endl;
}
EDIT: OK, what some of you suggested makes a lot of sense. The string is probably being read as "\nmystring". So I guess my question now is, how would i remove the first \n from the string?
|
What you probably want is something like this:
std::vector<std::string> allParagraphs;
std::string currentParagraph;
while (std::getline(read, line)) {
    if (line == "#") { // modify this condition, if needed
        // paragraph ended, store to vector
        allParagraphs.push_back(currentParagraph);
        currentParagraph = "";
    } else {
        // paragraph continues...
        if (!currentParagraph.empty()) {
            currentParagraph += "\n";
        }
        currentParagraph += line;
    }
}
// store the last paragraph, as well
// (in case it was not terminated by #)
if (!currentParagraph.empty()) {
allParagraphs.push_back(currentParagraph);
}
// this is not extremely random, but will get you started
size_t selectedIndex = rand() % allParagraphs.size();
std::string selectedParagraph = allParagraphs[selectedIndex];
For better randomness, you could opt for this instead:
size_t selectedIndex
    = rand() / ((double)RAND_MAX + 1) * allParagraphs.size();
This is because the least significant bits returned by rand() tend not to be very random.
|
846,259
| 846,272
|
Array of linked lists in C++
|
Some code for context:
class WordTable
{
public:
WordTable();
~WordTable();
List* GetListByAlphaKey(char key);
void AddListByKey(char key);
bool ListExists(char key);
bool WordExists(string word);
void AddWord(string word);
void IncrementWordOccurances(string word);
void Print();
private:
List *_listArray[33];
int _GetIndexByKey(char key);
};
class TableBuilder
{
public:
TableBuilder();
~TableBuilder();
void AnalyzeStream(fstream &inputStream);
void PrintResults();
private:
void _AnalyzeCursor(string data);
bool _WordIsValid(string data);
WordTable* _WordTable;
};
struct Element {
public:
string Word;
int Occurances;
Element* Next;
};
class List
{
public:
List();
~List();
Element* AddElement(string word);
void DeleteElement(Element* element);
void Print();
void Delete();
Element* First;
bool WordExists(string word);
void IncrementWordOccurances(string word);
private:
void _PrintElementDetails(Element* element);
};
Requirements
I must analyze text, building an array of linked lists (the array contains a list for each letter; each list contains every word found in the text), then print out the results.
Problem
I can't initialize the array of lists in WordTable.cpp. I know that I've misunderstood something, but I'm out of ideas and time. Anyone?
P.S. Yeah, this is homework. Please stop giving me advice about best practices... :)
|
An initialization for _listArray would look like this:
WordTable::WordTable() {
for (int i=0; i<33; i++)
_listArray[i] = new List();
}
You don't really say what exactly the problem is so I'm not sure if this helps...
|
846,444
| 846,456
|
C++ question... definition doesn't recognize vectors specified in declaration
|
I'm working on a class assignment that started small, so I had it all in one file. Now it's gotten bigger and I'm trying to separately compile main, functions, and classes (so all the classes are together in one .h and one .cpp). I have one class B, which is the parent of a lot of others and comes first in the file. One of its data members isn't working now that I'm using separate compilation, which is causing dozens of errors.
In .h
class A;
class B {
public:
B (){}
A* myptr;
void whatever();
vector<A*> myAs; //this one is the problem
};
In .cpp
void B::whatever() {
vector<A*> newvector; //no problem!
myptr = &something; //no problem!
for (vector<A*>::iterator iter = myAs.begin(); iter != myAs.end(); ++iter) {
//error!
}
}
I get errors: either "myAs was not declared in this scope" or "class B has no member myAs."
I've included < vector >, forward-declared class A as you see above, and I definitely remembered to include the .h at the top of the .cpp! Is there something about vectors or classes and separate compilation that I don't understand? This is in Xcode, BTW.
|
It's not just vector. It's std::vector because it is within the namespace called std. That's why the compiler moans. It doesn't know what vector<A*> means. Say std::vector<A*> instead.
Do not add using namespace std; to the header because of this. It may be OK for the assignment to put it into the .cpp file to save typing if you wish. But it's a really bad idea to put such a line into a header, because you don't know which files will need your header in the future; the situation can quickly get out of hand as the number of files including your header grows with time. The header should therefore include only the names and headers that it really needs, so that it causes as few name conflicts as possible - whereas that using namespace std; line would make all names of std visible directly.
|
846,566
| 846,625
|
Using Boost on ubuntu
|
I've heard a lot of good comments about Boost in the past and thought I would give it a try. So I downloaded all the required packages from the package manager in Ubuntu 9.04. Now I'm having trouble finding out how to actually use the darn libraries.
Does anyone know of a good tutorial on Boost that goes all the way from Hello World to Advanced Topics, and also covers how to compile programs using g++ on ubuntu?
|
Agreed; the boost website has good tutorials for the most part, broken down by sub-library.
As for compiling, a good 80% of the library implementation is defined in the header files, making compiling trivial. For example, if you wanted to use shared_ptrs, you'd just add
#include <boost/shared_ptr.hpp>
and compile as you normally would. No need to add library paths to your g++ command, or specify -llibboost. As long as the boost directory is in your include path, you're all set.
From the boost documentation:
The only Boost libraries that must be built separately are:
Boost.Filesystem
Boost.IOStreams
Boost.ProgramOptions
Boost.Python (see the Boost.Python build documentation before building and installing it)
Boost.Regex
Boost.Serialization
Boost.Signals
Boost.Thread
Boost.Wave
A few libraries have optional separately-compiled binaries:
Boost.DateTime has a binary component that is only needed if you're using its to_string/from_string or serialization features, or if you're targeting Visual C++ 6.x or Borland.
Boost.Graph also has a binary component that is only needed if you intend to parse GraphViz files.
Boost.Test can be used in “header-only” or “separately compiled” mode, although separate compilation is recommended for serious use.
So, if you're using one of the listed libraries, use the Getting Started guide to, well, get you started on compiling and linking to Boost.
|
846,576
| 846,582
|
Is there a C++ function to turn off the computer?
|
Is there a C++ function to turn off the computer? And since I doubt there is one (in the standard library, at least), what's the windows function that I can call from C++?
Basically, what is the code to turn off a windows xp computer in c++?
|
On Windows you can use the ExitWindowsEx function (plain ExitWindows only logs the current user off), described here:
http://msdn.microsoft.com/en-us/library/aa376868(VS.85).aspx
and here's a link to example code that does this:
http://msdn.microsoft.com/en-us/library/aa376871(VS.85).aspx
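A hedged sketch of what the call usually looks like (the shutdown privilege has to be enabled first; error handling is mostly omitted):

#include <windows.h>

bool ShutDownSystem()
{
    HANDLE hToken = NULL;
    TOKEN_PRIVILEGES tkp = {0};

    // ExitWindowsEx(EWX_SHUTDOWN, ...) fails unless SE_SHUTDOWN_NAME is enabled.
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
        return false;

    LookupPrivilegeValue(NULL, SE_SHUTDOWN_NAME, &tkp.Privileges[0].Luid);
    tkp.PrivilegeCount = 1;
    tkp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    AdjustTokenPrivileges(hToken, FALSE, &tkp, 0, NULL, NULL);
    CloseHandle(hToken);

    // EWX_POWEROFF also cuts the power on hardware that supports it.
    return ExitWindowsEx(EWX_SHUTDOWN | EWX_FORCE, 0) != FALSE;
}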
|
846,812
| 847,177
|
What is R-Value reference that is about to come in next c++ standard?
|
What is R-Value reference that is about to come in next c++ standard?
|
Here is a really long article from Stephan T. Lavavej
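In a nutshell (a rough illustration, not taken from the article): an rvalue reference, written T&&, binds to temporaries, which lets functions and constructors move resources out of them instead of copying:

#include <utility>
#include <vector>

struct Buffer {
    std::vector<int> data;

    Buffer() {}
    Buffer(const Buffer& other) : data(other.data) {}        // copy: duplicates the storage
    Buffer(Buffer&& other) : data(std::move(other.data)) {}  // move: steals the storage
};

Buffer make() { return Buffer(); }

int main()
{
    Buffer b = make();   // the temporary returned by make() can be moved from
    return 0;
}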
|
846,899
| 846,946
|
Windows 7 and the case of the missing regtlib
|
I've just discovered that regtlib.exe appears to be missing from Windows 7 (and apparently from Vista as well).
I've just installed Windows 7 RC in a VM and I'm attempting to build our existing projects on the new OS. The projects are c/c++ based and I'm using visual studio 2008. In order to build these projects I need to register several tlb files that are referenced within the code base.
Has anyone also encountered this problem? And, has anyone managed to solve this?
Thanks.
|
Yeah regtlib was removed from vista and up. As far as I know, all it does is call LoadTypeLibEx with the REGKIND_REGISTER flag (http://msdn.microsoft.com/en-us/library/ms221249.aspx). Maybe you could write a simple replacement.
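A rough sketch of such a replacement (registering into HKCR will typically need an elevated prompt on Vista and later):

#include <windows.h>
#include <oleauto.h>
#include <cstdio>

// Registers the type library named on the command line, much like regtlib did.
int wmain(int argc, wchar_t* argv[])
{
    if (argc < 2) { std::printf("usage: regtlb <file.tlb>\n"); return 1; }

    ITypeLib* pTypeLib = NULL;
    HRESULT hr = LoadTypeLibEx(argv[1], REGKIND_REGISTER, &pTypeLib);
    if (SUCCEEDED(hr) && pTypeLib)
        pTypeLib->Release();

    if (SUCCEEDED(hr)) std::printf("registered\n");
    else               std::printf("failed: 0x%08lx\n", static_cast<unsigned long>(hr));
    return SUCCEEDED(hr) ? 0 : 1;
}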
|
847,092
| 847,107
|
Partial builds versus full builds in Visual C++
|
For most of my development work with Visual C++, I am using partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version on to testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this practice, but I wonder whether it is necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software behave any differently from a testing perspective?
For release builds, I'm using VS2005, incremental compilation and linking, precompiled headers.
|
Hasn't everyone come across this usage pattern? I get weird build errors, and before even investigating I do a full rebuild, and the problem goes away.
This by itself seems to me to be good enough reason to do a full rebuild before a release.
Whether you would be willing to turn an incremental build that completes without problems over to testing, is a matter of taste, I think.
|
847,157
| 847,182
|
Qt: having problems responding on QWebView::linkClicked(QUrl) - slot signal issue
|
I am pretty new with Qt.
I want to respond to linkClicked in QWebView.
I tried connect like this:
QObject::connect(ui->webView, SIGNAL(linkClicked(QUrl)),
MainWindow,SLOT(linkClicked(QUrl)));
But I was getting error: C:/Documents and Settings/irfan/My Documents/browser1/mainwindow.cpp:9: error: expected primary-expression before ',' token
When I do this using UI Editing Signals Slots:
I have in header file declaration of slot:
void linkClicked(QUrl &url);
in source cpp file :
void MainWindow::linkClicked(QUrl &url)
{
QMessageBox b;
b.setText(url.toString());
b.exec();
}
When I run this it compiles and runs but got a warning :
Object::connect: No such slot MainWindow::linkClicked(QUrl)
in ui_mainwindow.h:100
What is the proper way of doing this event handling?
|
I changed QObject::connect to plain connect and it works.
So this code works:
connect(ui->webView, SIGNAL(linkClicked(const QUrl)), this, SLOT(linkClicked(const QUrl)), Qt::DirectConnection);
But I don't know why. (A likely explanation: inside a QObject subclass, plain connect resolves to QObject::connect anyway; the earlier compile error came from passing the class name MainWindow as the receiver instead of an object pointer such as this.)
|
847,279
| 847,316
|
Code reuse in exception handling
|
I'm developing a C api for some functionality written in C++ and I want to make sure that no exceptions are propagated out of any of the exported C functions.
The simple way to do it is making sure each exported function is contained in a:
try {
// Do the actual code
} catch (...) {
return ERROR_UNHANDLED_EXCEPTION;
}
Let's say I know one exception that is often missed inside the C++ code is std::bad_alloc and I want to treat it specially I'd write something like this instead:
try {
// Run the actual code
} catch (std::bad_alloc& e) {
return ERROR_BAD_ALLOC;
} catch (...) {
return ERROR_UNHANDLED_EXCEPTION;
}
Is it possible to decompose this in some clever way so that I can globally treat some errors differently without adding a new catch statement for the exception handler around every exported function?
I'm aware that this is possible to solve using the preprocessor, but before going down that road, I'd like to make sure there is no other way to do it.
|
You can use a single handler function for all possible exceptions, and call it from each of your API implementation functions, as below:
int HandleException()
{
try
{
throw;
}
// TODO: add more types of exceptions
catch( std::bad_alloc & )
{
return ERROR_BAD_ALLOC;
}
catch( ... )
{
return ERROR_UNHANDLED_EXCEPTION;
}
}
And in each exported function:
try
{
...
}
catch( ... )
{
return HandleException();
}
|
847,313
| 847,452
|
How to read 3rd party application's variables from memory?
|
I'm trying to read variables from memory - variables that don't belong to my own program. For instance, let's say I have an Adobe Shockwave (.dcr) application running in the browser and I want to read different variables from it. How is that done? Do I need to hook the process? It's running under a virtual machine, so I don't know how to do it.
This task is pretty much trivial in normal Win32 applications (as it is mainly just CBT-hooking / subclassing), but as I mentioned before, I've got no idea how it's done with Flash / Shockwave.
I'm using C++ (VS9) as my development-environment, in case you wish to know.
Any hints would be highly appreciated, so thank you in advance.
Best regards,
nhaa123
|
If you're trying to do it manually just for one or two experiments, it's easy.
Try a tool like Cheat Engine, which is a free, quick and simple process peeker. Basically it scans the process's memory space for given key values. You can then filter those initial search hits further. You can also change the values you find, live. The link above shows a quick example of using it to find a score or money value in a game, and editing it live as the game runs.
|
847,396
| 847,525
|
Compile a DLL in C/C++, then call it from another program
|
I want to make a simple, simple DLL which exports one or two functions, then try to call it from another program... Everything I've found so far is about complicated matters: different ways of linking things together, weird problems that I haven't even begun to realize exist yet... I just want to get started, by doing something like so:
Make a DLL which exports some functions, like,
int add2(int num){
return num + 2;
}
int mult(int num1, int num2){
int product;
product = num1 * num2;
return product;
}
I'm compiling with MinGW. I'd like to do this in C, but if there are any real differences doing it in C++, I'd like to know those also. I want to know how to load that DLL into another C (and C++) program, and then call those functions from it.
My goal here, after playing around with DLLs for a bit, is to make a VB front-end for C(++) code, by loading DLLs into visual basic (I have visual studio 6, I just want to make some forms and events for the objects on those forms, which call the DLL).
I need to know how to call gcc (/g++) to make it create a DLL, but also how to write (/generate) an exports file... and what I can/cannot do in a DLL (like, can I take arguments by pointer/reference from the VB front-end? Can the DLL call a theoretical function in the front-end? Or have a function take a "function pointer" (I don't even know if that's possible) from VB and call it?) I'm fairly certain I can't pass a variant to the DLL...but that's all I know really.
update again
Okay, I figured out how to compile it with gcc, to make the dll I ran
gcc -c -DBUILD_DLL dll.c
gcc -shared -o mydll.dll dll.o -Wl,--out-implib,libmessage.a
and then I had another program load it and test the functions, and it worked great,
thanks so much for the advice,
but I tried loading it with VB6, like this
Public Declare Function add2 Lib "C:\c\dll\mydll.dll" (num As Integer) As Integer
then I just called add2(text1.text) from a form, but it gave me a runtime error:
"Can't find DLL entry point add2 in C:\c\dll\mydll.dll"
this is the code I compiled for the DLL:
#ifdef BUILD_DLL
#define EXPORT __declspec(dllexport)
#else
#define EXPORT __declspec(dllimport)
#endif
EXPORT int __stdcall add2(int num){
return num + 2;
}
EXPORT int __stdcall mul(int num1, int num2){
return num1 * num2;
}
calling it from the C program like this worked, though:
#include<stdio.h>
#include<windows.h>
int main(){
HANDLE ldll;
int (*add2)(int);
int (*mul)(int,int);
ldll = LoadLibrary("mydll.dll");
if(ldll>(void*)HINSTANCE_ERROR){
add2 = GetProcAddress(ldll, "add2");
mul = GetProcAddress(ldll, "mul");
printf("add2(3): %d\nmul(4,5): %d", add2(3), mul(4,5));
} else {
printf("ERROR.");
}
}
any ideas?
solved it
To solve the previous problem, I just had to compile it like so:
gcc -c -DBUILD_DLL dll.c
gcc -shared -o mydll.dll dll.o -Wl,--add-stdcall-alias
and use this API call in VB6
Public Declare Function add2 Lib "C:\c\dll\mydll" _
(ByVal num As Integer) As Integer
I learned not to forget to specify ByVal or ByRef explicitly--I was just getting back the address of the argument I passed, it looked like, -3048.
|
Regarding building a DLL using MinGW, here are some very brief instructions.
First, you need to mark your functions for export, so they can be used by callers of the DLL. To do this, modify them so they look like (for example)
__declspec( dllexport ) int add2(int num){
return num + 2;
}
then, assuming your functions are in a file called funcs.c, you can compile them:
gcc -shared -o mylib.dll funcs.c
The -shared flag tells gcc to create a DLL.
To check if the DLL has actually exported the functions, get hold of the free Dependency Walker tool and use it to examine the DLL.
For a free IDE which will automate all the flags etc. needed to build DLLs, take a look at the excellent Code::Blocks, which works very well with MinGW.
Edit: For more details on this subject, see the article Creating a MinGW DLL for Use with Visual Basic on the MinGW Wiki.
|
847,535
| 1,141,745
|
Avoiding unneccessry recompilations using "branchy" development model
|
I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to get built from scratch (while incremental builds are very quick).
I'm usually trying to implement each new feature in a new branch (using "hg clone"), and I may have several new features in development during the day, so it quickly gets very boring to wait for each new feature branch to build.
Are there any recipes to somehow re-use object files from other already built branches?
P.S. In git there are named branches within the same repository, which makes re-use of the existing object files possible for the build system; however, I prefer the simpler Mercurial separate-branches model...
|
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
|
847,721
| 847,766
|
How to identify if a library is DEBUG or RELEASE build?
|
Our project uses many static libraries to build the application. How can we make sure we are using the release versions of the libraries in the release build of the application?
We keep making mistakes by picking up a debug library in the release application build.
I am looking for an elegant way to write a module that can check whether a particular library is a release or debug build and report it if it doesn't match. Our application is written in C/C++. (Platform MSVC & GCC)
|
The normal approach is either to give the libraries different names or to store them in different directories, such as Debug and Release. And if your build is correctly automated, I can't see how you can make mistakes.
|
847,876
| 847,908
|
Is there any performance difference between for() and while()?
|
Or is it all about semantics?
|
Short answer: no, they are exactly the same.
Guess it could in theory depend on the compiler; a really broken one might do something slightly different but I'd be surprised.
Just for fun here are two variants that compile down to exactly the same assembly code for me using x86 gcc version 4.3.3 as shipped with Ubuntu. You can check the assembly produced on the final binary with objdump on linux.
#include <stdio.h>

int main()
{
#if 1
int i = 10;
do { printf("%d\n", i); } while(--i);
#else
int i = 10;
for (; i; --i) printf("%d\n", i);
#endif
}
EDIT: Here is an "oranges with oranges" while loop example that also compiles down to the same thing:
while(i) { printf("%d\n", i); --i; }
|
847,967
| 848,001
|
How do you determine the last valid element in a STL-Container
|
If I iterate over an STL container I sometimes need to know whether the current item is the last one in the sequence. Is there a better way than doing something like this? Can I somehow convert rbegin()?
std::vector<int> myList;
// ....
std::vector<int>::iterator lastit = myList.end();
lastit--;
for(std::vector<int>::iterator it = myList.begin(); it != myList.end(); it++) {
if(it == lastit)
{
// Do something with last element
}
else
{
// Do something with all other elements
    }
}
|
Try the following
std::vector<int>::iterator it2 = it;
++it2;
if ( it2 == myList.end() ) {
...
}
The following should work as well
if ( it+1 == myList.end() ) {
// it is last
...
}
|
847,974
| 848,000
|
The benefits / disadvantages of unity builds?
|
Since starting at a new company I've noticed that they use unity cpp files for the majority of our solution, and I was wondering if anyone is able to give me a definitive reason as to why and how these speed up the build process? I would've thought that editing one cpp file included in a unity file would force recompilation of all of them.
|
Very similar question and good answers here: #include all .cpp files into a single compilation unit?
The summary seems to be that less I/O overhead is the major benefit.
See also The Magic Of Unity Builds as linked in the above question as well.
|