823,003
823,014
Have a C++ Class act like a custom ostream, sstream
I have a C++ class MyObject and I want to be able to feed it data like I would an ostringstream (but, unlike a plain stringstream, have the incoming data be formatted a special way). I can't seem to figure out how to overload an operator for MyObject to eat input given to it. class MyObject { public: ostringstream s; FEEDME }; int main() { MyObject obj; obj.FEEDME << "Hello" << 12345; // I want obj.s == ":Hello::12345:" } I want every item fed in to be surrounded by : :, so in the given example s == ":Hello::12345:" should be the final outcome. My question is: how can I tell the object that whenever something is fed in with <<, it should put : : around that something? Is this possible?
try this: class MyObject { public: template <class T> MyObject &operator<<(const T &x) { s << ':' << x << ':'; return *this; } std::string to_string() const { return s.str(); } private: std::ostringstream s; }; MyObject obj; obj << "Hello" << 12345; std::cout << obj.to_string() << std::endl; There are certain things you won't be able to shove into the stream, but it should work for all the basics.
823,107
823,120
What is wrong with this setup?
I created a program called test: #include<stdlib.h> #include<iostream> int main() { std::cout<<system("..\\add\\debug\\add.exe 4 8"); while(true); return 0; } add.exe consists of #include<stdlib.h> int main(int argc,char **argv[]) { int n=((unsigned)argv[1]); int m=((unsigned)argv[2]); return(n+m); } so when I run test, I get 6841420 The attempt was to have test run add with parameters 4 and 8, and for add to return the sum of those values (12) and then test would display that to the screen. How did I get 6841420, and how can I fix it?
The problem is that you are converting a pointer value into an integer. The arguments are passed to your program as C-style strings (const char*). You first need to convert those strings to integers using a function like atoi. #include<stdlib.h> int main(int argc,char *argv[]) { int n= atoi(argv[1]); int m= atoi(argv[2]); return(n+m); } EDIT As others have pointed out, you should also do some error checking to ensure there are actually 2 parameters passed to the program.
823,116
823,132
How do I use C++ STL containers in My iPhone App?
I'd like to use an STL set in my iPhone app (which is written in Objective-C in XCode). How do I include set and/or use the standard namespace? In C++ I'd do this: #include<set> using namespace std; // use the set<T> somewhere down here... How can I do this in Objective-C?
Just rename your source file so it ends in .mm and it should trigger the Objective-C++ front-end; you can then mix Objective-C and C++ in it. More information here.
823,283
823,303
Is there a function that returns the character/string at a point in a .txt?
I know it's possible to get a part of a .txt, then convert it to an integer, then store it in a variable, but is it possible to do that in a single declaration? (The variable needs to be global.) I.e.: [data.txt] 1020 [convert_data.cpp] #include<fstream> fstream convert("data.txt"); //way to declare something equal to A PARTICULAR POINT in data.txt int main() { //how would I take this block of code and simplify it to two DECLARATIONS (not //function calls), or, if that's not possible or not practical, how would I make //n and m (or var1 and var2) global AND CONSTANT? char var1[5]; convert.getline(var1,2); char var2[5]; convert.getline(var2,2); const int n=atoi(var1); const int m=atoi(var2); return 0; }
You can't do it in a single getline-style declaration, but you can write a function that reads and converts one value, and initialize your variables with it. In C++ that even works for globals (dynamic initialization permits a function call as an initializer), though the order of such initializations across translation units is unspecified, so don't let other globals depend on them. Otherwise, do the work explicitly during your program's initialization routine, or in the constructor of a global object (not recommended).
823,285
823,315
What Are Some Quirks/Surprises with Using .mm Files in Objective-C?
I want to use some C++ STL collections in my Objective-C iPhone app. Apparently this is possible by giving files the extension ".mm" . What are some of the quirks/surprises associated with this? I want to use the basic containers that I'm familiar with (vector, queue, set, ...) Cheers!
See Using C++ With Objective-C for a detailed list of what you can and can't do. You can do most things that you would expect. You just can't do things like have a C++ class inherit from an Objective-C class or vice-versa, you can't mix C++ exceptions with Objective-C exceptions, and C++ introduces several new keywords not present in Objective-C.
823,345
823,359
How can I make this declaration work?
EDIT: I also got an answer to make sector a vector of vectors: vector<vector<char> > sector; and that gets rid of the rest of my errors. EDIT: I've made sector an array of pointers as someone suggested, and still get three errors: EDIT: I have edited the program, but it has not fixed all of the errors: I have this section of a program: char* load_data(int begin_point,int num_characters); ifstream mapdata("map_data.txt"); const int maxx=atoi(load_data(0,2)); const int maxy=atoi(load_data(2,2)); char** sector=new char[maxx][maxy]; char* load_data(int begin_point,int num_characters) { seekg(begin_point); char* return_val=new char[num_characters+1]; mapdata.getline(return_val,num_characters); return return_val; } And I get these errors: line 5>error C2540: non-constant expression as array bound line 5>error C2440: 'initializing' : cannot convert from 'char (*)[1]' to 'char **' line 14>error C3861: 'seekg': identifier not found Per seekg: yes, I know I have to include fstream; I included that in main.cpp, and this is a separate .h file also included in main.cpp. How do I fix the errors? Specifically, how do I fix the errors while keeping all my variables global? Also, if it helps, this is map_data.txt: 10 10 00O 99! 1 55X 19 What is a question? 18 This is an answer 1 1 2 1
Well, function load_data(int,int) needs to return a char*, not a char, because you are passing its result to atoi, which takes a const char*. In addition to that, you are probably not including the stdlib.h header file! #include <cstdlib> int atoi(const char*); If you don't want to include stdlib.h, you could declare atoi as extern, but be aware of this when you compile this module: extern int atoi(const char*); Take into account that the argument of atoi must be a null-terminated string. So declare char* load_data(int,int); and then you can do // notice these aren't const: they rely on data not available at compile time int maxx = atoi (load_data(....)); int maxy = atoi (load_data(....)); Since you are in C++, load_data could instead return a std::string: std::string load_data(int,int); and then use the c_str() method, which returns a C string from a C++ string: int maxx = atoi(load_data(....).c_str()); int maxy = atoi(load_data(....).c_str()); Regarding line 5>error C2540: non-constant expression as array bound line 5>error C2440: 'initializing' : cannot convert from 'char (*)[1]' to 'char **': you can't write char sector[maxx][maxy]; or new char[maxx][maxy] with runtime bounds. Allocate the rows yourself: char** sector = new char*[maxx]; for (int i = 0; i < maxx; ++i) sector[i] = new char[maxy]; and don't forget to free this memory: for (int i = 0; i < maxx; ++i) delete[] sector[i]; delete[] sector;
823,426
823,455
Passing references to pointers in C++
As far as I can tell, there's no reason I shouldn't be allowed to pass a reference to a pointer in C++. However, my attempts to do so are failing, and I have no idea why. This is what I'm doing: void myfunc(string*& val) { // Do stuff to the string pointer } // sometime later { // ... string s; myfunc(&s); // ... } And I'm getting this error: cannot convert parameter 1 from 'std::string *' to 'std::string *&'
Your function expects a reference to an actual string pointer in the calling scope, not an anonymous string pointer. Thus: string s; string* _s = &s; myfunc(_s); should compile just fine. However, this is only useful if you intend to modify the pointer you pass to the function. If you intend to modify the string itself you should use a reference to the string as Sake suggested. With that in mind it should be more obvious why the compiler complains about your original code. In your code the pointer is created 'on the fly'; modifying that pointer would have no consequence, and that is not what is intended. The idea of a reference (vs. a pointer) is that a reference always points to an actual object.
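For contrast, a minimal sketch of the pass-by-reference alternative mentioned above (names are illustrative): if the goal is to change the string rather than the pointer, take the string by reference and the call site stays simple:

void myfunc(std::string& val) {
    val += " (modified)";   // changes the caller's string directly
}

std::string s;
myfunc(s);   // no pointer, no intermediate variable needed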
823,479
823,525
Multiple threads reading from the same file
My platform is Windows Vista 32-bit, with Visual C++ Express 2008. For example: if I have a file containing 4000 bytes, can I have 4 threads read from the file at the same time, with each thread accessing a different section of the file? Thread 1 reads 0-999, thread 2 reads 1000-2999, etc. Please give an example in C.
If you don't write to the file, there's no need to worry about synchronization or race conditions. Just open the file with shared reading, using a different handle per thread, and everything will work (i.e., you must open the file in each thread's context instead of sharing the same file handle). #include <stdio.h> #include <windows.h> DWORD WINAPI mythread(LPVOID param) { int i = (int) param; BYTE buf[1000]; DWORD numread; HANDLE h = CreateFile("c:\\test.txt", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL); SetFilePointer(h, i * 1000, NULL, FILE_BEGIN); ReadFile(h, buf, sizeof(buf), &numread, NULL); printf("buf[%d]: %02X %02X %02X\n", i+1, buf[0], buf[1], buf[2]); return 0; } int main() { int i; HANDLE h[4]; for (i = 0; i < 4; i++) h[i] = CreateThread(NULL, 0, mythread, (LPVOID)i, 0, NULL); // for (i = 0; i < 4; i++) WaitForSingleObject(h[i], INFINITE); WaitForMultipleObjects(4, h, TRUE, INFINITE); return 0; }
823,553
823,828
How to get hardware MAC address on Windows
I'm playing around with retrieving the MAC address from the NIC - there are a variety of ways to get it; this article covers the most common: http://www.codeguru.com/Cpp/I-N/network/networkinformation/article.php/c5451 I'm currently using the GetAdaptersInfo method, which seems the most bulletproof, but if the MAC address has been set via the registry: http://www.mydigitallife.info/2008/06/30/how-to-change-or-spoof-mac-address-in-windows-xp-vista-server-20032008-mac-os-x-unix-and-linux/ then it reports the MAC address that it has been changed to. The only way I've found to actually get the true MAC is to remove the registry entry, restart the NIC, get the MAC via GetAdaptersInfo, then replace the registry entry and restart the NIC. While it gets the job done, it's hardly transparent to the user. Are there any other methods that anyone is familiar with that will return the hardware MAC regardless of what the registry is set to? Ideally I'd like a solution that works on XP and up. Thanks in advance!
My guess is that in the linked CodeGuru article, the Miniport solution is likely to overcome the problem you describe, albeit painful to implement. The reason I think this is that I have used the GetAdaptersInfo solution myself in the past, and noticed that the MAC address will change without a reboot when an adapter is added, e.g. a Bluetooth adapter providing PAN services. Perhaps rather than rebooting after changing the registry setting, you could try stopping and restarting the relevant network services. You could easily check this manually prior to looking for a programmatic solution. (N.B. the above is all guesswork. If you try it and it works, perhaps add a post for those trying to do the same in future.)
823,854
823,886
How do I show command-line build options in Visual C++ 2008?
In a Visual C++ 2008 project, building a project will display following information in the output window: 1>------ Build started: Project: Project1, Configuration: Debug Win32 ------ 1>Compiling... 1>main.cpp 1>test1.cpp 1>test2.cpp 1>Generating Code... 1>Linking... 1>LINK : test.exe not found or not built by the last incremental link; performing full link 1>Project1- 0 error(s), 0 warning(s) ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ========== How can I get output like this: cl.exe /Od /I "includepath" /D "_UNICODE" /FD /EHsc /RTC1 /MDd /Zc:wchar_t- /Fo"Debug\\" /Fd"Debug\vc90.pdb" /nologo /c /ZI /TP /errorReport:prompt Like a C# project will do it.
This is controlled via the "Suppress startup banner" setting in the "General" sub-tab of the "C/C++" tab in the project's property pages. If you set it to "No", it will show in the Output window the command line being used during compilation.
823,935
823,981
What's the point in defaulting functions in C++11?
C++11 adds the ability for telling the compiler to create a default implementation of any of the special member functions. While I can see the value of deleting a function, where's the value of explicitly defaulting a function? Just leave it blank and the compiler will do it anyway. The only point I can see is that a default constructor is only created when no other constructor exists: class eg { public: eg(int i); eg() = default; }; But is that really better than how you do it now? class eg { public: eg(int i); eg() {} }; Or am I missing a use-case?
A defaulted constructor will have a declaration, and that declaration will be subject to the normal access rules. E.g. you can make the default copy constructor protected. Without these new declarations, the default generated members are public.
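A small sketch of that access-control point (class names are illustrative, assuming a C++11 compiler): the copy constructor keeps its compiler-generated body but is no longer public:

class Base {
protected:
    Base() = default;
    Base(const Base&) = default;   // defaulted, but only derived classes may copy
public:
    virtual ~Base() = default;
};

Writing Base(const Base&) {} by hand instead would compile, but the constructor would then be user-provided rather than trivial, which matters for traits like std::is_trivially_copyable.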
823,971
824,013
Is it possible to use boost.any as a key in a std::map(or something similiar)?
std::map<any, string> is not working, so I wonder if there's another approach for having arbitrary keys?
I think the issue is not with Boost::Any, but rather with the fact that you are not specifying a custom comparator. Since map is a sorted associative container, you need to have a comparator. The following works for me: tailor it according to your purposes: #include <iostream> #include <map> #include <boost/any.hpp> using namespace std; struct cmp { bool operator()(const boost::any& l, const boost::any& r) { try { int left = boost::any_cast<int>(l); int right = boost::any_cast<int>(r); return left < right; } catch(const boost::bad_any_cast &) { return false; } return false; } }; int main() { map<boost::any, string, cmp> ma; boost::any to_append = 42; ma.insert(std::make_pair(to_append, "int")); if (ma.find(42) != ma.end()) { cout << "hurray!\n"; } return 0; }
824,143
824,191
Working with string streams?
Say I have a stringstream in C++, and I want to do different operations on it like: searching for a sequence of characters, converting a block of text into an int (in the middle of the line), moving the get pointer back and forth, and so on. What is the standard/common way of doing these kinds of things with stringstreams?
You can use the stringstream::str() method, which returns the associated std::string object. Then you can do your desired operations on the returned string.
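For instance, a small sketch combining the operations from the question (the input text is made up): search via the extracted string, then reposition the get pointer and parse an int from the middle of the buffer:

#include <sstream>
#include <string>
#include <iostream>

int main() {
    std::stringstream ss("width 640 height 480");
    std::string text = ss.str();                    // copy of the whole buffer
    std::string::size_type pos = text.find("height");
    ss.seekg(pos + 7);                              // move get pointer past "height "
    int height;
    ss >> height;                                   // parse the int in place
    std::cout << height << std::endl;               // prints 480
}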
824,160
824,261
Which compilation option should be set for profiling?
I need to profile an application compiled with Intel's compiler via VC++. I'm using VTune to profile my code. My understanding is that in release mode I won't have the debug information that the profiler needs, while in debug mode the profiling results won't be pertinent. What should I do? Is it possible to add debug info in release mode? How can I set this mode? If so, will I still benefit from all the optimizations (inlining etc.)?
You should certainly profile with optimisations enabled (compiler option /O3). /Zi is the Intel compiler switch (on Windows) to enabled debugging information. Because of the optimisations, some functions may be missing from the debugging information due to inlining, but VTune will cope with that.
824,295
824,307
What does C++ struct syntax "a : b" mean
If I have a C++ struct defining a 64-bit data word such as... struct SMyDataWord { int Name : 40; int Colour : 24; }; What does the : 40 syntax mean... does it mean that the first 40 bits are reserved for the Name and the remaining 24 bits for the Colour? This is how it appears to be being used, but I've not come across it before.
Bitfields, carried over from C. Name is 40 bits wide and Colour is 24 bits wide, so your struct holds at least 64 bits; on my system 64 bits would be 8 bytes. Be aware that a bit-field wider than its declared type isn't portable (some compilers reject int Name : 40 because int is only 32 bits); for 40 bits you would normally declare the field with a 64-bit type such as long long.
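A runnable sketch of the idea, using a 64-bit underlying type so the 40-bit width is portable (the field values are arbitrary):

#include <iostream>

struct SMyDataWord {
    unsigned long long Name   : 40;   // 40 bits for Name
    unsigned long long Colour : 24;   // 24 bits for Colour
};

int main() {
    SMyDataWord w;
    w.Name = 0xABCDEF;     // assignments are truncated to the field width
    w.Colour = 0x123456;
    std::cout << sizeof(SMyDataWord) << " bytes" << std::endl;  // typically 8
}

Keep in mind that the exact layout (bit order, padding) is implementation-defined, so bitfields are risky for on-disk or on-wire formats.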
824,512
824,571
Why are empty expressions legal in C/C++?
int main() { int var = 0;; // Typo which compiles just fine }
This is the way C and C++ express a no-op: a statement may consist of just a semicolon (a null statement), so the extra ; after the initializer is simply a legal, empty statement.
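The null statement is occasionally written on purpose, e.g. when all the work happens in the loop header; a small sketch:

#include <iostream>

int main() {
    const char* src = "hello";
    char buf[6];
    char* dst = buf;
    while ((*dst++ = *src++))  // the copy happens in the condition, '\0' included
        ;                      // null statement: the body is intentionally empty
    std::cout << buf << std::endl;
}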
825,015
825,083
g++ rejects my simple functor with "expected a type, got 'xyz'"
I've been playing about with functors in C++. In particular, I've got a vector of pairs I'd like to sort by the first element of the pair. I started off writing a completely specialised functor (i.e. something like "bool MyLessThan(MyPair &lhs, MyPair &rhs)"). Then, just because this sort of stuff is interesting, I wanted to try writing a generic "apply F to the first elements of this pair" functor. I wrote the below, but g++ doesn't like it. I get: error: type/value mismatch at argument 2 in template parameter list for 'template<class P, class F> struct Pair1stFunc2' error: expected a type, got 'std::less' #include <algorithm> #include <functional> #include <utility> #include <vector> template <class P, class F> struct Pair1stFunc2 { typename F::result_type operator()(P &lhs, P &rhs) const { return F(lhs.first, rhs.first); } typename F::result_type operator()(const P &lhs, const P &rhs) const { return F(lhs.first, rhs.first); } }; typedef std::pair<int,int> MyPair; typedef std::vector<MyPair> MyPairList; MyPairList pairs; void foo(void) { std::sort(pairs.begin(), pairs.end(), Pair1stFunc2<MyPair, std::less>()); } Can anyone shed any light on what I'm doing wrong here? I know this is a slightly artificial example, but I'd like to know what's going on, if only to improve my STL-fu.
You need to specialize std::less with the comparison type you're using. Pair1stFunc2<MyPair, std::less<int> >() will do the trick. Within your own operator() you'll also need to instantiate an object of the comparison type, since you can't just call the class directly. E.g. change return F(lhs.first, rhs.first); to F func; return func(lhs.first, rhs.first); You could also move the specialization into the functor, as another answer suggests.
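Putting both fixes together, a sketch of the corrected functor and call site:

template <class P, class F>
struct Pair1stFunc2
{
    typename F::result_type operator()(const P &lhs, const P &rhs) const
    {
        F func;                             // instantiate the comparison object
        return func(lhs.first, rhs.first);  // then call it on the first elements
    }
};

// ...
std::sort(pairs.begin(), pairs.end(),
          Pair1stFunc2<MyPair, std::less<int> >());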
825,018
825,365
Pimpl idiom vs Pure virtual class interface
I was wondering what would make a programmer choose either the pimpl idiom or a pure virtual class and inheritance. I understand that the pimpl idiom comes with one explicit extra indirection for each public method plus the object creation overhead. The pure virtual class, on the other hand, comes with an implicit indirection (vtable) for the inheriting implementation and, as I understand it, no object creation overhead. EDIT: But you'd need a factory if you create the object from the outside. What makes the pure virtual class less desirable than the pimpl idiom?
When writing a C++ class, it's appropriate to think about whether it's going to be:

1. A Value Type. Copied by value; identity is never important. It's appropriate for it to be a key in a std::map. Examples: a "string" class, a "date" class, or a "complex number" class. To "copy" instances of such a class makes sense.

2. An Entity Type. Identity is important. Always passed by reference, never by "value". Often, it doesn't make sense to "copy" instances of the class at all; when it does make sense, a polymorphic "Clone" method is usually more appropriate. Examples: a Socket class, a Database class, a "policy" class, anything that would be a "closure" in a functional language.

Both pImpl and a pure abstract base class are techniques to reduce compile-time dependencies. However, I only ever use pImpl to implement value types (type 1), and only sometimes when I really want to minimize coupling and compile-time dependencies. Often, it's not worth the bother. As you rightly point out, there's more syntactic overhead because you have to write forwarding methods for all of the public methods. For type 2 classes, I always use a pure abstract base class with associated factory method(s).
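For reference, a minimal pimpl sketch in pre-C++11 style (names are illustrative) - the header exposes no implementation details, at the cost of one forwarding call per method:

// widget.h
class WidgetImpl;                 // defined only in widget.cpp
class Widget {
public:
    Widget();
    ~Widget();
    void draw();                  // forwards to the impl
private:
    WidgetImpl* pimpl_;
};

// widget.cpp
#include "widget.h"
class WidgetImpl {
public:
    void draw() { /* the real work */ }
};
Widget::Widget() : pimpl_(new WidgetImpl) {}
Widget::~Widget() { delete pimpl_; }
void Widget::draw() { pimpl_->draw(); }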
825,935
826,008
Derived Functor with any return type and any parameters
I have a class that uses functors as units of work. It accepts a reference to a functor in its Run() method. To allow this class to operate on any functor, all these functors must derive from my base functor class, which looks like this: class baseFunctor{ public: virtual void operator()()=0; virtual baseFunctor Clone()=0; }; This works, but it obviously restricts these functors to having an operator method that returns void and accepts no parameters. I need to be able to accept a functor in my class that can take any type of parameters and return anything. It's apparently doable, but I can't seem to find a way to do it. I have considered using templates and multiple inheritance, but I keep getting thwarted by the fact that the class that needs to run this functor must be able to accept any type, so it will accept the base class type, and so will not know the actual type of the functor. Any suggestions of what avenue to look at would be appreciated.
How will the class that calls the functor know what parameters to provide and what to do with the return value, if any?
826,354
830,365
Rebind a socket to a different interface
Is there an existing Linux/POSIX C/C++ library or example code for how to rebind a socket from one physical interface to another? For example, I have ping transmitting on a socket that is associated with a physical connection A and I want to rebind that socket to physical connection B and have the ping packets continue being sent and received on connection B (after a short delay during switch-over). I only need this for session-less protocols. Thank you Update: I am trying to provide failover solution for use with PPP and Ethernet devices. I have a basic script which can accomplish 90% of the functionality through use of iptables, NAT and routing table. The problem is when the failover occurs, the pings continue being sent on the secondary connection, however, their source IP is from the old connection. I've spoken with a couple of people who work on commercial routers and their suggestion is to rebind the socket to the secondary interface. Update 2: I apologise for not specifying this earlier. This solution will run on a router. I cannot change the ping program because it will run on the clients computer. I used ping as just an example, any connection that is not session-based should be capable of being switched over. I tested this feature on several commercial routers and it does work. Unfortunately, their software is proprietary, however, from various conversations and testing, I found that they are re-binding the sockets on failover.
As of your updated post, the problem is that changing the routing info is not going to change the source address of your ping, it will just force it out the second interface. This answer contains some relevant info. You'll need to change the ping program. You can use a socket-per-interface approach and somehow inform the program when to fail over. Or you will have to close the socket and then bind to the second interface. You can get the interface info required a couple of ways including calling ioctl() with the SIOCGIFCONF option and looping through the returned structures to get the interface address info.
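A hedged sketch of the SIOCGIFCONF enumeration mentioned above (error handling abbreviated; the interface count is capped arbitrarily at 16):

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void) {
    struct ifreq ifr[16];
    struct ifconf ifc;
    int i, n;
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    ifc.ifc_len = sizeof(ifr);
    ifc.ifc_req = ifr;
    if (ioctl(s, SIOCGIFCONF, &ifc) == 0) {
        n = ifc.ifc_len / sizeof(struct ifreq);
        for (i = 0; i < n; i++) {
            struct sockaddr_in *sin = (struct sockaddr_in *)&ifr[i].ifr_addr;
            printf("%s -> %s\n", ifr[i].ifr_name, inet_ntoa(sin->sin_addr));
        }
    }
    close(s);
    return 0;
}

Once you know the secondary interface's address, you can close the old socket, bind() a new one to that address, and resume sending.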
826,569
826,635
Compelling examples of custom C++ allocators?
What are some really good reasons to ditch std::allocator in favor of a custom solution? Have you run across any situations where it was absolutely necessary for correctness, performance, scalability, etc? Any really clever examples? Custom allocators have always been a feature of the Standard Library that I haven't had much need for. I was just wondering if anyone here on SO could provide some compelling examples to justify their existence.
As I mention here, I've seen Intel TBB's custom STL allocator significantly improve performance of a multithreaded app simply by changing a single std::vector<T> to std::vector<T,tbb::scalable_allocator<T> > (this is a quick and convenient way of switching the allocator to use TBB's nifty thread-private heaps; see page 7 in this document)
826,698
826,773
How to make NUnit assertion failures show line numbers for C++?
When I run NUnit tests against my C++ code and an assertion fails, I don't get line numbers for where the failure occurs. Sample Method: [Test] void testMethod() { Assert::Fail("test comment"); } Sample output: [nunit2] Failures: [nunit2] 1) namespace.SomeTest.testMethod: test comment [nunit2] at namespace.SomeTest.testMethod() Similar output (also without line numbers) is generated for any assertion failure. When looking at my output, how do I get line number information for which line caused the failure?
Double-check that you are building your classes with debug information (PDB). The Assert framework basically throws an exception when the assert fails, and the exception captures a StackTrace. The stack trace gets its line numbers from the PDB file associated with the executable.
826,742
826,771
What data type does memory see when I use void?
When I create a method of type int, the compiler reserves X number of bits in memory. So how does the compiler see a void type? How many bits/bytes does a void type take up?
The void type does not take any bits, and you cannot declare a variable of type void. This: void a; causes a compilation error. void is just a placeholder that means "nothing": a function that returns void returns nothing, and a function that takes void as an argument takes no arguments. You can, however, declare a variable of type void*: void* a; This simply declares a pointer that can point to anything whatsoever. Like any pointer, it takes the size of a pointer type, i.e. sizeof(void*), which typically equals 4 on 32-bit systems.
826,870
826,874
Why does my program consume 100% CPU under nVidia NView?
I was recently working on a windows program that would sometimes become unresponsive when scrolling through a large list of items in a production environment. Of course it works fine on my desktop. The production Environment is: Windows XP based Workstation with 2 monitors nVidia Video Drivers with nView enabled Of note is a Dr watson stack trace generated when the process is terminated: State Dump for Thread Id 0xef4 eax=00e3fff8 ebx=000000a0 ecx=00e00000 edx=00000000 esi=0003fff8 edi=00e40000 eip=00b920c2 esp=0012bcac ebp=00000000 iopl=0 nv up ei ng nz na pe cy cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000283 \system32\nview.dll - function: nview!NVLoadDatabase 00b920a8 c80b0600 enter 0x60b,0x0 00b920ac 83c30f add ebx,0xf 00b920af 33f6 xor esi,esi 00b920b1 03f9 add edi,ecx 00b920b3 83e3f8 and ebx,0xfffffff8 00b920b6 3bcf cmp ecx,edi 00b920b8 89742414 mov [esp+0x14],esi 00b920bc 734c jnb nview!NVLoadDatabase+0xcaf (00b9210a) 00b920be 8bc1 mov eax,ecx 00b920c0 8b10 mov edx,[eax] 00b920c2 8b4004 mov eax,[eax+0x4] ds:0023:00e3fffc=00000000 00b920c5 89442414 mov [esp+0x14],eax 00b920c9 8bc2 mov eax,edx 00b920cb 2500000001 and eax,0x1000000 00b920d0 33ed xor ebp,ebp 00b920d2 0bc5 or eax,ebp 00b920d4 7414 jz nview!NVLoadDatabase+0xc8f (00b920ea) 00b920d6 8bc2 mov eax,edx 00b920d8 c1e008 shl eax,0x8 00b920db 8be8 mov ebp,eax 00b920dd c1f81f sar eax,0x1f ChildEBP RetAddr Args to Child 00000000 00000000 00000000 00000000 00000000 nview!NVLoadDatabase+0xc67 Why did this problem only occur in production?
This is interesting because nView is a 3rd party DLL provided by NVidia. Postings on the internet about nview!NVLoadDatabase suggest that there is an unpatched defect in nview. This is supported by the fact that explorer uses 100% CPU, as confirmed by these reports. See: http://forums.nvidia.com/lofiversion/index.php?t36879.html A detailed investigation of this problem is available on this site: http://blogs.technet.com/marcelofartura/archive/2007/02/28/real-case-random-apps-running-100-cpu.aspx As per this article, the hang is due to an infinite loop in nview.dll. Although the assembly instructions and register values described online do not exactly match those in our log, they were close enough for me to conclude that it is the same issue. To work around the problem, I disabled nView Desktop Manager (Right click on the desktop, select nView Properties, and click disable in the nView Desktop Manager groupbox). Before doing this I was able to consistently reproduce the hang. However, after disabling nView I could not reproduce the hang. Thus, this appears to be a viable workaround. Anyway, I posted this up here in case it will be useful to anyone. It caused me a LOT of grief chasing this one down.
826,935
826,949
How do I store arrays in an STL list?
Using C++ and the STL, does anybody know how to store integer arrays as nodes in an STL list or vector? I have an unknown number of pairs of numbers that I need to store, and coming from other languages my first thought was to use some sort of list- or vector-like data structure... but I'm running into some trouble. I am 100% sure that I'm making an obvious beginner's C++ mistake, and that somebody who actually knows the language will take one look at what I'm trying to do and be able to set me straight. So, here's what I've tried. Declaring a list like so works: stl::list<int[2]> my_list; And then I can easily make a two-element array, like so: int foo[2] = {1,2}; This compiles and runs just fine. However, as soon as I try to add foo to my list, like so: my_list.push_back(foo); I get a whole gnarly set of compiler errors, none of which I really understand (my C++-fu is almost non-existent): /usr/include/c++/4.0.0/ext/new_allocator.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, const _Tp&) [with _Tp = int [2]]’: /usr/include/c++/4.0.0/bits/stl_list.h:440: instantiated from ‘std::_List_node<_Tp>* std::list<_Tp, _Alloc>::_M_create_node(const _Tp&) [with _Tp = int [2], _Alloc = std::allocator<int [2]>]’ /usr/include/c++/4.0.0/bits/stl_list.h:1151: instantiated from ‘void std::list<_Tp, _Alloc>::_M_insert(std::_List_iterator<_Tp>, const _Tp&) [with _Tp = int [2], _Alloc = std::allocator<int [2]>]’ /usr/include/c++/4.0.0/bits/stl_list.h:773: instantiated from ‘void std::list<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = int [2], _Alloc = std::allocator<int [2]>]’ test.cpp:5: instantiated from here /usr/include/c++/4.0.0/ext/new_allocator.h:104: error: ISO C++ forbids initialization in array new So, anybody have ideas as to what I'm doing wrong here? Any pointers (no pun intended) would be most helpful. Is it just not possible to store arrays in a std::list? Should I be using a struct? Am I just missing a * or & somewhere?
You can't store arrays in STL containers. You'd use a vector of vectors or somesuch for the general case. For your specific case, I'd use a vector of std::pair, like so: std::vector<std::pair<int, int> >. std::pair is a class that has two members, first and second, of whatever type you templatize it to be. Edit: I originally had it as std::vector<std::pair<int> >, but I wasn't sure if it was overloaded to accept only 1 parameter in the case that both types are the same... a little digging turned up no evidence of this, so I modified it to explicitly state that both first and second are ints.
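A quick usage sketch of the suggested container (the values are arbitrary):

#include <vector>
#include <utility>
#include <iostream>

int main() {
    std::vector<std::pair<int, int> > pairs;   // note the space in "> >" pre-C++11
    pairs.push_back(std::make_pair(1, 2));
    pairs.push_back(std::make_pair(3, 4));
    std::cout << pairs[1].first << ", " << pairs[1].second << std::endl;  // 3, 4
}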
827,010
827,158
SQLite - pre allocating database size
Is there a way to preallocate my SQLite database to a certain size? Currently I'm adding and deleting a number of records and would like to avoid this overhead at create time.
There is a hack - Insert a bunch of data into the database till the database size is what you want and then delete the data. This works because: "When an object (table, index, or trigger) is dropped from the database, it leaves behind empty space. This empty space will be reused the next time new information is added to the database. But in the meantime, the database file might be larger than strictly necessary." Naturally, this isn't the most reliable method. (Also, you will need to make sure that auto_vacuum is disabled for this to work). You can learn more here - http://www.sqlite.org/lang_vacuum.html
827,016
827,050
off-by-one error with string functions (C/C++) and security potentials
So this code has the off-by-one error: void foo (const char * str) { char buffer[64]; strncpy(buffer, str, sizeof(buffer)); buffer[sizeof(buffer)] = '\0'; printf("whoa: %s", buffer); } What can malicious attackers do if she figured out how the function foo() works? Basically, to what kind of security potential problems is this code vulnerable? I personally thought that the attacker can't really do anything in this case, but I heard that they can do a lot of things even if they are limited to work with 1 byte.
The only off-by-one error I see here is this line: buffer[sizeof(buffer)] = '\0'; Is that what you're talking about? I'm not an expert on these things, so maybe I've overlooking something, but since the only thing that will ever get written to that wrong byte is a zero, I think the possibilities are quite limited. The attacker can't control what's being written there. Most likely it would just cause a crash, but it could also cause tons of other odd behavior, all of it specific to your application. I don't see any code injection vulnerability here unless this error causes your app to expose another such vulnerability that would be used as the vector for the actual attack. Again, take with a grain of salt...
827,196
827,203
Virtual Default Destructors in C++
I've got a large set of inherited classes (criteria) which inherit from a base class (criterion). Here's criterion's code class criterion { public: virtual unsigned __int32 getPriorityClass() const = 0; virtual BOOL include(fileData &file) const = 0; virtual void reorderTree() = 0; virtual unsigned int directoryCheck(const std::wstring& directory) const = 0; virtual std::wstring debugTree() const = 0; }; Some examples of derived classes from this one: class fastFilter : public criterion { public: void reorderTree() {}; unsigned int directoryCheck(const std::wstring& /*directory*/) const { return DIRECTORY_DONTCARE; }; unsigned __int32 getPriorityClass() const { return PRIORITY_FAST_FILTER; }; }; class isArchive : public fastFilter { public: BOOL include(fileData &file) const { return file.getArchive(); } std::wstring debugTree() const { return std::wstring(L"+ ISARCHIVE\n"); }; }; Since I don't have a destructor here at all, but yet this is supposed to be a base class, do I need to insert an empty virtual destructor, I.e. like this?: virtual void ~criterion() = 0; If that virtual destructor declaration is needed, do all intermediate classes need one as well? I.e. would fastFilter above need a virtual destructor as well?
Yes - the base class needs a virtual destructor, even if it's empty. If it doesn't have one, deleting a derived object through a base pointer/reference is undefined behavior - in practice the derived class's destructor never runs, so its member objects never get a chance to destroy themselves properly. Derived classes do not need to declare or define their own destructor unless they need something other than the default destructor behavior.
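A minimal sketch of the change - note that destructors have no return type, so the question's virtual void ~criterion() = 0; would not compile, and a pure virtual destructor would still need an out-of-line definition; an empty non-pure one is simpler:

class criterion {
public:
    virtual ~criterion() {}   // empty, but virtual: safe deletion via base pointer
    virtual unsigned __int32 getPriorityClass() const = 0;
    // ... remaining pure virtuals as before ...
};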
827,393
827,422
Default value for bool in C++
I'm redesigning a class constructor in C++ and need it to catch an unspecified bool. I have used default values for all of the other parameters, but from my understanding bool can only be initialized to true or false. Since both of those cases have meaning in the class, how should I handle checking for change from a default value?
The reality is that you can't do this. A bool has a value, either true or false, and if you haven't initialized it then its value is indeterminate - possibly different on each run of the program or each allocation of that class. If you need to have a type with more options, define an enum. typedef enum MyBool { TRUE, FALSE, FILENOTFOUND } MyBool;
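A sketch of how the constructor might consume such a tri-state (names are illustrative, and avoid TRUE/FALSE, which collide with the Windows macros of the same names):

enum Tristate { TS_UNSET, TS_TRUE, TS_FALSE };

class Widget {
public:
    explicit Widget(Tristate flag = TS_UNSET) {
        if (flag == TS_UNSET) {
            // caller didn't specify: apply the class-specific default behavior
        }
    }
};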
827,552
827,572
Why explicitly call a constructor in C++
I know we can explicitly call the constructor of a class in C++ using scope resolution operator, i.e. className::className(). I was wondering where exactly would I need to make such a call.
Most often, in a child class constructor that requires some parameters: class BaseClass { public: BaseClass( const std::string& name ) : m_name( name ) { } const std::string& getName() const { return m_name; } private: const std::string m_name; //... }; class DerivedClass : public BaseClass { public: DerivedClass( const std::string& name ) : BaseClass( name ) { } // ... }; class TestClass { public: TestClass( int testValue ); //... }; class UniqueTestClass : public BaseClass , public TestClass { public: UniqueTestClass() : BaseClass( "UniqueTest" ) , TestClass( 42 ) { } // ... }; ... for example. Other than that, I don't see the utility. I only called the constructor in other code when I was too young to know what I was really doing...
827,634
828,258
Given a filename, how can I get the Adler32 using Crypto++
Given a "string filename", how can I get the Adler32 checksum using the C++ Crypto++ library. I am a little confused about using their Source and Sink system. Below I have the skeleton of the code that does MD5, but I can't seem to find any examples or tutorials on the Adler32 usage. string filename = "/tmp/data.txt" string file_adler32_digest; string file_md5_digest; MD5 hashMD5; FileSource fs( filename.c_str(), true, new HashFilter( hashMD5, new HexEncoder( new StringSink( file_md5_digest ) ) ) ); /* Confusion begins here */ //how do I do the adler32 ? /* Confusion ends here */ cout << file_adler32_digest << endl << file_md5_digest << endl; Good samples and sample code here http://www.cryptopp.com/wiki/Category:Sample for all the Crypto++ (except for the Adler32 stuff I want)
If you follow this http://www.cryptopp.com/wiki/HashFilter, you just have to replace hashMD5 with hashAdler32, and file_md5_digest with file_adler32_digest: Adler32 hashAdler32; FileSource( filename.c_str(), true, new HashFilter( hashAdler32, new HexEncoder( new StringSink( file_adler32_digest ) ) ) ); After this, file_adler32_digest should contain the desired hash.
827,686
827,741
Unmanaged lib in managed executable causing managed exceptions
I'm having a problem with mixing managed and unmanaged code. I have created two projects under a single solution in Visual Studio 2008 under Vista x64 SP1. One of them does not have CLR support and is a static library. My second project is compiled as an executable with CLR enabled. It depends on the first static library, and passes WinForms events to it. When I start the application without debugging, I get an exception, I put the info that I get from the exception here: http://pastebin.com/f46ad1211. Here is the code of the unmanaged lib that is run: void manager::init() // <-- Called from the .exe project { this->log.open("C:\\development\\log.txt"); this->storage = storage_manager(&(this->log), &(this->settings)); this->storage.load_settings(); } & void storage_manager::load_settings() { this->error_check(sqlite3_open("settings.db", &(this->db_settings_p))); sqlite3_stmt* read_settings; this->error_check(sqlite3_prepare_v2(this->db_settings_p, "SELECT name, value FROM settings", 1024, &read_settings, NULL)); int step_code; std::string name; std::string value; while(true) { step_code = sqlite3_step(read_settings); if(step_code == SQLITE_DONE) { break; } else if(step_code == SQLITE_ROW) { name = std::string(reinterpret_cast<const char*>(sqlite3_column_text(read_settings, 0))); value = std::string(reinterpret_cast<const char*>(sqlite3_column_text(read_settings, 1))); (*(this->settings))[name] = value; } else { this->error(); } } sqlite3_reset(read_settings); sqlite3_finalize(read_settings); } & void storage_manager::error_check(int rc) { if(rc) { this->error(); } } void storage_manager::error() //Sure of error { std::string error_msg; error_msg = "Storage Manager: SQLite Error ("; error_msg += sqlite3_errcode(this->db_p); error_msg += ") - "; error_msg += sqlite3_errmsg(this->db_p); this->log->write(error_msg.c_str(), error_msg.length()); this->log->flush(); } I can't see why I'm getting a managed (System.BlahBlahBlah) exception in an unmanaged library. Is there any way to get the two to be totally separate?
The underlying exception is actually a Windows exception that the CLR is turning into a CLR exception for you. You have an access violation. What you should be able to do is, in Visual Studio, head to Debug > Exceptions and break on access violations. This should let you drop in and see where in the native code it's gone all horribly wrong and start diagnosing the issue.
827,706
827,725
Calculating e^x without using any functions
We are supposed to calculate e^x using this kind of formula: e^x = 1 + (x ^ 1 / 1!) + (x ^ 2 / 2!) ...... I have this code so far: while (result >= 1.0E-20 ) { power = power * input; factorial = factorial * counter; result = power / factorial; eValue += result; counter++; iterations++; } My problem now is that since factorial is of type long long, I can't really store a number greater than 20! so what happens is that the program outputs funny numbers when it reaches that point .. The correct solution can have an X value of at most 709 so e^709 should output: 8.21840746155e+307 The program is written in C++.
Both x^n and n! quickly grow large with n (exponentially and superexponentially respectively) and will soon overflow any data type you use. On the other hand, x^n/n! goes down (eventually) and you can stop when it's small. That is, use the fact that x^(n+1)/(n+1)! = (x^n/n!) * (x/(n+1)). Like this, say: term = 1.0; for(n=1; term >= 1.0E-10; n++) { eValue += term; term = term * x / n; } (Code typed directly into this box, but I expect it should work.) Edit: Note that the term x^n/n! is, for large x, increasing for a while and then decreasing. For x=709, it goes up to ~1e+306 before decreasing to 0, which is just at the limits of what double can handle (double's range is ~1e308 and term*x pushes it over), but long double works fine. Of course, your final result ex is larger than any of the terms, so assuming you're using a data type big enough to accommodate the result, you'll be fine. (For x=709, you can get away with using just double if you use term = term / n * x, but it doesn't work for 710.)
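A runnable version of that loop under the stated assumptions (positive x only; long double for the intermediate term, per the edit note):

#include <cstdio>

double my_exp(double x) {
    long double term = 1.0L, sum = 0.0L;
    for (int n = 1; term >= 1.0E-20L; n++) {
        sum += term;
        term = term / n * x;   // divide first to delay overflow of the term
    }
    return (double)sum;
}

int main() {
    std::printf("%.11e\n", my_exp(709.0));   // ~8.21840746155e+307
}

For negative x the terms alternate in sign, so the loop condition would need fabsl(term) instead - or simply compute 1.0 / my_exp(-x).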
828,000
830,301
Using SubclassDlgItem to change control types
I have a C++ MFC app with a dialog where I want to change the type of the control dynamically based on the selection in a combo box. The dialog resource starts off with a plain old edit control which I then call SubclassDlgItem on to change to a custom control type. So far so good. Now, when the user changes the selection in a different Combobox on the screen, I want to change this control to a different custom type. So, I destroy the existing control by calling delete on the pointer to the custom class for that control. I then call ::CreateEx to re-create my edit control and call SubclassDlgItem again to create the new custom control. My problem is that this flickers quite a bit, and I think I'm getting the edit control created with ::CreateEx on top of my custom control. Any ideas on how to get rid of the flicker, especially if the user is quickly changing the contents of the controlling combo box?
A colleague of mine suggested calling CWnd::LockWindowUpdate() before I do the switch. So, it boils down to something like this: CRect r; DWORD dwStyle = WS_CHILD|WS_TABSTOP|WS_VISIBLE; m_pParent->GetDlgItem(m_nID)->GetWindowRect(&r); m_pParent->ScreenToClient(r); m_pParent->LockWindowUpdate(); m_pParent->InvalidateRect(r); delete m_pCust; // Delete the old custom control m_pCust = NULL; ::CreateWindowEx(0, "EDIT", "", dwStyle, r.left, r.top, r.Width(), r.Height(), m_pParent->m_hWnd, (HMENU)m_nID, AfxGetInstanceHandle(), NULL); m_pCust = new CustomCtrl(); m_pCust->SubclassDlgItem(m_nID, m_pParent); m_pParent->UnlockWindowUpdate(); There is a little more involved because of what my custom control does. I ended up calling m_pParent->InvalidateRect(r) to get my control to draw correctly at the end. Also, it turns out that the overlap of the ::CreateEx edit control was because I was calling UnsubclassDlgItem before deleting the old custom control.
828,067
828,202
Pattern to specialize templates based on inheritance possible?
So, one problem pattern that I keep coming across and don't have a good solution for is how to provide template specializations that are based in what type a template parameter is derived from. For example, suppose I have: template<typename T> struct implementPersist; template<typename T> void persist( T& object ) { implementPersist::doPersist( object ); } What I'd like is for users of persist to be able to provide implementations of implementPersist::persist for types that are declared after the above. That's straightforward in principle, but cumbersome in practice but the user need to provide an implementPersist for every type. To be more clear, suppose I have: struct Persistent { virtual void myPersist() = 0; }; struct MyClass : public persistent { virtual void MyPersist() { ...implementation...} }; // Persists subclasses of Persistent using myPersist template<> struct implementPersist<Persistent>{ void doPersist(Persistent& p) { p->myPersist(); } }; struct X{}; template<> struct implementPersist<X>{ void doPersist(X& p) { ...implementation...} }; // Persists subclasses of Persistent using boostPersist struct MyBoostPersistedObject { virtual void boostPersist() = 0 }; struct Z : public MyBoostPersistedObject { virtual void boostPersist() = 0 }; template<> struct implementPersist<myBoostPersistedObject>{ void boostPersist() { ...implementation... } }; My intention is that I provide one template implementation for all subclasses of Persist, another for all subclasses of myBoostPersistedObject and other for miscellaneous classes not in interesting classes structures (e.g. various POD types). In practice however, implementPersist<Persistent>::doPersist is only ever invoked if ::persist(T&) is called where T is exactly a Persistent object. It falls back to the (missing) generic case where T=myClass. In general, I want to be able to specialize templates in generic ways based on inheritance. Its little frustrating because clearly compilers know how to do this, and do it when deciding to call functions based upon parameters, e.g. void persist( Persistent&); void persist( X& ); void persist( myBoostPersistedObject& ); But as far as I can tell, no such similar matching can be done for templates. One workaround is to do something like: class persist; template<typename T, bool hasMyPersistMethod=isDerivedFrom(T,persist)::value > struct implementPersist; template<typename T, bool true > struct implementPersist<T,true> { template<> struct implementPersist<X>{ void doPersist(T& p) { p->myPersist(); } } }; (see here for isDerivedFrom). However, this requires that the initial declaration of implementPersist knows about the types of the classes providing implementations. I'd like something more generic. I'm frequently finding uses for such a pattern, in order to avoid adding explicit specializations for every class in my system. Any ideas?
Yes, you can do this using enable_if. #include <iostream> #include <boost/type_traits.hpp> using namespace std; template <bool Enable, typename T = void> struct enable_if { typedef T type; }; template <typename T> struct enable_if<false, T> { }; template <typename T, typename Enable = void> struct persist_t {}; struct A { virtual void foo() const = 0; }; template <typename T> struct persist_t<T, typename enable_if<boost::is_base_of<A, T>::value>::type> { static void persist(T const& x) { x.foo(); } }; struct B : A { virtual void foo() const { cout << "B::foo\n"; } }; template <typename T> void persist(T & x) { persist_t<T>::persist(x); } int main() { B b; persist(b); } Boost has a nicer implementation of enable_if, I just provided it here for completeness. Boost also has an example of using it that is very similar to my above example. Hope that helps.
828,092
828,238
Python-style integer division & modulus in C
In Python and Ruby, signed integer division truncates towards negative infinity, and signed integer modulus has the same sign the second operand: >>> (-41) / 3 -14 >>> (-41) % 3 1 However, in C and Java, signed integer division truncates towards 0, and signed integer modulus has the same sign as the first operand: printf("%d\n", (-41) / 3); /* prints "-13" */ printf("%d\n", (-41) % 3); /* prints "-2" */ What is the simplest and most efficient way in C to perform the same kind of division and modulus as in Python and Ruby?
The direction of rounding for signed integer division is not specified in older C standards; in C99 it is specified to round towards zero. Here's portable code which works with all versions of the C standards and CPU architectures, by only ever dividing and taking remainders of non-negative operands: int py_div(int a, int b) { if (a < 0) if (b < 0) return -a / -b; else return -(-a / b) - (-a % b != 0 ? 1 : 0); else if (b < 0) return -(a / -b) - (a % -b != 0 ? 1 : 0); else return a / b; } int py_mod(int a, int b) { if (a < 0) if (b < 0) return -(-a % -b); else return (-a % b == 0) ? 0 : b - (-a % b); else if (b < 0) return (a % -b == 0) ? 0 : (a % -b) + b; else return a % b; } I did some superficial tests and it appears to give the same results as Python: the result of py_mod always takes the sign of the second operand, and a == b * py_div(a, b) + py_mod(a, b) holds. This code may not be maximally efficient, but a good C compiler can probably optimize it adequately, especially if you put the code in a header as static functions. You may also want to take a look at this closely related question: Integer division rounding with negatives in C++.
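A quick driver for sanity-checking the two functions above against the Python results from the question (assumes py_div and py_mod are in scope):

#include <stdio.h>

int main(void)
{
  printf("%d\n", py_div(-41, 3));  /* -14, like Python's (-41) / 3 */
  printf("%d\n", py_mod(-41, 3));  /*   1, like Python's (-41) % 3 */
  return 0;
}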
828,106
828,125
Why is ClassName ClassName variable definition in C++ compiling and working properly?
Say I have a class definition: class CustomClass { int member; }; Why is the following variable definition compiling and working correctly: CustomClass CustomClass; // the variable is properly constructed Should't this confuse the compiler and cause it to indicate an error?
After the declaration, the variable name hides the class name, but the compiler can still figure out from context that the first CustomClass is a type and the second is a variable name. If you need to refer to the type again while it's hidden, use an elaborated type specifier: class CustomClass. (In C, struct tags genuinely live in a separate namespace from ordinary identifiers, which is where this pattern comes from.)
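A sketch making the hiding visible:

class CustomClass { int member; };

int main() {
    CustomClass CustomClass;      // ok: the variable now hides the class name
    // CustomClass another;       // error: CustomClass names the variable here
    class CustomClass another;    // ok: elaborated type specifier finds the class
    (void)another;
}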
828,442
828,498
Is there a way to compile-time assert if a variable is a class, struct or a basic type in c++?
I am trying to implement a template class that can tell me whether a type is a class, structure, or a basic type. So far I've come up with this: template< typename T > class is_class { private: template< typename X > static char ( &i_class( void(X::*)() ) )[1]; // template< typename X > static char ( &i_class( X ) )[2]; public: static bool const val = sizeof( i_class< T >(0) ) == 1; }; and usage: is_class< int >::val; // false is_class< some_class_type >::val; // true The problem is that now I need to write a blank void function in every class that could be used with this code. Does anyone know a solution to this problem?
is_class is a member of the Boost.Type_Traits library. The implementation is probably similar to what you already have. You would use it in conjunction with enable_if to create the function only when appropriate: template <class T> typename enable_if_c<boost::is_class<T>::value>::type foo() { } Or the equivalent: template <class T> typename enable_if<boost::is_class<T>>::type foo() { } The function foo is only generated if the type T is of class type. The return value of the function, if it is generated, is the second parameter (omitted here) of the enable_if template (the default is void).
828,776
828,900
comparison function of an associate container data member in class
I have a class like this: class MyClass{ public: MyClass(int Mode); private: std::map < int, std::string, CompFunc > data; }; The data member must use a different comparison function according to the Mode argument: if Mode == 1, use CompFunc1; if Mode == 2, use CompFunc2; etc. But the CompFunc template argument is fixed there - how can I choose it dynamically? Thanks.
struct Cmp { explicit Cmp(int mode) : mode_(mode) {} bool operator()(int lhs, int rhs) const { switch (mode_) { case 1: return CompFunc1(lhs, rhs); case 2: return CompFunc2(lhs, rhs); // etc. default: return lhs < rhs; } } private: int mode_; }; class MyClass { public: explicit MyClass(int mode) : cmp(mode), data(cmp) {} private: Cmp cmp; std::map<int, std::string, Cmp> data; };
828,819
830,746
Mystery: In Qt, why would editorEvent be called, but not createEditor?
I'm subclassing QAbstractItemDelegate. This is my code. Suggestions are welcome: QWidget *ParmDelegate::createWidget(Parm *p, const QModelIndex &index) const { QWidget *w; if (index.column() == 0) { w = new QLabel(p->getName().c_str()); } else { if (p->isSection()) return NULL; w = p->createControl(); } return w; } QWidget *ParmDelegate::createEditor(QWidget *parent, const QStyleOptionViewItem &option, const QModelIndex &index) const { cout << "createEditor called" << endl; Parm *p = reinterpret_cast<Parm*>(index.internalPointer()); QWidget *retval = createWidget(p, index); retval->setFocusPolicy(Qt::StrongFocus); retval->setParent(parent); return retval; } void ParmDelegate::updateEditorGeometry(QWidget *editor, const QStyleOptionViewItem &option, const QModelIndex &index) const { QRect rect(option.rect); editor->setGeometry(QRect(QPoint(0,0), rect.size())); } void ParmDelegate::paint(QPainter *painter, const QStyleOptionViewItem &option, const QModelIndex &index) const { Parm *p = reinterpret_cast<Parm*>(index.internalPointer()); scoped_ptr<QWidget> w(createWidget(p, index)); if (!w) return; QRect rect(option.rect); w->setGeometry(QRect(QPoint(0,0), rect.size())); w->render(painter, rect.topLeft()); } QSize ParmDelegate::sizeHint(const QStyleOptionViewItem &option, const QModelIndex &index) const { Parm *p = reinterpret_cast<Parm*>(index.internalPointer()); scoped_ptr<QWidget> w(createWidget(p, index)); if (!w) return QSize(0,0); return w->sizeHint(); } bool ParmDelegate::editorEvent(QEvent * event, QAbstractItemModel * model, const QStyleOptionViewItem & option, const QModelIndex & index ) { cout << "editorEvent called" << endl; return false; } When this is run, I only see that editorEvent gets called twice for every edit event -- no createEditor!
From Qt's AbstractItemDelegate documentation: To provide custom editing, there are two approaches that can be used. The first approach is to create an editor widget and display it directly on top of the item. To do this you must reimplement createEditor() to provide an editor widget, setEditorData() to populate the editor with the data from the model, and setModelData() so that the delegate can update the model with data from the editor. The second approach is to handle user events directly by reimplementing editorEvent(). This appears to say that you are missing something to trigger the first approach. My guess is that your model's data() function isn't returning the proper value for the Qt::EditRole option.
828,880
828,915
usage of virtual keyword with a class declaration
I was asked in an interview that what is the usage of virtual keyword with a class declaration in C++ and I answered that virtual keyword cannot be used with a class declaration in C++. The interviewer said that it is possible and asked me to test it later. Now that I have checked it myself I have come to know that this is possible and this is not a compiler error. In fact, when I do something like this with a Visual C++ compiler: virtual class Test { int i; }; I get a compiler warning "warning C4091: 'virtual ' : ignored on left of 'Test' when no variable is declared". I haven't been able to find out yet that what this warning means and further what is the usage of virtual keyword. If there is no helpful usage, then why is this allowed in the first place and why is this not a compiler error.
That's a bug in VC++. Comeau and gcc both reject the code.
828,989
829,004
Dump facility in C++ like var_dump() in PHP?
When I was in college I did some C/C++, but since then I've mostly been working in PHP, and now I wish to put more time into learning C/C++. In PHP I used print_r() or var_dump() to display data from structures or arrays. Is there similar default functionality in C, so that I can see what I have in a struct or array?
There is no such functionality in C++. You can of course write your own Dump() functions. The reason such a feature cannot be generally provided is that the C++ compilation process removes the object metadata needed to structure the dump output. You can of course display structure contents in a debugger, where such metadata is maintained in the debug information. BTW, are you asking about C or C++? The two languages are quite different, both in features and approach, although neither has var_dump() or similar.
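A sketch of the hand-rolled alternative: in C++ the idiomatic substitute is an operator<< overload per type you care about (the types and values here are made up):

#include <iostream>
#include <vector>

struct Point { int x, y; };

std::ostream& operator<<(std::ostream& os, const Point& p) {
    return os << "Point { x: " << p.x << ", y: " << p.y << " }";
}

int main() {
    std::vector<Point> v;
    Point a = { 1, 2 };
    v.push_back(a);
    for (std::size_t i = 0; i < v.size(); ++i)
        std::cout << v[i] << std::endl;   // Point { x: 1, y: 2 }
}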
829,311
1,091,972
SQLGetDiagRec causes crash in Unicode release build
I'm having a problem with the call to SQLGetDiagRec. It works fine in ASCII mode, but in Unicode it causes our app to crash, and I just can't see why. All the documentation I've been able to find seems to indicate that it should handle the ASCII/Unicode switch internally. The code I'm using is:

void clImportODBCFileTask::get_sqlErrorInfo( const SQLSMALLINT _htype, const SQLHANDLE _hndle )
{
    SQLTCHAR SqlState[6];
    SQLTCHAR Msg[SQL_MAX_MESSAGE_LENGTH];
    SQLINTEGER NativeError;
    SQLSMALLINT i, MsgLen;
    SQLRETURN nRet;

    memset ( SqlState, 0, sizeof(SqlState) );
    memset ( Msg, 0, sizeof(Msg) );

    // Get the status records.
    i = 1;

    //JC - 2009/01/16 - Start fix for bug #26878
    m_oszerrorInfo.Empty();

    nRet = SQLGetDiagRec(_htype, _hndle, i, SqlState, &NativeError, Msg, sizeof(Msg), &MsgLen);

    m_oszerrorInfo = Msg;
}

Everything is alright until this function tries to return; then the app crashes. It never gets back to the line of code after the call to get_sqlErrorInfo. I know that's where the problem is because I've put diagnostic code in, and it gets past the SQLGetDiagRec okay and finishes this function. If I comment out the SQLGetDiagRec line it works fine. It always works fine on my development box whether it's running release or debug. Any help on this problem would be greatly appreciated. Thanks
Well, I found the correct answer, so I thought I would include it here for future reference. The documentation I saw was wrong: SQLGetDiagRec doesn't handle the Unicode case here; I needed to use SQLGetDiagRecW.
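In case it helps the next person, here is a sketch of the wide-character call (my reading of the ODBC docs is that the buffer length is passed in characters, not bytes, so verify that detail yourself):

SQLWCHAR SqlState[6];
SQLWCHAR Msg[SQL_MAX_MESSAGE_LENGTH];
SQLINTEGER NativeError;
SQLSMALLINT MsgLen;

// Explicit wide-character version; lengths are in characters, not bytes.
SQLRETURN nRet = SQLGetDiagRecW(_htype, _hndle, 1, SqlState, &NativeError,
                                Msg, sizeof(Msg) / sizeof(SQLWCHAR), &MsgLen);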
829,468
2,717,660
how to perform boost::filesystem copy_file with overwrite
The Windows API function CopyFile has an argument BOOL bFailIfExists that allows you to control whether or not you want to overwrite the target file if it exists. The boost::filesystem copy_file function has no such argument, and will fail if the target file exists. Is there an elegant way to use the boost copy_file function and overwrite the target file? Or is it better to simply use the Windows API? My current target platform is Windows, but I prefer to use STL and boost where possible to keep my code platform independent. Thank you.
There's a third enum argument to copy_file: boost::filesystem::copy_option::overwrite_if_exists.

copy_file(source_path, destination_path, copy_option::overwrite_if_exists);

https://www.boost.org/doc/libs/1_75_0/libs/filesystem/doc/reference.html
829,622
834,020
How to insert a CLOB using OleDb
Could someone post some sample code showing how to insert text greater than 4000 characters in length into an Oracle 10g CLOB field? I am using the Oracle OLEDB provider and ATL in C++. My naive attempt returns the error 'ORA-01704: string literal too long' whenever the text I am attempting to insert goes over 4000 characters in length. I'm guessing I need to parameterise the 4000+ character string somehow but I'm not sure how to do that...
I eventually got this working. In case anyone else has the same problem, I inserted the value EMPTY_CLOB() then used the ISequentialStream interface to stream the text into the empty field. The Microsoft mydyntext sample shows how to do this.
829,623
831,857
Determining the type of an expression
Sometimes I need to learn the type of an expression while programming in C or C++. Sometimes there's a good IDE or existing documentation to help me, but sometimes not. I often feel a construct like this could be useful:

void (*myFunc)(int);
printf("%s", nameoftype(myFunc)); //"void (*)(int)"

int i; unsigned int u;
printf("%s", nameoftype(i+u)); //"unsigned int"

This is especially true for C++; think of accessors of const objects - do they return a const reference or a copy? Think of dynamic casts and templated classes. How can I do this? (i.e. learn the type of an expression) I use GCC but as far as I know, it does not have such an extension. So I'm curious as to how people solve this problem. (Both compile-time and runtime solutions welcome.)
What are you looking for? Automatic type inference, or looking for the type so you can declare a variable correctly manually? (Your own answers look like you want the second one.) In this case, consider using Geordi:

<litb> make type pointer to function taking pointer to array of 10 int returning void
<geordi> void (*)(int (*)[10])
<litb> geordi: { int a = -1; unsigned int b = 0; cout << ETYPE(a + b), ETYPE_DESC(a + b), (a + b); }
<geordi> rvalue unsigned int, rvalue unsigned integer, 4294967295
<litb> geordi: << TYPE_DESC(void (*)(int (*)[10]))
<geordi> pointer to a function taking a pointer to an array of 10 integers and returning nothing

Automatic type inference is not currently possible without helper libraries like Boost.Typeof, which will use compiler extensions like __typeof__ for GCC. Next C++ will get auto (with different semantics than current auto) and will be able to do that, together with decltype to get the type of an expression. If you can live with getting out of local context, you can always create a function template like this:

template<typename T>
void f(T t)
{
    /* ... */
}

int main()
{
    int a = -1;
    unsigned int b = 0;
    f(a + b);
}
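For completeness, a quick sketch of the GCC extension mentioned above (non-standard, so only a stopgap):

int i = -1;
unsigned int u = 0;
__typeof__(i + u) v = i + u;   // GCC extension: v is deduced as unsigned int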
829,625
829,751
Winsock 2 portability
I'm about to develop some sockets related stuff in C++ and would like the software to be as portable between Windows and Linux as possible right from the start (making it portable later is tricky.) I've looked at different libraries, there is one for C++ from alhem.net and of course there is boost::asio. boost::asio looks very promising but would be a very big dependency for applications this small. Is it even worth writing the stuff myself or should I just use a library? If I do it myself what would be the main pitfalls?
Winsock isn't fully compatible with POSIX sockets:

- In Winsock a socket is of type SOCKET. On POSIX it's simply a file descriptor (int), on which you can perform normal read() and write() calls.
- They don't return errors the same way (WSAGetLastError() rather than errno).
- They don't support some options on recv() and send().
- You have to initialize and uninitialize the Winsock library with two special functions (WSAStartup() and WSACleanup()).
- You can't close a Windows socket with close(); you have to use closesocket() instead. shutdown() exists on both, but it only disables sends/receives and does not release the descriptor.

There must be more differences, but that's what I can remember right now. If you want portability with Winsock, you'll end up with a small library for closing a socket, printing an error message and so on. I'd probably go with boost::asio, personally (I've never used it, though).
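The close() difference, for example, is typically papered over with a tiny shim like this (a sketch; most portability layers contain something equivalent):

#ifdef _WIN32
    #define CLOSE_SOCKET(s) closesocket(s)
#else
    #define CLOSE_SOCKET(s) close(s)
#endif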
829,656
829,726
C++ for Game Programming - Love or Distrust?
In the name of efficiency in game programming, some programmers do not trust several C++ features. One of my friends claims to understand how the game industry works, and came up with the following remarks:

Do not use smart pointers. Nobody in games does.
Exceptions should not be (and usually are not) used in game programming, for memory and speed reasons.

How true are these statements? C++ features have been designed with efficiency in mind. Is that efficiency not sufficient for game programming? For 97% of game programming? The C way of thinking still seems to have a good grasp on the game development community. Is this true? I watched another video of a talk on multi-core programming from GDC 2009. The talk was almost exclusively oriented towards Cell programming, where a DMA transfer is needed before processing (simple pointer access won't work with the SPEs of the Cell). He discouraged the use of polymorphism, as the pointer has to be "re-based" for the DMA transfer. How sad. It is like going back to square one. I don't know if there is an elegant solution for C++ polymorphism on the Cell. The topic of DMA transfer is esoteric and I do not have much background here. I agree that C++ has also not been very nice to programmers who want a small language to hack with, and not read stacks of books. Templates have also scared the hell out of debugging. Do you agree that C++ is too much feared by the gaming community?
Look, most everything you hear anyone say about efficiency in programming is magical thinking and superstition. Smart pointers do have a performance cost; especially if you're doing a lot of fancy pointer manipulations in an inner loop, it could make a difference. Maybe. But when people say things like that, it's usually the result of someone who told them long ago that X was true, without anything but intuition behind it. Now, the Cell/polymorphism issue sounds plausible — and I bet it did to the first guy who said it. But I haven't verified it. You'll hear the very same things said about C++ for operating systems: that it is too slow, that it does things you want to do well, badly. None the less we built OS/400 (from v3r6 forward) entirely in C++, bare-metal on up, and got a code base that was fast, efficient, and small. It took some work; especially working from bare metal, there are some bootstrapping issues, use of placement new, that kind of thing. C++ can be a problem just because it's too damn big: I'm rereading Stroustrup's wristbreaker right now, and it's pretty intimidating. But I don't think there's anything inherent that says you can't use C++ in an effective way in game programming.
829,700
829,923
C++ new[] into base class pointer crash on array access
When I allocate a single object, this code works fine. When I try to add array syntax, it segfaults. Why is this? My goal here is to hide from the outside world the fact that class c is using b objects internally. I have posted the program to codepad for you to play with.

#include <iostream>
using namespace std;

// file 1
class a {
public:
    virtual void m() { }
    virtual ~a() { }
};

// file 2
class b : public a {
    int x;
public:
    void m() { cout << "b!\n"; }
};

// file 3
class c : public a {
    a *s;
public:
    // PROBLEMATIC SECTION
    c() { s = new b[10]; }                              // s = new b;
    void m() { for(int i = 0; i < 10; i++) s[i].m(); }  // s->m();
    ~c() { delete[] s; }                                // delete s;
    // END PROBLEMATIC SECTION
};

// file 4
int main(void)
{
    c o;
    o.m();
    return 0;
}
One problem is that the expression s[i] uses pointer arithmetic to compute the address of the desired object. Since s is defined as pointer to a, the result is correct for an array of as and incorrect for an array of bs. The dynamic binding provided by inheritance only works for methods, nothing else (e.g., no virtual data members, no virtual sizeof). Thus when calling the method s[i].m() the this pointer gets set to what would be the ith a object in the array. But since in actuality the array is one of bs, it ends up (sometimes) pointing to somewhere in the middle of an object and you get a segfault (probably when the program tries to access the object's vtable). You might be able to rectify the problem by virtualizing and overloading operator[](). (I didn't think it through to see if it will actually work, though.) Another problem is the delete in the destructor, for similar reasons. You might be able to virtualize and overload it too. (Again, just a random idea that popped into my head. Might not work.) Of course, casting (as suggested by others) will work too.
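To make that concrete, here is one hedged sketch of a fix: keep the pointer typed as the concrete class inside c, so both the pointer arithmetic and the delete[] see the real element type (the outside world still only sees class a through c's interface):

class c : public a {
    b *s;   // concrete type internally; callers never see it
public:
    c() { s = new b[10]; }
    void m() { for (int i = 0; i < 10; i++) s[i].m(); }
    ~c() { delete[] s; }   // delete[] now runs through the correct element type
};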
829,792
830,203
Expose Borland C++ methods to C#
I have the following method in my Borland C++ code:

static bool UploadBitstream(void)
{
    //Some code Implementation
}

I'm trying to turn this into a DLL and access it from C#. What are the steps I need to follow to convert the code to a DLL and then use it in C#?
First, you have to make sure that the methods are declared extern "C". Then you need to declare the method with the stdcall or pascal calling convention, and mark it dllexport. See the code listing below (this is ancient memory for me, so pardon if I am a bit off on modern Borland C++ compilers).

// code.h
extern "C" {

#define FUNCTION __declspec(dllexport)

FUNCTION int __stdcall SomeFunction(int Value);

} // extern "C"

In the main module:

#include "code.h"

FUNCTION int __stdcall SomeFunction(int timesThree)
{
    return timesThree * 3;
}
829,837
829,875
How much can one do with (higher order) macros?
Is it "safe" to give macros names as arguments to other macros to simulate higher order functions? I.e. where should I look to not shoot myself in the foot? Here are some snippets: #define foreach_even(ii, instr) for(int ii = 0; ii < 100; ii += 2) { instr; } #define foreach_odd(ii, instr) for(int ii = 1; ii < 100; ii += 2) { instr; } #define sum(foreach_loop, accu) \ foreach_loop(ii, {accu += ii}); int acc = 0; sum(foreach_even, acc); sum(foreach_odd, acc); What about partial application, can I do that? : #define foreach(ii, start, end, step, instr) \ for(int ii = start; ii < end; ii += step) { instr; } #define foreach_even(ii, instr) foreach(ii, 0, 100, instr) #define foreach_odd(ii, instr) foreach(ii, 1, 100, instr) #define sum(foreach_loop, accu) \ foreach_loop(ii, {accu += ii}); int acc = 0; sum(foreach_even, acc); sum(foreach_odd, acc); And can I define a macro inside a macro? #define apply_first(new_macro, macro, arg) #define new_macro(x) macro(arg,x)
If you're into using preprocessor as much as possible, you may want to try boost.preprocessor. But be aware that it is not safe to do so. Commas, for instance, cause a great number of problems when using preprocessors. Don't forget that preprocessors do not understand (or even try to understand) any of the code they are generating. My basic advice is "don't do it", or "do it as cautiously as possible".
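A two-line illustration of the comma problem (SHOW and do_something are placeholders):

#define SHOW(x) do_something(x)

SHOW(std::map<int, int>())     // error: macro "SHOW" passed 2 arguments, but takes just 1
SHOW((std::map<int, int>()))   // workaround: extra parentheses hide the comma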
829,919
829,937
Delayed constructor in C++
I have been reviewing some code that looks like this:

class A; // defined somewhere else, has both a default constructor and A(int _int) defined

class B
{
public:
    B(); // empty
    A a;
};

int main()
{
    B* b;
    b = new B();
    b->a(myInt); // here, calling the A(int _int) constructor,
                 // but the default constructor should already have been called
}

Does this work? Calling a specific constructor after the default has already been called?
That code does not call a's constructor. It calls A::operator()(int). But if you explicitly call a constructor on an object that has already been constructed, you're well into undefined behavior-land. It may seem to work in practice, but there is no guarantee that it'll do what you expect.
830,067
830,747
When should BOOL and bool be used in C++?
When should BOOL and bool be used in C++, and why? I think using bool is cleaner and more portable because it's a built-in type. But BOOL is unavoidable when you interact with legacy code/C code, or when doing interop from .NET with C code/the Windows API. So my policy is:

Use bool inside C++.
Use BOOL when talking to the outer world, e.g., exported functions in a Windows DLL.

Is there a definitive explanation of when to use one over the other?
Matthew Wilson discusses BOOL, bool, and similar in section 13.4.2 of Imperfect C++. Mixing the two can be problematic, since they generally have different sizes (and so pointers and references aren't interchangeable), and since bool isn't guaranteed to have any particular size. Trying to use typedefs or conditional compilation to smooth over the differences between BOOL and bool, or trying to allow for a single Boolean type to work in both C and C++, is even worse:

#if defined(__cplusplus) || \
    defined(bool) /* for C compilation with C99 bool (macro) */
 typedef bool   bool_t;
#else
 typedef BOOL   bool_t;
#endif /* __cplusplus */

This approach means that a function's return type can differ depending on which language calls it; Wilson explains that he's seen more than one bug in his own code and others' that results from this. He concludes:

The solution to this imperfection is, as it so often is, abstinence. I never use bool for anything that can possibly be accessed across multiple link units—dynamic/static libraries, supplied object files—which basically means not in functions or classes that appear outside of header files.

The practical answer, such as it is, is to use a pseudo-Boolean type, which is the size of int. In short, he would agree with your approach.
830,119
830,134
Garbage Collection in C++/CLI
Consider the below:

#include <iostream>

public ref class TestClass
{
public:
    TestClass()
    {
        std::cerr << "TestClass()\n";
    }

    ~TestClass()
    {
        std::cerr << "~TestClass()\n";
    }
};

public ref class TestContainer
{
public:
    TestContainer() : m_handle(gcnew TestClass)
    {
    }

private:
    TestClass^ m_handle;
};

void createContainer()
{
    TestContainer^ tc = gcnew TestContainer();
    // object leaves scope and should be marked for GC(?)
}

int main()
{
    createContainer();

    // Manually collect.
    System::GC::Collect();
    System::GC::WaitForPendingFinalizers();

    // ... do other stuff

    return 0;
}

My output is simply:

TestClass()

I never get ~TestClass(). This is a simplification of an issue I am having in production code, where a list of handles is being cleared and repopulated multiple times, and the handle destructors are never being called. What am I doing wrong? Sincerely, Ryan
~TestClass() declares a Dispose function. !TestClass() would declare a finaliser (the equivalent of C#'s ~TestClass) which gets called on a gc collection (although that's not guaranteed).
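A sketch of what that looks like in the question's class (the message strings are just for illustration):

public ref class TestClass
{
public:
    TestClass()  { std::cerr << "TestClass()\n"; }
    ~TestClass() { std::cerr << "~TestClass()\n"; }   // Dispose: runs on `delete`
    !TestClass() { std::cerr << "!TestClass()\n"; }   // finalizer: runs when the GC collects
};

With the finalizer added, the GC::Collect()/WaitForPendingFinalizers() pair should produce output; alternatively, delete the TestClass^ handle (or use stack semantics) to get the destructor called deterministically.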
830,463
830,484
What's the difference between global variables and variables in main?
MyClass GlobalVar;

int main()
{
    MyClass VarInMain;
}
A couple of things:

- Typically, they're allocated in different places. Local variables are allocated on the stack; global variables are allocated elsewhere (a data segment, typically).
- Local variables in main are only visible within main. A global variable, on the other hand, may be accessed anywhere.
- Their lifetimes differ too: GlobalVar is constructed before main starts and destroyed after it returns, while VarInMain lives only until the end of main's body.
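A small demonstration of the lifetime difference (the printed names are just labels):

#include <iostream>

struct MyClass {
    MyClass(const char* n) : name(n) { std::cout << name << " constructed\n"; }
    ~MyClass()                       { std::cout << name << " destroyed\n"; }
    const char* name;
};

MyClass GlobalVar("global");      // constructed before main() is entered

int main()
{
    MyClass VarInMain("local");   // constructed when control reaches this line
}                                 // "local" destroyed here; "global" after main() returns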
830,573
834,975
Is it possible to narrow the friend relationship between classes and functions when multiple template types are involved?
Suppose I represent an image class as:

template <typename Pixel>
class Image {
    ...
};

I would need my own swap function to prevent extra copying of images, so I would need to make it a friend of Image. If inside Image I write:

template <typename T>
friend void swap(Image<T>&, Image<T>&);

I get what I want, but it makes all swap functions friends of all Image classes. So I can narrow the friend relationship as follows:

template <typename Pixel> class Image;

template <typename T>
void swap(Image<T>&, Image<T>&);

template <typename Pixel>
class Image {
    ...
    friend void swap<>(Image&, Image&);
};

as described in C++ FAQ-lite 35.16. Now suppose I also have a convolution function that can take floating-point or integral kernels:

template <typename Pixel, typename KernelValue>
Image<Pixel> convolve(const Image<Pixel>&, const Kernel<KernelValue>&);

Convolution needs access to Image's raw memory, so it too must be a friend. However, I would like to partially narrow the friendship so that convolve is a friend for all KernelValues but only the particular Pixel type, something like:

template <typename KernelValue>
friend Image<Pixel> convolve(const Image<Pixel>&, const Kernel<KernelValue>&);

inside the Image definition. The compiler does not like this (or other variants) at all, mainly because it doesn't match the original function declaration, so the private pointer can't be accessed. Is it possible to get what I want here, or should I settle for the "more friendly" version?
The only way I see this happening is defining the function inside the Image class, like so:

#include <iostream>
using std::cout;

template <typename Pixel>
struct image
{
    template <typename KernelValue>
    friend image<Pixel> convolve(image<Pixel> const&, image<KernelValue> const&)
    {
        cout << "foo\n";
        return image<Pixel>();
    }
};

int main()
{
    image<int> i;
    image<float> i2;
    convolve(i, i2);
}
830,639
830,722
A question about windows iocp
When I write a program using an I/O completion port on Windows Vista, my first sample didn't work: GetQueuedCompletionStatus() could not get any OVERLAPPED structures. So I put the OVERLAPPED structure at global scope, and amazingly it works. Why is that?

CODE1:

int main()
{
    OVERLAPPED o;
    ..
    CreateIoCompletionPort(....);
    for (int i = 0; i < 10; i++)
    {
        WriteFile(..., &o);
        OVERLAPPED* po;
        GetQueuedCompletionStatus(..., &po);
    }
}

CODE2:

OVERLAPPED o;

int main()
{
    ..
    CreateIoCompletionPort(....);
    for (int i = 0; i < 10; i++)
    {
        WriteFile(..., &o);
        OVERLAPPED* po;
        GetQueuedCompletionStatus(..., &po);
    }
}
Okay! This is from the OVERLAPPED structure's MSDN page's Remarks section:

Any unused members of this structure should always be initialized to zero before the structure is used in a function call. Otherwise, the function may fail and return ERROR_INVALID_PARAMETER.

Globals are zero-initialized, whereas locals are not. If you plan to use the former code, you need to zero out the memory:

int main()
{
    OVERLAPPED o = {0};
    // ...
830,674
830,783
C++ Database Connectivity?
Hey, I want to know how to connect to databases from C++. Is there any cross-platform solution which supports many databases? I know about SQLAPI++, but it's shareware... so is there a free one? What options do I have if I limit the OS to Windows only? Thanks
SOCI - The C++ Database Access Library
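For a rough idea of what SOCI code looks like (header paths and backend factory names vary between SOCI versions, so treat the includes and connect string here as assumptions to adapt):

#include <soci/soci.h>
#include <soci/sqlite3/soci-sqlite3.h>

int main()
{
    soci::session sql(soci::sqlite3, "test.db");   // backend factory + connect string
    int count = 0;
    sql << "select count(*) from users", soci::into(count);
}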
830,708
830,917
Is this program running asynchronously or synchronously?
When I run this program

OVERLAPPED o;

int main()
{
    ..
    CreateIoCompletionPort(....);
    for (int i = 0; i < 10; i++)
    {
        WriteFile(..., &o);
        OVERLAPPED* po;
        GetQueuedCompletionStatus(..., &po);
    }
}

it seems that WriteFile doesn't return until the writing job is done. At the same time, GetQueuedCompletionStatus() gets called. The behavior is like a synchronous IO operation rather than an async IO operation. Why is that?
If the file handle and volume have write caching enabled, the file operation may complete with just a memory copy to cache, to be flushed lazily later. Since there is no actual IO taking place, there's no reason to do async IO in that case.

Internally, each IO operation is represented by an IRP (IO request packet). It is created by the kernel and given to the filesystem to handle the request, where it passes down through layered drivers until the request becomes an actual disk controller command. That driver will make the request, mark the IRP as pending and return control of the thread. If the handle was opened for overlapped IO, the kernel gives control back to your program immediately. Otherwise, the kernel will wait for the IRP to complete before returning.

Not all IO operations make it all the way to the disk, however. The filesystem may determine that the write should be cached, and not written until later. There is even a special path for operations that can be satisfied entirely using the cache, called fast IO. Even if you make an asynchronous request, fast IO is always synchronous because it's just copying data into and out of cache. Process Monitor, in advanced output mode, displays the different modes and will show blank in the status field while an IRP is pending.

There is a limit to how much data is allowed to be outstanding in the write cache. Once it fills up, the write operations will not complete immediately. Try writing a lot of data at once, with many operations.
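Two things to check in the posted code, as a sketch (the buffer names are placeholders): the handle has to be opened with FILE_FLAG_OVERLAPPED at all for WriteFile to be able to return early, and a truly asynchronous write announces itself by failing with ERROR_IO_PENDING:

HANDLE h = CreateFile(L"big.bin", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                      FILE_FLAG_OVERLAPPED, NULL);

BOOL ok = WriteFile(h, buffer, bytesToWrite, NULL, &o);
if (!ok && GetLastError() == ERROR_IO_PENDING)
{
    // genuinely asynchronous: the completion will be posted to the IOCP
}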
830,825
830,894
How to capture standard error output from a Windows service?
I have an application that makes use of the Mozilla LDAP library. We're diagnosing a problem involving the LDAP library failing to make a connection to the server. I'm attempting to get additional information from the LDAP library by tossing a debug version of the lib in with the application and enabling debug using ldap_set_opt. Unfortunately, I think the debug library is sending debug strings to standard error. While I'm working on recompiling the LDAP client library again, hopefully enabling the option that makes it call OutputDebugString instead of streaming to stderr, a nice solution would be to capture the stderr output to a file. The application, though, is running as a Windows service. Does anyone know how I could redirect stderr to a file for an application running as a service?

EDIT: I'm hoping not to have to modify any more of the service source code than I already have. Options in the service configuration would be ideal.
Can you try manually redirecting stderr?

FILE* stderr_redirect = freopen( "C:/stderr.log", "w", stderr );

// Code that writes to stderr

fclose( stderr_redirect );

Edit: There is no way to redirect stdout or stderr for a service other than to handle those streams inside your service yourself. Some services provide an option to redirect these messages to a file. You could either temporarily redirect these streams or add an option to your service to make it configurable next time you have a problem.
830,842
830,862
Is there a way to get Visual Studio to unload dlls?
I have a Visual Studio 2008 project with some legacy native C++ DLL projects, and some newer WPF projects that use the DLLs. When I open the WPF xaml windows in the designer, Visual Studio loads up the native DLLs to be able to display the window. The problem is, is that if I now need to make a change in the legacy DLLs, I need to close all the WPF windows and restart visual studio to be able to build them. Obviously, I need to close the tabs that are using the DLLs, but after I do that, is there a way to tell Visual Studio to unload those DLLs without a full restart?
I've had similar problems. A little bit better than restarting is removing references to the dlls, then adding the references back in.
830,980
1,182,583
VC2008 compiler errors opening sbr files (C2418 C1903 C2471)
EDIT: See my answer below for the hotfix.

ORIGINAL QUESTION: In setting up for our boat-programming adventure I have to set up source control and fix project files for a team to use them. (The project was previously only being worked on by one person who took shortcuts with setting up the project includes, etc.) I am fixing those SLN and Proj files. When trying to do a build on an external USB drive (I have not tried it on the primary hard drive) I am getting odd errors (lots of them, for various files):

fatal error C1083: Cannot open compiler generated file: '.\Debug\.sbr': Permission denied

These files are referenced in the vcproj file with relative paths in double quotes:

RelativePath="..\..\Source\.cpp"

I get the same errors from within a sln file in the IDE or if I call msbuild with the sln file. The files are kind of "shared" between a few sln files (projects). The person who originally created the SLN files is not known for being a wizard at configuring MSDev or making things work for teams. Is this an issue with the way the source files are referenced? Any suggestions on how to fix these? This URL does not seem to have helpful information: Fatal Error C1083 on MSDN

Note - there were/are still hardcoded paths in the proj file, but I don't see them for these files. They were mostly for the include and lib dirs. I think I removed them all. I also get these errors:

..\..\Source\.cpp : error C2471: cannot update program database '\debug\vc90.pdb'
..\..\Source\.cpp(336) : fatal error C1903: unable to recover from previous error(s); stopping compilation
..\..\Source\.cpp(336) : error C2418: cannot delete browser file: .\Debug\.sbr
Title: You may receive a "PRJ0008" or "C2471" or "C1083" or "D8022" or "LNK1103" or similar error message when you try to build a solution in Visual C++

Symptoms:

D8022 : Cannot open 'RSP00000215921192.rsp'
PRJ0008 : Could not delete file 'vc90.idb'.
C1083 : Cannot open program database file 'vc90.pdb'
C2471 : Cannot update program database 'vc90.pdb'
LNK1103 : debugging information corrupt.

Cause: This problem occurs when all of the following conditions are true:

1. You have a solution with more than one project in it.
2. Two or more of the projects are not dependent on each other.
3. You have parallel builds enabled. (Tools -> Options: Projects and Solutions, Build and Run: "maximum number of parallel project builds" is set to a value greater than 1.)
4. You are building on a system with multiple CPUs (cores).
5. Two or more of the non-dependent projects are configured to use the same Intermediate and/or Output directory.
6. A specific race condition in mspdbsrv.exe remains uncorrected.

Resolution: To resolve the problem do one or more of the following:

1. Reconfigure the non-dependent projects to specify an Intermediate and Output directory that is different from one another, e.g. Output Directory = "$(SolutionDir)$(ProjectName)\$(ConfigurationName)", Intermediate Directory = "$(OutDir)".
2. Adjust your solution's project dependencies (Project -> Project Dependencies...) so that each is dependent on another.
3. Disable parallel builds.
4. Add the "/onecpu" boot option to your boot.ini file.
5. Change your BIOS settings to enable/use only one CPU.
6. File a problem report with Microsoft Technical Support and keep bugging the crap out of them until they eventually fix mspdbsrv.

Status: The problem is a combination of both a user project configuration error as well as a race condition in Microsoft's "mspdbsrv.exe" utility that does not properly handle more than one thread calling it at the same time for the same file, resulting in the file's HANDLE being left open. Additionally, Visual Studio itself and/or its build system (VCBUILD and/or MSBUILD) (or all three!) should be made smart enough to detect and alert the user of such user errors so that corrective action can be taken. This problem has been around for a LOOOOOONG time.

Applies to:

Microsoft Visual C++ 2005
Microsoft Visual C++ 2008
Others?

Respectfully submitted: "Fish" (David B. Trout) fish@infidels.org

p.s: You're welcome. :)
831,111
831,128
Parsing a string in C++
I'm trying to make a constructor for a graph class that accepts a string as a parameter and uses it to build the graph. The string is formatted as follows:

|vertex list|Edges list|

e.g.

|1,2,3,4,15|(1->2),(3->2),(4->15)|

The idea is that the constructor will take the values from the string and then know to perform the following actions (inserting the vertexes into the vertex list and then inserting the edges into the edges list):

addVertex(1)
addVertex(2)
addVertex(3)
addVertex(4)
addVertex(15)
addEdge(1,2)
addEdge(3,2)
addEdge(4,15)

I would have just made a couple of "for" loops to scan the string, but I don't know what to do about double (or more) digit numbers. I'm starting to imagine all sorts of seriously complicated for loops and I'm wondering if anyone here could share with me any more intelligent ways to extract and use this data.
You can use a stringstream and use the stream extraction operator to get your integers.

#include <sstream>
#include <string>
using namespace std;

string s("12 34");
istringstream ss(s);
int x, y;
ss >> x >> y;

Since this is homework, I urge you to explore the possibilities and figure out the complete code for yourself.
831,265
835,781
I need a message pump that doesn't mess up my open window
My application (the bootstrap application for an installer that I'm working on) needs to launch some other applications (my installer and third-party installers for my installer's prerequisites) and wait for them to complete. In order to allow the GUI to do screen updates while waiting for an app to complete, I put a message pump in the wait loop, using the 'MFC-compatible' example in the Visual Studio documentation on idle loop processing as a guideline. My code (which is in a member function of a CWinApp-derived class) is as follows:

if (::CreateProcess(lpAppName, szCmdLineBuffer, NULL, NULL, TRUE, 0, NULL, NULL, &StartupInfo, &ProcessInfo))
{
    ::GetExitCodeProcess(ProcessInfo.hProcess, &dwExitCode);
    if (bWait)
        while (dwExitCode == STILL_ACTIVE)
        {
            // In order to allow updates of the GUI to happen while we're waiting for
            // the application to finish, we must run a mini message pump here to
            // allow messages to go through and get processed. This message pump
            // performs much like MFC's main message pump found in CWinThread::Run().
            MSG msg;
            while (::PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE))
            {
                if (!PumpMessage())
                {
                    // a termination message (e.g. WM_DESTROY)
                    // was processed, so we need to stop waiting
                    dwExitCode = ERROR_CANT_WAIT;
                    ::PostQuitMessage(0);
                    break;
                }
            }

            // let MFC do its idle processing
            LONG nIdle = 0;
            while (OnIdle(nIdle++))
                ;

            if (dwExitCode == STILL_ACTIVE) // was a termination message processed?
            {
                // no; wait for .1 second to see if the application is finished
                ::WaitForSingleObject(ProcessInfo.hProcess, 100);
                ::GetExitCodeProcess(ProcessInfo.hProcess, &dwExitCode);
            }
        }

    ::CloseHandle(ProcessInfo.hProcess);
    ::CloseHandle(ProcessInfo.hThread);
}
else
    dwExitCode = ::GetLastError();

The problem that I'm having is that, at some point, this message pump seems to free up window and menu handles on the window that I have open at the time this code is run. I did a walkthrough in the debugger, and at no time did it ever get into the body of the if (!PumpMessage()) statement, so I don't know what's going on here to cause the window and menu handles to go south. If I don't have the message pump, everything works fine, except that the GUI can't update itself while the wait loop is running. Does anyone have any ideas as to how to make this work? Alternatively, I'd like to launch a worker thread to launch the second app if bWait is TRUE, but I've never done anything with threads before, so I'll need some advice on how to do it without introducing synchronization issues, etc. (Code examples would be greatly appreciated in either case.)
I've also posted this question on the Microsoft forums, and thanks to the help of one Doug Harris at Microsoft, I found out my problem with my HWND and HMENU values was, indeed, due to stale CWnd* and CMenu* pointers (obtained using GetMenu() and GetDialogItem() calls). Getting the pointers again after launching the second app solved that problem. Also, he pointed me to a web site* that showed a better way of doing my loop, using MsgWaitForMultipleObjects() to control it, which doesn't involve the busy work of waiting a set amount of time and polling the process for an exit code. My loop now looks like this:

if (bWait)
{
    // In order to allow updates of the GUI to happen while we're
    // waiting for the application to finish, we must run a message
    // pump here to allow messages to go through and get processed.
    LONG nIdleCount = 0;
    for (;;)
    {
        MSG msg;

        if (::PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE))
            PumpMessage();
        else //if (!OnIdle(nIdleCount++))
        {
            nIdleCount = 0;
            if (!PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE))
            {
                DWORD nRes = ::MsgWaitForMultipleObjects(1, &ProcessInfo.hProcess,
                                                         FALSE, INFINITE, QS_ALLEVENTS);
                if (nRes == WAIT_OBJECT_0)
                    break;
            }
        }
    }
}

::GetExitCodeProcess(ProcessInfo.hProcess, &dwExitCode);

*That web site, if you're curious, is: http://members.cox.net/doug_web/threads.htm
831,394
831,506
Build Process for a Visual C++ 2008 Express Project
I have a VS 2008 C++ project, with one very small and simple code file. I need to write an app to generate this file and build the project into a Win32 DLL. I will need to deliver a free compiler etc. with the app to my client, so I can't automate VS to do this. How would I best go about this?
VS 2008 Express installs the command line compiler (in fact the installer has an option so you only get the command line stuff), so getting a free compiler is no problem. If you really only need to build a single file into a DLL, a cl command using the '/LD' option should be enough to do the trick (though you'll likely need at least a couple of other options). If you want to get fancy, I'm not sure if the Express SKU includes nmake.exe (but I imagine it does).
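As a sketch, the minimal invocation would look something like this, run from a Visual Studio command prompt so cl.exe is on the PATH (the extra flags are reasonable defaults, not requirements):

cl /nologo /EHsc /LD generated.cpp

This produces generated.dll (plus generated.lib for implicit linking).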
832,031
832,081
Cached or precomputed Immutable Functions in C# / C++
By "immutable function" or "immutable method", I mean a function whose result will never vary if you give it the same arguments. I would be interested to know if anyone know of a more generic or less verbose solution when you want to cache the precomputed value(s) of an immutable function. Let me explain what I mean with a simple example: //Let's assume that ComputeStuff() returns a widely used value //and that //1. It is immutable (it will always return the same result) //2. its performance is critical, and it cannot be accepted to compute // the result at each call, because the computation is too slow //I show here a way to solve the problem, based on a cached result. //(this example works in a case of a method with no arguments. // A hash would be required in order to store multiple precomputed results //depending upon the arguments) private string mComputeStuff_Cached = null; public string ComputeStuff() { if (mComputeStuff_Cached != null) return mComputeStuff_Cached ; string result; // // ... //Do lots of cpu intensive computation in order to compute "result" //or whatever you want to compute //(for example the hash of a long file) //... // mComputeStuff_Cached = result; return mComputeStuff_Cached ; } Notes: - I added the tag C++ as a solution in C++ would also interest me - The concept of "immutable functions" is common for database developers, since a function can be defined as "immutable", or "immutable within a transaction" (this is a good way to improve the performance of the queries). Thanks in advance
"Memoization" may be a useful term, here. There are a few memoization libraries out there (I could swear there was one in boost, but I can't find it at the moment). Doing a web search for "memoize" or "memoization" and your language of choice will reveal a few hits. Here's a neat article in Wikibooks: Optimizing C++/General optimization techniques/Memoization
832,295
832,307
How to retrieve foreign host's MAC address in C++
Currently we're parsing arp request output from the command line.

string cmd = "arp -n ";
cmd.append(ipaddress);
cmd.append(" | grep ");
cmd.append(ipaddress);
...
fgets( line, 130, fp);
fgets( line, 130, fp);
ret.append(line);
...

It works, but is there a way to do this using a library function that won't depend so much on the native command line interface? The project is using libpcap currently.
In general, this will depend on your OS. There's no real standard API for this. Assuming you're on linux, open and parse /proc/net/arp. The format is similar to that of the output from the arp command. Note that you must send a packet to the IP in question at least once recently in order to have it in the ARP table, and of course you won't get anything outside of your local segment. Getting an IP into this cache is easy enough - just send a UDP packet to it on some unused port, for example - then poll until it shows up. Another alternative would be to use a raw socket to craft and send your own arp packet, but that's much more complex :) If you do want to go down this route, study the source code for arping (page is down today, but the git repo is online), which might also give you a bit more portability by using platform-independent-ish libraries like libpcap (but requires root)
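Parsing /proc/net/arp is only a few lines; here is a rough sketch (it assumes the entry already exists, i.e. the host was contacted recently):

#include <fstream>
#include <sstream>
#include <string>

std::string lookup_mac(const std::string& ip)
{
    std::ifstream arp("/proc/net/arp");
    std::string line;
    std::getline(arp, line);                  // skip the header row
    while (std::getline(arp, line)) {
        std::istringstream fields(line);
        std::string addr, hwtype, flags, mac;
        fields >> addr >> hwtype >> flags >> mac;
        if (addr == ip)
            return mac;                       // e.g. "00:1a:2b:3c:4d:5e"
    }
    return "";                                // not in the ARP cache
}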
832,306
1,007,705
Playing YouTube videos in a Windows Mobile application
I am working on an application for Windows Mobile 6 (or maybe 5) that plays YouTube videos. Well, it should play YouTube videos (and control/query the player about status changes, current frame/time, etc.) After scouring the web for quite some time now (and a few trials), I still couldn't find a way to do this. The options I know of are: Use the YouTube player, embedded in HTML, controllable via JavaScript. However, I couldn't watch YT videos from IE Mobile, to begin with -- I get an error message saying something along the lines of "you need a browser with Flash Player 8 and JavaScript enabled". Host a Media Player control, but WMP refuses to play YT videos, including the Mobile format. Use DirectShow. I'm still looking into this one (I've never worked with COM, let alone DirectShow, before), but I am yet to find a solution that supports YouTube's format(s) I would rather write this application in C#, but C++ works, too. Help me, O Wise Sages of StackOverflow!
You can also grab YouTube videos as MP4; hopefully that expands your player options. You can look into DirectShow CF for playback functionality, or host some other player in your app that supports MP4 or FLV. Trying to play it back through IE Mobile won't work, as the version of the Flash plug-in necessary for video playback support isn't available (last time I checked).

To get the MP4 file, make a request to this URL:

"http://www.youtube.com/get_video?video_id=" + videoID + "&t=" + token + "&fmt=18"

To get the FLV, use this:

"http://www.youtube.com/get_video?video_id=" + videoID + "&t=" + token

To get the token, call this:

"http://www.youtube.com/api2_rest?method=youtube.videos.get_video_token&video_id=" + videoID

I wrote an app that would grab a playlist of YouTube videos and sync them up with my PocketPC; I used TCPMP with the Flash add-on to play back the video (externally from my app). Although MP4 also worked on the PPC, I stuck to FLVs because at the time some videos on YouTube were not available as MP4. I wouldn't be concerned about this now. Sadly my PPC broke; now I'm doing something similar on my iPhone, but I had to switch completely to the MP4 format. VLC's FLV playback on the iPhone was too jerky for me.
832,810
832,925
Modern C++ Game Programming Examples
To what extent are modern C++ features like:

polymorphism,
STL,
exception safety/handling,
templates with policy-based class design,
smart pointers,
new/delete, placement new/delete

used in game studios? I would be interested to know the names of libraries and the C++ features they use. For example, Ogre3D uses all modern C++ features including exceptions and smart pointers. In other words, if I were looking for an example of a game library using modern C++, I would go to Ogre3D. But I don't know if these features deter game studios from using Ogre3D. Further, I don't know if there are other examples. For example, I used Box2D briefly some time ago, but it uses only placement new and the class keyword as its C++ features. Even encapsulation is broken in these classes, as all members are public. Ideally, if C++ features were the best match for all situations, they would be used most often. But it seems not. What are the impedances? The obvious one is having to read stacks of books, but that's only half a reason. This question is a follow-up on "C++ for Game Programming - Love or Distrust?" (From the responses there I got the impression that many C++ features are still not used in games; which is not necessarily the way it should be.)
I've worked in 2 game companies, seen a number of codebases, and observed a number of debates on matters such as these among game developers, so I might be able to offer some observations. The short answer for each point is that it varies highly from studio to studio, or even from team to team within the same studio. Long answers enumerated below.

polymorphism

Used by some studios but not others. Many teams still prefer to avoid it. Virtual function calls do come at a cost, and that cost is noticeable at times even on modern consoles. Of those that do use polymorphism, my cynical assumption is that only a small percentage use it well.

STL

Split down the middle, for many of the same reasons as polymorphism. STL is easy to use incorrectly, so many studios choose to avoid it on these grounds. Of those that use it heavily, many people pair it with custom allocators. EA has created EASTL, which addresses many of the issues STL causes in game development.

exception safety/handling

Very few studios use exception handling. One of the first recommendations for modern consoles is to turn off both RTTI and exceptions. PC games probably use exceptions to much greater effect, but among console studios exceptions are very frequently shunned. They increase code size, which can be at a premium, and literally are not supported on some relevant platforms.

templates with policy-based class design

Policy-based design... I haven't come across any of it. Templates are used pretty frequently for things like introspection/reflection and code reuse. Policy-based design, when I read Alexandrescu's book, seemed like a flawed promise to me. I doubt it's used very heavily in the game industry, but it will vary highly from studio to studio.

smart pointers

Smart pointers are embraced by many of the studios that also use polymorphism and STL. Memory management in console games is extremely important, so a lot of people dislike reference-counting smart pointers because they're not explicit about when they free... but these are certainly not the only kind of smart pointer. The smart pointer idea in general is still gaining traction. I think it will be much more commonplace in 2-3 years.

new/delete, placement new/delete

These are used frequently. They are often overridden to use custom allocators underneath, so that memory can be tracked and leaks can be found with ease.

I think I agree with your conclusion. C++ isn't used in game studios to as much of an extent as it could be. There are good and bad reasons for this. The good ones are because of performance implications or memory concerns; the bad ones are because people are stuck in a rut. In a lot of ways it makes sense to do things the way they've always been done, and if that means C with limited C++ then it means C with limited C++. But there are a number of anti-C++ biases floating around... some justified and some not.
833,034
833,051
How to convert const char* to char*
Can anybody tell me how to convert const char* to char*?

get_error_from_header(void *ptr, size_t size, size_t nmemb, void *data)
{
    ErrorMsg *error = (ErrorMsg *)data;

    char* err = strstr((const char *)ptr,"550"); //error: cannot convert const char* to char*
    if(err)
    {
        strncpy(error->data,(char*)ptr,LENGTH_ERROR_MESSAGE-1);
        error->data[LENGTH_ERROR_MESSAGE-1] = '\0';
        error->ret = true;
    }
    return size*nmemb;
}
You don't appear to use err in the rest of that function, so why bother creating it?

if (NULL != strstr((const char *)ptr, "550"))
{

If you do need it, will you really need to modify whatever it points to? If not, then declare it as const also:

const char* err = strstr((const char *)ptr, "550");

Finally, as casts are such nasty things, it is best to use a specific modern-style cast for the operation you want to perform. In this case:

if (NULL != strstr(reinterpret_cast<const char *>(ptr), "550"))
{
833,252
863,176
Common C++ framework
Has anyone had experience with the Common C++ framework? Can it be used as a framework for a telecom networking application?
Why use that? Why use an obscure framework when a well-established, well-documented, peer-reviewed, high-quality framework like Boost exists? It has good networking libraries, among a million others.
833,257
837,204
Controlling Lua5.1's garbage collector
OK, so I have a C++ class that is exposed to Lua using SWIG. The script creates the object, but a manager class also has a pointer to the object so it can be modified in C++ (or another script) for whatever reason. The problem is that when the script finishes, the object is freed. How can I control what the garbage collector collects without having to implement a gc metamethod? Here is an example:

--Script that creates the object
someObject = Utils.Object("Obj name");

Now the object has registered itself with the manager so the rest of the application (and other scripts) can access it.

--Another script
obj = ObjManager:GetObject(0);

Clearly not a very realistic example, but hopefully it illustrates my question. Is there a way to veto the garbage collector without a gc metamethod in C++? Just to clarify: the manager is in C++, and Utils is the module name housing the exposed class. Also, the object registers itself with the manager in its constructor. Thanks in advance.
Lua's GC only knows about references held within Lua, which is a reasonable implementation constraint. This implies that an object's lifetime is under Lua's control. If an object created by executing one script or function needs to be available to later scripts or functions, a reference to it has to be preserved within the Lua state so that the GC knows it is still in use. Otherwise, it is indistinguishable from garbage, and may be discarded at any time.

This is one of the purposes of the Lua registry table. The C side can easily hold a reference to any Lua object by placing it in the registry table. The key can either be some unique value known to the C library (the address of a static variable converted to a light userdata is often a good choice, since it cannot collide with any key from any other library). Alternatively, the function call luaL_ref(L, LUA_REGISTRYINDEX) will put the item at the top of the stack in the registry table and return a unique integer key.

This works well for storing a script-provided callback function in a way that both protects the function from the GC and allows a "pointer" (the integer key) to it to be stored in a C structure, so that it can be retrieved and called later. Note that luaL_ref() can be used to manage references in any table, so it may very well make sense to use a table that is private to your module for the purpose, rather than the global registry table. In this case, the table ObjManager itself might be a good candidate.
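The luaL_ref() dance, as a short sketch (standard Lua 5.1 API):

/* with the object on top of the stack: */
int ref = luaL_ref(L, LUA_REGISTRYINDEX);   /* pops the value, returns a key */

/* later, push the object back onto the stack: */
lua_rawgeti(L, LUA_REGISTRYINDEX, ref);

/* when the C side no longer needs it, let the GC reclaim it: */
luaL_unref(L, LUA_REGISTRYINDEX, ref);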
833,258
833,270
std::map unable to handle polymorphism?
When using std::map in C++, is it possible to store inherited classes as their "base class" in the map and still be able to call their overloaded methods? See this example:

#include <iostream>
#include <map>

class Base
{
public:
    virtual void Foo() { std::cout << "1"; }
};

class Child : public Base
{
public:
    void Foo() { std::cout << "2"; }
};

int main (int argc, char * const argv[])
{
    std::map<std::string, Base> Storage;
    Storage["rawr"] = Child();
    Storage["rawr"].Foo();
    return 0;
}

The following code writes "1", which somehow tells me that polymorphism doesn't work when you use std::map.
Polymorphism doesn't work in that case because the std::map stores the Base as a value type, so the objects get sliced. If you want to have polymorphism on stored objects, you need to store pointers or use one of the boost ptr containers to effect the same. This means you need to remember to delete the memory afterwards (please don't put pointers to stack objects into the map)
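With pointers the example behaves as expected (a sketch reusing the Base/Child classes from the question; note the manual cleanup):

int main (int argc, char * const argv[])
{
    std::map<std::string, Base*> Storage;
    Storage["rawr"] = new Child();
    Storage["rawr"]->Foo();   // prints "2": virtual dispatch works through the pointer
    delete Storage["rawr"];   // the map doesn't own the object, so you do
    return 0;
}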
833,289
833,333
how do I use STL algorithms with a vector of pointers
I have a vector of pointers that are not owned by the container. How do I use algorithms on the targets of the pointers? I tried to use Boost's ptr_vector, but it tries to delete the pointers when it goes out of scope. Here is some code that needs to work:

vector<int*> myValues;

// ... myValues is populated

bool consistent =
    count(myValues.begin(), myValues.end(), myValues.front()) == myValues.size();

auto v = consistent ? myValues.front()
                    : accumulate(myValues.begin(), myValues.end(), 0) / myValues.size();

fill(myValues.begin(), myValues.end(), v);

// etc.

I realize that for loops would work, but this happens in a bunch of places, so some kind of unary adapter? I wasn't able to find one. Thanks in advance!
You could use Boost's indirect_iterator. When dereferenced (with operator*()), it applies an extra dereference, so you end up with the value pointed to by the pointer referenced by the iterator. For more information, you can also see this question about a dereference iterator. Here's a simple example:

std::vector<int*> vec;
vec.push_back(new int(1));
vec.push_back(new int(2));

std::copy(boost::make_indirect_iterator(vec.begin()),
          boost::make_indirect_iterator(vec.end()),
          std::ostream_iterator<int>(std::cout, " ")); // Prints 1 2
833,291
833,308
Is there an equivalent to WinAPI's MAX_PATH under linux/unix?
If I want to allocate a char array (in C) that is guaranteed to be large enough to hold any valid absolute path+filename, how big does it need to be. On Win32, there is the MAX_PATH define. What is the equivalent for Unix/linux?
There is a PATH_MAX, but it is a bit problematic. From the bugs section of the realpath(3) man page:

The POSIX.1-2001 standard version of this function is broken by design, since it is impossible to determine a suitable size for the output buffer, resolved_path. According to POSIX.1-2001 a buffer of size PATH_MAX suffices, but PATH_MAX need not be a defined constant, and may have to be obtained using pathconf(3). And asking pathconf(3) does not really help, since, on the one hand POSIX warns that the result of pathconf(3) may be huge and unsuitable for mallocing memory, and on the other hand pathconf(3) may return -1 to signify that PATH_MAX is not bounded.
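In practice, a common pattern is to query pathconf() at runtime and pick a fallback when the limit is unbounded; a sketch (the fallback size is arbitrary):

#include <unistd.h>
#include <stdlib.h>

char *alloc_path_buffer(void)
{
    long path_max = pathconf("/", _PC_PATH_MAX);
    if (path_max <= 0)
        path_max = 4096;   /* pathconf may return -1: the limit is unbounded */
    return malloc(path_max);
}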
833,496
838,743
Should managed code return an error or throw exceptions to unmanaged code?
I am about to expose a service written in C# to a legacy C++ application using COM. What is the best approach to report errors to the unmanaged client? Throwing exceptions or simply return an error value? Thanks, Stefano
You should throw exceptions. Exceptions are mapped to HRESULTs by the Framework, and HRESULTs are the standard way to return errors to COM clients, so this is the way to go.

Each Exception type has an HResult property. When managed code called from a COM client throws an exception, the runtime passes the HResult to the COM client. If you want application-specific HRESULT codes, you can create your own custom Exception types and set the Exception.HResult property.

One point to note is that the call stack information will be lost when an Exception is thrown to a COM client. It can therefore be a good idea to log exceptions before propagating them to the COM client. One technique I sometimes use is the following: explicitly implement a ComVisible interface for COM clients that logs and rethrows exceptions. COM clients use the ComVisible interface that logs exceptions before propagating them. .NET clients use the concrete class and are expected to make their own arrangements for exception handling. It's a bit long-winded to write but can be helpful when you're subsequently troubleshooting. Another advantage of this approach is that you can have an API tailored to the restrictions of COM for COM clients, and a more standard API for standard .NET clients. For example, COM clients are restricted to passing arrays by reference, whereas passing by reference is discouraged for .NET clients.

Example:

[
ComVisible(true),
GuidAttribute("..."),
Description("...")
]
public interface IMyComVisibleClass
{
    // Text from the Description attribute will be exported to the COM type library.
    [Description("...")]
    MyResult MyMethod(...);

    [Description("...")]
    MyOtherResult MyArrayMethod([In] ref int[] ids, ...);
}

...

[
ComVisible(true),
GuidAttribute("..."),
ProgId("..."),
ClassInterface(ClassInterfaceType.None),
Description("...")
]
public class MyComVisibleClass : IMyComVisibleClass
{
    public MyResult MyMethod(...)
    {
        ... implementation without exception handling ...
    }

    public MyOtherResult MyArrayMethod(int[] ids, ...)
    {
        ... input parameter does not use ref keyword for .NET clients ...
        ... implementation without exception handling ...
    }

    MyResult IMyComVisibleClass.MyMethod(...)
    {
        // intended for COM clients only
        try
        {
            return this.MyMethod(...);
        }
        catch(Exception ex)
        {
            ... log exception ...
            throw; // Optionally wrap in a custom exception type
        }
    }

    MyOtherResult IMyComVisibleClass.MyArrayMethod(ref int[] ids, ...)
    {
        // intended for COM clients only
        try
        {
            // Array is passed without ref keyword
            return this.MyArrayMethod(ids, ...);
        }
        catch(Exception ex)
        {
            ... log exception ...
            throw; // Optionally wrap in a custom exception type
        }
    }
}
833,882
833,929
Nested function calls order of evaluation
It's well known that the order of evaluation of a function's arguments is unspecified and can differ between different compilers. What doesn't seem so clear is whether function calls can be interleaved, in the following sense:

f(g(h()), i(j()))

Let's assume the compiler chooses to evaluate f's first parameter first. Is the compiler free to call j between calling h and g? I believe so, but I don't know where to find confirmation in the Standard.
The evaluation order is unspecified - see section 5.2.2/8 of the Standard:

The order of evaluation of arguments is unspecified. All side effects of argument expression evaluations take effect before the function is entered.

Since only the relative order is unspecified, the compiler is indeed free to call j between calling h and g (for example, h, j, g, i is a legal sequence). What it may not do is interleave the execution of the function bodies themselves: in a single thread, one function call completes before another call in the same expression begins.
834,179
834,250
WTF does WTF represent in the WebKit code base?
I downloaded Chromium's code base and ran across the WTF namespace.

namespace WTF {

    /*
     * C++'s idea of a reinterpret_cast lacks sufficient cojones.
     */
    template<typename TO, typename FROM>
    TO bitwise_cast(FROM in)
    {
        COMPILE_ASSERT(sizeof(TO) == sizeof(FROM),
                       WTF_wtf_reinterpret_cast_sizeof_types_is_equal);
        union {
            FROM from;
            TO to;
        } u;
        u.from = in;
        return u.to;
    }

} // namespace WTF

Does this mean what I think it means? Could be so; the bitwise_cast implementation specified here will not compile if either TO or FROM is not a POD, and is not (AFAIK) more powerful than C++'s built-in reinterpret_cast. The only point of light I see here is that nobody seems to be using bitwise_cast in the Chromium project.
It’s short for Web Template Framework and provides commonly used functions all over the WebKit codebase.
834,314
834,362
Should I declare all functions virtual in a base class?
When I declare a base class, should I declare all the functions in it as virtual, or should I have a set of virtual functions and a set of non-virtual functions which I am sure are not going to be overridden?
A function only needs to be virtual if a derived class will implement that function in a different way. For example:

class Base
{
public:
    void setI (int i)   // No need for it to be virtual
    {
        m_i = i;
    }

    virtual ~Base () {} // Almost always a good idea

    virtual bool isDerived1 () // Is overridden - so make it virtual
    {
        return false;
    }

private:
    int m_i;
};

class Derived1 : public Base
{
public:
    virtual ~Derived1 () {}

    virtual bool isDerived1 () // Overrides the base version - so virtual
    {
        return true;
    }
};

As a result, I would err on the side of not having anything virtual unless you know in advance that you intend to override it, or until you discover that you require the behaviour. The only exception to this is the destructor, for which it's almost always the case that you want it to be virtual in a base class.
834,347
834,543
VS2008 Express Editions and Resources
I am wanting to add resources (such as an icon) into a WinAPI based program in VC++ 2008 EE and am struggling. As there is no resource editor bundled with the IDE, is it possible? My Google searches all seem to related to C# or other managed environments. Thanks all,
I'm afraid there is no resource editor with the Express Edition. (edit) I couldn't find a feature matrix on the official site, but Wikipedia says so, so it must be right;-) You could look at 3rd party tools - a quick web search throws up ResEdit as a possible answer.
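If you end up hand-writing the resource script instead of using an editor, a minimal sketch (the file and symbol names here are hypothetical) can be compiled with rc.exe and the resulting .res file linked into the project:

// myproject.rc
#define IDI_APPICON 101
IDI_APPICON ICON "app.ico"

and loaded from C++ with something like:

HICON hIcon = LoadIcon(GetModuleHandle(NULL), MAKEINTRESOURCE(101));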
834,378
834,957
Throw-catch cause linkage errors
I'm getting linkage errors of the following type:

Festival.obj : error LNK2019: unresolved external symbol "public: void __thiscall Tree<class Price>::add(class Price &)" (?add@?$Tree@VPrice@@@@QAEXAAVPrice@@@Z) referenced in function __catch$?AddBand@Festival@@QAE?AW4StatusType@@HHH@Z$0

I used to think it had to do with the try-catch mechanism, but have since been told otherwise. This is an updated version of the question. I'm using Visual Studio 2008, but I have similar problems in g++. The relevant code:

In Festival.cpp:

#include "Tree.h"
#include <exception>
using namespace std;

class Band{
public:
    Band(int bandID, int price, int votes=0):
        bandID(bandID), price(price), votes(votes){};
    ...
private:
    ...
};

class Festival{
public:
    Festival(int budget): budget(budget), minPrice(0),
        maxNeededBudget(0), priceOffset(0), bandCounter(0){};
    ~Festival();
    StatusType AddBand(int bandID, int price, int votes=0);
    ...
private:
    Tree<Band> bandTree;
    ...
};

StatusType Festival::AddBand(int bandID, int price, int votes){
    if ((price<0)||(bandID<0)){
        return INVALID_INPUT;
    }
    Band* newBand=NULL;
    try{
        newBand=new Band(bandID,price-priceOffset,votes);
    }
    catch(bad_alloc&){return ALLOCATION_ERROR;}
    if (bandTree.find(*newBand)!=NULL){
        delete newBand;
        return FAILURE;
    }
    bandTree.add(*newBand);
    ....
}

In Tree.h:

template<class T>
class Tree{
public:
    Tree(T* initialData=NULL, Tree<T>* initialFather=NULL);
    void add(T& newData);
    ....
private:
    ....
};

Interestingly enough, I do not have linkage errors when I try to use Tree functions when the type T is a primitive type like int.
Is there a Tree.cpp? If there is, maybe you forgot to link it? Where is the implementation of Tree::add? In addition, I don't see where you call Tree::add; I guess it should be inside the try statement, right after the new?

Just a reminder: for most compilers (i.e. those that practice separate compilation) the implementation of the member functions of a template class has to be visible during compilation of the source file that uses the template class. Usually people follow this rule by putting the implementation of the member functions inside the header file. Maybe Tree::add isn't inside the header? Then a possible solution in the discussed case would be to put the Tree::add implementation inside the header file; see the sketch after this answer.

The difference between regular classes and template classes exists because template classes are not "real" classes - a template is, well, a template. If you had defined your Tree class as a regular class, the compiler could have used your code right away. In the case of a template, the compiler first "writes" the real class for you, substituting the template parameters with the types you supplied.

Now, the compiler compiles cpp files one by one. It is not aware of other cpp files and can use nothing from other cpp files. Let's say your implementation of Tree::add looks like this:

template<class T>
void Tree<T>::add(T& newData)
{
    newData.destroyEverything();
}

It is totally legitimate as long as your T has a method destroyEverything. When the compiler compiles Class.cpp, it wants to be sure that you don't do anything with T that it doesn't know about. For example, Tree<int> won't work because int doesn't have destroyEverything. The compiler will try to write your code with int instead of T and find out that the code doesn't compile. But since the compiler "sees" only the current cpp file and everything it includes, it won't be able to validate the add function, since it is in a separate cpp file.

There won't be any problem with

void Tree::add(int& newData)
{
    newData.destroyEverything();
}

implemented in a separate cpp for a regular (non-template) class, because the compiler knows that int is the only acceptable type and can "count on itself": when it gets to compile Tree.cpp, it will find the error.
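To make the reminder concrete, here is a minimal sketch of what Tree.h could look like with the member function defined in the header (the body shown is only a placeholder for whatever add really does):

// Tree.h
template<class T>
class Tree
{
public:
    void add(T& newData); // declaration
    // ...
};

// Definition kept in the same header, so it is visible in every
// translation unit that instantiates Tree<SomeType>.
template<class T>
void Tree<T>::add(T& newData)
{
    // ... real insertion logic goes here ...
}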
834,383
835,275
C++11: a new language?
Recently I started reading (just a bit of) the current draft for the future C++11 standard. There are lots of new features, some of them already available via the Boost libraries. Of course, I'm pretty happy with this new standard and I'd like to play with all the new features as soon as possible. Anyway, speaking about this draft with some friends, long-time C++ devs, some worries emerged. So, I ask you (to answer them):

1) The language itself

This update is huge, maybe too huge for a single standard update. Huge for the compiler vendors (even if most of them have already started implementing some features), but also for the end users. In particular, a friend of mine told me "this is a sort of new language". Can we consider it a brand new language after this update? Do you plan to switch to the new standard or keep up with the "old" standard(s)?

2) Knowledge of the language

How will the learning curve be impacted by the new standard? Will teaching the language be more difficult? Some features, while pretty awesome, seem a bit too "academic" to me (as a definition, I mean). Am I wrong? Mastering all these new additions could be a nightmare, couldn't it?
In short, no, we can't consider this a new language. It's the same language, new features. But instead of being bolted on by using the Boost libs, they're now going to be standard inclusions if you're using a compiler that supports the 0x standard.

One doesn't have to use the new standard while using a compiler that supports the new standard. One will have to learn and use the new standard if certain constraints exist on the software being developed, however, but that's a constraint with any software endeavor. I think that the new features that the 0x standard brings will make doing certain things easier and less error prone, so it's to one's advantage to learn what the new features are, and how they will improve their design strategy for future work. One will also have to learn it so that when working on software developed with it, they will understand what's going on and not make large boo-boos.

As to whether I will "switch to the new standard": if that means that I will learn the new standard and use it where applicable and where it increases my productivity, then yes, I certainly plan to switch. However, if this means that I will limit myself to only working with the new features of the 0x standard, then no, since much of my work involves code written before the standard and it would be a colossal undertaking to redesign everything to use the new features. Not only that, but it may introduce new bugs and performance issues that I'm not aware of without experience.

Learning C++ has always been one of the more challenging journeys a programmer can undertake. Adding new features to the language will not change the difficulty of learning its syntax and how to use it effectively, but the approach will change. People will still learn about pointers and how they work, but they'll also learn about smart pointers and how they're managed. In some cases, people will learn things differently than before. For example, people will still need to learn how to initialize things, but now they'll learn about Uniform Initialization and Initializer Lists as primary ways to do things. In some cases, perhaps understanding things will be easier with the addition of the new for syntax for ranges or the auto return type in a function declaration. I think that overall, C++ will become easier to learn and use while at the same time becoming easier to teach.

Mastering a language is a long-term goal; it cannot be done overnight. It's silly to think that one can gain mastery over something as complex as C++ quickly. It takes practice, experience and debugging code to really hammer something in. Academic learning is one thing, but putting that knowledge to use is an entirely different monster. I think that if one already has mastery of the C++ language, the new concepts will not pose too much of a burden, but a newcomer may have an advantage in that they won't bother learning some of the more obsolete ways of doing things.
834,622
834,631
Best way to empty stringstream?
One of the possibilities is:

somestringstream.str("");

But is it the most optimal? Is there any way to preserve the stringstream's internal buffer, so that subsequent operator<<() calls would not need to reserve memory again?
I've always done:

s.clear();            // clear any bits set
s.str(std::string());

@litb gets into more detail about how to seekp to the start of the stream; combined with std::ends, that lets you keep your allocated size.
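A small sketch of reusing one stream across loop iterations: clear the error state first, then reset the buffer contents.

#include <sstream>
#include <iostream>

int main()
{
    std::ostringstream s;
    for (int i = 0; i < 3; ++i)
    {
        s.clear();            // clear any error/EOF bits
        s.str(std::string()); // reset the contents
        s << "iteration " << i;
        std::cout << s.str() << '\n';
    }
    return 0;
}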
834,665
945,365
Embedding Flash in 3D-rendering binary
I want to embed Flash (2D GUI) in my 3D game C++ application, and it must be portable. Rendering Flash is not enough, I need user action callbacks. I found Hikari for Ogre3D, but I guess the technique it's based on only works on Windows? (It uses OLE.) Further, from what I've heard (unverified) CryEngine uses Flash. Almost forgot the obvious question: how do I go about business?
Answering my own question: a colleague recommended this library. It sounds free and interesting to me; I think I'll look into it.
834,782
834,947
Create tab blinking effect (like IM programs) using wxNotebook
How can I create a tab blinking effect like IM programs have, using wxNotebook? A good example of this would be any tabbed IM program that blinks a tab to show the user that they received a new IM.
You can give each tab an icon (using SetPageImage, if I remember correctly). I did that in the past to show a small progress bar. You could use it to draw a bitmap (wxMemoryDC) every time your timer triggers, and update that image.

You can also use wxAuiNotebook. It doesn't use native widgets, but it's part of the AUI framework (adopted in wx2.8) and it allows modern things, like drag & drop of tabs. Of course, it also has a SetPageBitmap method.

You could either render a small blinking LED, or you could draw the name of the contact on a coloured background (whose colour changes each blink interval) and use that itself as the icon, instead of drawing the contact name next to the icon. The latter method only works using wxAuiNotebook, if I recall correctly. It's been some time since I did it, but it worked out very nicely.
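A rough sketch of the timer-driven approach (the method and member names here are my own inventions, and the image-list setup is omitted):

// Toggle a tab's icon on every timer tick to simulate blinking.
// Assumes a wxImageList was assigned to the notebook with two
// images: 0 = normal icon, 1 = "new message" icon.
void MyFrame::OnBlinkTimer(wxTimerEvent& WXUNUSED(event))
{
    m_blinkOn = !m_blinkOn;
    m_notebook->SetPageImage(m_blinkingPage, m_blinkOn ? 1 : 0);
}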
834,848
835,035
Problem with desktops
I have this code:

#define _WIN32_WINNT 0x0500
#include <cstdlib>
#include <iostream>
#include <windows.h>
using namespace std;

int main(int argc, char *argv[])
{
    Sleep(500);
    HDESK hOriginalThread;
    HDESK hOriginalInput;
    hOriginalThread = GetThreadDesktop(GetCurrentThreadId());
    hOriginalInput = OpenInputDesktop(0, FALSE, DESKTOP_SWITCHDESKTOP);

    HDESK hNewDesktop = CreateDesktop("Test", NULL, NULL, 0,
        DELETE|READ_CONTROL|WRITE_DAC|WRITE_OWNER|GENERIC_ALL, NULL);
    cout << SetThreadDesktop(hNewDesktop);
    Sleep(575);
    SwitchDesktop(hNewDesktop);
    system("cmd");
    Sleep(1000);

    SwitchDesktop(hOriginalInput);
    SetThreadDesktop(hOriginalThread);
    CloseDesktop(hNewDesktop);
    CloseDesktop(hOriginalInput);
    Sleep(1000);
    return 0;
}

When I run this, it creates a new desktop and switches to it, but the command prompt does not appear. I must manually terminate the "cmd" process, and my program then continues. Is there a way to show a window of any application on another desktop? And how can I change the background of the desktop I created? Please help.
You can pick which desktop to start an application in when the application starts:

STARTUPINFO si = {0};
si.cb = sizeof(STARTUPINFO);
si.lpDesktop = L"winsta0\\Default";

Then pass this struct into CreateProcess or CreateProcessAsUser.

You can also pick which session to start the application in (enable the session ID column in Task Manager to see which one you want). You can create a process in another session by using SetTokenInformation on the token that you use in CreateProcessAsUser, passing in a TokenSessionId. You can't change the session of an already running process.
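Putting it together, a sketch of launching a process onto a named desktop (the desktop name and target executable here are placeholders; writable buffers are used because CreateProcess's command line, and on some versions lpDesktop, must not be string literals):

#include <windows.h>

int main()
{
    TCHAR desktop[] = TEXT("winsta0\\Default"); // window station\desktop
    TCHAR cmd[]     = TEXT("notepad.exe");

    STARTUPINFO si = {0};
    si.cb = sizeof(si);
    si.lpDesktop = desktop;

    PROCESS_INFORMATION pi = {0};
    if (CreateProcess(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}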
834,865
834,916
Avoid Deadlock using Non-virtual Public Interface and Scoped Locks in C++
I've run into a problem which seems troubling to me. It seems I've found a situation that's easy enough to work around, but that could lead to problems if a) I have a lapse in concentration while programming, or b) somebody else starts implementing my interfaces and doesn't know how to handle this situation.

Here's my basic setup: I've got an abstract class that I'm using as a generic interface to several data types. I've adopted the non-virtual public interface paradigm (Sutter, 2001) along with scoped locking to provide some thread safety. An example interface class would look something like this (I've left out details about scoped locking and the mutex implementation, as I don't think they're relevant):

class Foo
{
public:
    void A()
    {
        ScopedLock lock(mutex);
        aImp();
    }

    void B()
    {
        ScopedLock lock(mutex);
        bImp();
    }

protected:
    virtual void aImp() = 0;
    virtual void bImp() = 0;
};

It is then up to the user to implement aImp and bImp, which is where the problem comes in. If aImp performs some operation which uses bImp, it's extremely easy (and almost logical, in some sense) to do this:

class Bar : public Foo
{
protected:
    virtual void aImp()
    {
        ...
        B();
        ...
    }

    virtual void bImp()
    {
        ...
    }
};

Deadlock. Of course, the easy solution to this is to always call the protected virtual functions rather than their public variants (replace B() with bImp() in the above snippet). But it still seems far too easy to hang myself if I make a mistake, or worse yet to allow others to hang themselves.

Does anybody have some way to either stop an implementer of the abstract class from calling those public functions at compile time, or otherwise help to avoid the deadlock situation?

Just for kicks: some mutexes allow for operation that avoids this deadlock problem. As an example, if I implement this using the Windows functions EnterCriticalSection and LeaveCriticalSection, there's no issue. But I'd rather avoid platform-specific functionality. I'm currently using boost::mutex and boost::shared_mutex in my scoped lock implementation, and as far as I've seen it doesn't attempt to avoid deadlock (which I think I almost prefer).
Using private inheritance will potentially solve your problem:

class Foo
{
public:
    void A()
    {
        ScopedLock lock(mutex);
        aImp();
    }

    void B()
    {
        ScopedLock lock(mutex);
        bImp();
    }

protected:
    virtual void aImp() = 0;
    virtual void bImp() = 0;
};

class FooMiddle : private Foo
{
public:
    using Foo::aImp;
    using Foo::bImp;
};

class Bar : public FooMiddle
{
    virtual void aImp()
    {
        bImp();
        B();    // Compile error - B is private
    }
};

Deriving from Foo privately, and then using FooMiddle, ensures that Bar doesn't have access to A or B. However, Bar is still able to override aImp and bImp, and the using declarations in FooMiddle mean that these can still be called from Bar.

Alternatively, an option that will help but not solve the problem is to use the Pimpl pattern. You'd end up with something as follows:

class FooImpl
{
public:
    virtual void aImp() = 0;
    virtual void bImp() = 0;
};

class Foo
{
public:
    void A()
    {
        ScopedLock lock(mutex);
        m_impl->aImp();
    }

    void B()
    {
        ScopedLock lock(mutex);
        m_impl->bImp();
    }

private:
    FooImpl * m_impl;
};

The benefit is that the classes deriving from FooImpl no longer have a "Foo" object, and so cannot easily call A or B.
835,038
838,219
Visual C++ Debug window displaying of CR/LF in Visual Studio 2008
For some time now when I am debugging Visual C++ applications and viewing any CString or char* (or any other ascii char based type) variable in the Local, Auto, or Watch debug windows, the CR/LF characters in my variables are not displayed at all. In other words, if I have a string variable set to "This is a line\r\nThis is another line" in my code, the debug window will show "This is a lineThis is another line". What I would like it to show is "This is a line□□This is another line" so that I can see the two extra characters in that text. This has caused me to make some mistakes when trying to debug string parsing code. Note, the text visualizer properly breaks the text up into separate lines, but I don't want to use the text visualizer if I don't have to. Furthermore, some coworkers of mine are able to see CR/LF characters in the correct manner, but we cannot determine why they are not shown for me. Many thanks in advance.
This seems to be some sort of hard-to-reproduce bug (I'm not seeing them in 2k8 either), according to this old link:

If we wanted to do this properly we'd need to issue the proper escape sequences for these characters. eg show "\r\n" in the string. The behavior of stripping special characters historical and will be fixed in a future release. If you are viewing text with newlines in it you can either view the string as a character array: type "str,100" to view a string of length 100 as an array. Or you can click on the magnifying glass glyph and view the string in a multiline edit control.

A month later:

We cannot reproduce this problem on neither VS2003 nor VS2005. This looks like a machine-specific problem. So if your coworkers are really seeing it then there must be something weird going on in our setups.
835,201
835,744
What embedded browser for C++ project?
Is there any browser I could embed in a C++ application on Windows? I need all the features a typical browser has (HTTP client, cookie support, DOM-style HTML parser, JavaScript engine) except rendering. Because I don't need rendering capability (and that's a rather big part of a browser), I would prefer a browser with a non-monolithic design, so I wouldn't have to include the rendering code in my project. It would be nice if it had a C++ rather than C API.

I need this embedded browser mainly because I have much trouble finding a C++ HTML parser which can handle broken HTML like browsers do. If you know one, please answer the Library Recommendation: C++ HTML Parser SO question, or at least vote on it to increase the chance someone will give a good answer.
I'm a bit confused by your question regarding embedding a web browser for which you don't need rendering capabilities. A web browser renders web pages by definition, unless what you really need is HTTP and XML with JavaScript capabilities, which is only a subset of a browser's functionality?

If you need a web browser to embed in your C++ application, I would suggest considering Qt, which comes with the WebKit plugin. It is C++, LGPL, and has a very nice IDE (Qt Creator). I tried Qt with Qt Creator on Unix (Ubuntu) and it was very impressive. The debugger is a bit light, but it is just the first version. The adapter for Qt in Visual C++ 2008 is now free.
835,233
835,257
How to handle passing runtime-sized arrays between classes in C++
Right now I have a simple class that handles the parsing of XML files into ints that are useful to me. It looks something like this:

int* DataParser::getInts(){
    *objectNumbers = new int[getSize()];
    for (int i=0;i<getSize();i++){
        objectNumbers[i]=activeNode->GetNextChild()->GetContent();
    }
    return objectNumbers;
}

In the main part of the program, I receive this by doing:

int* numbers = data->getInts();
///Do things to numbers[]
delete numbers;

Everything works fine until the delete command, which crashes everything. What is the proper way of doing this?
Part of the problem is that you are not pairing new[] with delete[]. This probably isn't the root of your bug here, but you should get in the habit of doing it. The bug is almost certainly related to the code that you left commented out. Can you add some more context there so we can see what you're doing with the numbers value?

In general, I find it's much easier to use a vector for this type of problem. It takes the memory management out of the equation and has the added benefit of storing the size with the dynamic memory.

void DataParser::getInts(std::vector<int>& objectNumbers){
    for (int i=0;i<getSize();i++){
        objectNumbers.push_back(activeNode->GetNextChild()->GetContent());
    }
}

...

std::vector<int> numbers;
data.getInts(numbers);
835,236
835,278
Bug in Array initialization in Managed C++ (followup)
Following up from my previous question. Can anyone explain why the following code compiles without any errors:

typedef array<VdbMethodInfo^> MethodArray;
typedef array<VdbParameterInfo^> ParameterArray;

ParameterArray^ parameters = gcnew ParameterArray {
    gcnew VdbParameterInfo("name", "string", "Paul")};

MethodArray^ methods = gcnew MethodArray {
    gcnew VdbMethodInfo("createTable", parameters) };

yet this gives me "error C2440: 'initializing' : cannot convert from 'VdbParameterInfo ^' to 'VdbMethodInfo ^'":

typedef array<VdbMethodInfo^> MethodArray;
typedef array<VdbParameterInfo^> ParameterArray;

MethodArray^ methods = gcnew MethodArray {
    gcnew VdbMethodInfo("createTable",
        gcnew ParameterArray { gcnew VdbParameterInfo("name", "string", "Paul")}; ) };

All I've done is attempt to "nest" the parameter array inside the method array initialization. Not directly, mind - VdbMethodInfo's constructor takes a ParameterArray as its second argument. It seems to imply that managed C++ array initialization expects any recursive nesting to have the same type... (i.e. I think this must be a bug)

Related question: here
I've found a workaround which makes the syntax cleaner anyway. I use the "..." syntax (the Managed C++ equivalent of the C# "params" keyword):

public ref class MetaData
{
    typedef array<VdbMethodInfo^> MethodArray;
    typedef array<VdbParameterInfo^> ParameterArray;

    static ParameterArray^ params(... ParameterArray^ p)
    {
        return p;
    }

public:
    static array<VdbMethodInfo^>^ Instance()
    {
        ParameterArray^ parameters = gcnew ParameterArray {
            gcnew VdbParameterInfo("name", "string", "Paul")};

        MethodArray^ methods = gcnew MethodArray {
            gcnew VdbMethodInfo("createTable",
                params(gcnew VdbParameterInfo("name", "string", "Paul"),
                       gcnew VdbParameterInfo("age", "number", "25")))
        };

        return methods;
    }
};
835,324
835,377
Migrating from old Borland C++ to Visual C++ Express
At the risk of appearing a dinosaur, I have some old C++ code, compiled with Borland C++, which sets registers and interfaces to an Assembler module, which I would like to modernize. I have just installed MS VC++ Express, and needless to say a lot of things don't work!

The default seems to be Win32, which is fine, so I have blanked out FAR and HUGE. PASCAL seems to map to __stdcall. So I have a macro

#define THRCOMP extern "C" int FAR PASCAL _Export

where THRCOMP goes in front of a module name. This presumably results in something like

extern "C" int __stdcall _Export <modname>;

which the compiler doesn't like, and it puts out a message about an "anachronism" (it doesn't say what!). What is wrong?

Also, the old code has inline Assembler which I need to turn into a separately compiled subroutine - is there a (free) Assembler, and can it link Assembler obj decks in with C++? By the way, I can't see my obj decks - but WinZip picked them up! Explanation?

Generally, is there a guide to migrating old C++ code? Thanks in advance.
A couple of specific things from your example:

VC doesn't like _Export at all.

The anachronism is that you have modifiers (like __stdcall) on a data declaration. If <modname> doesn't have parens, it's a data declaration and the modifiers don't do anything. If <modname> is a function implemented in assembly, you should still have the declaration include the argument list. For example:

extern "C" int __stdcall modname( int x);

You can get a free assembler from the Windows Driver Kit (WDK - what used to be called the DDK), but if your current code is written using Borland's TASM assembler it might not use the same syntax, so there might be quite a bit of work porting it. However, if the current assembler code is 16-bit, you're going to have a lot of work porting it to 32-bit assembler anyway...
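A hedged sketch of how the macro might be rewritten for VC++, assuming the old _Export semantics are no longer needed and the functions are exported some other way (e.g. a .def file or __declspec(dllexport)):

// Old Borland version:
// #define THRCOMP extern "C" int FAR PASCAL _Export

// Possible Win32/VC++ replacement:
#define THRCOMP extern "C" int __stdcall

// Used on a function declaration, with its argument list:
THRCOMP modname(int x);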
835,419
837,017
MFC CDialog::Create fails
I'm having problems with some code that creates a CDialog-based window. The code was working fine last week. The only change I made was replacing a C++ deque with a hash array. I had commented out the line of code with the Create method being called, to allow me to skip loading the window. Now the code doesn't create the window at all anymore. The Create function returns false and GetLastError returns 0.

I don't use any custom controls inside the window - just a checkbox and a list control. As far as I can tell (I can't hook a debugger up at this point) the OnCreate and OnInitDialog functions are not being called at all. I've pasted the code below that I've been using to test the Create function's return value and GetLastError:

BOOL result = ORDER_HANDLER_GUI.Create(OrderHandlerGUI::IDD, AfxGetMainWnd());
int error = ::GetLastError();
if(result)
    AfxMessageBox("Created GUI");
else
{
    CString msg;
    msg.Format("%d", error);
    AfxMessageBox("Could not create GUI");
    AfxMessageBox(msg);
}

Update: I finally managed to get the debugger to attach (this is a plugin loaded in a 3rd-party application that didn't like the debugger for some reason). After stepping through the code, it seems that AfxGetMainWnd() is returning NULL. I'm doing more testing on this now.
The problem seems to have been with the call to CDynLinkLibrary(). I had commented this out at the request of the company that writes the software that loads my plugin. Adding this line back in caused some values to still be null, but the window is now created properly. I'm going to do a bit of research on this and will update if I find anything. If anyone knows more about this than me (not hard to do), feel free to leave comments.
835,590
835,629
How would std::ostringstream convert to bool?
I stumbled across this code:

std::ostringstream str;
/// (some usage)
assert( ! str );

What does an ostringstream signify when used in a bool context? Is this possibly an incorrect usage that happens to compile and run?
It tells you if the stream is currently valid. This is something that all streams can do. A file stream, for example, can be invalid if the file was not opened properly. As a side note, this functionality (testing a stream as a bool) is achieved by overloading explicit operator bool in C++11 and later and by overloading the void* cast operator in versions before C++11. Here is a link containing some examples of why a stream might fail. This isn't specific to string streams, but it does apply to them. Edit: changed bool to void* after Martin York pointed out my mistake.
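A small example of the same idiom, shown alongside a file stream where the test is more obviously useful:

#include <fstream>
#include <sstream>
#include <iostream>

int main()
{
    std::ifstream in("does_not_exist.txt");
    if (!in)
        std::cout << "file stream is invalid\n";

    std::ostringstream str;
    str << 42;
    if (str)
        std::cout << "string stream is still good\n";
    return 0;
}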
835,617
836,026
Hierarchical Memory allocator library for C++
My application is mostly organised in layers, so I found that something like the APR memory pools would be the best fit. While reading on SO about the C++ placement new posts here & here, and a more generic C allocation question, I was thinking about hand-crafting a hierarchical pool allocator as suggested in one post, but in the pure NYI tradition I'm first asking if something like this already exists. It could also have the nice property of being able to give back unused memory to the OS (since allocation could be done with mmap(MAP_ANON)), or it could allocate from the stack as suggested by Ferrucico here.
I know of another good hierarchical memory allocator, but it calls malloc underneath the covers: talloc is a hierarchical, pool-based memory allocator with destructors. It is the core memory allocator used in Samba4, and has made a huge difference in many aspects of Samba4 development. To get started with talloc, I would recommend you read the talloc guide.

That being said, glibc's malloc already uses mmap(MAP_ANON) for allocations larger than mmap_threshold, which you can set via mallopt(M_MMAP_THRESHOLD, bytes). By default it is dynamically adjusted between:

/* MMAP_THRESHOLD_MAX and _MIN are the bounds on the dynamically
   adjusted MMAP_THRESHOLD. */

#ifndef DEFAULT_MMAP_THRESHOLD_MIN
#define DEFAULT_MMAP_THRESHOLD_MIN (128 * 1024)
#endif

#ifndef DEFAULT_MMAP_THRESHOLD_MAX
  /* For 32-bit platforms we cannot increase the maximum mmap
     threshold much because it is also the minimum value for the
     maximum heap size and its alignment.  Going above 512k (i.e., 1M
     for new heaps) wastes too much address space.  */
# if __WORDSIZE == 32
#  define DEFAULT_MMAP_THRESHOLD_MAX (512 * 1024)
# else
#  define DEFAULT_MMAP_THRESHOLD_MAX (4 * 1024 * 1024 * sizeof(long))
# endif
#endif

Watch out if you lower it; by default no more than

#define DEFAULT_MMAP_MAX 65536

pieces will be allocated using mmap. This can be changed with mallopt(M_MMAP_MAX, count), but using many mmaps has an overhead. The environment variables MALLOC_MMAP_THRESHOLD_ etc. will also set these options. Obviously, memory that malloc allocates with mmap is freed with munmap.

I'm not sure if any of this is documented anywhere outside of glibc's source code, or has any compatibility guarantees.
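A small sketch of talloc's hierarchical style, based on my reading of its documentation (error handling omitted; compiles as C++ and links with -ltalloc): freeing the parent context releases every child allocation in one call.

#include <talloc.h>

int main()
{
    TALLOC_CTX *layer = talloc_new(NULL);        // top-level context
    char *name = talloc_strdup(layer, "demo");   // child of layer
    int *values = talloc_array(layer, int, 16);  // another child
    (void)name;
    (void)values;
    talloc_free(layer);  // releases name and values too
    return 0;
}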
835,642
836,001
How can I delete a Win32 desktop with running programs, and terminate those programs?
I have this code:

#define _WIN32_WINNT 0x0500
#include <cstdlib>
#include <iostream>
#include <windows.h>
using namespace std;

int main(int argc, char *argv[])
{
    HDESK hOriginalThread;
    HDESK hOriginalInput;
    hOriginalThread = GetThreadDesktop(GetCurrentThreadId());
    hOriginalInput = OpenInputDesktop(0, FALSE, DESKTOP_SWITCHDESKTOP);

    HDESK hNewDesktop = CreateDesktop("BasicAppDesktopDesktop", NULL, NULL, 0,
        DELETE|READ_CONTROL|WRITE_DAC|WRITE_OWNER|GENERIC_ALL, NULL);
    /*HDESK hNewDesktop = OpenDesktop("Winlogon", 0, FALSE,
        DESKTOP_CREATEMENU | DESKTOP_CREATEWINDOW | DESKTOP_ENUMERATE |
        DESKTOP_HOOKCONTROL | DESKTOP_JOURNALPLAYBACK | DESKTOP_JOURNALRECORD |
        DESKTOP_READOBJECTS | DESKTOP_SWITCHDESKTOP | DESKTOP_WRITEOBJECTS); */

    SetThreadDesktop(hNewDesktop);
    SwitchDesktop(hNewDesktop);
    //system("cmd");

    STARTUPINFOA si = {0};
    si.cb = sizeof(STARTUPINFO);
    si.lpDesktop = "winsta0\\BasicAppDesktopDesktop";

    PROCESS_INFORMATION infos;
    CreateProcess(NULL,"explorer",NULL,NULL,false,NORMAL_PRIORITY_CLASS,NULL,NULL,&si,&infos);
    //WaitForSingleObject( infos.hProcess, INFINITE );

    while(!(GetAsyncKeyState(VK_F12) == -32767)) Sleep(50);

    CloseHandle( infos.hProcess );
    CloseHandle( infos.hThread );

    SwitchDesktop(hOriginalInput);
    SetThreadDesktop(hOriginalThread);
    CloseDesktop(hNewDesktop);
    CloseDesktop(hOriginalInput);
    return 0;
}

When I press F12, the desktop switches back to the original and my program closes, but if I ran any program on the second desktop, it does not terminate, and when I run my program again that program appears again. Is there a way to delete desktops together with their programs, or to automatically terminate programs run on the second desktop on exit? Please help.
If you want to force termination of a program that you started with CreateProcess (as in the code you posted), then you can just use TerminateProcess on the handle returned in your PROCESS_INFORMATION struct.

If you want to terminate all processes with threads attached to your new desktop, whether you started them or not, then it's a bit (OK, a lot) more complicated. Your code would have to do the following (sketched in the code after this list):

1. Enumerate all running processes (using CreateToolhelp32Snapshot)
2. Enumerate the threads for each process in turn (again using CreateToolhelp32Snapshot)
3. Get the desktop handle for each thread (using GetThreadDesktop)
4. Get the name of that desktop (using GetUserObjectInformation)
5. Compare it with the name of your desktop
6. If the names match, open a new handle to the parent process and terminate it (OpenProcess and TerminateProcess)

That's a lot of code to write, but it should work.
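A rough, untested sketch of those steps (it collapses the first two steps into one all-threads snapshot, mostly omits error handling, and GetThreadDesktop's behaviour for threads of other processes should be verified against the documentation before relying on this):

#include <windows.h>
#include <tlhelp32.h>
#include <string.h>

void TerminateProcessesOnDesktop(const char* desktopName)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return;

    THREADENTRY32 te = {0};
    te.dwSize = sizeof(te);
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te))
    {
        if (te.th32OwnerProcessID == GetCurrentProcessId())
            continue; // don't kill ourselves

        HDESK desk = GetThreadDesktop(te.th32ThreadID);
        if (!desk) continue;

        char name[256] = {0};
        DWORD needed = 0;
        if (GetUserObjectInformationA(desk, UOI_NAME, name, sizeof(name), &needed)
            && strcmp(name, desktopName) == 0)
        {
            HANDLE proc = OpenProcess(PROCESS_TERMINATE, FALSE, te.th32OwnerProcessID);
            if (proc)
            {
                TerminateProcess(proc, 0);
                CloseHandle(proc);
            }
        }
    }
    CloseHandle(snap);
}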
835,893
836,657
OpenMP parallelization on a recursive function
I'm trying to use parallelization to improve the refresh rate for drawing a 3D scene with hierarchically ordered objects. The scene drawing algorithm first recursively traverses the tree of objects and, from that, builds an ordered array of the essential data that is needed to draw the scene. Then it traverses that array multiple times to draw objects/overlays, etc.

Since, from what I've read, OpenGL isn't a thread-safe API, I assume the array traversal/drawing code must be done on the main thread, but I'm thinking that I might be able to parallelize the recursive function that fills the array. The key catch is that the array must be populated in the order that the objects occur in the scene, so all functionality that associates a given object with an array index must be done in the proper order; but once the array index has been assigned, I can fill in the data of that array element (which isn't necessarily a trivial operation) using worker threads.

So here's the pseudo-code that I'm trying to get at. I hope you get the idea of the XML-ish thread syntax:

recursivepopulatearray(theobject)
{
    <main thread>
    for each child of theobject
    {
        assign array index
        <child thread(s)>
        populate array element for child object
        </child thread(s)>
        recursivepopulatearray(childobject)
    }
    </main thread>
}

So, is it possible to do this using OpenMP, and if so, how? Are there other parallelization libraries that would handle this better?

Addendum: In response to Davide's request for more clarification, let me explain in a little more detail. Let's say that the scene is ordered like this:

- Bicycle Frame
  - Handle Bars
  - Front Wheel
  - Back Wheel
- Car Frame
  - Front Left Wheel
  - Front Right Wheel
  - Back Left Wheel
  - Back Right Wheel

Now, each of these objects has lots of data associated with it, i.e. location, rotation, size, different drawing parameters, etc. Additionally, I need to make multiple passes over this scene to draw it properly. One pass draws the shapes of the objects, another pass draws text describing the objects, another pass draws connections/associations between the objects if there are any.

Anyway, getting all the drawing data out of these different objects is pretty slow if I have to access it multiple times, so I've decided to use one pass to cache all that data into a one-dimensional array, and then all the actual drawing passes just look at the array. The catch is that because I need to do the OpenGL pushes/pops in the right order, the array must be in the proper depth-first search order that is representative of the tree hierarchy. In the example above, the array must be ordered as follows:

index 0: Bicycle Frame
index 1: Handle Bars
index 2: Front Wheel
index 3: Back Wheel
index 4: Car Frame
index 5: Front Left Wheel
index 6: Front Right Wheel
index 7: Back Left Wheel
index 8: Back Right Wheel

So, the ordering of the array must be serialized properly, but once I have assigned that ordering, I can parallelize the filling of the array. For example, once I've assigned Bicycle Frame to index 0 and Handle Bars to index 1, one thread can take the filling of the array element for the Bicycle Frame while another takes the filling of the array element for the Handle Bars.

OK, I think in clarifying this, I've answered my own question, so thanks Davide. So I've posted my own answer.
Here's a modified piece of pseudo-code that should work:

populatearray(thescene)
{
    recursivepopulatearray(thescene)
    #pragma omp parallel for
    for each element in array
        populate array element based on associated object
}

recursivepopulatearray(theobject)
{
    for each childobject in theobject
    {
        assign array index and associate element with childobject
        recursivepopulatearray(childobject)
    }
}
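For what it's worth, here is a compilable sketch of the same two-phase idea; the Node and Element types are invented for the example, and it should be built with OpenMP enabled (/openmp or -fopenmp):

#include <cstdio>
#include <vector>

struct Node {
    int id;
    std::vector<Node*> children;
};

struct Element {
    const Node* source;
    int cachedId; // stand-in for the expensive-to-gather draw data
};

// Phase 1 (serial): a depth-first walk fixes the array order.
static void assignIndices(const Node* n, std::vector<Element>& out)
{
    for (size_t i = 0; i < n->children.size(); ++i) {
        Element e = { n->children[i], 0 };
        out.push_back(e);
        assignIndices(n->children[i], out);
    }
}

int main()
{
    Node wheel = { 2, std::vector<Node*>() };
    Node frame = { 1, std::vector<Node*>() };
    frame.children.push_back(&wheel);
    Node root  = { 0, std::vector<Node*>() };
    root.children.push_back(&frame);

    std::vector<Element> out;
    assignIndices(&root, out);

    // Phase 2 (parallel): each slot is filled independently.
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(out.size()); ++i)
        out[i].cachedId = out[i].source->id;

    for (size_t i = 0; i < out.size(); ++i)
        std::printf("index %d: node %d\n", (int)i, out[i].cachedId);
    return 0;
}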
835,914
836,666
Qt: adapting signals / binding arguments to slots?
Is there any way to bind arguments to slots, a la boost::bind?

Here's a for-instance. I have a window with a tree view, and I want to allow the user to hide a column from a context menu. I end up doing something like:

void MyWindow::contextMenuEvent (QContextMenuEvent* event)
{
    m_column = view->columnAt (event->x());

    QMenu menu;
    menu.addAction (tr ("Hide Column"), this, SLOT (hideColumn ()));

    // .. run the menu, etc
}

I need to capture the index of the column over which the context menu was activated and store it in a member variable that is used by my window's hideColumn slot:

void MyWindow::hideColumn ()
{
    view->setColumnHidden (m_column, true);
}

What I'd really like is to be able to bind the column number to my slot when I create the menu, so I don't need this member variable. Basically the Qt equivalent of:

menu.addAction (tr ("Hide Column"),
                boost::bind (&MyWindow::hideColumn, this, view->columnAt (event->x())));

Or, better yet, adapting the QAction::triggered signal and attaching it to the QTreeView::hideColumn slot, which takes the column index as an argument:

menu.addAction (tr ("Hide Column"),
                boost::bind (&QTreeView::hideColumn, view, view->columnAt (event->x())));

Is any of this do-able?
AFAIK the only way is to create a QSignalMapper object to do this. It's like an extra level of indirection that can be used to generate a new signal providing the column index. It's a little bit clumsy IME, you can end up with lots of QSignalMapper objects hanging around all the time, but seems to be the best way at this time. (Ideally, IMO, you would be able to just supply any value such as the column index to connect() which would get passed as an argument to the slot, but you can't.)
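A hedged sketch of the QSignalMapper wiring for this case; hideColumnAt is an invented slot on the window that forwards to the view, since I'm not certain QTreeView::hideColumn is declared as a slot:

void MyWindow::contextMenuEvent(QContextMenuEvent* event)
{
    int column = view->columnAt(event->x());

    QMenu menu;
    QAction* hide = menu.addAction(tr("Hide Column"));

    // Parenting the mapper to the menu cleans it up after exec returns.
    QSignalMapper* mapper = new QSignalMapper(&menu);
    mapper->setMapping(hide, column);
    connect(hide, SIGNAL(triggered()), mapper, SLOT(map()));
    connect(mapper, SIGNAL(mapped(int)), this, SLOT(hideColumnAt(int)));

    menu.exec(event->globalPos());
}

void MyWindow::hideColumnAt(int column) // declared as a slot
{
    view->setColumnHidden(column, true);
}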
835,922
835,928
What determines what is written to a C++ pointer when delete is called?
I have a pointer to a given class. Let's say, for example, the pointer is:

0x24083094

That pointer points to:

0x03ac9184

which is the virtual function table of my class. That makes sense to me. In windbg, everything looks correct. I delete said pointer. Now at 0x24083094 is:

0x604751f8

But it isn't some random garbage; that address is put in there every time, it is consistently 0x604751f8! So much so that I can actually use that address to determine if the pointer was deleted, between executions of my application! But why? How does it determine that 0x604751f8 should be written there? For the record, I am using Windows, building under Visual Studio 2003.

I know I can't rely on that value being set, even if it does appear to be consistent, but can I rely on it being different? I.e., 0x03ac9184 will not be at 0x24083094 if the pointer is deleted, right? What is put there could be anything, but 0x03ac9184 will definitely not be there (or else I could still call methods, since that is the virtual function table). Am I right? I feel like I have the answer: can't rely on anything after it is deleted.

Maybe some background will help people see where I am coming from. Essentially, I am trying to fix a bug where a pointer gets deleted out from under me. It's a long story; I won't go into the details. Basically, I am trying to detect that I am in this situation, so I can exit gracefully from my function. I suppose the easiest and best way is to just figure out who actually owns this pointer, and ask him if anything has changed. So I am going to implement a fix like that. It avoids any of this C++ delete hackery I was discussing.

However, the interesting thing is that in our code we have a class called 'BogusObject' that essentially acts as a tray catching people that accidentally dereference freed objects. Basically, we hook our own delete functions and bash the BogusObject class into the vtable of any freed class. Then if someone calls something, they get a nice message saying something to the effect of "hey, something is wrong dude.". This is happening in my case. I.e., 0x604751f8+(some offset) is inside the BogusObject class. But we no longer use BogusObject! It literally isn't set up anywhere (the program even links properly if I completely remove the BogusObject class), and yet I still end up getting the nice message saying something is wrong! But I am now of the opinion that it is a coincidence. For some reason the runtime is putting that 0x604751f8 value in this pointer when it is deleted, and that just happens to correspond with the one class that has a purpose to catch situations like this!
Nothing in the standard determines what gets written there. Visual Studio (at least in debug mode) will often write sentinel values all over the place to help in catching bugs early. This value is not something you can rely on, but if you ever find that value popping up in your program mysteriously, you can assume that somewhere you are referencing deleted memory. See this answer for a list of such values under one compiler.

It's also entirely possible that it's a free-list pointer, pointing to the next piece of free memory. Most memory allocators keep their free memory in a linked list of sorts, using the free memory they are tracking to store the tracking data.

In any case, you MUST NOT use that pointer value for anything you want to keep working, unless you call up Microsoft, get some documentation saying why that value is what it is, and get them to guarantee that it will not change. And even then, know that your code is now tied to one compiler's behaviour. In C++, accessing unallocated memory is undefined and evil.

Edit: You can't even rely on that value changing after a delete. There's nothing that says a compiler needs to modify the data on delete.
836,327
836,879
Stripping Down VS 2008 Win32 DLL to one file
I have a VS-generated C++ Win32 DLL project. It has the following files:

stdafx.h
targetver.h
myProject.h
dllmain.cpp
myProject.cpp
stdafx.cpp

I can remove targetver.h, and merge dllmain.cpp into myProject.cpp. What more can I do to get the simplest file structure, preferably one file? I need to dynamically emit this code file and build it into a Win32 DLL.
If you want a minimalistic file structure, you could just create the files yourself. Start an empty project, or delete all the files. Heck, just make a folder, write main.cpp, and compile it from the command line with cl. Few IDEs really try to minimize files the way you're trying to, but when you create the project you can cut back a little: stdafx.[h, cpp] are for precompiled headers, which you could disable when creating the project.

That said, I don't really see the value in minimizing the number of source files in a compiled-language project. It's not going to have a meaningful impact on the number of output files/DLLs, and, properly used, using more files only helps your code's clarity.
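If the goal is a one-file DLL built from generated code, you may not need a project at all; something like this sketch can be compiled directly (the exported function is of course a placeholder):

// generated.cpp
#include <windows.h>

extern "C" __declspec(dllexport) int __stdcall Add(int a, int b)
{
    return a + b;
}

// Build from a Visual Studio command prompt:
//   cl /nologo /LD generated.cpp

A DllMain is optional here: when you omit it, the CRT supplies a default one.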