507,971
508,010
Do STL maps initialize primitive types on insert?
I have a std::map like this: map<wstring,int> Scores; It stores names of players and scores. When someone gets a score I would simply do: Scores[wstrPlayerName]++; When there is no element in the map with the key wstrPlayerName it will create one, but does it initialize to zero or null before the increment or is it left undefined? Should I test if the element exists every time before increment? I just wondered because I thought primitive-type things are always undefined when created. If I write something like: int i; i++; The compiler warns me that i is undefined and when I run the program it is usually not zero.
operator[] looks like this: Value& map<Key, Value>::operator[](const Key& key); If you call it with a key that's not yet in the map, it will default-construct a new instance of Value, put it in the map under key you passed in, and return a reference to it. In this case, you've got: map<wstring,int> Scores; Scores[wstrPlayerName]++; Value here is int, and ints are default-constructed as 0, as if you initialized them with int(). Other primitive types are initialized similarly (e.g., double(), long(), bool(), etc.). In the end, your code puts a new pair (wstrPlayerName, 0) in the map, then returns a reference to the int, which you then increment. So, there's no need to test if the element exists yet if you want things to start from 0.
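A minimal sketch of that behaviour (the player names here are made up for illustration):

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::wstring, int> Scores;
    Scores[L"alice"]++;                       // inserts ("alice", 0), then increments to 1
    Scores[L"alice"]++;                       // existing entry, now 2
    std::cout << Scores[L"alice"] << "\n";    // prints 2
    std::cout << Scores[L"bob"] << "\n";      // note: a plain lookup with [] also inserts ("bob", 0); prints 0
    return 0;
}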
508,370
508,971
quaternion libraries in C/C++
Any good libraries for quaternion calculations in C/C++? Side note: any good tutorials/examples? I've googled it and been to the first few pages, but maybe you have some demos/labs from compsci or math courses you could/would share? Thanks
I'm a fan of the Irrlicht quaternion class. It is zlib licensed and is fairly easy to extract from Irrlicht: Irrlicht Quaternion Documentation quaternion.h
508,441
523,199
Problems linking static Intel IPP libraries on Linux with g++
I've been trying to move a project over from Xcode to Linux (Ubuntu x86 for now, but hopefully the statically-linked executable will run on an x86 CentOS machine? I hope I hope?). I have the whole project compiling but it fails at the linking stage-- it's giving me undefined references for all functions defined by IPP. This is probably something really small and silly but I've been beating my head over this for a couple days now and I can't get it to work. Here's the compile statement (I also have a makefile that's generating the same errors): g++ -static /opt/intel/ipp/6.0.1.071/ia32/lib/libippiemerged.a /opt/intel/ipp/6.0.1.071/ia32/lib/libippimerged.a /opt/intel/ipp/6.0.1.071/ia32/lib/libippsemerged.a /opt/intel/ipp/6.0.1.071/ia32/lib/libippsmerged.a /opt/intel/ipp/6.0.1.071/ia32/lib/libippcore.a -pthread -I /opt/intel/ipp/6.0.1.071/ia32/include -I tools/include -o main main.cpp pick_peak.cpp get_starting_segments.cpp get_segment_timing_differences.cpp recast_and_normalize_wave_file.cpp rhythm_score.cpp pitch_score.cpp pitch_curve.cpp tools/source/LocalBuffer.cpp tools/source/wave.cpp distance.cpp ...and here is the beginning of the long list of linker errors: ./main.o: In function `main': main.cpp:(.text+0x13f): undefined reference to `ippsMalloc_16s' main.cpp:(.text+0x166): undefined reference to `ippsMalloc_32f' main.cpp:(.text+0x213): undefined reference to `ippsMalloc_16s' Any ideas? FWIW, these are the IPP dependencies in my Xcode project that builds, links, and runs without a problem: "-lippiemerged", "-lippimerged", "-lippsemerged", "-lippsmerged", "-lippcore", Thanks!
Your linking problem is likely due to the fact that your link line is completely backwards: archive libraries should follow source and object files on command line, not precede them. To understand why the order matters, read this. Also note that on Linux statically linked executables are significantly less portable than dynamically linked ones. In general, if you link system libraries dynamically on an older Linux system, it will work on all newer systems (I use ancient RedHat 6.2, and I haven't seen a system on which my executable will not run). This is not true for completely static executables; they may crash in all kinds of "interesting" ways when moved to a system with a different libc from the one against which they were linked.
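To make that concrete, here is the asker's own command with the inputs reordered (sources and objects first, the .a archives last); the paths and flags are unchanged, and whether to keep -static is a separate question per the portability note above:

g++ -static -pthread \
    -I /opt/intel/ipp/6.0.1.071/ia32/include -I tools/include \
    -o main \
    main.cpp pick_peak.cpp get_starting_segments.cpp get_segment_timing_differences.cpp \
    recast_and_normalize_wave_file.cpp rhythm_score.cpp pitch_score.cpp pitch_curve.cpp \
    tools/source/LocalBuffer.cpp tools/source/wave.cpp distance.cpp \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippiemerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippimerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippsemerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippsmerged.a \
    /opt/intel/ipp/6.0.1.071/ia32/lib/libippcore.a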
508,844
508,891
Having trouble initializing an SDL_Surface
I'm trying to set up something in SDL [in C++] where I can draw a one pixel big rectangle. I've got everything in my code working except my second SDL_Surface called rectangle. I'm having trouble initializing it. Here's the line where I try to initialize it: rectangle = SDL_Surface(SDL_DOUBLEBUF | SDL_HWACCEL | SDL_SRCALPHA | SDL_HWSURFACE, screen->format, 1, 1, 16, NULL, clip_rect, 1); Thank you for taking the time to read this and any answers you might choose to give.
I think that the main problem you are having is that there is no SDL_Surface function. To create a new surface, use SDL_CreateRGBSurface. Be sure to call SDL_FreeSurface on the returned surface after you are done with it or you will leak memory. Additionally, I am not sure why you are creating a surface for the rectangle. A cleaner way of drawing a solid-color rectangle is SDL_FillRect without creating a new surface.
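A rough sketch of that suggestion for the one-pixel case, assuming SDL 1.2 and an already-initialised screen surface (the coordinates are made up):

SDL_Rect pixel;
pixel.x = 10;   // where to draw the 1x1 rectangle
pixel.y = 10;
pixel.w = 1;
pixel.h = 1;

// fill it with white, mapped to the screen's pixel format, then present it
SDL_FillRect(screen, &pixel, SDL_MapRGB(screen->format, 255, 255, 255));
SDL_Flip(screen);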
509,642
509,650
C++ 'true' and 'false' keywords suddenly not true or false in Visual C++ 6.0
My compiler (VC++ 6.0 sp6) has apparently gone insane. In certain pieces of code I'm seeing that 'bool mybool = true;' evaluates to and assigns false, and vice versa for true. Changing the true/false keywords to 1/0 makes it work fine. The same code compiles fine elsewhere without changing the true/false keywords. What could possibly cause this? My first thought was RAM or disk corruption, but that all checked out fine. I'm not far from reformatting my drive and reinstalling everything, but I'm terrified I'd still see the same misbehavior. Is it even technically possible for a macro or linked-in library somewhere to screw up the meaning of 'true' and 'false'? UPDATE: Mystery solved. An environment variable flag on my machine was set to 'false' and the way this was interpolated by some preprocessor code redefined the keyword.
A preprocessor macro could certainly do it, although that would be pretty surprising. One way to check if that is the case would be #ifdef true # error "true is defined as a macro" #endif #ifdef false # error "false is defined as a macro" #endif Response to comments: Find a non-header file where you see this behavior, preferably one with few #includes. In the middle of the list of includes, put the #ifdef #error directives. if the error trips you know it's in the first half of includes, if it doesn't it's in the second half. Split the half in half and repeat. When you narrow it down to one header, open that header. If that header includes any headers repeat the process for the list of headers it includes. Eventually you should be able to find the #defines . Tedious, I agree.
509,863
513,575
Transferring vector of objects between C++ DLL and Cpp/CLI console project
I have a C++ library app which talks to a C++ server and I am creating a vector of my custom class objects. But my Cpp/CLI console app(which interacts with native C++ ), throws a memory violation error when I try to return my custom class obj vector. Code Sample - In my native C++ class - std::vector<a> GetStuff(int x) { -- do stuff std::vector<a> vec; A a; vec.push_back(a); --- push more A objs return vec; } In my Cpp/CLI class public void doStuff() { std::vector<a> vec; vec = m_nativeCpp->GetStuff(4); // where nativeCpp is a dynamically allocated class in nativecpp DLL, the app throws up a memory violation error here! } exact error message An unhandled exception of type 'System.AccessViolationException' occurred in CLIConsole.exe -- which is my console cpp/CLI project Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
I'll assume that the native code is in a separately compiled unit, like a .dll. First thing to worry about is the native code using a different allocator (new/delete); you'll get that when it is compiled with /MT or linked to another version of the CRT. Next thing to worry about is STL iterator debugging. You should make sure both modules were compiled with the same setting for _HAS_ITERATOR_DEBUGGING. They won't be the same if the native code was built with an old version of the CRT or is the Release mode build.
509,989
514,406
How to allow 32 bit apps on 64 bit windows to execute 64 bit apps provided in Windows\System32
Say you have an app that you want to provide users the ability to browse the system32 directory and execute programs in (like telnet). What is the best method for supporting this when you need to support XP onwards as a client and 2k onwards for server? Having written all this up I wonder if it's just too much time/effort in providing a browse to do this, where they could just copy it from explorer. Still requires ability to launch. I have found some discussion on Nynaeve. So far it seems there are the following options Create a sysnative folder in windows which will allow you to browse/execute 64 bit. Issues are: only available in Vista/Longhorn, so no support for XP 64 leads to different path naming, can't use same path on multiple versions. will be active for whole of windows, not just our app may not (probably is not) appropriate to do when installing the app allows to specify explicitly through path only which version of the app to launch if there is a 32 bit and 64 bit version Use the windows API to temporarily disable the redirection when showing file lists or executing users run commands. Issues are: Only available on 64 bit - have to mess with GetProcAddress available only under certain service packs must individually identify all locations that this should be implemented user will need to provide separate information about whether this is a 64 bit app or 32 bit. If anybody had some example code which displayed a Windows OpenFile dialog (say using MFC CFileDialog) showing natively for XP/Vista and allowing the viewing of the 64 bit system32 directory, that would be awesome. If anybody had an example of launching the named app, that would also be great! Edit: Currently we use CreateProcess for launching the app (which is failing). err = CreateProcess((wchar_t*)exeName.c_str(), (wchar_t*)cmdLine.c_str(), NULL, NULL, FALSE, CREATE_SEPARATE_WOW_VDM, NULL, workingDir.c_str(), &startupInfo, &processInfo);
I've gone with option 2, For those who might be interested; here is my quick hack at a scoped version of managing the disabling of Wow64 redirection based on notes from MS. Will redirect if the API is available, expects that kernel32.dll is already available. class Wow64RedirectOff { typedef BOOL (WINAPI *FN_Wow64DisableWow64FsRedirection) ( __out PVOID *OldValue ); typedef BOOL (WINAPI *FN_Wow64RevertWow64FsRedirection) ( __in PVOID OldValue ); public: Wow64RedirectOff() { LPFN_Disable = (FN_Wow64DisableWow64FsRedirection)GetProcAddress( GetModuleHandle(TEXT("kernel32")),"Wow64DisableWow64FsRedirection"); if( LPFN_Disable ) { LPFN_Disable(&OldValue); } } ~Wow64RedirectOff() { if( LPFN_Disable ) { FN_Wow64RevertWow64FsRedirection LPFN_Revert = (FN_Wow64RevertWow64FsRedirection)GetProcAddress( GetModuleHandle(TEXT("kernel32")),"Wow64RevertWow64FsRedirection"); if( LPFN_Revert ) { LPFN_Revert(OldValue); } } } private: FN_Wow64DisableWow64FsRedirection LPFN_Disable; PVOID OldValue; }; And thus usage would be Wow64RedirectOff scopedRedirect; //CFileOpen //CreateProcess
510,085
511,740
How to call a python function from a foreign language thread (C++)
I am developing a program that uses DirectShow to grab audio data from media files. DirectShow uses a thread to pass audio data to the callback function in my program, and I let that callback function call another function in Python. I use Boost.Python to wrap my library; the callback function: class PythonCallback { private: object m_Function; public: PythonCallback(object obj) : m_Function(obj) {} void operator() (double time, const AudioData &data) { // Call the callback function in python m_Function(time, data); } }; Here comes the problem: a thread of DirectShow calls my PythonCallback, namely, calls the function in Python. Once it calls, my program just crashes. I found this should be a threading problem. Then I found this document: http://docs.python.org/c-api/init.html It seems that my program can't call a Python function from a thread directly, because there is the Global Interpreter Lock. Python's GIL is so complex, I have no idea how it works. I'm sorry, what I can do is to ask. My question is: what should I do before and after I call a Python function from threads? It may look like this. void operator() (double time, const AudioData &data) { // acquire lock m_Function(time, data); // release lock } Thanks. Victor Lin.
Take a look at PyGILState_Ensure()/PyGILState_Release(), from PEP 311 http://www.python.org/dev/peps/pep-0311/ Here is an example taken from the PEP itself: void SomeCFunction(void) { /* ensure we hold the lock */ PyGILState_STATE state = PyGILState_Ensure(); /* Use the Python API */ ... /* Restore the state of Python */ PyGILState_Release(state); }
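Applied to the callback in the question, it would look roughly like this; this is only a sketch and assumes the interpreter (and threading, via PyEval_InitThreads) was initialised in the main thread before DirectShow starts calling back:

void operator() (double time, const AudioData &data)
{
    // acquire the GIL before touching any Python object
    PyGILState_STATE state = PyGILState_Ensure();
    try {
        m_Function(time, data);     // call back into Python
    } catch (...) {
        PyGILState_Release(state);  // never leave the GIL held on the error path
        throw;
    }
    PyGILState_Release(state);
}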
510,098
510,151
g++ include all /usr/include recursively
I'm trying to compile a simple program, with #include <gtkmm.h> The path to gtkmm.h is /usr/include/gtkmm-2.4/gtkmm.h. g++ doesn't see this file unless I specifically tell it -I /usr/include/gtkmm-2.4. My question is, how can I have g++ automatically look recursively through all the directories in /usr/include for all the header files contained therein, and why is this not the default action?
In this case, the correct thing to do is to use pkg-config in your Makefile or buildscripts: # Makefile ifeq ($(shell pkg-config --modversion gtkmm-2.4),) $(error Package gtkmm-2.4 needed to compile) endif CXXFLAGS += `pkg-config --cflags gtkmm-2.4` LDLIBS += `pkg-config --libs gtkmm-2.4` BINS = program program_OBJS = a.o b.o c.o all: $(BINS) program: $(program_OBJS) $(CXX) $(LDFLAGS) $^ $(LOADLIBES) $(LDLIBS) -o $@ # this part is actually optional, since it's covered by gmake's implicit rules %.o: %.cc $(CXX) -c $(CPPFLAGS) $(CXXFLAGS) $< -o $@ If you're missing gtkmm-2.4, this will produce $ make Package gtkmm-2.4 was not found in the pkg-config search path. Perhaps you should add the directory containing `gtkmm-2.4.pc' to the PKG_CONFIG_PATH environment variable No package 'gtkmm-2.4' found Makefile:3: *** Package gtkmm-2.4 needed to compile. Stop. Otherwise, you'll get all the appropriate paths and libraries sucked in for you, without specifying them all by hand. (Check the output of pkg-config --cflags --libs gtkmm-2.4: that's far more than you want to type by hand, ever.)
510,198
513,617
How to implement RFC 3393 (Ipdv packet delay varation) in C?
I am building an Ethernet application in which I will be sending packets from one side and receiving them on the other side. I want to calculate the delay in packets at the receiver side as in RFC 3393. So I have to put a timestamp in the packet at the sender side and then take a timestamp at the receiver side as soon as I receive the packet. Subtracting the values I will get the difference in timestamps, and then subtracting this value from the subsequent difference I will get the one-way IPDV delay. The two clocks are not synchronized. So any help is greatly appreciated. Thank you.
RFC 3393 is for measuring the variance in the packet delay, not for measuring the delay itself. To give an example: you're writing a video streaming application. You want to buffer as little video data as possible (so that the video starts playing as soon as possible). Let's say that data always always always takes 20ms to get from machine A to machine B. In this case (and assuming that machine A can send the video data as fast as it needs playing), you don't need any buffer at all. As soon as you receive the first frame, you can start playing, safe in the knowledge that by the time the next frame is needed, it will have arrived (because the data always takes exactly 20ms to arrive and machine A is sending at least as fast as you're playing). This works no matter how long that 20ms is, as long as it's always the same. It could be 1000ms - the first frame takes 1000ms to arrive, but you can still start playing as soon as it arrives, because the next frame will also take 1000ms and was sent right behind the first frame - in other words, it's already on its way and will be here momentarily. Obviously the real world isn't like this. Take the other extreme: most of the time, data arrives in 20ms. Except sometimes, when it takes 5000ms. If you keep no buffer and the delay on frames 1 through 50 is 20ms, then you get to play the first 50 frames without a problem. Then frame 51 takes 5000ms to arrive and you're left without any video data for 5000ms. The user goes and visits another site for their cute cat videos. What you really needed was a buffer of 5000ms of data - then you'd have been fine. Long example, short point: you're not interested in what the absolute delay on the packets is, you're interested in what the variance in that delay is - that's how big your buffer has to be. To measure the absolute delay, you'd have to have the clocks on both machines be synchronised. Machine A would send a packet with timestamp 1233784922728 and when that arrived at machine B at time 1233784922748, you'd know the packet had taken 20ms to get there. But since you're interested in the variance, you need (as RFC 3393 describes) several packets from machine A. Machine A sends packet 1 with timestamp 1233784922728, then 10ms later sends packet 2 with timestamp 1233784922738, then 10ms later sends packet 3 with timestamp 1233784922748. Machine B receives packet 1 at what it thinks is timestamp 1233784922128. The one-way delay between machine A and machine B has in this case (from machine B's perspective) been -600ms. This is obviously complete rubbish, but we don't care. Machine B receives packet 2 at what it thinks is timestamp 1233784922158. The one-way delay has been -580ms. Machine B receives packet 3 at what it thinks is timestamp 1233784922168. The one-way delay was again -580ms. As above, we don't care what the absolute delay is - so we don't even care if it's negative, or three hours, or whatever. What we care about is that the amount of delay varied by 20ms. So you need a buffer of 20ms of data. Note that I'm entirely glossing over the issue of clock drift here (that is, the clocks on machines A and B running at slightly different rates, so that for example machine A's time advances at a rate of 1.00001 seconds for every second that actually passed). While this does introduce inaccuracy in the measurements, its practical effect isn't likely to be an issue in most applications.
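To make the bookkeeping concrete, a small receiver-side sketch (names and types are illustrative; the sender's timestamp is assumed to travel inside the packet, in milliseconds):

// The absolute one-way delay (recv - send) is meaningless with unsynchronised
// clocks, but its variation between consecutive packets is exactly the IPDV.
long long prev_delay = 0;
bool have_prev = false;

void on_packet(long long send_ms /* from the packet */, long long recv_ms /* local clock */)
{
    long long delay = recv_ms - send_ms;      // may well be negative; that's fine
    if (have_prev) {
        long long ipdv = delay - prev_delay;  // RFC 3393 delay variation for this pair
        // feed ipdv into your jitter / buffer-sizing statistics here
    }
    prev_delay = delay;
    have_prev = true;
}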
510,441
510,501
How many threads to create and when?
I have a networking Linux application which receives RTP streams from multiple destinations, does very simple packet modification and then forwards the streams to the final destination. How do I decide how many threads I should have to process the data? I suppose, I cannot open a thread for each RTP stream as there could be thousands. Should I take into account the number of CPU cores? What else matters? Thanks.
It is important to understand the purpose of using multiple threads on a server; many threads in a server serve to decrease latency rather than to increase speed. You don't make the CPU faster by having more threads, but you make it more likely that a thread will always be available within a given period to handle a request. Having a bunch of threads which just move data in parallel is a rather inefficient shot-gun (creating one thread per request naturally just fails completely). Using the thread pool pattern can be a more effective, focused approach to decreasing latency. Now, in the thread pool, you want to have at least as many threads as you have CPUs/cores. You can have more than this, but the extra threads will again only decrease latency and not increase speed. Think of the problem of organizing server threads as akin to organizing a line in a supermarket. Would you like to have a lot of cashiers who work more slowly or one cashier who works super fast? The problem with the fast cashier isn't speed but rather that one customer with a lot of groceries might still take up a lot of their time. The need for many threads comes from the possibility that a few requests will take a lot of time and block all your threads. By this reasoning, whether you benefit from many slower cashiers depends on whether your customers have the same number of groceries or wildly different numbers. Getting back to the basic model, what this means is that you have to play with your thread number to figure out what is optimal given the particular characteristics of your traffic, looking at the time taken to process each request.
510,788
510,800
How to pass an interface pointer to a thread?
Note: Using raw Win32 CreateThread() API No MFC An interface is simply a pointer to a vtable Question: How to pass an interface pointer to a thread? Illustration: IS8Simulation *pis8 = NULL; ... CoCreateInstance( clsid, NULL, CLSCTX_LOCAL_SERVER, __uuidof(IS8Simulation), (void **) &pis8); ... hThread = CreateThread( NULL, 0, SecondaryThread, //interface pointer pis8, 0, &dwGenericThreadID); ... DWORD WINAPI SecondaryThread(LPVOID iValue) { //using iValue accordingly //E.g.: iValue->Open Regards
As was stated below, passing a COM interface pointer between threads is not safe. Assuming you know what you are doing: hThread = CreateThread( NULL, 0, SecondaryThread, (LPVOID) pis8, 0, &dwGenericThreadID); DWORD WINAPI SecondaryThread(LPVOID iValue) { ((IS8Simulation*) iValue)->Open(); return 0; } Thread safe version: void MainThread() { IStream* psis8; HRESULT res = CoMarshalInterThreadInterfaceInStream (IID_IS8SIMULATION, pis8, &psis8); if (FAILED(res)) return; hThread = CreateThread( NULL, 0, SecondaryThread, (LPVOID) psis8, 0, &dwGenericThreadID ); } DWORD WINAPI SecondaryThread(LPVOID iValue) { IS8Simulation* pis8; HRESULT res = CoGetInterfaceAndReleaseStream((IStream*) iValue, IID_IS8SIMULATION, &pis8); if (FAILED(res)) return (DWORD) res; pis8->Open(); return 0; }
510,845
510,936
How to determine if the current window is the active window?
How can I tell if my window is the current active window? My current guess is to do GetForegroundWindow and compare the HWND with that of my window. Is there a better method than that? I'm using Win32 API / MFC.
Yes, that's the only way that I'm aware of. But you have to handle the fact that GFW can return NULL. Typically, this happens when another desktop (e.g. the screen saver desktop) is active. Note that use of a saver password can affect whether a different desktop is used (this is windows version-dependent and I can't remember the details of how different versions work). Also this code won't work properly in debug mode under Visual Studio, because you will get VS's window handle. Other than that everything's peachy :-)
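In code the check is essentially a one-liner (sketch; m_hWnd stands in for your window's handle):

HWND fg = GetForegroundWindow();              // may be NULL, e.g. when another desktop is active
bool isActive = (fg != NULL && fg == m_hWnd);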
511,019
511,044
Mac OS X: Where should I store save games for a game delivered as a bundle?
I'm porting a windows game to Mac OS X. I was wondering where I should store game data such as saved games, user profiles, etc and how I can retrieve that path programmatically in C++? The game will be delivered as a "modern bundle" as specified here
Save it under ~/Library/Application Support/Your Game Name/ where "~" stands for the home directory of the user playing your game. You may want to give this a read: http://cocoadevcentral.com/articles/000084.php
511,768
511,779
How to use my logging class like a std C++ stream?
I've a working logger class, which outputs some text into a richtextbox (Win32, C++). Problem is, I always end up using it like this: stringstream ss; ss << someInt << someString; debugLogger.log(ss.str()); Instead, it would be much more convenient to use it like a stream, as in: debugLogger << someInt << someString; Is there a better way than forwarding everything to an internal stringstream instance? If I'd do this, when would I need to flush?
You need to implement operator << appropriately for your class. The general pattern looks like this: template <typename T> logger& operator <<(logger& log, T const& value) { log.your_stringstream << value; return log; } Notice that this deals with (non-const) references since the operation modifies your logger. Also notice that you need to return the log parameter in order for chaining to work: log << 1 << 2 << endl; // is the same as: ((log << 1) << 2) << endl; If the innermost operation didn't return the current log instance, all other operations would either fail at compile-time (wrong method signature) or would be swallowed at run-time.
511,998
512,175
How can I capture a stack trace on the QA computers
I am writing a Qt/C++ application, up until this month I have been using Mingw for compiling and drmingw for getting the stack trace from the QA people. However I recently converted over to MSVC++ 9 so that I can use the phonon framework. The downside is that now the stack traces from drmingw are useless. What do others use?
You can use Dr Watson to catch unhandled exceptions and generate a dump file. The dump can then be opened in Visual Studio or WinDBG to see the stack of all threads, as long as you have the symbol files. http://msdn.microsoft.com/en-us/library/cc265791.aspx
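If you would rather write the dump from inside the application instead of relying on Dr Watson, one common approach is an unhandled-exception filter that calls MiniDumpWriteDump from DbgHelp. This is only a sketch (link against Dbghelp.lib; the file name is arbitrary):

#include <windows.h>
#include <dbghelp.h>

LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* info)
{
    HANDLE file = CreateFile(TEXT("crash.dmp"), GENERIC_WRITE, 0, NULL,
                             CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mei;
        mei.ThreadId = GetCurrentThreadId();
        mei.ExceptionPointers = info;
        mei.ClientPointers = FALSE;
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

// early in main()/WinMain():
// SetUnhandledExceptionFilter(WriteCrashDump);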
512,804
512,829
C++ Newbie: Having all sorts of problems linking
I am having several problems with tessdll in Visual Studio 2008. FYI, I created this app as an MFC application, I did this just to take advantage of the simple GUI I needed. It is just straight C++ and win32 from there on out. This builds fine as a debug release for some reason (as I have included the header files and lib files that I need, and the dll resides in every directory I could put it......). So, there is a linking problem during building a release version: Linking... MTGOBot.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: __thiscall TessDllAPI::TessDllAPI(char const *)" (__imp_??0TessDllAPI@@QAE@PBD@Z) MTGOBot.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: __thiscall TessDllAPI::~TessDllAPI(void)" (__imp_??1TessDllAPI@@QAE@XZ) MTGOBot.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: int __thiscall TessDllAPI::BeginPage(unsigned int,unsigned int,unsigned char *,unsigned char)" (__imp_?BeginPage@TessDllAPI@@QAEHIIPAEE@Z) MTGOBot.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: struct ETEXT_STRUCT * __thiscall TessDllAPI::Recognize_all_Words(void)" (__imp_?Recognize_all_Words@TessDllAPI@@QAEPAUETEXT_STRUCT@@XZ) C:\CPP Projects\Visual Studio 2008\Projects\MTGO SO Bot\MTGO SO Bot\Release\MTGO SO Bot.exe : fatal error LNK1120: 4 unresolved externals Also, for reference, the source to tessdll.h can be found here: http://code.google.com/p/tesseract-ocr/source/browse/trunk/tessdll.h?r=165 A few more details: I debug from the toolbar and use the integrated debugger. I use Batch Build to create the release version.
Without seeing the project settings, this is tough. Things to check (differences between debug and release settings): 1) Are you including the .lib in the release build? 2) Did you accidentally define the preprocessor directive for tessdll? I'd walk through the settings, switching back-and-forth between debug and release and see what was accidentally added/forgotten. The existence of the DLL is only required for run-time. You're not getting that far.
512,915
513,706
Wanted: a C++ template idea to catch an issue, but at compile time?
We have a const array of structs, something like this: static const SettingsSuT _table[] = { {5,1}, {1,2}, {1,1}, etc }; the structure has the following: size_bytes: num_items: Other "meta data" members So the "total size" is size_bytes*num_items for a single element. All of this information is in the const array, available at compile time. But, please note, the total size of _table is not related to the size of the EEPROM itself. _table does not mirror the EEPROM, it only describes the layout, usage, and other "meta data" type information we need. But, you can use this meta data to determine the amount of EEPROM we are using. The array simply describes the data that is stored in an external EEPROM, which has a fixed/maximum size. As features are added and removed, the entries in the const array changes. We currently have a runtime check of the total size of the data to insure that it does not exceed the EEPROM size. However, we have been changing over many of these runtime checks to static_assert style template checks, so that the build stops immediately. I'm not a template expert, so could use some help on this one. So, the question: how to create a template to add up the size of all the elements (multiplying the values of each element, and then adding all the results) and then do a static_assert and stop the build if they exceed the magic number size of the EEPROM. I was looking at the typical recursive factorial template example as one approach, but it can not access the array, it requires a const value ( I think ). thank you very much for any help,
Your problem is that they are constant, but they are not constant expressions when evaluated: // f is constant, but its value not known at compile-time int const f = rand() % 4; What you need are true constant expressions. You can use boost::mpl to make up an mpl vector of mpl pairs, each with a pair of integral constants: using namespace boost::mpl; typedef vector< pair< int_<5>, int_<1> >, pair< int_<1>, int_<2> >, pair< int_<1>, int_<1> > > numbers; Now, you can iterate over the items of it using boost::mpl algorithms. Each int_ exposes a static int constant value set to the value you told it. That will evaluate to a constant expression: // get at the first element of the pair, located in the first element // of the vector. Then get its ::value member. int array[at<numbers, 0>::type::first::value]; And that would actually make that array contain 5 elements. Website of boost::mpl Reference Manual: Here
512,941
512,965
C/C++ read a byte from an hexinput from stdin
Can't exactly find a way on how to do the following in C/C++. Input: hexadecimal values, for example: ffffffffff... I've tried the following code in order to read the input: uint16_t twoBytes; scanf("%x",&twoBytes); That works fine and all, but how do I split the 2 bytes into 1-byte uint8_t values (or maybe even read the first byte only)? Would like to read the first byte from the input, and store it in a byte matrix in a position of choosing. uint8_t matrix[50][50] Since I'm not very skilled in formatting / reading from input in C/C++ (and have only used scanf so far) any other ideas on how to do this easily (and fast if it goes) are greatly appreciated. Edit: Found an even better method by using the fread function as it lets one specify how many bytes it should read from the stream (stdin in this case) and save to a variable/array. size_t fread ( void * ptr, size_t size, size_t count, FILE * stream ); Parameters ptr - Pointer to a block of memory with a minimum size of (size*count) bytes. size - Size in bytes of each element to be read. count - Number of elements, each one with a size of size bytes. stream - Pointer to a FILE object that specifies an input stream. cplusplus ref
%x reads an unsigned int, not a uint16_t (though they may be the same on your particular platform). To read only one byte, try this: uint32_t byteTmp; scanf("%2x", &byteTmp); uint8_t byte = byteTmp; This reads an unsigned int, but stops after reading two characters (two hex characters equal eight bits, or one byte).
513,033
513,100
How to create type-safe int - enum in C++?
I need to create many classes that are somewhere between an integer and an enum, i.e. they have the arithmetic of an integer but are not implicitly converted to int.
Have a look at the answer to this question - BOOST_STRONG_TYPEDEF did exactly what I wanted. // macro used to implement a strong typedef. strong typedef // guarentees that two types are distinguised even though the // share the same underlying implementation. typedef does not create // a new type. BOOST_STRONG_TYPEDEF(T, D) creates a new type named D // that operates as a type T.
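Usage is a one-liner; for example (illustrative only -- the header location has moved between Boost versions, and in recent versions the converting constructor is explicit):

#include <boost/strong_typedef.hpp>

BOOST_STRONG_TYPEDEF(int, Score)   // Score behaves like an int arithmetically...

void award(Score s);

int main()
{
    Score s(10);
    Score bonus(5);
    s = Score(s + bonus);   // arithmetic goes through the underlying int
    award(s);               // fine
    // award(42);           // ...but there is no implicit conversion from plain int
    return 0;
}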
513,508
513,520
Simple way to validate command line arguments
How to check if argv (argument vector) contains a char, i.e.: A-Z. Would like to make sure that argv only contains unsigned integers. For example: if argv[1] contained "7abc7\0" - ERROR if argv[1] contains "1234\0" - OK
bool isuint(char const *c) { while (*c) { if (!isdigit(*c++)) return false; } return true; } ... if (isuint(argv[1])) ... Additional error checking could be done for a NULL c pointer and an empty string, as desired. update: (added the missing c++)
514,194
514,205
Using enum inside types - Compiler warning C4482 C++
I am using the fully qualified name of the enum inside a method in one of my classes. But I am getting a compiler warning which says "warning C4482: nonstandard extension used: enum 'Foo' used in qualified name". In C++, do we need to use enums without the qualified name? But IMO, that looks ugly. Any thoughts?
Yes, enums don't create a new "namespace", the values in the enum are directly available in the surrounding scope. So you get: enum sample { SAMPLE_ONE = 1, SAMPLE_TWO = 2 }; int main() { std::cout << "one = " << SAMPLE_ONE << std::endl; return 0; }
514,199
514,250
Example of using FindFirstFileEx() with specific search criteria
I asked about finding in subdirs with criteria. First answer was use FindFirstFileEx(). It seems the function is no good for this purpose or I'm using it wrong. So can someone explain how I would go about searching in a folder, and all its subfolders, for files that match (to give some sample criteria) *.doc;*.txt;*.wri; and are newer than 2009-01-01? Please give a specific code example for those criteria so I know how to use it. If it isn't possible, is there an alternative for doing this not-at-all-obscure task??? I am becoming quite baffled that so far there aren't well known/obvious tools/ways to do this.
From MSDN: If you refer to the code fragment in that page: #include <windows.h> #include <tchar.h> #include <stdio.h> void _tmain(int argc, TCHAR *argv[]) { WIN32_FIND_DATA FindFileData; HANDLE hFind; if( argc != 2 ) { _tprintf(TEXT("Usage: %s [target_file]\n"), argv[0]); return; } _tprintf (TEXT("Target file is %s\n"), argv[1]); hFind = FindFirstFileEx(argv[1], FindExInfoStandard, &FindFileData, FindExSearchNameMatch, NULL, 0); if (hFind == INVALID_HANDLE_VALUE) { printf ("FindFirstFileEx failed (%d)\n", GetLastError()); return; } else { _tprintf (TEXT("The first file found is %s\n"), FindFileData.cFileName); FindClose(hFind); } } You'll see that you can call FindFirstFileEx, where argv1 is a string (LPCSTR) pattern to look for, and &FindFileData is a data structure that contains file info of the found data.. hFind is the handle you use on subsequent calls with FindNextFile.. I think you can also add more search parameters by using the fourth and sixth parameter to FindFirstFileEx. Good luck! EDIT: BTW, I think you can check a file or dir's attributes by using GetFileAttributes() .. Just pass the filename found in FileFindData.. (filename can refer to a file's name or a directory name I think) EDIT: MrVimes, here's what you could do (in pseudocode) find the first file (match with *) Check the file find data if it is ".", ".." (these are not really directories or files) if check passed, check file find data if it has the attributes you are looking for (i.e. check filename, file attributes, even file creation time can be checked in the file find data, and what not) and do whatever with it if check passed, do whatever you need to do with the file if check failed, either call findnextfile or end, up to you Something like that..
514,239
514,415
MS SQL stored procedure returned result sets with ODBC
I have a stored procedure and if the stored procedure does this: SELECT 0 As Ret DELETE FROM table where value1 = 1 Returns 1 row result with its value of 0 and column name Ret But if I do this: DELETE FROM table where value1 = 1 SELECT 0 As Ret I get no returned results. My question is, how do I get the second variation to return a value. I'm using C++ and ODBC.
OK, I found that you can use the ODBC call SQLMoreResults to get the next result set. So you can keep calling this SQLMoreResults function until there are no more result sets left. In my case, after calling SQLMoreResults I got my expected result set. This is pretty cool because it means that a single stored procedure can return multiple result sets. I never knew that it could. @Sambo99 and @shakalpesch, your suggestions also work and return only 1 result.
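The resulting consumption loop looks roughly like this (sketch; column binding and error handling trimmed):

SQLRETURN rc;
do {
    // fetch all rows of the current result set (e.g. the Ret column)
    while (SQL_SUCCEEDED(SQLFetch(hstmt))) {
        // read the bound columns here
    }
    rc = SQLMoreResults(hstmt);   // advance to the next result set, if any
} while (rc != SQL_NO_DATA);      // real code should also bail out on SQL_ERROR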
514,420
514,495
How to validate numeric input in C++
I'd like to know how to limit an input value to signed decimals using std::cin.
double i; int peeked; string discard; //Reading the value cin >> i; //Numeric input validation if(!cin.eof()) { peeked = cin.peek(); if(peeked == 10 && cin.good()) { //Good! cout << "i is a decimal"; } else { cout << "i is not a decimal"; cin.clear(); cin >> discard; } } This also gives an error message with the input -1a2.0, avoiding the assignment of just -1 to i.
514,435
514,468
Templatized branchless int max/min function
I'm trying to write a branchless function to return the MAX or MIN of two integers without resorting to if (or ?:). Using the usual technique I can do this easily enough for a given word size: inline int32 imax( int32 a, int32 b ) { // signed for arithmetic shift int32 mask = a - b; // mask < 0 means MSB is 1. return a + ( ( b - a ) & ( mask >> 31 ) ); } Now, assuming arguendo that I really am writing the kind of application on the kind of in-order processor where this is necessary, my question is whether there is a way to use C++ templates to generalize this to all sizes of int. The >>31 step only works for int32s, of course, and while I could copy out overloads on the function for int8, int16, and int64, it seems like I should use a template function instead. But how do I get the size of a template argument in bits? Is there a better way to do it than this? Can I force the mask T to be signed? If T is unsigned the mask-shift step won't work (because it'll be a logical rather than arithmetic shift). template< typename T > inline T imax( T a, T b ) { // how can I force this T to be signed? T mask = a - b; // I hope the compiler turns the math below into an immediate constant! mask = mask >> ( (sizeof(T) * 8) - 1 ); return a + ( ( b - a ) & mask ); } And, having done the above, can I prevent it from being used for anything but an integer type (eg, no floats or classes)?
EDIT: This answer is from before C++11. Since then, C++11 and later have offered make_signed<T> and much more as part of the standard library. Generally, looks good, but for 100% portability, replace that 8 with CHAR_BIT (or std::numeric_limits<unsigned char>::digits) since it isn't guaranteed that characters are 8-bit. Any good compiler will be smart enough to merge all of the math constants at compile time. You can force it to be signed by using a type traits library, which would usually look something like (assuming your numeric_traits library is called numeric_traits): typename numeric_traits<T>::signed_type x; An example of a manually rolled numeric_traits header could look like this: http://rafb.net/p/Re7kq478.html (there is plenty of room for additions, but you get the idea). Or better yet, use boost: typename boost::make_signed<T>::type x; EDIT: IIRC, signed right shifts don't have to be arithmetic. It is common, and certainly the case with every compiler I've used. But I believe that the standard leaves it up to the compiler whether right shifts are arithmetic or not on signed types. In my copy of the draft standard, the following is written: The value of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a nonnegative value, the value of the result is the integral part of the quotient of E1 divided by the quantity 2 raised to the power E2. If E1 has a signed type and a negative value, the resulting value is implementation defined. But as I said, it will work on every compiler I've seen :-p.
514,637
514,686
Is it more efficient to branch or multiply?
I am trying to optimize a small, highly used function which uses the high bits in an unsigned short int to indicate the values of an array to sum together. At first I was using the obvious approach shown below. Please note that loop unrolling is not explicitly shown as it should be done by the compiler. int total = 0; for(unsigned short mask = 0x0001, j = 0; mask != 0; mask <<= 1, j++){ if (i & mask){ total += value[j]; } } However, later I thought it might be better to remove the branching to help CPU pipelining and came up with the following. int total = 0; for(unsigned short mask = 0x0001, j = 0; mask != 0; mask <<= 1, j++){ total += ((i & mask) != 0) * value[j]; } Note that since (i & mask) does not result in a boolean answer, the comparison with 0 forces the result to be either 1 or 0. Although this second approach eliminates the if-statement from this section of the code, the second solution needs to run a multiplication of 0 or 1 on every iteration in addition to the rest of the equation. Which code will run faster?
You could make it branchless without a multiply. It looks like for each bit set you are using that bit position as an index into an array. First, you can easily extract bits set with: unsigned short set_mask= i & -i; i&= i - 1; Then, you can get the bit index by counting the bits set in (set_mask - 1). There's a constant time formula for this. Some platforms also have an intrinsic to get the bit index of a bit set which is probably faster. x86 has bsr, PPC has cntlz. So the answer is the branchless multiplyless version is probably fastest :)
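Putting those pieces together, a sketch of the whole loop; it uses the GCC/Clang __builtin_popcount intrinsic for the bit-index step, so on other compilers substitute a small popcount of your own (or a bit-scan intrinsic such as _BitScanForward):

int total = 0;
unsigned short bits = i;                      // i and value[] are from the question
while (bits) {
    unsigned short set_mask = bits & -bits;   // isolate the lowest set bit
    bits &= bits - 1;                         // clear it for the next iteration
    int j = __builtin_popcount((unsigned)(set_mask - 1));  // index = number of lower bit positions
    total += value[j];
}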
514,824
514,902
Does C++ deep-initialize class members?
I have a class in C++ with the following member: map< someEnum, vector<SomeObject*>* > someMap So I have a map that gives me a vector of objects for each enumeration I have. For the life of me, I cannot understand how C++ is initializing these objects. Does it deep initialize them by default? If not, what do I need to do? I'm getting segmentation faults no matter how I try to do this (and I have tried everything), so I'm guessing I'm missing something conceptually. I should note that I tried to use: map< someEnum, vector<SomeObject*> > someMap as well and that didn't work as well. In this case, does C++ deep-initialize the vector?
The rule is: if an STL container contains pointers to objects, it does not create objects on the heap and assign them to these pointers. If, however, it contains objects themselves, it does call the default constructor of each contained object and thus initialises them. What you have here is a map containing pointers (no matter what kind). So do not expect the map to make these pointers point to memory.
514,908
514,925
Best way to return list of objects in C++?
It's been a while since I programmed in C++, and after coming from python, I feel soooo in a straight jacket, ok I'm not gonna rant. I have a couple of functions that act as "pipes", accepting a list as input, returning another list as output (based on the input), this is in concept, but in practice, I'm using std::vector to represent the list, is that acceptable? further more, I'm not using any pointers, so I'm using std::vector<SomeType> the_list(some_size); as the variable, and returning it directly, i.e. return the_list; P.S. So far it's all ok, the project size is small and this doesn't seem to affect performance, but I still want to get some input/advice on this, because I feel like I'm writing python in C++.
The only thing I can see is that you're forcing a copy of the list you return. It would be more efficient to do something like: void DoSomething(const std::vector<SomeType>& in, std::vector<SomeType>& out) { ... // no need to return anything, just modify out } Because you pass in the list you want to return, you avoid the extra copy. Edit: This is an old reply. If you can use a modern C++ compiler with move semantics, you don't need to worry about this. Of course, this answer still applies if the object you are returning DOES NOT have move semantics.
514,977
514,991
calling base class functions
I have the following classes. class A { public: void fun(); }; class B: public A { }; class C: public A { }; A * ptr = new C; Is it ok to do something like below? Will I have problems if I introduce virtual functions in the base class? ((B *)ptr)->fun(); This may look stupid, but I have a function that calls A's function through B and I don't want to change that.
You can't cast an A* pointing to Class C as a B* because Class C doesn't have any relation with Class B. You'll get undefined behavior which will probably be the wrong function called and stack corruption. If you intended for class C to derive from class B then you could. However, you wouldn't need to. If class C doesn't have fun() defined, it will inherit A's. You didn't declare fun() virtual though so you'll get strange behavior if you even implement C::fun() or B::fun(). You almost certainly want fun() to be declared virtual.
514,981
515,014
Suggestion for template book for C++?
I am learning templates. Which book is worth buying for doing template programming? I already have The C++ Programming Language and Effective C++.
Those two books are pretty good in my opinion and they helped me a lot C++ Templates: The Complete Guide by David Vandevoorde and Nicolai M. Josuttis Modern C++ Design by Andrei Alexandrescu The first one explains how templates work. The second book is more about how to use them. I recommend you to read the first book before starting with Modern C++ Design because that's heavy stuff.
515,071
515,082
Destructor called on object when adding it to std::list
I have a Foo object, and a std::list holding instances of it. My problem is that when I add a new instance to the list, it first calls the ctor but then also the dtor. And then the dtor on another instance (according to the this pointer). A single instance is added to the list, but since its dtor (along with its parents) is called, the object can't be used as expected. Here's some simplified code to illustrate the problem: #include <iostream> #include <list> class Foo { public: Foo() { int breakpoint = 0; } ~Foo() { int breakpoint = 0; } }; int main() { std::list<Foo> li; li.push_back(Foo()); }
When you push_back() your Foo object, the object is copied to the list's internal data structures, therefore the Dtor and the Ctor of another instance are called. All standard STL container types in C++ take their items by value, therefore copying them as needed. For example, whenever a vector needs to grow, it is possible that all values in the vector get copied. Maybe you want to store pointers instead of objects in the list. By doing that, only the pointers get copied instead of the object. But, by doing so, you have to make sure to delete the objects once you are done: for (std::list<Foo*>::iterator it = list.begin(); it != list.end(); ++it) { delete *it; } list.clear(); Alternatively, you can try to use some kind of 'smart pointer' class, for example from the Boost libraries.
515,100
515,143
Why does GCC look at private constructors when matching functions?
I'm very busy right now debugging some code, so I can't cook up a complete example, but this basically describes my problem: class Base{}; class MyX:public Base { ... }; class Derived:Base { ... }; template<class X> class MyClass:Derived { private: MyClass(const MyClass& ) :x() {} public: MyClass(const X& value) :x(value) {} }; .... MyX x; MyClass<MyX>(x); This gives me an error like this: error: there are two possible constructors MyClass<X>(const MyClass<X>&) and MyClass<X>(const X&)
MyClass<MyX>(x); is parsed as MyClass<MyX> x; But MyClass<MyX> does not have a default constructor. Try giving it a name: MyClass<MyX> p(x);
515,149
515,437
Try/Catch a segmentation fault on Linux
I have a Linux C++ application and I'd like to test an object pointer for validity before dereferencing it. However try/catch doesn't work for this on Linux because of the segmentation fault. How can this be done?
If you have a scenario where many pointers across your app reference the same limited-lifetime objects, a popular solution is to use boost smart pointers. Edit: in C++11, both of these types are available in the standard library You would want to use shared_ptr for pointer(s) that are responsible for the lifetime of your object and weak_ptr for the other pointers, which may become invalid. You'll see that weak_ptr has the validity check you're asking for built in.
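A sketch of how the check reads in practice (shown with the std:: names; with Boost substitute boost::shared_ptr / boost::weak_ptr, and Object is a made-up class):

std::shared_ptr<Object> owner = std::make_shared<Object>();  // controls the lifetime
std::weak_ptr<Object> observer = owner;                      // non-owning reference

if (std::shared_ptr<Object> p = observer.lock()) {
    p->doSomething();   // still alive, and stays alive while p exists
} else {
    // already destroyed: no dereference, no segfault
}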
515,480
515,883
How to set up headers and libraries for Linux development
I recently set up, for a learning exercise, an Ubuntu desktop PC with KDE 4.2, installed Eclipse and started to look for information on how to develop for KDE. I know there's KDevelop and will probably have a look at that at some time in the future. Right now, however, I don't have the correct headers and libraries for creating KDE applications in C/C++ using Eclipse. If I have the following: #include <kapplication.h> it fails to compile since there are dependencies on other header files that are not present on my hard disk or reference classes that aren't declared anywhere. So, the question is, what packages do I need to install in order to have the correct set of headers to allow me to write applications for KDE 4.2? Are there any packages I shouldn't have? Alternatively, if there are no packages then where can I get the appropriate files? As a corollary, are there any good tutorials on KDE development, something like the Petzold Windows book? EDIT: Clarifying what I'm really after: where can I download the correct set of header files / libraries in order to build a KDE application? IDEs to compile code aren't a real problem and are easy to get, as is setting up compiler options for include search paths and so on. Does the KDevelop package have all the correct include and library files or are they separate? I guess they are separate as KDevelop is an IDE that can do other languages as well, but I'm probably wrong. So, the KDE/Qt header files I have don't work, where do I get the right ones? Skizz
Make sure you have installed the build-essential package. For more documentation available from the command line, install glibc-doc, manpages-dev, gcc-*-doc, libstdc++*-doc (replace '*' with suitable version numbers for your system) Getting Started/Build/KDE4/Kubuntu and Debian had a pair of sudo aptitude install commands which I used to get some required packages. I also got the KDevelop and QDevelop applications, although I'm not sure they are required. There was also another package I needed (kdelibs5-dev) and this one appears to be the key package. Everything eventually worked after getting that one. Eclipse and KDevelop were both happy building a simple application once the compiler settings were set up; Eclipse required setting search paths and library filenames. From first impressions, Eclipse appears better than KDevelop for the single reason that the tool windows in Eclipse can be detached from the main window and float - useful on a dual monitor setup. I couldn't see anyway to do that in KDevelop (I'm sure someone will comment on how to do this).
515,587
515,621
Do I need to lock STL list with mutex in push_back pop_front scenario?
I have a thread push-backing to an STL list and another thread pop-fronting from the list. Do I need to lock the list with a mutex in such a case?
From SGI's STL on Thread Safety: If multiple threads access a single container, and at least one thread may potentially write, then the user is responsible for ensuring mutual exclusion between the threads during the container accesses. Since both your threads modify the list, I guess you have to lock it.
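A sketch of the guarded version (shown with std::mutex from C++11; with an older toolchain a pthread mutex or boost::mutex plays the same role):

#include <list>
#include <mutex>

std::list<int> data;
std::mutex data_mutex;

// producer thread
void produce(int item)
{
    std::lock_guard<std::mutex> lock(data_mutex);
    data.push_back(item);
}

// consumer thread
bool consume(int &item)
{
    std::lock_guard<std::mutex> lock(data_mutex);
    if (data.empty())
        return false;        // nothing to pop
    item = data.front();
    data.pop_front();
    return true;
}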
515,763
515,772
How can Derived class inherit a static function from Base class?
struct TimerEvent { event Event; timeval TimeOut; static void HandleTimer(int Fd, short Event, void *Arg); }; HandleTimer needs to be static since I'm passing it to a C library (libevent). I want to inherit from this class. How can this be done? Thanks.
You can easily inherit from that class: class Derived: public TimerEvent { ... }; However, you can't override HandleTimer in your subclass and expect this to work: TimerEvent *e = new Derived(); e->HandleTimer(); This is because static methods don't have an entry in the vtable, and can't thus be virtual. You can however use the "void* Arg" to pass a pointer to your instance... something like: struct TimerEvent { virtual void handle(int fd, short event) = 0; static void HandleTimer(int fd, short event, void *arg) { ((TimerEvent *) arg)->handle(fd, event); } }; class Derived: public TimerEvent { virtual void handle(int fd, short event) { // whatever } }; This way, HandleTimer can still be used from C functions, just make sure to always pass the "real" object as the "void* Arg".
515,788
516,742
Deduction of reference types in template functions
Do I have to explicitly instantiate a function template's type when it comes to reference type deduction. If that is the case, where is the ambiguity? Let's compare following 2 code snippets: 1st: link for the code template <typename T> void foo(T& var, void(*func)(T&)) // T must be instantiated with int and it does . { ++var; } void ret(int & var){} int main() {int k =7; foo(k, &ret); cout<<k;//prints 8 } Now let's remove &'s in foo()'s decleration and we have an error. 2nd: link for the code template <typename T> void foo(T var, void(*func)(T)) // T must be instantiated with int& but it doesn't. { ++var; } void ret(int & var){} int main() {int k =7; foo(k, &ret); //error: no matching function for call to 'foo(int&, void (*)(int&))' cout<<k; } However, If I call foo by explicitly instantiating with <int&> "foo<int&>(k,&ret);", the code gives the same output as the former one. What is the reason of this error? Where is the ambiguity? Thanks.
As highlighted in Fionn's answer, the problem is that the compiler deduces two distinct types for T, int and int&. 18.8.2.4/2 has the following: In some cases, the deduction is done using a single set of types P and A, in other cases, there will be a set of corresponding types P and A. Type deduction is done independently for each P/A pair, and the deduced template argument values are then combined. If type deduction cannot be done for any P/A pair, or if for any pair the deduction leads to more than one possible set of deduced values, or if different pairs yield different deduced values, or if any template argument remains neither deduced nor explicitly specified, template argument deduction fails. The highlighted text I believe covers your example. You didn't ask, but one option you potentially have is to use two template parameters in your example. Both of these cases can be deduced, so you could then use some other template trickery possibly via enable if to create a new type which represents the version you want, ie. with reference or without.
516,007
516,041
std::map, pointer to map key value, is this possible?
std::map<std::string, std::string> myMap; std::map<std::string, std::string>::iterator i = m_myMap.find(some_key_string); if(i == m_imagesMap.end()) return NULL; string *p = &i->first; Is the last line valid? I want to store this pointer p somewhere else, will it be valid for the whole program life? But what will happen if I add some more elements to this map (with other unique keys) or remove some other keys, won’t it reallocate this string (key-value pair), so the p will become invalid?
First, maps are guaranteed to be stable; i.e. the iterators are not invalidated by element insertion or deletion (except the element being deleted of course). However, stability of iterator does not guarantee stability of pointers! Although it usually happens that most implementations use pointers - at least at some level - to implement iterators (which means it is quite safe to assume your solution will work), what you should really store is the iterator itself. What you could do is create a small object like: struct StringPtrInMap { typedef std::map<string,string>::iterator iterator; StringPtrInMap(iterator i) : it(i) {} const string& operator*() const { return it->first; } const string* operator->() const { return &it->first; } iterator it; } And then store that instead of a string pointer.
516,200
520,951
Relative Paths Not Working in Xcode C++
There are numerous post over the net that detail how relative paths don't work in Xcode. I do have an Xcode template that I downloaded where the relative paths DO work, however I have not been able to figure out why nor replicate it in other projects. Firstly, I am using C++ in Xcode 3.1. I am not using Objective-C, nor any Cocoa/Carbon frameworks, just pure C++. Here is the code that works in my other Xcode template: sound->LoadMusic( (std::string) "Resources/Audio/Pop.wav" ); This relative path works for me also in Windows. Running the following command gives me an absolute path to the application's full path: std::cout << "Current directory is: " << getcwd( buffer, 1000) << "\n"; /Applications/myApp How can we get relative paths to work in an Xcode .app bundle?
Took me about 5 hours of Google and trying different things to FINALLY find the answer! #ifdef __APPLE__ #include "CoreFoundation/CoreFoundation.h" #endif // ---------------------------------------------------------------------------- // This makes relative paths work in C++ in Xcode by changing directory to the Resources folder inside the .app bundle #ifdef __APPLE__ CFBundleRef mainBundle = CFBundleGetMainBundle(); CFURLRef resourcesURL = CFBundleCopyResourcesDirectoryURL(mainBundle); char path[PATH_MAX]; if (!CFURLGetFileSystemRepresentation(resourcesURL, TRUE, (UInt8 *)path, PATH_MAX)) { // error! } CFRelease(resourcesURL); chdir(path); std::cout << "Current Path: " << path << std::endl; #endif // ---------------------------------------------------------------------------- I've thrown some extra include guards because this makes it compile Apple only (I develop cross platform) and makes the code nicer. I thank the other 2 guys for your answers, your help ultimately got me on the right track to find this answer so i've voted you both up. Thanks guys!!!!
516,237
516,253
When should I use typedef in C++?
In my years of C++ (MFC) programming I never felt the need to use typedef, so I don't really know what it is used for. Where should I use it? Are there any real situations where the use of typedef is preferred? Or is this really more a C-specific keyword?
Template Metaprogramming typedef is necessary for many template metaprogramming tasks -- whenever a class is treated as a "compile-time type function", a typedef is used as a "compile-time type value" to obtain the resulting type. E.g. consider a simple metafunction for converting a pointer type to its base type: template<typename T> struct strip_pointer_from; template<typename T> struct strip_pointer_from<T*> { // Partial specialisation for pointer types typedef T type; }; Example: the type expression strip_pointer_from<double*>::type evaluates to double. Note that template metaprogramming is not commonly used outside of library development. Simplifying Function Pointer Types typedef is helpful for giving a short, sharp alias to complicated function pointer types: typedef int (*my_callback_function_type)(int, double, std::string); void RegisterCallback(my_callback_function_type fn) { ... }
516,395
516,601
When don't I need a typedef?
I encountered some code reading typedef enum eEnum { c1, c2 } tagEnum; typedef struct { int i; double d; } tagMyStruct; I heard rumours that these constructs date from C. In C++ you can easily write enum eEnum { c1, c2 }; struct MyStruct { int i; double d; }; Is that true? When do you need the first variant?
First, both declarations are legal in both C and C++. However, in C, they have slightly different semantics. (In particular, the way you refer to the struct later varies). The key concept to understand is that in C, structs exist in a separate namespace. All built-in types, as well as typedefs exist in the "default" namespace. That is, when I type int, the compiler only checks this "default" namespace. If I type "tagMyStruct" as in your example, the compiler also only checks this one namespace. But depending which type of declaration you use, the struct may not exist in that namespace. Structs are different, and exist in a separate namespace. So if I make the following declaration: struct mystruct {}; I can not simply refer to it as mystruct. Instead, I have to specify that I want the mystruct which exists in the struct namespace: void foo(struct mystruct bar); // Declare a function which takes a mystruct as its parameter Which gets a bit verbose and awkward in the long run. Instead, you can typedef it into the default namespace: typedef struct mystruct mystruct; // From now on, 'mystruct' in the normal namespace is an alias for 'mystruct' in the struct namespace and now, my function can be declared in the straightforward way: void foo(mystruct bar); So your first example simply merges these two steps together: Declare a struct, and put an alias into the regular namespace. And of course, since we're typedeffing it anyway, we don't need the "original" name, so we can make the struct anonymous. So after your declaration typedef struct { int i; double d; } tagMyStruct; we have a struct with no name, which has been typedef'ed to 'tagMyStruct' in the default namespace. That's how C treats it. Both types of declarations are valid, but one does not create the alias in the "default" namespace, so you have to use the struct keyword every time you refer to the type. In C++, the separate struct namespace doesn't exist, so they mean the same thing. (but the shorter version is preferred). Edit Just to be clear, no, C does not have namespaces. Not in the usual sense. C simply places identifiers into one of two predefined namespaces. The names of structs (and enums, as I recall) are placed in one, and all other identifiers in another. Technically, these are namespaces because they are separate "containers" in which names are placed to avoid conflicts, but they are certainly not namespaces in the C++/C# sense.
516,411
516,537
raw function pointer from a bound method
I need to bind a method into a function-callback, except this snippet is not legal as discussed in demote-boostfunction-to-a-plain-function-pointer. What's the simplest way to get this behavior? struct C { void m(int x) { (void) x; _asm int 3; }}; typedef void (*cb_t)(int); int main() { C c; boost::function<void (int x)> cb = boost::bind(&C::m, &c, _1); cb_t raw_cb = *cb.target<cb_t>(); //null dereference raw_cb(1); return 0; }
You can make your own class to do the same thing as the boost bind function. All the class has to do is accept the function type and a pointer to the object that contains the function. For example, this is a void return and void param delegate: template<typename owner> class VoidDelegate : public IDelegate { public: VoidDelegate(void (owner::*aFunc)(void), owner* aOwner) { mFunction = aFunc; mOwner = aOwner; } ~VoidDelegate(void) {} void Invoke(void) { if(mFunction != 0) { (mOwner->*mFunction)(); } } private: void (owner::*mFunction)(void); owner* mOwner; }; Usage: class C { public: void CallMe(void) { std::cout << "called"; } }; int main(int aArgc, char** aArgv) { C c; VoidDelegate<C> delegate(&C::CallMe, &c); delegate.Invoke(); } Now, since VoidDelegate<C> is a type, having a collection of these might not be practical, because what if the list were to contain functions of class B too? It couldn't. This is where polymorphism comes into play. You can create an interface IDelegate, which has a function Invoke: class IDelegate { public: virtual ~IDelegate(void) { } virtual void Invoke(void) = 0; }; If VoidDelegate<T> implements IDelegate you could have a collection of IDelegates and therefore have callbacks to methods in different class types.
516,662
516,690
Why can't I put an iterator in map?
I have a map defined like this std::map<some_key_type, std::string::iterator> mIteratorMap; And a huge string named "mHugeString". Then I walk through the string collecting iterators like this: std::string::iterator It=mHugeString.begin(); std::string::iterator EndIt=mHugeString.end(); for(;It!=EndIt;++It){ ...defining a key element... if(need_to_store_an_iterator)mIteratorMap[key_of_a_right_type]=It; } In the end I should receive a map where an iterator is associated with some sort of key. But the iterator somehow loses itself when being paired with a key by "make_pair", unless it points to a place somewhere near the end of the string. It's hard to tell, but maybe the last 256 bytes are fine. So the question is not how to avoid losing iterators, it was a stupid idea to store them anyways, but why trying to store an iterator from the beginning of the string fails, and why the same with iterators at the end works fine? What is the difference between them?
I haven't tried it, but I would expect that of course you can store iterator values as values in a map. Do you know that if you change the contents of mHugeString, then any iterators into it which you have previously stored are now invalid? You might choose to store the index into the string instead of the iterator.
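If you go the index route, a minimal sketch might look like this (the char key type here is made up purely for illustration — substitute whatever your real key type is):

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::string hugeString = "hello world";

    // store positions (indices) instead of iterators
    std::map<char, std::string::size_type> positions;
    for (std::string::size_type i = 0; i < hugeString.size(); ++i)
        if (positions.find(hugeString[i]) == positions.end())
            positions[hugeString[i]] = i;   // remember the first occurrence of each char

    // an index stays meaningful even if the string later reallocates (e.g. on append),
    // which a stored iterator would not
    hugeString += " again";
    std::cout << "'w' first seen at offset " << positions['w'] << std::endl;
    return 0;
}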
516,763
517,576
Open a socket using CreateFile
We've got some old serial code which checks whether a serial port is available simply by opening it and then closing it. Now we are adding network support to the app I want to reuse the function by supplying the ip address as a string. /** * So far I have tried: * A passed in portPath normally looks like: \\?\acpi#pnp0501#1#1#{GUID} 10.2.0.155:2001 //10.2.0.155:2001/ \\.\10.2.0.155:2001\ \\?\10.2.0.155:2001\ * all without success. */ bool PortIsAvailable( const CString& portPath ) { HANDLE hCom = ::CreateFile( portPath, GENERIC_READ | GENERIC_WRITE, 0, // comm devices must be opened with exclusive-access NULL, // no security attributes OPEN_EXISTING, // comm devices must use OPEN_EXISTING FILE_FLAG_OVERLAPPED, // not overlapped I/O NULL ); // hTemplate must be NULL for comm devices if (INVALID_HANDLE_VALUE != hCom ) { ::CloseHandle( hCom ); return true; } return false; } I know I could use connect followed by shutdown but I want to reuse the function with minimal changes. If I can reuse the function so much the better. If not then I will have to write code that determines whether it is a socket or not. I was wondering what the correct way of opening a socket via CreateFile is?
You cannot create a socket via CreateFile. You should use the Windows Sockets API (Winsock) for this purpose. For creating the SOCKET handle, you use WSASocket. Note that the SOCKET returned by this function can be used as a Windows handle with some Windows functions, such as ReadFile and WriteFile.
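For the "is this endpoint reachable" check itself, a rough Winsock sketch (the function name EndpointIsAvailable is invented, and error handling is deliberately minimal; WSAStartup/WSACleanup would normally live at application start-up/shut-down rather than in the helper):

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Returns true if a TCP connection to ip:port succeeds.
bool EndpointIsAvailable(const char* ip, unsigned short port)
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
        return false;

    bool available = false;
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s != INVALID_SOCKET)
    {
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = inet_addr(ip);
        available = (connect(s, (sockaddr*)&addr, sizeof(addr)) == 0);
        closesocket(s);
    }
    WSACleanup();
    return available;
}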
517,003
517,165
Matrix implementation benchmarks, should I whip myself?
I'm trying to find out some matrix multiplication/inversion benchmarks online. My C++ implementation can currently invert a 100 x 100 matrix in 38 seconds, but compared to this benchmark I found, my implementation's performance really sucks. I don't know if it's a super-optimized something or if you really can easily invert a 200 x 200 matrix in about 0.11 seconds, so I'm looking for more benchmarks to compare the results. Have you got some good links? UPDATE I spotted a bug in my multiplication code that didn't affect the result but was causing useless cycle waste. Now my inversion executes in 20 seconds. It's still a lot of time, and any idea is welcome. Thank you folks
This sort of operation is extremely cache sensitive. You want to be doing most of your work on variables that are in your L1 & L2 cache. Check out section 6 of this doc: http://people.redhat.com/drepper/cpumemory.pdf He walks you through optimizing a matrix multiply in a cache-optimized way and gets some big perf improvements.
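To make the idea concrete, here is a rough sketch of a cache-blocked multiply for row-major N x N matrices. The block size of 64 is only a starting guess you would tune for your CPU's caches, and C is assumed to start out zero-filled:

#include <vector>

const int BLOCK = 64;

void blockedMultiply(const std::vector<double>& A,
                     const std::vector<double>& B,
                     std::vector<double>& C, int N)
{
    // process BLOCK x BLOCK tiles so that the working set stays in cache
    for (int ii = 0; ii < N; ii += BLOCK)
        for (int kk = 0; kk < N; kk += BLOCK)
            for (int jj = 0; jj < N; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK && i < N; ++i)
                    for (int k = kk; k < kk + BLOCK && k < N; ++k)
                    {
                        double a = A[i * N + k];
                        for (int j = jj; j < jj + BLOCK && j < N; ++j)
                            C[i * N + j] += a * B[k * N + j];   // accumulate into C
                    }
}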
517,386
517,394
the returned SDL_cursor from SDL_GetCursor() can't be used with SDL_GetMouseState()?
I'm trying to get the x, y, and state of my mouse in SDL. I tried using the lines int mstate, mx, my = 0; mstate, mx, my = SDL_GetCursor().SDL_GetMouseState(); But it gives me the error C:[path]\particletest2\main.cpp|107|error: request for member SDL_GetMouseState' inSDL_GetCursor()', which is of non-class type `SDL_Cursor*'| Is there any way I can get this to work? It seems like a waste to create a SDL_cursor object when SDL_GetCursor() should be creating one to return for you.
http://www.libsdl.org/docs/html/sdlgetcursor.html SDL_GetCursor() returns a pointer and so you need to use the -> operator to access its member. Responding to your reply: I think mstate, mx, my = SDL_GetCursor()->SDL_GetMouseState(); is a problem if it wasn't incorrectly pasted. I do not think that this is doing what you think it should be doing, and I am not really sure what you think it should be doing.
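For reference, SDL_GetMouseState is a free function, not a member of SDL_Cursor. In SDL 1.2 it is normally called something like this (a sketch, not taken from your project):

#include "SDL.h"

void checkMouse()
{
    int mx = 0, my = 0;
    Uint8 mstate = SDL_GetMouseState(&mx, &my);   // fills in mx/my, returns a button bitmask

    if (mstate & SDL_BUTTON(SDL_BUTTON_LEFT))
    {
        // the left button is currently held down at (mx, my)
    }
}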
517,563
523,991
C/C++ linker CALL16 reloc at xxxxx not against global symbol
I'm getting these errors while linking, both messages have to do with the same object file. CALL16 reloc at 0x5f8 not against global symbol and could not read symbols: Bad value The 2nd message seems to be the reason I'm getting the CALL16 error, but the file compiles just fine. Any tips on fixing this? FYI, I'm cross compiling for a MIPS target and using gcc 4.1.2 EDIT: No luck so far: Here are my flags used: -fPIC,-Wl,-rpath,-Wl,-O1 I've also tried the following without success: -mno-explicit-relocs -mexplicit-relocs -mlong-calls -mno-long-calls -mxgot -mno-xgot Meanwhile, I'll go back to the source at this point and investigate more.
Aha! Thanks to a colleague of mine, we found the issue. Here's what it was: There was a forward declaration/prototype of a function. void FooBarIsBest(void); Later on in the file the function was defined. static void FooBarIsBest(void) { // do the best } The issue here was that in the prototype the keyword static was left out. So it was like a whole new function was being defined. The CALL16 reference is used by gcc for relocatable code. The assembly code of the file showed that CALL16 was being used on this function... which is wrong, as this function is local. Interestingly, this code used to compile & link just fine with an older version of gcc (3.2.2). Another lesson learned. :)
518,028
518,081
In c++ making a function that always runs when any other function of a class is called
C++ has so much stuff that I don't know. Is there any way to create a function within a class, that will always be called whenever any other function of that class is called? (like making the function attach itself to the first execution path of a function) I know this is tricky but I'm curious.
Yes-ish, with a bit of extra code, some indirection and another class and using the -> instead of the . operator. // The class for which calling any method should call PreMethod first. class DogImplementation { public: void PreMethod(); void Bark(); private: DogImplementation(); // constructor private so can only be created via smart-pointer. friend class Dog; // can access constructor. }; // A 'smart-pointer' that wraps a DogImplementation to give you // more control. class Dog { public: DogImplementation* operator -> () { _impl.PreMethod(); return &_impl; } private: DogImplementation _impl; }; // Example usage of the smart pointer. Use -> instead of . void UseDog() { Dog dog; dog->Bark(); // will call DogImplementation::PreMethod, then DogImplementation::Bark } Well.. something roughly along those lines could be developed into a solution that I think would allow you to do what you want. What I've sketched out there probably won't compile, but is just to give you a starting point.
518,206
518,327
Is there a way to allow a Windows service (unmanaged c++) to write files on a shared network folder?
I tried running the service in the "Local System" : didn't work. I tried running the service in an account having rights on the network shared folder : didn't work. Do I have to create a standalone application for this and launch this application as a user with rights on the network shared folder? Thanks, Nic
Both your scenarios should work. The "local system" is the computer account in Active Directory that you can give share permissions to. I have no idea why it doesn't work for you. But here is what you can do. Use a regular account (it's just easier). Test your application as a console application. Tweak the auditing on the client to log everything to the security log. It is done from the Local Security Policy application. And do the same on the server (if you can). This should be enough to locate the problem. Update 1: In response to the comment, which I think is wrong (but maybe I am...). The service account the comment refers to (the one without network access) is called the local service account (NT AUTHORITY\LocalService). It is usually used as the identity of application pools, but can be used in services. It is not the same as the local system account. From MSDN: When a service runs under the LocalSystem account on a computer that is a domain member, the service has whatever network access is granted to the computer account, or to any groups of which the computer account is a member.
518,959
519,059
Why does this dynamic_cast of auto_ptr fail?
#include "iostream" class A { private: int a; public : A(): a(-1) {} int getA() { return a; } }; class A; class B : public A { private: int b; public: B() : b(-1) {} int getB() { return b; } }; int main() { std::auto_ptr<A> a = new A(); std::auto_ptr<B> b = dynamic_cast<std::auto_ptr<B> > (a); return 0; } ERROR: cannot dynamic_cast `(&a)->std::auto_ptr<_Tp>::get() const
Well, std::auto_ptr<B> is not derived from std::auto_ptr<A>. But B is derived from A. The auto_ptr does not know about that (it's not that clever). Looks like you want to use a shared ownership pointer. boost::shared_ptr is ideal; it also provides a dynamic_pointer_cast: boost::shared_ptr<A> a(new A()); boost::shared_ptr<B> b = boost::dynamic_pointer_cast<B>(a); (Note that shared_ptr's constructor from a raw pointer is explicit, so the "a = new A()" form won't compile.) For auto_ptr, such a thing can't really work, because ownership will move to b. But if the cast fails, b can't get ownership. It's not clear to me what to do then. You would probably have to say that if the cast fails, a will keep the ownership - which sounds like it will cause serious trouble. Best start using shared_ptr. Both a and b would then point to the same object - but b as a shared_ptr<B> and a as a shared_ptr<A>
518,995
519,021
CUDA compiler (nvcc) macro
Is there a #define compiler (nvcc) macro of CUDA which I can use? (Like _WIN32 for Windows and so on.) I need this for header code that will be common between nvcc and VC++ compilers. I know I can go ahead and define my own and pass it as an argument to the nvcc compiler (-D), but it would be great if there is one already defined.
__CUDACC__ I don't think it will be that trivial. Check the following thread http://forums.nvidia.com/index.php?showtopic=32369&st=0&p=179913&#entry179913
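For what it's worth, a common pattern for headers shared between nvcc and VC++ is to key host/device decoration off __CUDACC__ (the CUDA_CALLABLE macro name below is made up for illustration):

// common_header.h - included from both .cu files (nvcc) and .cpp files (VC++)
#ifdef __CUDACC__
#define CUDA_CALLABLE __host__ __device__
#else
#define CUDA_CALLABLE
#endif

// usable on the host always, and on the device when compiled by nvcc
CUDA_CALLABLE inline float lerp(float a, float b, float t)
{
    return a + t * (b - a);
}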
519,180
519,195
C++ function that returns string doesn't work unless there's an endl involved...?
I've got a function inside of a class that returns a string. Inside this function, I can only get it to work when I add cout<<endl to the function before the return statement. Any idea why this is, or how I can fix it? I'm running this in Eclipse on a Mac In "main.cpp": #include <iostream> #include <fstream> #include <string> #include <vector> #include <cstdlib> #include "Braid.h" using namespace std; static int size=3; int main(){ Braid * b1 = new Braid(size); b1->setCanon();//creates canonical braid. cout<<"a "; cout<<b1->getName()<<endl; cout<<" b "; } In "Braid.h" : public: Braid(int); void setCanon(); string getName(); }; And in "Braid.cpp": string Braid::getName(){ string sName=""; /* body commented out for(int i=0; i<height; i++) { for(int j=2; j<(width-2); j++) { sName += boxes[i][j]; sName += "|"; } } */ //cout<<endl; return sName; } When I run my main code, without the body of that function commented, the output I get is "a 0|0|12|12|0|0|2|1|1|1|1|2|" The "name" it returns is correct, but it's not making it past the function call. If I uncomment the //cout<<endl line, the function works and my output is "a 0|0|12|12|0|0|2|1|1|1|1|2| b " After commenting out the body of the function, so that it only creates an empty string, and returns it, my output is only "a" then if I add the endl back, I get the "a b" that is expected. What am I doing wrong? Is there something that comes with endl that I'm missing?
Actually, the getName() function is probably working correctly. However, cout 'caches' the output (i.e. it prints the output on screen when its internal text buffer is full). 'endl' flushes the buffer and forces cout to dump the cached text to the screen. Try cout.flush() in main.cpp
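For instance, a minimal illustration of the same effect (nothing here is specific to your Braid class):

#include <iostream>

int main()
{
    std::cout << "may sit in the buffer";   // no flush yet, may not appear immediately
    std::cout.flush();                       // now it is forced onto the screen
    std::cout << " b " << std::endl;         // endl flushes as well
    return 0;
}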
519,558
545,207
How to force parent window to draw "under" children windows?
The environment is plain-old win32 under C/C++ without any fancy MFC or similar mumbo-jumbo. I have a window, which has several children and grandchildren. Some children are oddly-shaped icons, and I need them to have transparent background (oddly-shaped icons). Consider a this pseudo-structure: Parent1 Child1 (normal) Child2 (oddly-shaped icon) Child3 (normal) / Parent2 Grandchild1 (normal) Grandchild2 (oddly-shaped icon) Above, Child2 and Grandchild2 are supposed to have transparent background (WM_ERASEBKGND does nothing, or (WNDCLASS)->hbrBackground = NULL). Right now the background for these icons is transparent, but transparent to extreme -- I see stuff under Parent1 -- desktop, etc. This all happens under Windows Mobile. Is there any extra flag I have to set for Parent1 and Parent2? Any good tricks you might offer? I would be surprised if noone had similar problems, since many applications now have to display icons, all shapes and sizes. EDIT: The oddly-shaped window is icon with transparencies. It would be nice if parent window would not do clipping for these particular windows, but invalidate them every time parent draws itself. CS_PARENTDC looks very promising, but not promising enough. Any ideas?
GWES will not paint the rectangle of any child window with contents of the parent window. Ever. Period. That's by design. You can either paint in the child rectangle in response to WM_CTL... in the parent, or subclass the child and override its WM_PAINT completely. That will be really tough for certain windows, such as edit controls, but it's mostly doable.
519,685
531,364
What is the best Framework to use for developing a skinned application?
I need to develop an application for windows, and it needs to support skins. I am looking for a framework to use. I would much prefer to not use QT, because of it's licensing - GPL is not an option for me, and it is otherwise to expensive (and I can't put off developing this application till March, when QT is supposed to go LGPL). Does anyone have any suggestions as to what I should use? Language options for me are C/C++, C# (preferably .Net 2.0 or lower), or Visual Basic. Something that used quasi-CSS for skinning, like QT does, would be a bonus. Open Source licensing (LGPL, MIT, etc) would be a bonus as well. Thanks!
I've ended up writing my own custom skinning support, specific to my application. Using a wxNO_BORDER with wxWidgets, and writing lots of controls from scratch, has given me the abilities I need. In the future, I'll probably use QT, when it is under LGPL, or else WPF, when and if DotNet 3 is an option for me. Thanks for all the responses.
519,808
519,812
How to call a constructor on an already allocated memory?
How can I call a constructor on a memory region that is already allocated?
You can use the placement new constructor, which takes an address. Foo* foo = new (your_memory_address_here) Foo (); Take a look at a more detailed explanation at the C++ FAQ lite or the MSDN. The only thing you need to make sure of is that the memory is properly aligned (malloc is supposed to return memory that is properly aligned for anything, but beware of things like SSE which may need alignment to 16-byte boundaries or so).
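A slightly fuller sketch of the whole lifecycle, including the matching explicit destructor call you become responsible for (the Foo class here is invented for illustration):

#include <new>      // placement new
#include <cstdlib>  // malloc / free
#include <string>

struct Foo
{
    std::string name;
    Foo(const std::string& n) : name(n) {}
};

int main()
{
    void* memory = std::malloc(sizeof(Foo));   // the already-allocated region
    Foo* foo = new (memory) Foo("example");    // construct the object in place

    // ... use foo ...

    foo->~Foo();            // destroy it explicitly; do NOT call delete on it
    std::free(memory);      // release the raw memory separately
    return 0;
}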
519,836
519,898
What is the "metadata operation failed" VS2008 linker error?
I have a big project that was first created in Borland C++ 6. We're porting the program gradually to VS2008. There are many projects, which all compile to .lib, and I'm trying to build the exe of the test project for a set of projects. After fixing the compiler errors, I got this crazy linker error: 1>av_geos_core_domain.lib(GerTamMolde.obj) : error LNK2022: metadata operation failed (8013118D) : Inconsistent layout information in duplicated types (PtoGrad): (0x02000045). It appears 4 other times with different classes. The .obj listed are classes (GerTamMolde and PtoGrad). I tried cleaning and building the solution again many times but it doesn't work. Any ideas?
Have you tried searching for duplicated symbols? In my opinion PtoGrad is defined in two or more places, perhaps in different .lib files, which makes symbol resolution fail when building the .exe.
519,976
525,374
Any recommendations for a PDF 3D SDK with C++ interface
I'm on the look out for a Cor C++ library / SDK that will allow me to either write a 3d PDF directly, or convert a DXF or DWG into a 3D PDF. So far I have come up with the PDF3d library which fits the bill, but is reasonably costly and has an expensive per user run time license. I don't mind a reasonable SDK cost, but the per seat cost kills it for me. Anyone aware of any alternatives?
I use PDF XChange a lot for 2D CAD plots to PDF and it works well. I don't know if (I don't think so) it does 3D. I could find no mention of it at first glance on their site. Your other option is a 3D DWF. Also consider that DWG TrueView is a free viewer and printer that handles 3D DWGs natively. You used to be able to automate TrueView through COM but I'm not sure if that is still so. Here is a blog post on some automation of Trueview - http://through-the-interface.typepad.com/through_the_interface/2007/10/au-handouts-t-1.html
520,035
520,060
Why can't you overload the '.' operator in C++?
It would be very useful to be able to overload the . operator in C++ and return a reference to an object. You can overload operator-> and operator* but not operator. Is there a technical reason for this?
See this quote from Bjarne Stroustrup: Operator . (dot) could in principle be overloaded using the same technique as used for ->. However, doing so can lead to questions about whether an operation is meant for the object overloading . or an object referred to by . For example: class Y { public: void f(); // ... }; class X { // assume that you can overload . Y* p; Y& operator.() { return *p; } void f(); // ... }; void g(X& x) { x.f(); // X::f or Y::f or error? } This problem can be solved in several ways. At the time of standardization, it was not obvious which way would be best. For more details, see The Design and Evolution of C++.
520,599
520,620
Suggestion on book to read about refactoring?
Will it be easy for a C++ developer to read Refactoring: Improving the Design of Existing Code Is there any other book that I should read about refactoring? Feel free to add any articles on refactoring.
If you work with legacy code then it may be worth getting Working Effectively with Legacy Code by Michael Feathers.
520,893
521,650
C/C++ private array initialization in the header file
I have a class called Cal and its .cpp and .h counterpart. The header file has class Cal { private: int wa[2][2]; public: void do_cal(); }; The .cpp file has #include "Cal.h" void Cal::do_cal() { print(wa); // where print just iterates and prints the elements in wa } My question is how do I initialize the array wa? I just can't seem to get it to work. I tried with: int wa[2][2] = { {5,2}, {7,9} }; in the header file but I get errors saying I can't do so as it's against iso..something. I also tried to initialize the array wa in the constructor but that didn't work either.. What am I missing? Thanks
If it can be static, you can initialize it in your .cpp file. Add the static keyword in the class declaration: class Cal { private: static int wa[2][2]; public: void do_cal(); }; and at file scope in the .cpp file add: #include "Cal.h" int Cal::wa[2][2] = { {5,2}, {7,9} }; void Cal::do_cal() { print(wa); // where print just iterates and prints the elements in wa } If you never change it, this would work well (along with making it const). You only get one copy, shared by every instance of your class, though.
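If the array can't be static, one alternative (just a sketch; there are other ways) is to fill it in the constructor body:

// Cal.h
class Cal
{
private:
    int wa[2][2];
public:
    Cal();            // constructor fills in wa
    void do_cal();
};

// Cal.cpp
Cal::Cal()
{
    static const int initial[2][2] = { {5, 2}, {7, 9} };
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            wa[i][j] = initial[i][j];   // copy element by element (C++03-friendly)
}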
521,083
521,092
How can I tell the compiler not to create a temporary object?
I'm changing an old routine that used to take an integer parameter so that it now takes a const reference to an object. I was hoping that the compiler would tell me where the function is called from (because the parameter type is wrong), but the object has a constructor that takes an integer, so rather than failing, the compiler creates a temporary object, passing it the integer, and passes a reference to that to the routine. Sample code: class thing { public: thing( int x ) { printf( "Creating a thing(%d)\n", x ); } }; class X { public: X( const thing &t ) { printf( "Creating an X from a thing\n" ); } }; int main( int, char ** ) { thing a_thing( 5 ); X an_x( 6 ); return 1; } I want the X an_x( 6 ) line to not compile, because there is no X constructor that takes an int. But it does compile, and the output looks like: Creating a thing(5) Creating a thing(6) Creating an X from a thing How can I keep the thing( int ) constructor, but disallow the temporary object?
Use the explicit keyword in the thing constructor. class thing { public: explicit thing( int x ) { printf( "Creating a thing(%d)\n", x ); } }; This will prevent the compiler from implicitly calling the thing constructor when it finds an integer.
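With explicit in place, the call sites from your example behave roughly like this:

thing a_thing( 5 );        // fine: direct construction of a thing
X an_x( 6 );               // now a compile error: no implicit int -> thing conversion
X an_x2( thing( 6 ) );     // still fine: the temporary is created deliberately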
521,133
521,192
Are "#define new DEBUG_NEW" and "#undef THIS_FILE" etc. actually necessary?
When you create a new MFC application, the wizard creates the following block of code in almost every CPP file: #ifdef _DEBUG #define new DEBUG_NEW #endif and sometimes it also adds this: #undef THIS_FILE static char THIS_FILE[] = __FILE__; I would like to remove this code from my CPP files if it is redundant. I am using an MFC app with C++/CLI on VS2008. I have tried running in Debug after deleting this code from a CPP, and it seems to work fine. "new"ing variables work fine, there are no leaks, and ASSERT dialogs show the correct filename and jump to the offending line. Can anyone tell me what it does and whether it's safe to delete it?
It is perfectly safe to delete this. It's a debugging aid; leaving it in will generate better details in the warnings in the output window of any memory leaks you have when the program exits. If you delete it, you still get the memory leak report, but just without any details about where in your source code they occurred.
521,147
821,742
The curious problem of the missing WM_NCLBUTTONUP message when a window isn't maximised
I've got a window that I handle WM_NCLBUTTONUP messages, in order to handle clicks on custom buttons in the caption bar. This works great when the window is maximised, but when it's not, the WM_NCLBUTTONUP message never arrives! I do get a WM_NCLBUTTONDOWN message though. Strangely WM_NCLBUTTONUP does arrive if I click on the right of the menu bar, but anywhere along the caption bar / window frame, the message never arrives. After a while of debugging I discovered that if I set a breakpoint on CMainFrame::OnNcLButtonDown(), clicked the caption bar, but keep the mouse button held down, let the debugger break in the function, hit F5 to continue debugging, then release the mouse button - magically WM_NCLBUTTONUP is sent!! My question is two-fold, (1) what the hell is going on? (2) how do I get around this "problem". I also note that there are several other people on the internet who have the same issue (a quick Google reveals lots of other people with the same issue, but no solution). Edit Thanks for the first two replies, I've tried calling ReleaseCapture in NCLButtonDown, but it has no effect (in fact, it returns NULL, indicating a capture is not in place). I can only assume that the base class (def window proc) functionality may set a capture. I shall investigate on Monday...
I've had this same problem. The issue is indeed that a left button click on the window caption starts a drag, and thus mouse capture, which prevents WM_NCLBUTTONUP from arriving. The solution is to override WM_NCHITTEST: LRESULT CALLBACK WndProc(HWND hWnd, UINT nMsg, WPARAM wParam, LPARAM lParam) { switch (nMsg) { ... case WM_NCHITTEST: Point p(GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam)); ScreenToClient(p); if (myButtonRect.Contains(p)) { return HTBORDER; } break; } return DefWindowProc(hWnd, nMsg, wParam, lParam); } So essentially you inform Windows that the area occupied by your button is not part of the window caption, but a non-specific part of the non-client area (HTBORDER). Footnote: If you have called SetCapture() and not yet called ReleaseCapture() when you expect the WM_NCLBUTTONDOWN message to come in, it won't arrive even with the above change. This can be irritating since it's normal to capture the mouse during interaction with such custom buttons so that you can cancel the click/highlight if the mouse leaves the window. However, as an alternative to using capture, you might consider SetTimer()/KillTimer() with a short (e.g. 100 ms) interval, which won't cause WM_NCLBUTTONUP messages to vanish.
521,220
529,090
Decrypting RijndaelManaged Encrypted strings with CryptDecrypt
Ok I'm trying to use the Win32 Crypto API in C++ to decrypt a string encrypted in C# (.NET 2) with the RijndaelManaged Class. But I'm having no luck at all i get jibberish or a bad data Win32 error code. All my keys, IV and salt match, I've looked in the watch for both test apps. I've spent all say looking at it and I'm officialy stuck. Anyway here is the C# Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(GetPassPhrase(), salt, 1000); RijndaelManaged rijndael = new RijndaelManaged(); rijndael.BlockSize = 128; rijndael.KeySize = 256; rijndael.Mode = CipherMode.CBC; rijndael.Key = pdb.GetBytes(m_KeySize); rijndael.IV = GetIV(iv); ICryptoTransform encryptor = rijndael.CreateEncryptor(); MemoryStream msEncrypt = new MemoryStream(); CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write); Byte[] encryptedBytes = null; Byte[] toBeEncrypted = UnicodeEncoding.Unicode.GetBytes(value); csEncrypt.Write(toBeEncrypted, 0, toBeEncrypted.Length); csEncrypt.FlushFinalBlock(); encryptedBytes = msEncrypt.ToArray(); The C++ to decrypt it is: keyBlob.hdr.bType = PLAINTEXTKEYBLOB; keyBlob.hdr.bVersion = CUR_BLOB_VERSION; keyBlob.hdr.reserved = 0; keyBlob.hdr.aiKeyAlg = CALG_AES_256; keyBlob.cbKeySize = KEY_SIZE; keyBlob.rgbKeyData = &byKey[0]; if ( CryptImportKey( hProv, (const LPBYTE) &keyBlob, sizeof(BLOBHEADER) + sizeof(DWORD) + KEY_SIZE, 0, CRYPT_EXPORTABLE, &hKey ) ) { if ( CryptSetKeyParam( hKey, KP_IV, (const BYTE *) &byIV, 0)) { DWORD dwLen = iDestLen; if ( CryptDecrypt( hKey, 0, TRUE, 0, pbyData, &dwLen)) { if ( dwLen < (DWORD) *plOutSize) { memcpy_s(pbyOutput, *plOutSize, pbyData, dwLen); *plOutSize = dwLen; bRet = TRUE; } } else { // Log DWORD dwErr = ::GetLastError(); int y =0; } } } I'm calling CryptAcquireContext successfully and my C++ is executing fine. Can anyone spot the error in my ways. It's starting to depress me know :(
OK, my fault: I didn't include the struct definition for the keyblob in the C++, and it turns out you need a contiguous block of data for the key together with the header, but I was using the MSDN example that had a pointer to the key data. Which is wrong!
521,338
527,545
Unexplainable crash in DirectX app in Windows XP that uses English language
The app was working fine but now a few weeks later when the new version begun testing, it crashes. Tried it on five of the workstations, it crashes only on two of them. And the only common about them I can find is that those two have Windows installed with English language. Its a DirectX 8.1 application, written in C++ with Visual Studio 2005. SP2 is installed on all machines. I have no clue about what could cause this. Surely, the language can't cause an DX app to crash? I'm going to look for more common elements but I just wanted to ask if anyone have seen this before? If the language really is the problem. And how to solve it. Edit: The actual error message is This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem. At first we thought it was the Visual Studio Redistributable, but no luck. Something is missing, and I need to figure out what.
Problem solved. And as a note to others having the same problem, I found the answer in this question. We installed the VS2005 CRT alright, but not the SP1 one. Edit: Although, I still have no idea why this only affected the english workstations. Maybe it was a coincidence after all.
521,493
521,612
Creating a linear gradient in 2D array
I have a 2D bitmap-like array of let's say 500*500 values. I'm trying to create a linear gradient on the array, so the resulting bitmap would look something like this (in grayscale): (source: showandtell-graphics.com) The input would be the array to fill, two points (like the starting and ending point for the Gradient tool in Photoshop/GIMP) and the range of values which would be used. My current best result is this: alt text http://img222.imageshack.us/img222/1733/gradientfe3.png ...which is nowhere near what I would like to achieve. It looks more like a radial gradient. What is the simplest way to create such a gradient? I'm going to implement it in C++, but I would like some general algorithm.
In your example image, it looks like you have a radial gradient. Here's my impromptu math explanation for the steps you'll need. Sorry for the math, the other answers are better in terms of implementation. Define a linear function (like y = x + 1) with the domain (i.e. x) being from the colour you want to start with to the colour you want to end with. You can think of this in terms of a range within 0x0 to 0xFFFFFF (for 24 bit colour). If you want to handle things like brightness, you'll have to do some tricks with the range (i.e. the y value). Next you need to map a vector across the matrix you have, as this defines the direction that the colours will change in. Also, the colour values defined by your linear function will be assigned at each point along the vector. The start and end point of the vector also define the min and max of the domain in 1. You can think of the vector as one line of your gradient. For each cell in the matrix, colours can be assigned a value from the vector where a perpendicular line from the cell intersects the vector. See the diagram below, where c is the position of the cell and . is the point of intersection. If you pretend that the colour at . is Red, then that's what you'll assign to the cell.

|          c
|
Vect:____.______________
|
|
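A rough C++ translation of that projection idea (the container type, value range and clamping below are illustrative choices, not the only way to do it):

#include <vector>
#include <algorithm>
#include <cstddef>

// Fill a 2D array with a linear gradient running from (x0,y0) to (x1,y1),
// mapping positions along that vector onto [minVal, maxVal].
void linearGradient(std::vector<std::vector<int> >& img,
                    float x0, float y0, float x1, float y1,
                    int minVal, int maxVal)
{
    float dx = x1 - x0, dy = y1 - y0;
    float lenSq = dx * dx + dy * dy;          // assumed non-zero (two distinct points)
    for (std::size_t y = 0; y < img.size(); ++y)
        for (std::size_t x = 0; x < img[y].size(); ++x)
        {
            // project the cell onto the gradient vector; t is 0 at the start point, 1 at the end
            float t = ((x - x0) * dx + (y - y0) * dy) / lenSq;
            t = std::max(0.0f, std::min(1.0f, t));
            img[y][x] = minVal + static_cast<int>(t * (maxVal - minVal));
        }
}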
521,518
521,600
Visual Studio C++ Debugger: No hex dump?
Why is the integrated vs debugger so... barely functional? I cannot see the contents of an object in memory. For example, I am working with bitmaps and I would like to see them in memory. Do I need a better debugger for this? If so I am interested in recommendations. Nothing too powerful like a disassembler, just the debugger.
I've never found it to be "barely functional". VS gives you disassembly by default when it can't find source, and it's pretty easy to get to the memory view. Debug-> Windows -> Memory. Type "this" into the Address: box to get the memory of your current object. To view a specific member type '&this->member_name'. It'll jump right to the first byte.
521,754
521,875
When to use friend class in C++
Possible Duplicate: When should you use 'friend' in C++? I was brushing up on my C++ (I'm a Java developer) and I came across the friend class keyword which I had forgotten about for a while. Is this one of those features that's just part of the kitchen sink, or is there a good reason for doing this rather than just a vanilla getter? I understand the difference in that it limits who can access the data, but I can't think of a scenario when this would be necessary. Note: I've seen a similar question, but specifically I'm asking, is this just an advanced feature that adds no real value except to confuse people looking at you're code until they realize what you're doing?
I agree with the comments that say the friend keyword can improve encapsulation if used wisely. I'd just add that the most common (legitimate!) use for friend classes may be testing. You may want a tester class to have a greater degree of access than other client classes would have. A tester class could have a good reason to look at internal details that are deliberately hidden from other classes.
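For example (the Widget/WidgetTest names are invented for illustration):

class Widget
{
    friend class WidgetTest;      // the test, and only the test, may look inside
public:
    void update() { ++internalState; }
private:
    int internalState;
};

class WidgetTest
{
public:
    bool stateIsConsistent(const Widget& w) const
    {
        return w.internalState >= 0;   // allowed because WidgetTest is a friend
    }
};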
521,809
710,741
Creating a C++ Class Diagram
In Visual Studio .NET projects you can add a "Class Diagram" to the project which renders a visual representation of all namespaces, classes, methods, and properties. Is there any way to do this for Win32 (not .NET) C++ projects? Either through Visual Studio itself or with a 3rd party tool?
If you have a Visual Studio 2008 solution composed of multiple C++ projects, you can only generate one class diagram per project. For example, if you have one application project linking to 10 library projects, you'll have to generate 11 separate class diagrams. There are two ways to work around this, neither of which is pleasant: Cram all the source into a single project. Create a class diagram for one project (the application, perhaps) and then drag files from all the other projects into the class diagram. A more thorough exploration of the capabilities of the Visual Studio class designer is given in Visual C++ Class Designer. Given the poor support for C++ class diagrams in Visual Studio, you're probably better off going with a commercial tool if you want anything more than a simple list of what classes you have. WinTranslator from Excel Software might be worth looking at, and someone I work with uses Source Insight.
521,824
526,693
event notifications not received as described
Problem: Event notifications (From COM object - Server) are not received as listed in the Sink (class) implementation. One event notification is received (Event_one), however, others are not received accordingly If order is changed - in IDispatch::Invoke, that is: if Event_one is swapped to Event_two then Event_two notification received but Event_one and others neglected accordingly Question: Better way to implement, IDispatch::Invoke or QI? Using the wrong logic or approach? Note: No MFC No ATL Pure C++ using message loop: GetMessage() STA model ( Coinitialize() ) Call to IDispatch::Advise successful (HRESULT from call S_OK) After above, COM object method calls as normal (with interface pointer) Single call to Advise Type library generated from MIDL compiler For instance (example): Illustration of IDispatch::Invoke - taken from Sink Class: HRESULT STDMETHODCALLTYPE Invoke( { //omitted parameters // The riid parameter is always supposed to be IID_NULL if (riid != IID_NULL) return DISP_E_UNKNOWNINTERFACE; if (pDispParams) //DISPID dispIdMember { switch (dispIdMember) { case 1: return Event_one(); case 2: return Event_two(); case 3: return Event_three(); default: return E_NOTIMPL; } } return E_NOTIMPL; } Illustration of QueryInterface: STDMETHOD (QueryInterface)( //omitted parameters { if (iid == IID_IUnknown || iid == __uuidof(IEvents)) { *ppvObject = (IEvents *)this; } else { *ppvObject = NULL; return E_NOINTERFACE; } m_dwRefCount++; return S_OK; };
SOLVED: After reviewing the corresponding IDL file (generated by the MIDL compiler), it was evident that each method contained in the IEvent interface has a unique ID. For instance, Event_one has an ID of 2. For example: methods: [id(0x00000002)] HRESULT Event_one(); Therefore, make the following change in the IDispatch::Invoke implementation (illustrated in the above question): //omitted if (pDispParams) //DISPID dispIdMember { switch (dispIdMember) { case 2: return Event_one(); //omitted Now, when invoked accordingly, the desired/correct method is executed.
521,957
522,998
How to develop a DirectFB app without leaving X.11 environment
I'm trying to develop a GUI application for an embedded platform, without any windowing whatsoever and I'm doing that with DirectFB, and it suits my needs very fine. Since the embedded I develop for is not that powerful, I would really like to try to develop on my own Ubuntu desktop. The problem is Framebuffer is conflicting with X.org causing me to leave the whole desktop, and shutdown X.org just to see the result of my changes. Is there a good framebuffer simulator that suits my needs? Qt has one, called QVFb, but it only works for developing Qt apps, and the VNC back-end of DirectFB always crash. So, any ideas?
DirectFB has a X11 backend. $ sudo apt-get install libdirectfb-extra # for Debian and Ubuntu, anyhow $ cat ~/.directfbrc system=x11 force-windowed Also, DirectFB has a SDL backend, and SDL has a X11 backend. Also, SDL has a GGI backend, and GGI has an X backend. That's a bit circuitous, but it should work :) I tested it with $ SDL_VIDEODRIVER=directfb ffplay some_movie.avi and got a nice 640x480 window with media playing and DirectFB handling layering and input, so I'm sure this works.
521,972
523,092
Why is runtime library a compiler option rather than a linker option?
I'm trying to build a C/C++ static library using visual studio 2005. Since the selection of the runtime library is a compile option, I am forced to build four variations of my library, one for each variation of the runtime library: /MT - static runtime library /MD - DLL runtime library /MTd - debug static runtime library /MDd - debug DLL runtime library These are compiler options, not linker options. Coming from a Linux background, this seems strange. Do the different runtime libraries have different calling conventions or something? Why can't the different runtime libraries be resolved at link time, i.e. when I link the application which uses my static library?
One side effect of the C preprocessor definitions like _DLL and _DEBUG that zdan mentioned: Some data structures (such as STL containers and iterators) may be sized differently in the debug runtime, possibly due to features such as _HAS_ITERATOR_DEBUGGING and _SECURE_SCL. You must compile your code with structure definitions that are binary-compatible with the library you're linking to. If you mix and match object files that were compiled against different runtime libraries, you will get linker warnings such as the following: warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs
522,278
522,476
Trouble examining byte code in MSVC++
I've been messing around with the free Digital Mars Compiler at work (naughty I know), and created some code to inspect compiled functions and look at the byte code for learning purposes, seeing if I can learn anything valuable from how the compiler builds its functions. However, recreating the same method in MSVC++ has failed miserably and the results I am getting are quite confusing. I have a function like this: unsigned int __stdcall test() { return 42; } Then later I do: unsigned char* testCode = (unsigned char*)test; I can't seem to get the C++ static_cast to work in this case (it throws a compiler error)... hence the C-style cast, but that's besides the point... I've also tried using the reference &test, but that helps none. Now, when I examine the contents of the memory pointed to by testCode I am confused because what I see doesn't even look like valid code, and even has a debug breakpoint stuck in there... it looks like this (target is IA-32): 0xe9, 0xbc, 0x18, 0x00, 0x00, 0xcc... This is clearly wrong, 0xe9 is a relative jump instruction, and looking 0xbc bytes away it looks like this: 0xcc, 0xcc, 0xcc... i.e. memory initialised to the debug breakpoint opcode as expected for unallocated or unused memory. Where as what I would expect from a function returning 42 would be something like: 0x8b, 0x2a, 0x00, 0x00, 0x00, 0xc3 or at least some flavour of mov followed by a ret (0xc2, 0xc3, 0xca or 0xcb)a little further down Is MSVC++ taking steps to prevent me from doing this sort of thing for security reasons, or am I doing something stupid and not realising it? This method seems to work fine using DMC as the compiler... I'm also having trouble going the other way (executing bytes), but I suspect that the underlying cause is the same. Any help or tips would be greatly appreciated.
I can only guess, but I'm pretty sure you are inspecting a debug build. In debug mode the MSVC++ compiler replaces all calls by calls to jump stubs. This means, that every function starts with a jump to the real function and this is exactly what you are facing here. The surrounding 0xCC bytes are indeed breakpoint instructions, in order to fire a possibly attached debugger in case you're executing code where you shouldn't. Try the same with a release build. That should work as expected. Edit: This is actually affected by the linker setting /INCREMENTAL. The reason that the effect you're describing doesn't show up in release builds is that these jump stubs are simply optimized away if any kind of optimization is turned on (which is of course usually the case for release builds).
522,637
522,701
Should objects delete themselves in C++?
I've spent the last 4 years in C# so I'm interested in current best practices and common design patterns in C++. Consider the following partial example: class World { public: void Add(Object *object); void Remove(Object *object); void Update(); } class Fire : Object { public: virtual void Update() { if(age > burnTime) { world.Remove(this); delete this; } } } Here we have a world responsible for managing a set of objects and updating them regularly. Fire is an an object that might be added to the world under many different circumstances but typically by another object already in the world. Fire is the only object that knows when it has burned out so currently I have it deleting itself. The object that created the fire is likely no longer in existence or relevant. Is this a sensible thing to do or is there a better design that would help clean up these objects?
The problem with this is that you're really creating an implicit coupling between the object and the World class. If I try to call Update() outside the World class, what happens? I might end up with the object being deleted, and I don't know why. It seems the responsibilities are badly mixed up. This is going to cause problems the moment you use the Fire class in a new situation you hadn't thought of when you wrote this code. What happens if the object should be deleted from more than one place? Perhaps it should be removed both from the world, the current map, and the player's inventory? Your Update function will remove it from the world, and then delete the object, and the next time the map or the inventory tries to access the object, Bad Things Happen. In general, I'd say it is very unintuitive for an Update() function to delete the object it is updating. I'd also say it's unintuitive for an object to delete itself. The object should more likely have some kind of way to fire an event saying that it has finished burning, and anyone interested can now act on that. For example by removing it from the world. For deleting it, think in terms of ownership. Who owns the object? The world? That means the world alone gets to decide when the object dies. That's fine as long as the world's reference to the object is going to outlast an other references to it. Do you think the object own itself? What does that even mean? The object should be deleted when the object no longer exists? Doesn't make sense. But if there is no clearly defined single owner, implement shared ownership, for example using a smart pointer implementing reference counting, such as boost::shared_ptr But having a member function on the object itself, which is hardcoded to remove the object from one specific list, whether or not it exists there, and whether or not it also exists in any other list, and also delete the object itself regardless of which references to it exist, is a bad idea.
522,919
522,925
Is this a reasonable use of the ternary operator?
Are there any understanding / maintainability issues that result from code like inVar1 == 0 ? NULL : v.push_back(inVar1); inVar2 == 0 ? NULL : v.push_back(inVar2); and so forth. The possibly confusing idea is using the ternary operator for program flow rather than variable assignment, which is the usual explanation. I haven't seen coding standards at work that address this usage, so while I'm comfortable doing this I'd like to find out if there is a good reason not to.
I think it's confusing and a lot harder to read than simply typing: if (inVar != 0) v.push_back(inVar); I had to scan your example several times to figure out what the result would be with any certainty. I'd even prefer a single-line if() {} statement over your example - and I hate single-line if statements :)
522,931
522,939
Creating variables / vectors based on runtime
I am working on a program that "encodes" a file based on a supplied bookfile. The resulting file has each letter replaced with a number. This number corresponds to the offset of that letters appearence in the bookfile. So if we had "hello" it would pick an 'h' from the bookfile, find its location number and replace it in the output. Mine works correctly but i am looking for a way to optimize it. As of right now, everytime it brings in a new letter it creates a vector of offset numbers to choose from. I would like to be able to "save" this vector and use it again if i find the same number again. I am at a loss as to how i would program this however. For example, if i read in an 'h' i would like to save it as vector<int> hLocations; Is there anyway of doing this or am i just insane? i was thinking of making a function that does this, but the part that confuses me is <int>Locations; Is there a way of using variables inside C++ code? i think that is what i am really asking.
You could use a std::map<unsigned int, std::vector<unsigned int> >, so that the character of interest is the key to the vector of offsets. That way, you don't have to code up N different vectors for each possible character in the file.
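A sketch of building and reusing such a map (the buildOffsets name and the char key are illustrative; the idea is that each offset vector is computed once and then just looked up):

#include <map>
#include <vector>
#include <string>

typedef std::map<char, std::vector<unsigned int> > OffsetMap;

// Scan the book text once, remembering every offset at which each character occurs.
OffsetMap buildOffsets(const std::string& bookText)
{
    OffsetMap offsets;
    for (unsigned int i = 0; i < bookText.size(); ++i)
        offsets[bookText[i]].push_back(i);
    return offsets;
}

// Later, encoding an 'h' just looks its vector up instead of rebuilding it:
// const std::vector<unsigned int>& hLocations = offsets['h'];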
523,385
523,472
How do I limit an external DLL to one CPU?
I have a program that I would like to run on just one CPU so it doesn't take up too much system resources. The problem is, it makes a call into an external DLL that automatically uses all available CPU cores. I do not have the source code to the external DLL. How can I limit the DLL to only using one CPU? EDIT: Thanks for the help, here is the code I used to limit to one CPU (Windows): // Limit the process to only 1 thread so we don't chew up system resources HANDLE ProcessHandle = GetCurrentProcess(); DWORD ProcessAffinityMask; DWORD SystemAffinityMask; if(GetProcessAffinityMask(ProcessHandle,&ProcessAffinityMask,&SystemAffinityMask) && SystemAffinityMask != 0) { // Limit to 1 thread by masking all but 1 bit of the system affinity mask DWORD NewProcessAffinityMask = ((SystemAffinityMask-1) ^ SystemAffinityMask) & SystemAffinityMask; SetProcessAffinityMask(ProcessHandle,NewProcessAffinityMask); } EDIT: Turns out Brannon's approach of setting process priority works even better for what I want, which is to keep the process from chewing up resources. Here's that code (Windows): // Make the process low priority so we don't chew up system resources HANDLE ProcessHandle = GetCurrentProcess(); SetPriorityClass(ProcessHandle,BELOW_NORMAL_PRIORITY_CLASS);
Setting processor affinity is the wrong approach. Let the OS handle scheduling. If the machine is sitting idle, you want to use as much processor as you can. Otherwise you're doing less work for no reason. If the machine is busy, then you want to make use of "free" cycles and not adversely affect other processes. Windows has this functionality built-in. The proper solution for this is to set the base priority of the process. See http://msdn.microsoft.com/en-us/library/ms686219(VS.85).aspx for details on SetPriorityClass(). If you want to test this without writing any code, use Task Manager to change the priority of your process.
523,724
523,737
C/C++ check if one bit is set in, i.e. int variable
int temp = 0x5E; // in binary 0b1011110. Is there such a way to check if bit 3 in temp is 1 or 0 without bit shifting and masking. Just want to know if there is some built in function for this, or am I forced to write one myself.
In C, if you want to hide bit manipulation, you can write a macro: #define CHECK_BIT(var,pos) ((var) & (1<<(pos))) and use it this way to check the nth bit from the right end: CHECK_BIT(temp, n - 1) In C++, you can use std::bitset.
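The std::bitset route mentioned above looks roughly like this (bit 3 counted from the right end is position 2 when numbering from zero):

#include <bitset>
#include <iostream>

int main()
{
    int temp = 0x5E;                  // 0b1011110
    std::bitset<8> bits(temp);
    std::cout << "bit 3 is " << bits.test(2) << std::endl;   // prints 1
    return 0;
}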
523,827
523,833
C++0x atomic template implementation
I know that similar template exits in Intel's TBB, besides that I can't find any implementation on google or in Boost library.
You can find discussion about implementing this feature in Boost here: http://lists.boost.org/Archives/boost/2008/11/144803.php > Can the N2427 - C++ Atomic Types and Operations be implemented > without the help of the compiler? No. They don't need to be intrinsics: if you can write inline assembler (or separately-compiled assembler for that matter) then you can write the operations themselves directly. You might even be able to use simple C++ (e.g. just plain assignment for load or store). The reason you need compiler support is preventing inappropriate optimizations: atomic operations can't be optimized out, and generally must not be reordered before or after any other operations. This means that even non-atomic stores performed before an atomic store have to be complete, and can't be cached in a register (for example). Also, loads that occur after an atomic operation cannot be hoisted before the atomic op. On some compilers, just using inline assembler is enough. On others, calling an external function is enough. MSVC provides _ReadWriteBarrier() to provide the compiler ordering. Other compilers need other flags.
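As a rough, MSVC-specific sketch of what that looks like in practice (not the real C++0x interface; the ordering guarantees here are only what the Interlocked intrinsics and x86 happen to give you): #include <intrin.h>

class atomic_long
{
public:
    explicit atomic_long(long v = 0) : value_(v) {}

    long fetch_add(long delta)
    {
        // _InterlockedExchangeAdd is a full barrier on x86/x64 with MSVC.
        return _InterlockedExchangeAdd(&value_, delta);
    }

    long load() const
    {
        long v = value_;       // an aligned 32-bit read is atomic on x86
        _ReadWriteBarrier();   // keep the compiler from reordering around it
        return v;
    }

private:
    volatile long value_;
};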
523,872
523,882
How do you serialize an object in C++?
I have a small hierarchy of objects that I need to serialize and transmit via a socket connection. I need to both serialize the object, then deserialize it based on what type it is. Is there an easy way to do this in C++ (as there is in Java)? Just to be clear, I'm looking for methods on converting an object into an array of bytes, then back into an object. I can handle the socket transmission.
Talking about serialization, the boost serialization API comes to my mind. As for transmitting the serialized data over the net, I'd either use Berkeley sockets or the asio library. If you want to serialize your objects to a byte array, you can use the boost serializer in the following way (taken from the tutorial site): #include <boost/archive/binary_oarchive.hpp> #include <boost/archive/binary_iarchive.hpp> class gps_position { private: friend class boost::serialization::access; template<class Archive> void serialize(Archive & ar, const unsigned int version) { ar & degrees; ar & minutes; ar & seconds; } int degrees; int minutes; float seconds; public: gps_position(){}; gps_position(int d, int m, float s) : degrees(d), minutes(m), seconds(s) {} }; Actual serialization is then pretty easy: #include <fstream> std::ofstream ofs("filename.dat", std::ios::binary); // create class instance const gps_position g(35, 59, 24.567f); // save data to archive { boost::archive::binary_oarchive oa(ofs); // write class instance to archive oa << g; // archive and stream closed when destructors are called } Deserialization works in an analogous manner. There are also mechanisms which let you handle serialization of pointers (complex data structures like trees etc. are no problem) and derived classes, and you can choose between binary and text serialization. Besides, all STL containers are supported out of the box.
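To get a byte buffer for a socket rather than a file, the same archives can be pointed at string streams; this sketch reuses the gps_position class from above, and the function names are just for illustration: #include <sstream> #include <string> #include <boost/archive/binary_oarchive.hpp> #include <boost/archive/binary_iarchive.hpp>

std::string serialize(const gps_position& g)
{
    std::ostringstream os(std::ios::out | std::ios::binary);
    boost::archive::binary_oarchive oa(os);
    oa << g;
    return os.str();          // the raw bytes to push down the socket
}

gps_position deserialize(const std::string& bytes)
{
    std::istringstream is(bytes, std::ios::in | std::ios::binary);
    boost::archive::binary_iarchive ia(is);
    gps_position g;
    ia >> g;                  // restores the instance written above
    return g;
}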
523,894
523,906
Overloading . -> and :: for use in multiplatform classes
Say I have three window classes, one for each OS I want to support: WindowsWindow OSXWindow LinuxWindow They all inherit from the Window class. This is also the class you instantiate. The Window class has the . -> and :: operators overloaded, and depending on which OS we're running on (based on IFDEFs) it casts the this pointer to the relevant class. What I want to do is just create a Window instance without any clue as to what OS is running. Is this thinking very wrong? Is the downcast too dangerous? Are there better ways to do this? I'm aware that there are libraries to do this, but I want to try it myself. I guess the easiest way is to create a factory. But can something like this be done?
You can't overload the scope resolution operator :: at all. You could overload the -> operator, but when you invoke that operator, you already have to have an object of the requisite type. For creating your windows, just use a simple factory method: class Window { public: static Window *CreateWindow(...) { #ifdef _WIN32 return new Win32Window(...); #elif defined(/** whatever is defined for Linux */) return new X11Window(...); #elif defined(/** whatever is defined for Mac */) return new CocoaWindow(...); #else #error "Bad platform!" #endif } };
524,028
524,054
Enum bitfield container class
I'm trying to write a small class to better understand bit flags in C++. But something isn't working out. It prints the wrong values. Where is the problem? Have I misunderstood how to add flags? Or how to check if the bit field has them? Here's the code: #include <iostream> enum flag { A = 1, B = 2, C = 4 }; class Holder { public: Holder() : m_flags(A) {} ~Holder() {} void add_flag(flag f) { m_flags |= f; } bool has_flag(flag f) { return ((m_flags&f)==f); } void remove_flag(flag f) { unsigned int flags = 0; for (int i = 1; i<=(int)C; i *= 2) { if ((flag)i!=f && has_flag(f)) flags |= f; } m_flags = flags; } void print() { std::cout << "flags are now: " << m_flags << " | holding: "; for (int i = 1; i<=(int)C; i *= 2) { if (has_flag((flag)i)) std::cout << i << " "; } std::cout << std::endl; } private: unsigned int m_flags; }; int main() { Holder h; h.print(); // should print 1 h.add_flag(B); h.print(); // should print 1 2 h.remove_flag(A); h.print(); // should print 2 h.add_flag(C); h.print(); // should print 2 4 h.remove_flag(B); h.print(); // should print 4 } Output of program: flags are now: 1 | holding: 1 flags are now: 3 | holding: 1 2 flags are now: 1 | holding: 1 flags are now: 5 | holding: 1 4 flags are now: 0 | holding:
Personally I would use std::vector<bool> to handle flags, since it is a specialization that packs bools into bits. However, I think your remove_flag is a bit complex; try this instead: void remove_flag( flag f ) { if ( has_flag( f ) == true ) { m_flags ^= f; // toggle the bit leaving all other unchanged } } Edit: A comment asked why I just didn't do m_flags &= ~f. I took the question as a 'learner' question not an optimization question. I show how to make his code correct, not fast.
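For reference, the m_flags &= ~f form mentioned in the edit would look like this inside the Holder class; unlike the XOR version it needs no guard, since it works whether or not the flag was set:

void remove_flag( flag f )
{
    m_flags &= ~f;   // clear the bit whether or not it was currently set
}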
524,137
524,167
Get icons for common file types
I want to get the icons of common file types in my dll. I am using vc++. I only have the file extension and mime type of the file based on which I want to get the icon for the file. Can someone please tell me how I can do that? (The method available in vc++ needs the user to give the path of the file for which the icon is needed. I do not have access to any such file) Thanks.
Shell API You can get them from the shell by calling SHGetFileInfo() along with the SHGFI_USEFILEATTRIBUTES flag - this flag allows the routine to work without requiring the filename passed in to actually exist, so if you have a file extension just make up a filename, append the extension, and pass it in. By combining other flags, you'll be able to retrieve: A large or small icon as determined by the system configuration: SHGFI_ICON|SHGFI_LARGEICON or SHGFI_ICON|SHGFI_SMALLICON A large or small icon as determined by the shell configuration: SHGFI_ICON|SHGFI_LARGEICON|SHGFI_SHELLICONSIZE or SHGFI_ICON|SHGFI_SMALLICON|SHGFI_SHELLICONSIZE The index of the icon in the shell's image list along with the appropriate image list: SHGFI_SYSICONINDEX The path and filename of the actual module where the icon is stored (along with the icon index in that module): SHGFI_ICONLOCATION Examples // Load a System Large icon image SHGetFileInfo( szFileName, FILE_ATTRIBUTE_NORMAL, &shfi, sizeof(SHFILEINFO), SHGFI_USEFILEATTRIBUTES | SHGFI_ICON | SHGFI_LARGEICON); // Load a System Small icon image SHGetFileInfo( szFileName, FILE_ATTRIBUTE_NORMAL, &shfi, sizeof(SHFILEINFO), SHGFI_USEFILEATTRIBUTES | SHGFI_ICON | SHGFI_SMALLICON); // Load a Shell Large icon image SHGetFileInfo( szFileName, FILE_ATTRIBUTE_NORMAL, &shfi, sizeof(SHFILEINFO), SHGFI_USEFILEATTRIBUTES | SHGFI_ICON | SHGFI_SHELLICONSIZE); // Load a Shell Small icon image SHGetFileInfo( szFileName, FILE_ATTRIBUTE_NORMAL, &shfi, sizeof(SHFILEINFO), SHGFI_USEFILEATTRIBUTES | SHGFI_ICON | SHGFI_SHELLICONSIZE | SHGFI_SMALLICON); If you want to draw such an icon, use something like this: // Draw it at its native size DrawIconEx( hDC, nLeft, nTop, hIcon, 0, 0, 0, NULL, DI_NORMAL ); // Draw it at the System Large size DrawIconEx( hDC, nLeft, nTop, hIcon, 0, 0, 0, NULL, DI_DEFAULTSIZE | DI_NORMAL ); // Draw it at some other size (40x40 in this example) DrawIconEx( hDC, nLeft, nTop, hIcon, 40, 40, 0, NULL, DI_NORMAL ); The icon handle as well as the file system path can be obtained from the SHFILEINFO structure: typedef struct _SHFILEINFOA { HICON hIcon; // out: icon int iIcon; // out: icon index DWORD dwAttributes; // out: SFGAO_ flags CHAR szDisplayName[MAX_PATH]; // out: display name (or path) CHAR szTypeName[80]; // out: type name } SHFILEINFOA; Keep in mind that you must free the obtained icon by passing hIcon to DestroyIcon() after you're done with it.
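Putting the extension-only case together, a sketch (the helper name and the "dummy" base filename are made up; link against shell32.lib): #include <windows.h> #include <shellapi.h>

// Hypothetical helper: look up the small icon for a file extension alone.
// Only the extension matters here because of SHGFI_USEFILEATTRIBUTES.
HICON IconForExtension(const TCHAR* ext /* e.g. TEXT(".txt") */)
{
    TCHAR name[MAX_PATH] = TEXT("dummy");
    lstrcat(name, ext);

    SHFILEINFO sfi = {0};
    if (SHGetFileInfo(name, FILE_ATTRIBUTE_NORMAL, &sfi, sizeof(sfi),
                      SHGFI_USEFILEATTRIBUTES | SHGFI_ICON | SHGFI_SMALLICON))
    {
        return sfi.hIcon;   // caller must DestroyIcon() this when finished
    }
    return NULL;
}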
524,342
524,350
How to store a hash table in a file?
How can I store a hash table with separate chaining in a file on disk? Generating the data stored in the hash table at runtime is expensive; it would be faster to just load the HT from disk... if only I could figure out how to do it. Edit: The lookups are done with the HT loaded in memory. I need to find a way to store the hashtable (in memory) to a file in some binary format. So that the next time the program runs it can just load the HT off disk into RAM. I am using C++.
What language are you using? The common method is to do some sort of binary serialization. Ok, I see you have edited to add the language. For C++ there are a few options. I believe the Boost serialization mechanism is pretty good. In addition, the page for Boost's serialization library also describes alternatives. Here is the link: http://www.boost.org/doc/libs/1_37_0/libs/serialization/doc/index.html
524,453
795,370
boost asio and endian
I can't tell: does boost asio handle endianness?
Asio does convert things like the port number into network order. The conversion functions are not exposed as part of the official interface and are hidden in the detail namespace instead (e.g. boost::asio::detail::socket_ops::host_to_network_short).
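Multi-byte fields in your own application payload still need explicit conversion, though; a sketch using the ordinary BSD-socket helpers (include <winsock2.h> instead of <arpa/inet.h> on Windows), with made-up helper names: #include <arpa/inet.h> #include <cstring> #include <stdint.h>

void encode_length_prefix(uint32_t length, char* buffer)
{
    uint32_t wire = htonl(length);             // host order -> network order
    std::memcpy(buffer, &wire, sizeof(wire));
}

uint32_t decode_length_prefix(const char* buffer)
{
    uint32_t wire;
    std::memcpy(&wire, buffer, sizeof(wire));
    return ntohl(wire);                        // network order -> host order
}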
524,495
524,510
Is this code threadsafe?
I'm writing some code where the UI thread needs to communicate with the background thread doing network communication. The code works, but would it be considered thread safe? I would feel a lot better if someone experienced could lead me on to the right path on this... static Mutex^ mut_currentPage = gcnew Mutex; static array<unsigned char>^ m_currentPage; property array<unsigned char>^ Write { void set(array<unsigned char>^ value) { mut_currentPage->WaitOne(); m_currentPage = value; mut_currentPage->ReleaseMutex(); } } This is .NET C++ code... :)
It looks thread-safe, but you might want to think about exception handling; setting a field shouldn't error (except perhaps ThreadAbortException), but if the code were more complex, you'd want to ensure you release the mutex upon exception. I'd also look at Monitor ("lock" in C#). One other thought: even if you lock the field access, an array is inherently mutable. Consider using string instead, since it is immutable?
524,512
524,550
Most suitable asynchronous socket model for an instant messenger client?
I'm working on an instant messenger client in C++ (Win32) and I'm experimenting with different asynchronous socket models. So far I've been using WSAAsyncSelect for receiving notifications via my main window. However, I've been experiencing some unexpected results with Winsock spawning an additional 5-6 threads (in addition to the initial thread created when calling WSAAsyncSelect) for one single socket. I have plans to revamp the client to support additional protocols via DLLs, and I'm afraid that my current solution won't be suitable, based on my experiences with WSAAsyncSelect in addition to my reluctance to mix network and UI code (in the message loop). I'm looking for advice on what a suitable asynchronous socket model could be for a multi-protocol IM client which needs to be able to handle roughly 10-20+ connections (depending on the number of protocols and protocol design etc.), while not using an excessive number of threads -- I am very interested in performance and keeping the resource usage down. I've been looking at IO Completion Ports, but from what I've gathered, it seems overkill. I'd very much appreciate some input on what a suitable socket solution could be! Thanks in advance! :-)
There are four basic ways to handle multiple concurrent sockets. Multiplexing, that is using select() to poll the sockets. AsyncSelect which is basically what you're doing with WSAAsyncSelect. Worker Threads, creating a single thread for each connection. IO Completion Ports, or IOCP. dp mentions them above, but basically they are an OS-specific way to handle asynchronous I/O, which has very good performance, but it is a little more confusing. Which you choose often depends on where you plan to go. If you plan to port the application to other platforms, you may want to choose #1 or #3, since select is not terribly different from other models used on other OS's, and most other OS's also have the concept of threads (though they may operate differently). IOCP is typically Windows-specific (although Linux now has some async I/O functions as well). If your app is Windows only, then you basically want to choose the best model for what you're doing. This would likely be either #3 or #4. #4 is the most efficient, as it calls back into your application (similar to, but with better performance and fewer issues than, WSAAsyncSelect). The big thing you have to deal with when using threads (either IOCP or WorkerThreads) is marshaling the data back to a thread that can update the UI, since you can't call UI functions on worker threads. Ultimately, this will involve some messaging back and forth in most cases. If you were developing this in Managed code, I'd tell you to look at Jeffrey Richter's AsyncEnumerator, but you've chosen C++, which has its pros and cons. Lots of people have written various network libraries for C++, maybe you should spend some time researching some of them.
524,548
524,624
Regular expression to detect semi-colon terminated C++ for & while loops
In my Python application, I need to write a regular expression that matches a C++ for or while loop that has been terminated with a semi-colon (;). For example, it should match this: for (int i = 0; i < 10; i++); ... but not this: for (int i = 0; i < 10; i++) This looks trivial at first glance, until you realise that the text between the opening and closing parenthesis may contain other parenthesis, for example: for (int i = funcA(); i < funcB(); i++); I'm using the python.re module. Right now my regular expression looks like this (I've left my comments in so you can understand it easier): # match any line that begins with a "for" or "while" statement: ^\s*(for|while)\s* \( # match the initial opening parenthesis # Now make a named group 'balanced' which matches a balanced substring. (?P<balanced> # A balanced substring is either something that is not a parenthesis: [^()] | # …or a parenthesised string: \( # A parenthesised string begins with an opening parenthesis (?P=balanced)* # …followed by a sequence of balanced substrings \) # …and ends with a closing parenthesis )* # Look for a sequence of balanced substrings \) # Finally, the outer closing parenthesis. # must end with a semi-colon to match: \s*;\s* This works perfectly for all the above cases, but it breaks as soon as you try and make the third part of the for loop contain a function, like so: for (int i = 0; i < 10; doSomethingTo(i)); I think it breaks because as soon as you put some text between the opening and closing parenthesis, the "balanced" group matches that contained text, and thus the (?P=balanced) part doesn't work any more since it won't match (due to the fact that the text inside the parenthesis is different). In my Python code I'm using the VERBOSE and MULTILINE flags, and creating the regular expression like so: REGEX_STR = r"""# match any line that begins with a "for" or "while" statement: ^\s*(for|while)\s* \( # match the initial opening parenthesis # Now make a named group 'balanced' which matches # a balanced substring. (?P<balanced> # A balanced substring is either something that is not a parenthesis: [^()] | # …or a parenthesised string: \( # A parenthesised string begins with an opening parenthesis (?P=balanced)* # …followed by a sequence of balanced substrings \) # …and ends with a closing parenthesis )* # Look for a sequence of balanced substrings \) # Finally, the outer closing parenthesis. # must end with a semi-colon to match: \s*;\s*""" REGEX_OBJ = re.compile(REGEX_STR, re.MULTILINE| re.VERBOSE) Can anyone suggest an improvement to this regular expression? It's getting too complicated for me to get my head around.
You could write a little, very simple routine that does it, without using a regular expression: Set a position counter pos so that it points to just before the opening bracket after your for or while. Set an open brackets counter openBr to 0. Now keep incrementing pos, reading the characters at the respective positions, and increment openBr when you see an opening bracket, and decrement it when you see a closing bracket. That will increment it once at the beginning, for the first opening bracket in "for (", increment and decrement some more for some brackets in between, and set it back to 0 when your for bracket closes. So, stop when openBr is 0 again. The stopping position is your closing bracket of for(...). Now you can check if there is a semicolon following or not.
524,591
524,843
Performance of creating a C++ std::string from an input iterator
I'm doing something really simple: slurping an entire text file from disk into a std::string. My current code basically does this: std::ifstream f(filename); return std::string(std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()); It's very unlikely that this will ever have any kind of performance impact on the program, but I still got curious whether this is a slow way of doing it. Is there a risk that the construction of the string will involve a lot of reallocations? Would it be better (that is, faster) to use seekg()/tellg() to calculate the size of the file and reserve() that much space in the string before doing the reading?
I benchmarked your implementation(1), mine(2), and two others(3 and 4) that I found on stackoverflow. Results (Average of 100 runs; timed using gettimeofday, file was 40 paragraphs of lorem ipsum): readFile1: 764 readFile2: 104 readFile3: 129 readFile4: 402 The implementations: string readFile1(const string &fileName) { ifstream f(fileName.c_str()); return string(std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()); } string readFile2(const string &fileName) { ifstream ifs(fileName.c_str(), ios::in | ios::binary | ios::ate); ifstream::pos_type fileSize = ifs.tellg(); ifs.seekg(0, ios::beg); vector<char> bytes(fileSize); ifs.read(&bytes[0], fileSize); return string(&bytes[0], fileSize); } string readFile3(const string &fileName) { string data; ifstream in(fileName.c_str()); getline(in, data, string::traits_type::to_char_type( string::traits_type::eof())); return data; } string readFile4(const std::string& filename) { ifstream file(filename.c_str(), ios::in | ios::binary | ios::ate); string data; data.reserve(file.tellg()); file.seekg(0, ios::beg); data.append(istreambuf_iterator<char>(file.rdbuf()), istreambuf_iterator<char>()); return data; }
524,633
524,636
How can i avoid name mangling?
How can I avoid name mangling in C++?
You can't. Name mangling is built into C++ compilers to allow you to overload functions and to have functions with the same name in different classes and such. But you can write functions that are given C linkage and therefore C-style (unmangled) names. Those can be called from C code. But those can't be overloaded and can't be called by "normal" C++ function pointers: extern "C" void foo() { } The above function gets a C-style name from your compiler. That may mean no change at all to the name, or some decoration such as a leading "_" in front of it.
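The usual pattern for a header shared between C and C++ translation units looks like this (the function names here are placeholders):

/* shared header: C++ code sees these declarations with C linkage */
#ifdef __cplusplus
extern "C" {
#endif

void foo(void);      /* exported with an unmangled, C-style name */
int  bar(int x);

#ifdef __cplusplus
}
#endif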
524,641
528,661
How do I create my own ostream/streambuf?
For educational purposes I want to create an ostream and stream buffer to do the following: fix endianness when doing << myVar; store in a deque container instead of using std::cout or writing to a file; log extra data, such as how many times I did <<, how many times I did .write, the number of bytes I have written and how many times I flush(). But I do not need all the info. I tried overloading but failed horribly. I tried overloading write by doing ostream& write( const char* s, streamsize n ) in my basic_stringstream2 class (I copy-pasted basic_stringstream into my cpp file and modified it) but the code kept using basic_ostream. I looked through the code and it looks like I need to overload xsputn (which isn't mentioned on this page http://www.cplusplus.com/reference/iostream/ostream ) but what else do I need to overload? And how do I construct my class (what does it need to inherit, etc)?
The canonical approach consists of defining your own streambuf. You should have a look at: Angelika Langer's articles on IOStreams derivation; James Kanze's articles on filtering streambufs; and Boost.Iostreams for examples of application.
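As a small illustrative sketch of the streambuf approach (this one only counts bytes; endian fixing or deque storage would hook into the same two virtual functions): #include <iostream> #include <streambuf>

class CountingBuf : public std::streambuf {
public:
    explicit CountingBuf(std::streambuf* dest) : dest_(dest), bytes_(0) {}
    std::size_t bytes() const { return bytes_; }
protected:
    // called for single characters
    virtual int_type overflow(int_type ch) {
        if (traits_type::eq_int_type(ch, traits_type::eof()))
            return traits_type::not_eof(ch);          // nothing to write
        ++bytes_;
        return dest_->sputc(traits_type::to_char_type(ch));
    }
    // called for blocks (this is what operator<< mostly ends up in)
    virtual std::streamsize xsputn(const char* s, std::streamsize n) {
        bytes_ += static_cast<std::size_t>(n);
        return dest_->sputn(s, n);
    }
private:
    std::streambuf* dest_;
    std::size_t bytes_;
};

int main() {
    CountingBuf buf(std::cout.rdbuf());
    std::ostream out(&buf);          // an ostream that writes through our buffer
    out << "hello " << 42 << '\n';
    out.flush();
    std::cout << buf.bytes() << " bytes written\n";
}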
524,805
524,862
How do I enlarge a picture so that it is 300 DPI?
The accepted answer to the question C++ Library for image recognition: images containing words to string recommended that you: Upsize/Downsize your input image to 300 DPI. How would I do this... I was under the impression that DPI was for monitors, not image formats.
I think the more accurate term here is resampling. You want a pixel resolution high enough to support accurate OCR. Font size (e.g. in points) is typically measured in units of length, not pixels. Since 72 points = 1 inch, we need 300/72 pixels-per-point for a resolution of 300 dpi ("pixels-per-inch"). That means a typical 12-point font has a height (or more accurately, base-line to base-line distance in single-spaced text) of 50 pixels. Ideally, your source documents should be scanned at an appropriate resolution for the given font size, so that the font in the image is about 50 pixels high. If the resolution is too high/low, you can easily resample the image using a graphics program (e.g. GIMP). You can also do this programmatically through a graphics library, such as ImageMagick which has interfaces for many programming languages.
524,919
524,944
Will Garbage Collected C be Faster Than C++?
I have been wondering for quite some time how to manage memory in my next project, which is writing a DSL in C/C++. It can be done in any of three ways. Reference counted C or C++. Garbage collected C. In C++, copying classes and structures from stack to stack and managing strings separately with some kind of GC. The community probably already has a lot of experience with each of these methods. Which one will be faster? What are the pros and cons for each? A related side question. Will malloc/free be slower than allocating a big chunk at the beginning of the program and running my own memory manager over it? .NET seems to do it. But I am confused why we can't count on the OS to do this job better and faster than what we can do ourselves.
It all depends! That's a pretty open question. It needs an essay to answer it! Hey... here's one somebody prepared earlier: http://lambda-the-ultimate.org/node/2552 http://www.hpl.hp.com/personal/Hans_Boehm/gc/issues.html It depends how big your objects are, how many of them there are, how fast they're being allocated and discarded, and how much time you want to invest in optimizing and tweaking. If you know the limits of how much memory you need, for fast performance, I would think you can't really beat grabbing all the memory you need from the OS up front, and then managing it yourself. The reason it can be slow allocating memory from the OS is that it deals with lots of processes and memory on disk and in RAM, so to get memory it's got to decide if there is enough. Possibly, it might have to page another process's memory out from RAM to disk so it can give you enough. There's lots going on. So managing it yourself (or with a GC-collected heap) can be far quicker than going to the OS for each request. Also, the OS usually deals with bigger chunks of memory, so it might round up the size of requests you make, meaning you could waste memory. Have you got a real hard requirement for going super quick? A lot of DSL applications don't need raw performance. I'd suggest going with whatever's simplest to code. You could spend a lifetime writing memory management systems and worrying about which is best.
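To make the "grab a big chunk up front" idea concrete, here is a deliberately tiny arena/bump allocator sketch (not a general malloc replacement; there is no per-object free, everything is released at once): #include <cstddef> #include <cstdlib>

class Arena
{
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), used_(0) {}
    ~Arena() { std::free(base_); }

    void* allocate(std::size_t n)
    {
        n = (n + 7) & ~std::size_t(7);        // keep 8-byte alignment
        if (used_ + n > size_) return 0;      // out of arena space
        void* p = base_ + used_;
        used_ += n;                           // just bump the pointer
        return p;
    }

    void reset() { used_ = 0; }               // "free" everything at once

private:
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};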
524,999
525,060
C++ What's the max number of bytes you can dynamically allocate using the new operator in Windows XP using VS2005?
I have C++ code that attempts to dynamically allocate a 2D array of bytes that measures approx 151MB in size. When I attempt to go back and index through the array, my program crashes in exactly the same place every time with an "Access violation reading location 0x0110f000" error, but the indices appear to be in range. That leads me to believe the memory at those indices wasn't allocated correctly. 1) What's the max number of bytes you can dynamically allocate using the new operator? 2) If it is the case that I'm failing to dynamically allocate memory, would it make sense that my code is crashing when attempting to access the array at exactly the same two indices every time? For some reason, I feel like they would be different every time the program is run, but what do I know ;) 3) If you don't think the problem is from an unsuccessful call to new, any other ideas what could be causing this error and crash? Thanks in advance for all your help! *Edit Here's my code to allocate the 2D array... #define HD_WIDTH 960 #define HD_HEIGHT 540 #define HD_FRAMES 100 //pHDVideo is a char** pHDVideo->VideoData = new char* [HD_FRAMES]; for(int iFrame = 0; iFrame < HD_FRAMES; iFrame++) { //Create the new HD frame pHDVideo->VideoData[iFrame] = new char[HD_WIDTH * HD_HEIGHT * 3]; memset(pHDVideo->VideoData[iFrame], 0, HD_WIDTH * HD_HEIGHT * 3); } and here's a screenshot of the crashing code and debugger (Dead Link), in case it helps. I should add that the call to memset never fails, which to me means the allocation is successful, but I could be wrong. EDIT I found a fix, everyone; thanks for all your help. Somehow, and I still need to figure out how, there was one extra horizontal line being upscaled, so I changed... for(int iHeight = 0; iHeight < HD_HEIGHT; iHeight++) to for(int iHeight = 0; iHeight < HD_HEIGHT-1; iHeight++) and it suddenly worked. Anyhow, thanks so much again!
Some possibilities to look at or things to try: It may be that the pHDVideo->VideoData[iFrame] or pHDVideo->VideoData is being freed somewhere. I doubt this is the case but I'd check all the places this can happen anyway. Output a debug statement each time you free one of those AND just before your crash statement. Something might be overwriting the pHDVideo->VideoData[iFrame] values. Print them out when allocated and just before your crash statement to see if they've changed. If 0x0110f000 isn't within the range of one of them, that's almost certainly the case. Something might be overwriting the pHDVideo value. Print it out when allocated and just before your crash statement to see if it's changed. This depends on what else is within your pHDVideo structure. Please show us the code that crashes, with a decent amount of context so we can check that out as well. In answer to your specific questions: 1/ It's implementation- or platform-specific, and it doesn't matter in this case. If your calls to new were failing you'd get an exception or null return, not a dodgy pointer. 2/ It's not the case: see (1). 3/ See above for some possibilities and things to try. Following addition of your screenshot: You do realize that the error message says "Access violation reading ..."? That means it's not complaining about writing to pHDVideo->VideoData[iFrame][3*iPixel+2] but reading from this->VideoData[iFrame][3*iPixelIndex+2]. iPixelIndex is set to 25458, so can you confirm that this->VideoData[iFrame][76376] exists? I can't see from your screenshot how this->VideoData is allocated and populated.
525,227
525,270
Console menu updating OpenGL window
I am making an application that does some custom image processing. The program will be driven by a simple menu in the console. The user will input the filename of an image, and that image will be displayed using openGL in a window. When the user selects some processing to be done to the image, the processing is done, and the openGL window should redraw the image. My problem is that my image is never drawn to the window, instead the window is always black. I think it may have to do with the way I am organizing the threads in my program. The main execution thread handles the menu input/output and the image processing and makes calls to the Display method, while a second thread runs the openGL mainloop. Here is my main code: #include <iostream> #include <GL/glut.h> #include "ImageProcessor.h" #include "BitmapImage.h" using namespace std; DWORD WINAPI openglThread( LPVOID param ); void InitGL(); void Reshape( GLint newWidth, GLint newHeight ); void Display( void ); BitmapImage* b; ImageProcessor ip; int main( int argc, char *argv[] ) { DWORD threadID; b = new BitmapImage(); CreateThread( 0, 0, openglThread, NULL, 0, &threadID ); while( true ) { char choice; string path = "TestImages\\"; string filename; cout << "Enter filename: "; cin >> filename; path += filename; b = new BitmapImage( path ); Display(); cout << "1) Invert" << endl; cout << "2) Line Thin" << endl; cout << "Enter choice: "; cin >> choice; if( choice == '1' ) { ip.InvertColour( *b ); } else { ip.LineThinning( *b ); } Display(); } return 0; } void InitGL() { int argc = 1; char* argv[1]; argv[0] = new char[20]; strcpy( argv[0], "main" ); glutInit( &argc, argv ); glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowPosition( 0, 0 ); glutInitWindowSize( 800, 600 ); glutCreateWindow( "ICIP Program - Character recognition using line thinning, Hilbert curve, and wavelet approximation" ); glutDisplayFunc( Display ); glutReshapeFunc( Reshape ); glClearColor(0.0,0.0,0.0,1.0); glEnable(GL_DEPTH_TEST); } void Reshape( GLint newWidth, GLint newHeight ) { /* Reset viewport and projection parameters */ glViewport( 0, 0, newWidth, newHeight ); } void Display( void ) { glClear (GL_COLOR_BUFFER_BIT); // Clear display window. b->Draw(); glutSwapBuffers(); } DWORD WINAPI openglThread( LPVOID param ) { InitGL(); glutMainLoop(); return 0; } Here is my draw method for BitmapImage: void BitmapImage::Draw() { cout << "Drawing" << endl; if( _loaded ) { glBegin( GL_POINTS ); for( unsigned int i = 0; i < _height * _width; i++ ) { glColor3f( _bitmap_image[i*3] / 255.0, _bitmap_image[i*3+1] / 255.0, _bitmap_image[i*3+2] / 255.0 ); // invert the y-axis while drawing glVertex2i( i % _width, _height - (i / _width) ); } glEnd(); } } Any ideas as to the problem? Edit: The problem was technically solved by starting a glutTimer from the openglThread which calls glutPostRedisplay() every 500ms. This is OK for now, but I would prefer a solution in which I only have to redisplay every time I make changes to the bitmap (to save on processing time) and one in which I don't have to run another thread (the timer is another thread im assuming). This is mainly because the main processing thread is going to be doing a lot of intensive work and I would like to dedicate most of the resources to this thread rather than anything else.
I've had this problem before - it's pretty annoying. The problem is that all of your OpenGL calls must be done in the thread where you started the OpenGL context. So when you want your main (input) thread to change something in the OpenGL thread, you need to somehow signal to the thread that it needs to do stuff (set a flag or something). Note: I don't know what your BitmapImage loading function (here, your constructor) does, but it probably has some OpenGL calls in it. The above applies to that too! So you'll need to signal to the other thread to create a BitmapImage for you, or at least to do the OpenGL-related part of creating the bitmap.
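One hedged way to arrange that signalling with GLUT is to register an idle callback and have the console thread only flip a flag (the names below are made up; a real program would use proper synchronization rather than a bare volatile bool):

volatile bool g_imageDirty = false;

void Idle()
{
    if (g_imageDirty)
    {
        g_imageDirty = false;
        glutPostRedisplay();   // safe: this runs on the GLUT thread
    }
}

// in InitGL(), after glutDisplayFunc(Display):
//     glutIdleFunc( Idle );
// in main(), after ip.InvertColour(*b) or ip.LineThinning(*b):
//     g_imageDirty = true;   // instead of calling Display() directly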
525,365
525,377
Does std::stack expose iterators?
Does the std::stack in the C++ STL expose any iterators of the underlying container or should I use that container directly?
Stack does not have iterators, by definition of stack. If you need a stack with iterators, you'll need to implement it yourself on top of another container (std::list, std::vector, etc.). Stack doc is here. P.S. According to a comment I got from Iraimbilanja, std::stack by default uses std::deque for implementation.
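If you control the stack type yourself, one lightweight trick is to expose the protected underlying container (named c in std::stack) from a derived class; this is a sketch, not something provided by the standard library: #include <deque> #include <iostream> #include <stack>

template <typename T, typename Container = std::deque<T> >
class iterable_stack : public std::stack<T, Container>
{
public:
    typedef typename Container::const_iterator const_iterator;
    const_iterator begin() const { return this->c.begin(); }   // 'c' is the protected container
    const_iterator end()   const { return this->c.end(); }
};

int main()
{
    iterable_stack<int> s;
    s.push(1); s.push(2); s.push(3);
    for (iterable_stack<int>::const_iterator it = s.begin(); it != s.end(); ++it)
        std::cout << *it << ' ';    // prints bottom-to-top: 1 2 3
}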
525,609
529,017
Use C++ with Cocoa Instead of Objective-C?
I would like to write applications that use C++ and the Cocoa frameworks because Apple is not making Carbon 64-bit capable. C++ seems to be pretty vanilla in its implementation on Linux and Windows but on Mac OS X it seems like additional Apple specific pieces of code are required (like an Obj-C wrapper). It also seems that Apple is forcing developers to write in Objective-C rather than C++, although I could be wrong. I am trying to find a path to write code on the Mac that would be easy to keep cross platform. Having to write code in C++ for Linux/Windows and then rewrite large portions in Objective-C would be very inefficient. Is there a way to write code in C++ that will be supported for the future and supported in Xcode? Also, if this is possible, how would I mix C++ and Objective-C in Xcode? Thanks.
You cannot write a Cocoa application entirely in C++. Cocoa relies heavily on the late binding capabilities of Objective-C for many of its core technologies such as Key-Value Bindings, delegates (Cocoa style), and the target-action pattern. The late binding requirements make it very difficult to implement the Cocoa API in a compile-time bound, typed language like C++ⁱ. You can, of course, write a pure C++ app that runs on OS X. It just can't use the Cocoa APIs. So, you have two options if you want to share code between C++ apps on other platforms and your Cocoa-based application. The first is to write the model layer in C++ and the GUI in Cocoa. This is a common approach used by some very large apps, including Mathematica. Your C++ code can be left unchanged (you do not need "funky" apple extensions to write or compile C++ on OS X). Your controller layer will likely make use of Objective-C++ (perhaps the "funky" Apple extension you refer to). Objective-C++ is a superset of C++, just as Objective-C is a superset of C. In Objective-C++, you can make objc-style message passing calls (like [some-objc-object callMethod];) from within a C++ function. Conversely, you can call C++ functions from within ObjC code like: @interface MyClass { MyCPPClass *cppInstance; } @end @implementation MyClass - (id)init { if(self = [super init]) { cppInstance = new MyCPPClass(); } return self; } - (void) dealloc { if(cppInstance != NULL) delete cppInstance; [super dealloc]; } - (void)callCpp { cppInstance->SomeMethod(); } @end You can find out more about Objective-C++ in the Objective-C language guide. The view layer can then be pure Objective-C. The second option is to use a cross-platform C++ toolkit. The Qt toolkit might fit the bill. Cross-platform toolkits are generally despised by Mac users because they do not get all the look and feel details exactly right and Mac users expect polish in the UI of Mac applications. Qt does a surprisingly good job, however, and depending on the audience and the use of your app, it may be good enough. In addition, you will lose out on some of the OS X-specific technologies such as Core Animation and some QuickTime functionality, though there are approximate replacements in the Qt API. As you point out, Carbon will not be ported to 64-bit. Since Qt is implemented on Carbon APIs, Trolltech/Nokia have had to port Qt to the Cocoa API to make it 64-bit compatible. My understanding is that the next relase of Qt (currently in release candiate) completes this transition and is 64-bit compatible on OS X. You may want to have a look at the source of Qt 4.5 if you're interested in integrating C++ and the Cocoa APIs. ⁱ For a while Apple made the Cocoa API available to Java, but the bridge required extensive hand-tuning and was unable to handle the more advanced technologies such as Key-Value Bindings described above. Currently dynamically typed, runtime-bound languages like Python, Ruby, etc. are the only real option for writing a Cocoa app without Objective-C (though of course these bridges use Objective-C under the hood).
525,677
525,712
Is there a way to disable all warnings with a pragma?
I've started a new project and have decided to make sure it builds cleanly with the /Wall option enabled. The only problem is not all 3rd party libraries (like boost) compile without warnings, so I've resorted to doing this in a shared header: #pragma warning(push) #pragma warning(disable:4820) #pragma warning(disable:4619) #pragma warning(disable:4668) #pragma warning(disable:4625) #pragma warning(disable:4626) #pragma warning(disable:4571) #pragma warning(disable:4347) #pragma warning(disable:4640) #pragma warning(disable:4365) #pragma warning(disable:4710) #pragma warning(disable:4820) #pragma warning(disable:4350) #pragma warning(disable:4686) #pragma warning(disable:4711) #pragma warning(disable:4548) #include <boost/array.hpp> #include <boost/assert.hpp> #include <boost/assign.hpp> #include <boost/bind.hpp> #include <boost/lexical_cast.hpp> #include <boost/filesystem.hpp> #include <boost/foreach.hpp> #include <boost/format.hpp> #include <boost/function.hpp> #include <boost/integer.hpp> #include <boost/optional.hpp> #include <boost/regex.hpp> #include <boost/smart_ptr.hpp> #include <boost/algorithm/string.hpp> #include <boost/tuple/tuple.hpp> #include <boost/utility.hpp> #include <boost/variant.hpp> #pragma warning(pop) This works well enough, but every time I add new boost headers I have to add whatever warnings they generate to the list. Is there a way to say disable all warnings for this stretch of code?
You can push/pop a low level of warning, like this: #pragma warning(push, 0) #include <boost/bind.hpp> #include <boost/shared_ptr.hpp> // ... #pragma warning(pop) But know that it's not possible to disable all warnings. For example, some linker warnings are impossible to turn off.
525,803
525,915
How can I trust the behavior of C++ functions that declare const?
This is a C++ disaster; check out this code sample: #include <iostream> void func(const int* shouldnotChange) { int* canChange = (int*) shouldnotChange; *canChange += 2; return; } int main() { int i = 5; func(&i); std::cout << i; return 0; } The output was 7! So, how can we make sure of the behavior of C++ functions, if it was able to change a supposed-to-be-constant parameter!? EDIT: I am not asking how I can make sure that my code is working as expected, rather I am wondering how to believe that someone else's function (for instance some function in some DLL library) isn't going to change a parameter or possess some behavior...
Based on your edit, your question is "how can I trust 3rd party code not to be stupid?" The short answer is "you can't." If you don't have access to the source, or don't have time to inspect it, you can only trust the author to have written sane code. In your example, the author of the function declaration specifically claims that the code will not change the contents of the pointer by using the const keyword. You can either trust that claim, or not. There are ways of testing this, as suggested by others, but if you need to test large amounts of code, it will be very labour intensive. Perhaps moreso than reading the code. If you are working on a team and you have a team member writing stuff like this, then you can talk to them about it and explain why it is bad.