30,729
30,766
C# Performance For Proxy Server (vs C++)
I want to create a simple HTTP proxy server that does some very basic processing on the HTTP headers (i.e. if header x == y, do z). The server may need to support hundreds of users. I can write the server in C# (pretty easy) or C++ (much harder). However, would a C# version perform as well as a C++ version? If not, would the difference in performance be big enough that it would not make sense to write it in C#?
You can use unsafe C# code and pointers in critical bottleneck points to make it run faster. Those behave much like C++ code and I believe they execute just as fast. But most of the time, C# is JIT-ted to be very fast already, so I don't believe there will be much difference, in line with what everyone else has said. One thing you might want to consider, though: managed (C#) string operations are rather slow compared to using pointers effectively in C++. There are more optimization tricks with C++ pointers than with CLR strings. I think I have done some benchmarks before, but can't remember where I've put them.
30,998
31,111
Is using too much static bad or good?
I like to use static functions in C++ as a way to categorize them, like C# does:

Console::WriteLine("hello")

Is this good or bad? If the functions are used often I guess it doesn't matter, but if not, do they put pressure on memory? What about static const?
but is it good or bad

The first adjective that comes to mind is "unnecessary". C++ has free functions and namespaces, so why would you need to make them static functions in a class? The use of static methods in uninstantiable classes in C# and Java is a workaround because those languages don't have free functions (that is, functions that reside directly in the namespace, rather than as part of a class). C++ doesn't have that flaw. Just use a namespace, as the sketch below illustrates.
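A minimal sketch of the namespace approach the answer recommends (the names are made up for illustration):

#include <iostream>
#include <string>

// A namespace groups related free functions without requiring a class.
namespace console {
    void write_line(const std::string& text) {
        std::cout << text << '\n';
    }
}

int main() {
    console::write_line("hello");   // reads like C#'s Console::WriteLine, but no class is needed
}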
31,867
32,062
Are there any examples where we *need* protected inheritance in C++?
While I've seen rare cases where private inheritance was needed, I've never encountered a case where protected inheritance is needed. Does someone have an example?
People here seem to confuse protected class inheritance with protected methods. FWIW, I've never seen anyone use protected class inheritance, and if I remember correctly I think Stroustrup even considered the "protected" level to be a mistake in C++. There's precious little you cannot do if you remove that protection level and rely only on public and private. For reference, the syntax in question is sketched below.
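For readers unfamiliar with the distinction, a minimal sketch of protected inheritance (as opposed to protected members); the classes are invented for illustration:

class Base {
public:
    void helper();
};

// Protected inheritance: Base's public and protected members
// become protected members of Derived, invisible to outside code.
class Derived : protected Base {
    void useHelper() { helper(); }   // OK inside Derived and its descendants
};

// Base* p = new Derived;            // error: the Derived-to-Base conversion is not public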
32,168
32,224
C++ cast syntax styles
A question related to Regular cast vs. static_cast vs. dynamic_cast: what cast syntax style do you prefer in C++?

C-style cast syntax: (int)foo
C++-style cast syntax: static_cast<int>(foo)
Constructor syntax: int(foo)

They may not translate to exactly the same instructions (do they?) but their effect should be the same (right?). If you're just casting between the built-in numeric types, I find C++-style cast syntax too verbose. As a former Java coder I tend to use C-style cast syntax instead, but my local C++ guru insists on using constructor syntax. What do you think?
It's best practice never to use C-style casts, for three main reasons:

As already mentioned, no checking is performed. The programmer simply cannot know which of the various casts is used, which weakens strong typing.
The new casts are intentionally visually striking. Since casts often reveal a weakness in the code, it's argued that making casts visible in the code is a good thing.
This is especially true when searching for casts with an automated tool. Finding C-style casts reliably is nearly impossible.

As palm3D noted:

I find C++-style cast syntax too verbose.

This is intentional, for the reasons given above. The constructor syntax (official name: function-style cast) is semantically the same as the C-style cast and should be avoided as well (except for variable initializations on declaration), for the same reasons. It is debatable whether this should be true even for types that define custom constructors, but in Effective C++, Meyers argues that even in those cases you should refrain from using them. To illustrate:

void f(auto_ptr<int> x);

f(static_cast<auto_ptr<int> >(new int(5))); // GOOD
f(auto_ptr<int>(new int(5)));               // BAD

The static_cast here will actually call the auto_ptr constructor.
32,332
37,542
Why don't the std::fstream classes take a std::string?
This isn't a design question, really, though it may seem like it. (Well, okay, it's kind of a design question.) What I'm wondering is why the C++ std::fstream classes don't take a std::string in their constructor or open methods. Everyone loves code examples, so:

#include <iostream>
#include <fstream>
#include <string>

int main()
{
    std::string filename = "testfile";
    std::ifstream fin;

    fin.open(filename.c_str()); // Works just fine.
    fin.close();

    //fin.open(filename);       // Error: no such method.
    //fin.close();
}

This gets me all the time when working with files. Surely the C++ library would use std::string wherever possible?
By taking a C string the C++03 std::fstream class reduced its dependency on the std::string class. In C++11, however, the std::fstream class does allow passing a std::string for its constructor parameter.

Now, you may wonder why there isn't a transparent conversion from a std::string to a C string, so a class that expects a C string could still take a std::string, just like a class that expects a std::string can take a C string. The reason is that this would cause a conversion cycle, which in turn may lead to problems. For example, suppose std::string were convertible to a C string so that you could use std::strings with fstreams. Suppose also that C strings are convertible to std::strings, as is the case in the current standard. Now consider the following:

void f(std::string str1, std::string str2);
void f(char* cstr1, char* cstr2);

void g()
{
    char* cstr = "abc";
    std::string str = "def";

    f(cstr, str);  // ERROR: ambiguous
}

Because you can convert either way between a std::string and a C string, the call to f() could resolve to either of the two f() alternatives, and is thus ambiguous. The solution is to break the conversion cycle by making one conversion direction explicit, which is what the STL chose to do with c_str().
33,042
33,071
How can I measure CppUnit test coverage (on win32 and Unix)?
I have a very large code base that contains extensive unit tests (using CppUnit). I need to work out what percentage of the code is exercised by these tests, and (ideally) generate some sort of report that tells me on a per-library or per-file basis how much of the code was exercised. Here's the kicker: this has to run completely unattended (eventually inside a continuous integration build), and has to be cross-platform (well, Win32 and *nix at least). Can anyone suggest a tool, or set of tools, that can help me do this? I can't change away from CppUnit (nor would I want to - it kicks ass), but otherwise I'm eager to hear any recommendations you might have. Cheers,
Which tool should I use?

This article describes another developer's frustrations searching for C++ code coverage tools. The author's final solution was Bullseye Coverage.

Bullseye Coverage features:

Cross-platform support (Win32, Unix, and embedded; supports Linux gcc compilers and MSVC6)
Easy to use (up and running in a few hours)
Provides "best" metrics: function coverage and condition/decision coverage
Uses source code instrumentation

As for hooking into your continuous integration, it depends on which CI solution you use, but you can likely hook the instrumentation / coverage measurement steps into the makefile you use for automated testing.

Testing Linux vs Windows?

So long as all your tests run correctly in both environments, you should be fine measuring coverage on one or the other. (Though Bullseye appears to support both platforms.) But why aren't you doing continuous integration builds in both environments?? If you deliver to clients in both environments then you need to be testing in both. For that reason, it sounds like you might need to have two continuous build servers set up, one for a Linux build and one for a Windows build. Perhaps this can be easily accomplished with some virtualization software like VMware or VirtualBox. You may not need to run code coverage metrics on both OSes, but you should definitely be running your unit tests on both.
33,306
33,313
C++ STL question: allocators
I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc.), is it allocated on the stack or on the heap? If I make a set and put in 5 million strings, will I have to worry about a stack overflow?
STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that let a user specify an alternate location to allocate from - e.g. a shared memory pool. A sketch of such an allocator follows.
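As an illustration only (the class name and the logging are made up, and this uses the simplified C++11 allocator requirements rather than the longer C++03 interface), a custom allocator that a container draws its internal buffers from:

#include <cstddef>
#include <iostream>
#include <vector>

template <typename T>
struct LoggingAllocator {
    typedef T value_type;

    LoggingAllocator() {}
    template <typename U> LoggingAllocator(const LoggingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        std::cout << "allocating space for " << n << " elements\n";
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const LoggingAllocator<T>&, const LoggingAllocator<U>&) { return false; }

int main() {
    // The vector object itself can live on the stack; its elements are
    // obtained through the allocator (here, still plain heap memory).
    std::vector<int, LoggingAllocator<int> > v;
    for (int i = 0; i < 5; ++i) v.push_back(i);
}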
34,065
35,717
How to read a value from the Windows registry
Given the key for some registry value (e.g. HKEY_LOCAL_MACHINE\blah\blah\blah\foo) how can I:

Safely determine that such a key exists.
Programmatically (i.e. with code) get its value.

I have absolutely no intention of writing anything back to the registry (for the duration of my career if I can help it). So we can skip the lecture about every molecule in my body exploding at the speed of light if I write to the registry incorrectly. Prefer answers in C++, but mostly just need to know what the special Windows API incantation to get at the value is.
Here is some pseudo-code to retrieve the following:

If a registry key exists
What the default value is for that registry key
What a string value is
What a DWORD value is

Example code (include the library dependency: Advapi32.lib):

HKEY hKey;
LONG lRes = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Perl", 0, KEY_READ, &hKey);
bool bExistsAndSuccess (lRes == ERROR_SUCCESS);
bool bDoesNotExistsSpecifically (lRes == ERROR_FILE_NOT_FOUND);
std::wstring strValueOfBinDir;
std::wstring strKeyDefaultValue;
GetStringRegKey(hKey, L"BinDir", strValueOfBinDir, L"bad");
GetStringRegKey(hKey, L"", strKeyDefaultValue, L"bad");

LONG GetDWORDRegKey(HKEY hKey, const std::wstring &strValueName, DWORD &nValue, DWORD nDefaultValue)
{
    nValue = nDefaultValue;
    DWORD dwBufferSize(sizeof(DWORD));
    DWORD nResult(0);
    LONG nError = ::RegQueryValueExW(hKey,
                                     strValueName.c_str(),
                                     0,
                                     NULL,
                                     reinterpret_cast<LPBYTE>(&nResult),
                                     &dwBufferSize);
    if (ERROR_SUCCESS == nError)
    {
        nValue = nResult;
    }
    return nError;
}

LONG GetBoolRegKey(HKEY hKey, const std::wstring &strValueName, bool &bValue, bool bDefaultValue)
{
    DWORD nDefValue((bDefaultValue) ? 1 : 0);
    DWORD nResult(nDefValue);
    LONG nError = GetDWORDRegKey(hKey, strValueName.c_str(), nResult, nDefValue);
    if (ERROR_SUCCESS == nError)
    {
        bValue = (nResult != 0) ? true : false;
    }
    return nError;
}

LONG GetStringRegKey(HKEY hKey, const std::wstring &strValueName, std::wstring &strValue, const std::wstring &strDefaultValue)
{
    strValue = strDefaultValue;
    WCHAR szBuffer[512];
    DWORD dwBufferSize = sizeof(szBuffer);
    ULONG nError;
    nError = RegQueryValueExW(hKey, strValueName.c_str(), 0, NULL, (LPBYTE)szBuffer, &dwBufferSize);
    if (ERROR_SUCCESS == nError)
    {
        strValue = szBuffer;
    }
    return nError;
}
34,125
34,129
Which, if any, C++ compilers do tail-recursion optimization?
It seems to me that it would work perfectly well to do tail-recursion optimization in both C and C++, yet while debugging I never seem to see a frame stack that indicates this optimization. That is kind of good, because the stack tells me how deep the recursion is. However, the optimization would be kind of nice as well.

Do any C++ compilers do this optimization? Why? Why not? How do I go about telling the compiler to do it?

For MSVC: /O2 or /Ox
For GCC: -O2 or -O3

How about checking if the compiler has done this in a certain case?

For MSVC, enable PDB output to be able to trace the code, then inspect the code
For GCC..?

I'd still take suggestions for how to determine if a certain function is optimized like this by the compiler (even though I find it reassuring that Konrad tells me to assume it). It is always possible to check if the compiler does this at all by making an infinite recursion and checking if it results in an infinite loop or a stack overflow (I did this with GCC and found out that -O2 is sufficient), but I want to be able to check a certain function that I know will terminate anyway. I'd love to have an easy way of checking this :)

After some testing, I discovered that destructors ruin the possibility of making this optimization. It can sometimes be worth it to change the scoping of certain variables and temporaries to make sure they go out of scope before the return statement starts. If any destructor needs to be run after the tail call, the tail-call optimization cannot be done.
All current mainstream compilers perform tail call optimisation fairly well (and have done for more than a decade), even for mutually recursive calls such as:

int bar(int, int);

int foo(int n, int acc) {
    return (n == 0) ? acc : bar(n - 1, acc + 2);
}

int bar(int n, int acc) {
    return (n == 0) ? acc : foo(n - 1, acc + 1);
}

Letting the compiler do the optimisation is straightforward: just switch on optimisation for speed:

For MSVC, use /O2 or /Ox.
For GCC, Clang and ICC, use -O3.

An easy way to check whether the compiler did the optimisation is to perform a call that would otherwise result in a stack overflow - or to look at the assembly output.

As an interesting historical note, tail call optimisation for C was added to GCC in the course of a diploma thesis by Mark Probst. The thesis describes some interesting caveats in the implementation. It's worth reading.
34,325
34,329
Should I use a cross-platform GUI-toolkit or rely on the native ones?
On my side job as programmer, I am to write a program in C++ to convert audio files from/to various formats. Probably, this will involve building a simple GUI. Will it be a great effort to build separate GUIs for Mac and Windows using Cocoa and WinForms instead of a cross-platform toolkit like Qt or GTK? (I will have to maintain a separate Windows version and Mac version anyway.) The GUI will probably be very simple and only need very basic functionality. I always felt that native GUIs feel far more intuitive than their cross-platform brethren...
If you have the expertise, use native frontends. It'll effectively double the job you have to do for the UI, but in my experience non-native UIs are a little bit clunkier than their native counterparts.
34,506
35,201
Simulating a virtual static member of a class in C++?
Is there any way to have a sort of virtual static member in C++? For example:

class BaseClass {
    public:
        BaseClass(const string& name) : _name(name) {}
        string GetName() const { return _name; }
        virtual void UseClass() = 0;
    private:
        const string _name;
};

class DerivedClass : public BaseClass {
    public:
        DerivedClass() : BaseClass("DerivedClass") {}
        virtual void UseClass() { /* do something */ }
};

I know this example is trivial, but what if I have a vector of complex data that is going to be always the same for all derived classes but needs to be accessed from base class methods?

class BaseClass {
    public:
        BaseClass() {}
        virtual string GetName() const = 0;
        virtual void UseClass() = 0;
};

class DerivedClass : public BaseClass {
    public:
        DerivedClass() {}
        virtual string GetName() const { return _name; }
        virtual void UseClass() { /* do something */ }
    private:
        static const string _name;
};

const string DerivedClass::_name = "DerivedClass";

This solution does not satisfy me because I need to reimplement the member _name and its accessor GetName() in every class. In my case I have several members that follow _name's behavior and tens of derived classes. Any idea?
Here is one solution:

struct BaseData {
    const string my_word;
    const int my_number;
};

class Base {
    public:
        Base(const BaseData* apBaseData) {
            mpBaseData = apBaseData;
        }
        const string getMyWord() {
            return mpBaseData->my_word;
        }
        int getMyNumber() {
            return mpBaseData->my_number;
        }
    private:
        const BaseData* mpBaseData;
};

class Derived : public Base {
    public:
        Derived() : Base(&sBaseData) {
        }
    private:
        static BaseData sBaseData;
};

BaseData Derived::sBaseData = { "Foo", 42 };
34,544
63,555
Detect DOM modification in Internet Explorer
I am writing a Browser Helper Object for IE7, and I need to detect DOM modification (i.e. via AJAX). So far I couldn't find any feasible solution.
You want to use IMarkupContainer2::CreateChangeLog.
34,732
34,796
How do I list the symbols in a .so file
How do I list the symbols being exported from a .so file? If possible, I'd also like to know their source (e.g. if they are pulled in from a static library). I'm using gcc 4.0.2, if that makes a difference.
The standard tool for listing symbols is nm; you can use it simply like this:

nm -gD yourLib.so

If you want to see symbols of a C++ library, add the -C option, which demangles the symbols (demangled output is far more readable):

nm -gDC yourLib.so

If your .so file is in ELF format, you have two more options. Either objdump (-C is also useful here for demangling C++):

$ objdump -TC libz.so

libz.so:     file format elf64-x86-64

DYNAMIC SYMBOL TABLE:
0000000000002010 l    d  .init          0000000000000000              .init
0000000000000000      DF *UND*          0000000000000000  GLIBC_2.2.5 free
0000000000000000      DF *UND*          0000000000000000  GLIBC_2.2.5 __errno_location
0000000000000000  w   D  *UND*          0000000000000000              _ITM_deregisterTMCloneTable

Or use readelf:

$ readelf -Ws libz.so

Symbol table '.dynsym' contains 112 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
     0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 0000000000002010     0 SECTION LOCAL  DEFAULT   10
     2: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND free@GLIBC_2.2.5 (14)
     3: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND __errno_location@GLIBC_2.2.5 (14)
     4: 0000000000000000     0 NOTYPE  WEAK   DEFAULT  UND _ITM_deregisterTMCloneTable
34,955
41,043
Best practices for debugging linking errors
When building projects in C++, I've found debugging linking errors to be tricky, especially when picking up other people's code. What strategies do people use for debugging and fixing linking errors?
Not sure what your level of expertise is, but here are the basics. Below is a linker error from VS 2005 - yes, it's a giant mess if you're not familiar with it.

ByteComparator.obj : error LNK2019: unresolved external symbol "int __cdecl does_not_exist(void)" (?does_not_exist@@YAHXZ) referenced in function "void __cdecl TextScan(struct FileTextStats &,char const *,char const *,bool,bool,__int64)" (?TextScan@@YAXAAUFileTextStats@@PBD1_N2_J@Z)

There are a couple of points to focus on:

"ByteComparator.obj" - look for a ByteComparator.cpp file; this is the source of the linker problem
"int __cdecl does_not_exist(void)" - this is the symbol it couldn't find, in this case a function named does_not_exist()

At this point, in many cases the fastest way to resolution is to search the code base for this function and find where the implementation is. Once you know where the function is implemented you just have to make sure the two places get linked together. If you're using VS2005, you would use the "Project Dependencies..." right-click menu. If you're using gcc, you would look in your makefiles for the executable generation step (gcc called with a bunch of .o files) and add the missing .o file.

In a second scenario, you may be missing an "external" dependency, which you don't have code for. The Win32 libraries are oftentimes implemented in static libraries that you have to link to. In this case, go to MSDN or "Microsoft Google" and search for the API. At the bottom of the API description the library name is given. Add this to your project properties "Configuration Properties->Linker->Input->Additional Dependencies" list. For example, the function timeGetTime()'s page on MSDN tells you to use Winmm.lib at the bottom of the page.
35,485
35,738
Linux: What is the best way to estimate the code & static data size of program?
I want to be able to get an estimate of how much code & static data is used by my C++ program? Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime? Will objdump & readelf help?
"size" is the traditional tool. "readelf" has a lot of options. $ size /bin/sh text data bss dec hex filename 712739 37524 21832 772095 bc7ff /bin/sh
35,522
35,524
looping and average in c++
Programming student here... trying to work on a project, but I'm stuck. The project is about finding the miles per gallon per trip, then at the end outputting the total miles and total gallons used and averaging the miles per gallon. How do I loop back up to the first question after the first set of questions has been asked? Also, how will I average the trips... will I have to have a variable for each of the trips? I'm stuck, any help would be great!
You will have to tell us the type of data you are given. As per your last question: remember that an average can be calculated in real time by either storing the sum and the number of data points (two numbers), or the current average and the number of data points (again, two numbers). For instance:

class Averager {
    double avg;
    int n;
public:
    Averager() : avg(0), n(0) {}
    void addPoint(double v) {
        avg = (n * avg + v) / (n + 1);
        n++;
    }
    double average() const { return avg; }
};
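A quick usage sketch of the class above (the trip values are made up), showing that no per-trip variable is needed:

Averager mpg;
mpg.addPoint(25.0);                  // first trip's miles per gallon
mpg.addPoint(30.0);                  // second trip's
std::cout << mpg.average() << '\n';  // prints 27.5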
35,762
35,770
Linux GUI development
I have a large GUI project that I'd like to port to Linux. What is the most recommended framework to utilize for GUI programming in Linux? Are frameworks such as KDE / Gnome usable for this objective, or is it better to use something more generic other than X? I feel like if I chose one of Gnome or KDE, I'm closing the market out for a chunk of the Linux market who have chosen one over the other. (Yes, I know there is overlap.) Is there a better way? Or would I have to create 2 complete GUI apps to have near 100% coverage? It's not necessary to have a cross-platform solution that will also work on Win32.
Your best bet may be to port it to a cross-platform widget library such as wxWidgets, which would give you portability to any platform wxWidgets supports. It's also important to make the distinction between Gnome libraries and GTK, and likewise KDE libraries and Qt. If you write the code to use GTK or Qt, it should work fine for users of any desktop environment, including less popular ones like XFCE. If you use other Gnome or KDE-specific libraries to do non-widget-related tasks, your app would be less portable between desktop environments.
35,950
35,965
I don't understand std::tr1::unordered_map
I need an associative container that lets me index a certain object through a string, but that also keeps the order of insertion, so I can look for a specific object by its name or just iterate over it and retrieve objects in the same order I inserted them. I think this hybrid of linked list and hash map should do the job, but before that I tried to use std::tr1::unordered_map thinking that it worked in the way I described, but it didn't. So could someone explain the meaning and behavior of unordered_map?

@wesc: I'm sure std::map is implemented by STL, while I'm sure std::hash_map is NOT in the STL (I think older versions of Visual Studio put it in a namespace called stdext).

@cristopher: so, if I get it right, the difference is in the implementation (and thus performance), not in the way it behaves externally.
Boost documentation of unordered containers

The difference is in the method used to generate the lookup. In the map/set containers, operator< is used to generate an ordered tree. In the unordered containers, an operator( key ) => index is used. See hashing for a description of how that works. A sketch of the insertion-ordered hybrid you describe follows.
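Since unordered_map alone won't preserve insertion order, here is a minimal sketch (class and member names are made up) of the hybrid the question describes - a vector for insertion order plus a hash map from name to position:

#include <string>
#include <utility>
#include <vector>
#include <unordered_map>   // std::tr1::unordered_map in the TR1 era

template <typename T>
class OrderedIndex {
    std::vector<std::pair<std::string, T> > items_;        // insertion order
    std::unordered_map<std::string, std::size_t> index_;   // name -> position
public:
    void insert(const std::string& name, const T& value) {
        index_[name] = items_.size();
        items_.push_back(std::make_pair(name, value));
    }
    const T* find(const std::string& name) const {
        typename std::unordered_map<std::string, std::size_t>::const_iterator
            it = index_.find(name);
        return it == index_.end() ? 0 : &items_[it->second].second;
    }
    // iterate items_ front to back to visit objects in insertion order
};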
36,039
36,080
Templates spread across multiple files
C++ seems to be rather grouchy when declaring templates across multiple files. More specifically, when working with templated classes, the linker expects all method definitions for the class in a single compiler object file. When you take into account headers, other declarations, inheritance, etc., things get really messy. Is there any general advice or workaround for organizing or redistributing templated member definitions across multiple files?
Is there any general advice or workaround for organizing or redistributing templated member definitions across multiple files?

Yes; don't.

The C++ spec lets a compiler require that it can "see" the entire template (declaration and definition) at the point of instantiation, and (due to the complexities of any implementation) most compilers retain this requirement. The upshot is that #inclusion of any template header must also #include any and all source required to instantiate the template.

The easiest way to deal with this is to dump everything into the header, inline where possible, out-of-line where necessary. If you really regard this as an unacceptable affront, a common option is to split the template into the usual header/implementation pair, and then #include the implementation file at the end of the header (a sketch of this follows).

C++'s "export" feature may or may not provide another workaround. The feature is poorly supported and poorly defined; although it in principle should permit some kind of separate compilation of templates, it doesn't necessarily obviate the demand that the compiler be able to see the entire template body.
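A minimal sketch of the header/implementation split described above (the file names and class are illustrative):

// widget.h
#ifndef WIDGET_H
#define WIDGET_H

template <typename T>
class Widget {
public:
    void set(const T& value);
private:
    T value_;
};

// Pull the definitions into every translation unit that uses the template.
#include "widget.tpp"

#endif

// widget.tpp - a separate file for tidiness, but still compiled into each includer
template <typename T>
void Widget<T>::set(const T& value) {
    value_ = value;
}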
36,077
60,140
Finding out the source of an exception in C++ after it is caught?
I'm looking for an answer in MS VC++.

When debugging a large C++ application which unfortunately makes very extensive use of C++ exceptions, I sometimes catch an exception a little later than I actually want. Example in pseudo code:

FunctionB()
{
    ...
    throw e;
    ...
}

FunctionA()
{
    ...
    FunctionB()
    ...
}

try
{
    FunctionA()
}
catch(e)  (<--- breakpoint)
{
    ...
}

I can catch the exception with a breakpoint when debugging. But I can't trace back whether the exception occurred in FunctionA() or FunctionB(), or some other function. (Assuming extensive exception use and a huge version of the above example.)

One solution to my problem is to determine and save the call stack in the exception constructor (i.e. before it is caught). But this would require me to derive all exceptions from this base exception class. It would also require a lot of code, and perhaps slow down my program.

Is there an easier way that requires less work? Without having to change my large code base?

Are there better solutions to this problem in other languages?
If you are just interested in where the exception came from, you could write a simple macro like

#define throwException(message)                       \
    {                                                 \
        std::ostringstream oss;                       \
        oss << __FILE__ << " " << __LINE__ << " "     \
            << __FUNC__ << " " << message;            \
        throw std::exception(oss.str().c_str());      \
    }

which will add the file name, line number and function name to the exception text (if the compiler provides the respective macros; on MSVC the predefined function-name macro is __FUNCTION__). Then throw exceptions using

throwException("An unknown enum value has been passed!");
36,114
36,246
Variable Holding data in a while statement
I know I must be missing something, but in a while statement, how does the variable hold the data when it finishes the first pass and goes into the second pass?

int num1 = 0;
int num2 = 0;
int num3 = 0;

while (num1 < 10)
{
    cout << "enter your first number: ";
    cin >> num1;

    cout << "Enter your second number: ";
    cin >> num2;

    num1 = num1 + num2;
    cout << "Number 1 is now: " << num1 << endl;

    cout << "Enter Number 3: ";
    cin >> num3;

    num1 = num1 + num3;
    cout << "Number 1 is now: " << num1 << endl;

    num1++;
}

In this code the variable doesn't hold the data. I'm not sure what I'm doing wrong!
Is num1 the variable you're having trouble with? This line:

cin >> num1;

is setting num1 to the value input by the user. So the value calculated for it in the previous run through the loop is being overwritten each time by the new input.
36,347
36,364
What are the differences between "generic" types in C++ and Java?
Java has generics and C++ provides a very strong programming model with templates. So then, what is the difference between C++ and Java generics?
There is a big difference between them. In C++ you don't have to specify a class or an interface for the generic type. That's why you can create truly generic functions and classes, with the caveat of looser typing.

template <typename T>
T sum(T a, T b) {
    return a + b;
}

The method above adds two objects of the same type, and can be used for any type T that has the "+" operator available.

In Java you have to specify a type if you want to call methods on the objects passed, something like:

<T extends Something> T sum(T a, T b) {
    return a.add(b);
}

In C++ generic functions/classes can only be defined in headers, since the compiler generates different functions for the different types it's invoked with. So the compilation is slower. In Java the compilation doesn't have a major penalty, but Java uses a technique called "erasure" where the generic type is erased at runtime, so at runtime Java is actually calling...

Something sum(Something a, Something b) {
    return a.add(b);
}

Nevertheless, Java's generics help with type safety.
36,832
75,654
Virtual functions in constructors, why do languages differ?
In C++, when a virtual function is called from within a constructor it doesn't behave like a virtual function. I think everyone who encountered this behavior for the first time was surprised, but on second thought it makes sense: as long as the derived constructor has not been executed, the object is not yet a derived instance. So how could a derived function be called? The preconditions haven't had the chance to be set up. Example:

class base {
public:
    base()
    {
        std::cout << "foo is " << foo() << std::endl;
    }
    virtual int foo() { return 42; }
};

class derived : public base {
    int* ptr_;
public:
    derived(int i) : ptr_(new int(i*i)) { }
    // The following cannot be called before derived::derived due to how C++ behaves;
    // if it was possible... Kaboom!
    virtual int foo() { return *ptr_; }
};

It's exactly the same for Java and .NET, yet they chose to go the other way - possibly because of the principle of least surprise? Which do you think is the correct choice?
There's a fundamental difference in how the languages define an object's lifetime. In Java and .NET the object members are zero/null initialized before any constructor is run, and it is at this point that the object lifetime begins. So when you enter the constructor you've already got an initialized object.

In C++ the object lifetime only begins when the constructor finishes (although member variables and base classes are fully constructed before it starts). This explains the behaviour when virtual functions are called and also why the destructor isn't run if there's an exception in the constructor's body.

The problem with the Java/.NET definition of object lifetime is that it's harder to make sure the object always meets its invariant without having to put in special cases for when the object is initialized but the constructor hasn't run. The problem with the C++ definition is that you have this odd period where the object is in limbo and not fully constructed. The short program below makes the C++ behaviour concrete.
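A minimal demonstration of the C++ rule (the class names are made up; it's the same idea as the question's example):

#include <iostream>

struct Base {
    Base() { hello(); }   // dispatches to Base::hello - the Derived part doesn't exist yet
    virtual void hello() { std::cout << "Base\n"; }
    virtual ~Base() {}
};

struct Derived : Base {
    virtual void hello() { std::cout << "Derived\n"; }
};

int main() {
    Derived d;   // prints "Base"; in Java/.NET the override would run instead
}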
36,890
42,467
Changing a CORBA interface without recompiling
I'd like to add a method to my existing server's CORBA interface. Will that require recompiling all clients? I'm using TAO.
Recompilation of clients is not required (and should not be, regardless of the ORB that you use). As Adam indicated, lookups are done by operation name (a straight text comparison). I've done what you're describing with our ACE/TAO-based system, and encountered no issues (servers were in ACE/TAO C++, clients were ACE/TAO C++, C# using Borland's Janeva, and OmniORBPy).
36,991
37,001
Do you have to register a Dialog Box?
So, I am a total beginner in any kind of Windows-related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize and create windows and such.

One example creates a regular window (I abbreviated some of the code):

int WINAPI WinMain( [...] )
{
    [...]

    // Windows Class setup
    wndClass.cbSize = sizeof( wndClass );
    wndClass.style = CS_HREDRAW | CS_VREDRAW;
    [...]

    // Register class
    RegisterClassEx( &wndClass );

    // Create window
    hWnd = CreateWindow( szAppName, "Win32 App",
                         WS_OVERLAPPEDWINDOW,
                         0, 0, 512, 384,
                         NULL, NULL, hInstance, NULL );
    [...]
}

The second example creates a dialog box (no abbreviations except the WinMain arguments):

int WINAPI WinMain( [...] )
{
    // Create dialog box
    DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG), NULL, (DLGPROC)DialogProc);
}

The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc process attached. This works fine, but I am wondering if there is a benefit of registering the window class and then creating the dialog box (if this is at all possible).
You do not have to register a dialog box. Dialog boxes are predefined, so (as you noted) there is no reference to a window class when you create a dialog. If you want more control of a dialog (like you get when you create your own window class) you would subclass the dialog, which is a method by which you replace the dialog's window procedure with your own. When your procedure is called you modify the behavior of the dialog window; you then might or might not call the original window procedure, depending upon what you're trying to do. A sketch of the mechanics follows.
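As an illustration only (not production code; error handling omitted), subclassing a dialog by swapping in a new window procedure:

// Saved pointer to the dialog's original window procedure.
WNDPROC g_prevProc;

LRESULT CALLBACK SubclassProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_LBUTTONDOWN)
    {
        // Custom behavior goes here; return without forwarding to swallow the message.
    }
    // Forward everything else to the dialog's original procedure.
    return CallWindowProc(g_prevProc, hWnd, msg, wParam, lParam);
}

// After the dialog is created (e.g. in WM_INITDIALOG):
g_prevProc = (WNDPROC)SetWindowLongPtr(hDlg, GWLP_WNDPROC, (LONG_PTR)SubclassProc);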
37,067
37,314
Task oriented thread pooling
I've created a model for executing worker tasks in a server application using a thread pool associated with an IO completion port such as shown in the posts below: http://weblogs.asp.net/kennykerr/archive/2008/01/03/parallel-programming-with-c-part-4-i-o-completion-ports.aspx http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx Are there any classes in boost that can help with this programming model?
Not really, at least, not last time I looked. I mean, boost::thread_group might make things marginally tidier in places, but not so as would make much of a difference, I don't think. Boost's thread support seems marginally useful when writing something that's cross-platform, but given that what you're writing is going to be Win32-specific anyway (due to the use of IOCPs) there doesn't really seem to be much benefit from that.
37,346
37,348
Why can't a forward declaration be used for a std::vector?
If I create a class like so:

// B.h
#ifndef _B_H_
#define _B_H_

class B
{
private:
    int x;
    int y;
};

#endif // _B_H_

and use it like this:

// main.cpp
#include <iostream>
#include <vector>

class B; // Forward declaration.

class A
{
public:
    A() {
        std::cout << v.size() << std::endl;
    }

private:
    std::vector<B> v;
};

int main()
{
    A a;
}

the compiler fails when compiling main.cpp. Now the solution I know is to #include "B.h", but I'm curious as to why it fails. Neither g++'s nor cl's error messages were very enlightening in this matter.
The compiler needs to know how big "B" is before it can generate the appropriate layout information. If instead you said std::vector<B*>, then the compiler wouldn't need to know how big B is, because it knows how big a pointer is. To illustrate:
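A small sketch of the distinction (illustrative only):

#include <vector>

class B;               // forward declaration: B is an incomplete type here

std::vector<B*> ok;    // fine - the element type is a pointer, whose size is known
// std::vector<B> bad; // would need B's full definition to lay out the elements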
37,398
37,402
How do I make a fully statically linked .exe with Visual Studio Express 2005?
My current preferred C++ environment is the free and largely excellent Microsoft Visual Studio 2005 Express edition. From time to time I have sent release .exe files to other people with pleasing results. However, recently I made the disturbing discovery that the pleasing results were based on more luck than I would like. Attempting to run one of these programs on an old (2001 vintage, not scrupulously updated) XP box gave me nothing but a nasty "System cannot run x.exe" (or similar) message. Some googling revealed that with this toolset, even specifying static linking results in a simple hello-world.exe actually relying on extra .dll files (msvcm80.dll etc.). An incredibly elaborate version-scheming system (manifest files, anyone?) then will not let the .exe run without exactly the right .dll versions. I don't want or need this stuff; I just want an old-fashioned self-contained .exe that does nothing but lowest-common-denominator Win32 operations and runs on any old Win32 OS. Does anyone know if it's possible to do what I want to do with my existing toolset? Thank you.
For the C runtime, go to the project settings, choose C/C++, then 'Code Generation'. Change the 'Runtime Library' setting to 'Multi-threaded' instead of 'Multi-threaded DLL' (the corresponding compiler switches are /MT for release and /MTd for debug builds). If you are using any other libraries, you may need to tell the linker to ignore the dynamically linked CRT explicitly.
37,428
37,461
Get back to basics. How do I get back into C++?
I haven't used C++ since college. Even though I've wanted to I haven't needed to do any until I started wanting to write plugins for Launchy. Is there a good book to read to get back into it? My experience since college is mainly C# and recently ruby. I bought some book for C# developers and it ended up being on how to write C++ with CLI. While a good book it wasn't quite what I was looking for.
The best way to get back into C++ is to jump in. You can't learn a real language without spending any serious time in a country where they speak it. I wouldn't try to learn a programming language without spending time coding in it either. I wouldn't recommend learning C first though. That's a good way to pick up some bad habits in C++.
37,473
37,474
How can I assert() without using abort()?
If I use assert() and the assertion fails, then assert() will call abort(), ending the running program abruptly. I can't afford that in my production code. Is there a way to assert at runtime yet be able to catch failed assertions so I have the chance to handle them gracefully?
Yes, as a matter of fact there is. You will need to write a custom assert function yourself, as C++'s assert() is exactly C's assert(), with the abort() "feature" bundled in. Fortunately, this is surprisingly straightforward.

Assert.hh

template <typename X, typename A>
inline void Assert(A assertion)
{
    if (!assertion) throw X();
}

The above function will throw an exception if a predicate doesn't hold. You will then have the chance to catch the exception. If you don't catch the exception, terminate() will be called, which will end the program similarly to abort().

You may wonder what about optimizing away the assertion when we're building for production. In this case, you can define constants that will signify that you're building for production and then refer to the constant when you Assert().

debug.hh

#ifdef NDEBUG
const bool CHECK_WRONG = false;
#else
const bool CHECK_WRONG = true;
#endif

main.cc

#include <iostream>
#include "Assert.hh"   // included so the example is self-contained
#include "debug.hh"

struct Wrong { };

int main()
{
    try {
        Assert<Wrong>(!CHECK_WRONG || 2 + 2 == 5);
        std::cout << "I can go to sleep now.\n";
    }
    catch (Wrong e) {
        std::cerr << "Someone is wrong on the internet!\n";
    }
    return 0;
}

If CHECK_WRONG is a constant then the call to Assert() will be compiled away in production, even if the assertion is not a constant expression. There is a slight disadvantage in that by referring to CHECK_WRONG we type a little more. But in exchange we gain an advantage in that we can classify various groups of assertions and enable and disable each of them as we see fit. So, for example, we could define a group of assertions that we want enabled even in production code, and then define a group of assertions that we only want to see in development builds.

The Assert() function is equivalent to typing

if (!assertion) throw X();

but it clearly indicates the intent of the programmer: make an assertion. Assertions are also easier to grep for with this approach, just like plain assert()s.

For more details on this technique see Bjarne Stroustrup's The C++ Programming Language 3e, section 24.3.7.2.
37,799
38,317
GCOV for multi-threaded apps
Is it possible to use gcov for coverage testing of multi-threaded applications? I've set up some trivial tests of our code base, but it would be nice to have some idea of the coverage we're achieving. If gcov isn't appropriate, can anyone recommend an alternative tool (possibly oprofile), ideally with some good documentation on getting started?
We've certainly used gcov to get coverage information on our multi-threaded application. You want to compile with gcc 4.3, which can do coverage on dynamic code. You compile with the -fprofile-arcs -ftest-coverage options, and the code will generate .gcda files which gcov can then process. We do a separate build of our product, and collect coverage on that, running unit tests and regression tests. Finally we use lcov to generate HTML results pages. A typical command sequence is sketched below.
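For orientation only, a sketch of that flow (file and directory names are made up):

# build with coverage instrumentation
g++ -fprofile-arcs -ftest-coverage -O0 -o run_tests tests.cpp

# run the (multi-threaded) test binary; .gcda files appear beside the objects
./run_tests

# per-file text summaries
gcov tests.cpp

# aggregate and render HTML with lcov
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage_html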
37,956
38,017
C++ : What's the easiest library to open video file
I would like to open a small video file and map every frame in memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me.

I've tried to use DirectShow with the SampleGrabber filter (using this sample http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong.

I've pasted a part of my code (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the 25 first frames as expected...

[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);

pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);

for( int i = 0 ; i < 25 ; ++i )
{
    pControl->Run(); // Run the graph.
    pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

    char *pBuffer = new char[cbBuffer];
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);

    AM_MEDIA_TYPE mt;
    hr = pGrabber->GetConnectedMediaType(&mt);
    VIDEOINFOHEADER *pVih;
    pVih = (VIDEOINFOHEADER*)mt.pbFormat;
    [...]
}
[...]

Is there somebody with video software experience who can advise me about the code or another, simpler library?

Thanks

Edit: MSDN links don't seem to work (see the bug)
Currently these are the most popular video frameworks available on Win32 platforms:

Video for Windows: old Windows framework coming from the age of Win95, but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VFW codec has been installed.

DirectShow: standard WinXP framework; it can basically load all formats you can play with Windows Media Player. Rather difficult to use.

Ffmpeg: more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with VLC) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay, which ships with it, or by other implementations in open-source software. Anyway, I think it's still much easier to use than DirectShow (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well here (at this moment the link is down; hope it's not dead).

QuickTime: the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed and also the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually only for QuickTime). Shouldn't be too difficult to implement.

Gstreamer: the latest open-source framework. I don't know much about it; I guess it wraps over some of the other systems (but I'm not sure).

All of these frameworks have been implemented as backends in OpenCV's highgui, except for DirectShow. The default framework for Win32 OpenCV uses VFW (and is thus able only to open some AVI files); if you want to use the others you must download the CVS version instead of the official release and still do some hacking on the code, and it's anyway not too complete - for example, the FFMPEG backend doesn't allow seeking in the stream. If you want to use QuickTime with OpenCV, this can help you.
38,037
38,413
C++: How to extract a string from RapidXml
In my C++ program I want to parse a small piece of XML, insert some nodes, then extract the new XML (preferably as a std::string). RapidXml has been recommended to me, but I can't see how to retrieve the XML back as a text string. (I could iterate over the nodes and attributes and build it myself, but surely there's a built-in function that I am missing.) Thank you.
Although the documentation is poor on this topic, I managed to get some working code by looking at the source. Note that it is missing the XML declaration header, which normally contains important information. Here is a small example program that does what you are looking for using rapidxml:

#include <iostream>
#include <sstream>
#include "rapidxml/rapidxml.hpp"
#include "rapidxml/rapidxml_print.hpp"

int main(int argc, char* argv[])
{
    char xml[] = "<?xml version=\"1.0\" encoding=\"latin-1\"?>"
                 "<book>"
                 "</book>";

    // Parse the original document
    rapidxml::xml_document<> doc;
    doc.parse<0>(xml);
    std::cout << "Name of my first node is: " << doc.first_node()->name() << "\n";

    // Insert something
    rapidxml::xml_node<> *node = doc.allocate_node(rapidxml::node_element, "author", "John Doe");
    doc.first_node()->append_node(node);

    // Print the modified tree back into a string
    std::stringstream ss;
    ss << *doc.first_node();
    std::string result_xml = ss.str();
    std::cout << result_xml << std::endl;
    return 0;
}
38,299
38,568
Tutorial on understanding strings in Symbian
I have Carbide.c++ Developer from Nokia and I want to create applications for my S60 phone. I've looked at the samples that come with the different SDKs for S60, but I haven't found any simple explanation of how strings, called descriptors, are used in Symbian. One of the problems is that I'm visually impaired, and therefore it takes quite some time to read through large documents that require paging up and down past lots of useless info, and I've given up. I'm willing to give it another try. Can anyone help me?
Here are a few sites on blogspot that may help. They have RSS feeds that will hopefully be easier to consume than paging through PDFs.

http://descriptor-tips.blogspot.com/
http://descriptors.blogspot.com/
38,447
66,560,184
Asynchronous MySQL connector
Do any asynchronous connectors exist for MySQL that can be used within a C or C++ application? I'm looking for something that can be plugged into a reactor pattern written in Boost.Asio.

[Edit:] Running a synchronous connector in threads is not an option.
I know this is an old question, but consider looking at the new Boost.Mysql library: https://anarthal.github.io/mysql/index.html
38,501
40,291
Thread pool for executing arbitrary tasks with different priorities
I'm trying to come up with a design for a thread pool with a lot of design requirements for my job. This is a real problem for working software, and it's a difficult task. I have a working implementation, but I'd like to throw this out to SO and see what interesting ideas people can come up with, so that I can compare them to my implementation and see how it stacks up. I've tried to be as specific about the requirements as I can.

The thread pool needs to execute a series of tasks. The tasks can be short running (<1 sec) or long running (hours or days). Each task has an associated priority (from 1 = very low to 5 = very high). Tasks can arrive at any time while the other tasks are running, so as they arrive the thread pool needs to pick them up and schedule them as threads become available.

The task priority is completely independent of the task length. In fact it is impossible to tell how long a task could take to run without just running it.

Some tasks are CPU bound while some are greatly IO bound. It is impossible to tell beforehand which a given task would be (although I guess it might be possible to detect while the tasks are running).

The primary goal of the thread pool is to maximise throughput. The thread pool should effectively use the resources of the computer. Ideally, for CPU-bound tasks, the number of active threads would be equal to the number of CPUs. For IO-bound tasks, more threads should be allocated than there are CPUs so that blocking does not overly affect throughput. Minimising the use of locks and using thread-safe/fast containers is important.

In general, you should run higher priority tasks with a higher CPU priority (ref: SetThreadPriority). Lower priority tasks should not "block" higher priority tasks from running, so if a higher priority task comes along while all low priority tasks are running, the higher priority task will get to run.

The tasks have a "max running tasks" parameter associated with them. Each type of task is only allowed to run at most this many concurrent instances of the task at a time. For example, we might have the following tasks in the queue:

A - 1000 instances - low priority - max tasks 1
B - 1000 instances - low priority - max tasks 1
C - 1000 instances - low priority - max tasks 1

A working implementation could only run (at most) 1 A, 1 B and 1 C at the same time.

It needs to run on Windows XP, Server 2003, Vista and Server 2008 (latest service packs).

For reference, we might use the following interface:

namespace ThreadPool
{
    class Task
    {
    public:
        Task();
        void run();
    };

    class ThreadPool
    {
    public:
        ThreadPool();
        ~ThreadPool();

        void run(Task *inst);
        void stop();
    };
}
So what are we going to pick as the basic building block for this? Windows has two building blocks that look promising: I/O completion ports (IOCPs) and asynchronous procedure calls (APCs). Both of these give us FIFO queuing without having to perform explicit locking, and with a certain amount of built-in OS support in places like the scheduler (for example, IOCPs can avoid some context switches).

APCs are perhaps a slightly better fit, but we will have to be slightly careful with them, because they are not quite "transparent". If the work item performs an alertable wait (::SleepEx, ::WaitForXxxObjectEx, etc.) and we accidentally dispatch an APC to the thread, then the newly dispatched APC will take over the thread, suspending the previously executing APC until the new APC is finished. This is bad for our concurrency requirements and can make stack overflows more likely. A sketch of the IOCP variant follows.
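To make the IOCP idea concrete, a minimal sketch of using a completion port as a thread-safe FIFO work queue (the Task type and the shutdown convention are made up; error handling omitted):

#include <windows.h>

struct Task { void run() { /* work goes here */ } };

HANDLE g_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

// Producer side: enqueue a task by posting its pointer as the completion key.
void PostTask(Task* task)
{
    PostQueuedCompletionStatus(g_iocp, 0, (ULONG_PTR)task, NULL);
}

// Worker threads block here; the kernel wakes them without explicit locking.
DWORD WINAPI Worker(LPVOID)
{
    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED overlapped;
    while (GetQueuedCompletionStatus(g_iocp, &bytes, &key, &overlapped, INFINITE))
    {
        Task* task = (Task*)key;
        if (!task) break;   // a NULL key serves as a shutdown sentinel
        task->run();
        delete task;
    }
    return 0;
}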
38,822
49,780
Best full text search alternative to MS SQL, C++ solution
What is the best full text search alternative to Microsoft SQL? (which works with MS SQL) I'm looking for something similar to Lucene and Lucene.NET but without the .NET and Java requirements. I would also like to find a solution that is usable in commercial applications.
Take a look at CLucene - it's a well-maintained C++ port of Java Lucene. It's currently licensed under the LGPL and we use it in our commercial application. Performance is incredible; however, you do have to get your head around some of the strange API conventions.
39,304
39,314
Do C++ logging frameworks sacrifice reusability?
In C++, there isn't a de-facto standard logging tool. In my experience, shops roll their own. This creates a bit of a problem, however, when trying to create reusable software components. If everything in your system depends on the logging component, this makes the software less reusable, basically forcing any downstream projects to take your logging framework along with the components they really want. IOC (dependency injection) doesn't really help with the problem since your components would need to depend on a logging abstraction. Logging components themselves can add dependencies on file I/O, triggering mechanisms, and other possibly unwanted dependencies. Does adding a dependency to your proprietary logging framework sacrifice the reusability of the component?
Yes. But dependency injection will help in this case. You can create an abstract logging base class and create implementations for the logging frameworks you want to use. Your components are dependent only on the abstract base class, and you inject the implementations along with all their dependencies as needed. A sketch follows.
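A minimal sketch of that shape (all names are invented for illustration):

#include <iostream>
#include <string>

// The abstraction reusable components depend on - no file I/O, no framework.
class Logger {
public:
    virtual ~Logger() {}
    virtual void log(const std::string& message) = 0;
};

// One adapter per concrete logging framework, living outside the reusable component.
class ConsoleLogger : public Logger {
public:
    virtual void log(const std::string& message) { std::cout << message << '\n'; }
};

// The reusable component receives its logger; it never names a framework.
class Component {
    Logger& logger_;
public:
    explicit Component(Logger& logger) : logger_(logger) {}
    void doWork() { logger_.log("doing work"); }
};

int main() {
    ConsoleLogger logger;
    Component component(logger);   // inject whichever implementation you like
    component.doWork();
}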
39,419
39,441
How large is a DWORD with 32- and 64-bit code?
In Visual C++ a DWORD is just an unsigned long that is machine, platform, and SDK dependent. However, since DWORD is a double word (that is 2 * 16), is a DWORD still 32-bit on 64-bit architectures?
Actually, on 32-bit computers a word is 32-bit, but the DWORD type is a leftover from the good old days of 16-bit. In order to make it easier to port programs to the newer system, Microsoft decided all the old types would not change size. You can find the official list here:

http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx

All the platform-dependent types that changed with the transition from 32-bit to 64-bit end with _PTR (DWORD_PTR will be 32-bit on 32-bit Windows and 64-bit on 64-bit Windows). A compile-time check is sketched below.
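A compile-time sanity check (static_assert is C++11; shown for illustration only):

#include <windows.h>

static_assert(sizeof(DWORD) == 4,
              "DWORD stays 32-bit even when compiled for 64-bit Windows");
static_assert(sizeof(DWORD_PTR) == sizeof(void*),
              "the _PTR variants track the pointer size instead");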
39,468
39,478
Calling DLL functions from VB6
I've got a Windows DLL that I wrote, written in C/C++ (all exported functions are 'C'). The DLL works fine for me in VC++. I've given the DLL to another company who do all their development in VB. They seem to be having a problem linking to the functions. I haven't used VB in ten years and I don't even have it installed. What could be the problem? I've declared all my public functions as follows:

#define MYDCC_API __declspec(dllexport)

MYDCC_API unsigned long MYDCC_GetVer( void);
.
.
.

Any ideas?

Finally got back to this today and have it working. The answers put me on the right track but I found this most helpful:

http://www.codeproject.com/KB/DLL/XDllPt2.aspx

Also, I had a few problems passing strings to the DLL functions; I found this helpful:

http://www.flipcode.com/archives/Interfacing_Visual_Basic_And_C.shtml
By using __declspec for export, the function name will get exported mangled, i.e. it will contain type information to help the C++ compiler resolve overloads. VB6 cannot handle mangled names. As a workaround, you have to de-mangle the names. The easiest solution is to link the DLL file using an export definition file in VC++. The export definition file is very simple and just contains the name of the DLL and a list of exported functions:

LIBRARY mylibname

EXPORTS
   myfirstfunction
   secondfunction

Additionally, you have to specify the stdcall calling convention, because that's the only calling convention VB6 can handle. There's a project using assembly injection to handle C calls, but I guess you don't want to use this difficult and error-prone method.
39,474
39,590
How to get IntelliSense to reliably work in Visual Studio 2008
Does anyone know how to get IntelliSense to work reliably when working in C/C++ projects? It seems to work for about 1 in 10 files. Visual Studio 2005 seems to be a lot better than 2008. Edit: Whilst not necessarily a solution, the work-around provided here: How to get IntelliSense to reliably work in Visual Studio 2008 Is probably the best bet if I want a decent IntelliSense system.
I've also realized that IntelliSense is sometimes 'lost' on some big projects. Why? No idea.

This is why we have bought Visual Assist (from Whole Tomato Software) and disabled IntelliSense by deleting the DLL feacp.dll in the Visual Studio subdirectory (C:\Program Files\Microsoft Visual Studio 8\VC\vcpackages).

This is not a solution, just a workaround.
39,912
39,944
How do I remove an item from a stl vector with a certain value?
I was looking at the API documentation for the STL vector, and noticed there was no method on the vector class that allowed the removal of an element with a certain value. This seems like a common operation, and it seems odd that there's no built-in way to do it.
std::remove does not actually erase elements from the container: it moves the elements to be removed to the end of the container, and returns the new end iterator, which can be passed to container_type::erase to do the actual removal of the extra elements that are now at the end of the container:

#include <algorithm>  // for std::remove

std::vector<int> vec;
// .. put in some values ..
int int_to_remove = n;
vec.erase(std::remove(vec.begin(), vec.end(), int_to_remove), vec.end());
40,423
40,616
How would you implement Erlang-like send and receive in C++?
Actually, this question seems to have two parts:

How to implement pattern matching?
How to implement send and receive (i.e. the Actor model)?

For the pattern matching part, I've been looking into various projects like App and Prop. These look pretty nice, but I couldn't get them to work on a recent version (4.x) of g++. The Felix language also seems to support pattern matching pretty well, but isn't really C++.

As for the Actor model, there are existing implementations like ACT++ and Theron, but I couldn't find anything but papers on the former, and the latter is single-threaded only [see answers].

Personally, I've implemented actors using threading and a thread-safe message queue. Messages are hash-like structures, and I used these together with a number of preprocessor macros to implement simple pattern matching.

Right now, I can use the following code to send a message:

(new Message(this))
    ->set("foo", "bar")
    ->set("baz", 123)
    ->send(recipient);

And the following to do simple pattern matching (qDebug and qPrintable are Qt-specific):

receive_and_match(m)
    match_key("foo") {
        qDebug("foo: %s", qPrintable(m->value("foo").toString()));
    } or_match_key("baz") {
        qDebug("baz: %d", m->value("baz").toInt());
    } or_match_ignore
end_receive

However, this looks a bit hackish to me, and isn't very robust. How would you do it? Did I miss any existing work?
One of the important things about Erlang is how the features are used to make robust systems. The send/receive model is no-sharing, and explicitly copying. The processes themselves are lightweight threads. If you did desire the robust properties of the Erlang model, you would be best to use real processes and IPC rather than threads. If you want robust message passing, though, you may end up wanting to serialize and deserialize the contents, especially with type safety. Pattern matching in C++ isn't always pretty, but there will be a good pattern for this - you will end up creating a dispatcher object that uses some form of polymorphism to get what you want. Although if you are not careful you end up with xml over pipes :) Really, if you want the Erlang model you really want to use Erlang. If there are slow bits, I'm sure you can augment your program using a foreign function interface. The problem with re-implementing parts is that you won't get a good cohesive library and solution. The solutions you have already don't look much like C++ anymore.
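A minimal sketch of the thread-safe queue such an actor would block on (assuming C++11; names are illustrative). Each actor owns one of these and runs its receive loop on its own thread:

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename Message>
class MessageQueue {
public:
    void send(Message m) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(m));
        }
        ready_.notify_one();
    }
    Message receive() { // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        Message m = std::move(queue_.front());
        queue_.pop();
        return m;
    }
private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<Message> queue_;
};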
41,045
41,059
Can I have polymorphic containers with value semantics in C++?
As a general rule, I prefer using value rather than pointer semantics in C++ (i.e. using vector<Class> instead of vector<Class*>). Usually the slight loss in performance is more than made up for by not having to remember to delete dynamically allocated objects. Unfortunately, value collections don't work when you want to store a variety of object types that all derive from a common base. See the example below. #include <iostream> #include <vector> using namespace std; class Parent { public: Parent() : parent_mem(1) {} virtual void write() { cout << "Parent: " << parent_mem << endl; } int parent_mem; }; class Child : public Parent { public: Child() : child_mem(2) { parent_mem = 2; } void write() { cout << "Child: " << parent_mem << ", " << child_mem << endl; } int child_mem; }; int main(int, char**) { // I can have a polymorphic container with pointer semantics vector<Parent*> pointerVec; pointerVec.push_back(new Parent()); pointerVec.push_back(new Child()); pointerVec[0]->write(); pointerVec[1]->write(); // Output: // // Parent: 1 // Child: 2, 2 // But I can't do it with value semantics vector<Parent> valueVec; valueVec.push_back(Parent()); valueVec.push_back(Child()); // gets turned into a Parent object :( valueVec[0].write(); valueVec[1].write(); // Output: // // Parent: 1 // Parent: 2 } My question is: Can I have my cake (value semantics) and eat it too (polymorphic containers)? Or do I have to use pointers?
Since the objects of different classes will have different sizes, you would end up running into the slicing problem if you store them as values. One reasonable solution is to store container safe smart pointers. I normally use boost::shared_ptr which is safe to store in a container. Note that std::auto_ptr is not. vector<shared_ptr<Parent>> vec; vec.push_back(shared_ptr<Parent>(new Child())); shared_ptr uses reference counting so it will not delete the underlying instance until all references are removed.
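Continuing the answer's example, the virtual dispatch then works as hoped — a short usage sketch:

vec.push_back(shared_ptr<Parent>(new Parent()));
vec.push_back(shared_ptr<Parent>(new Child()));
for (size_t i = 0; i < vec.size(); ++i)
    vec[i]->write(); // prints "Parent: 1" then "Child: 2, 2" -- no slicing, no manual delete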
41,400
41,794
How can I wrap a function with variable length arguments?
I am looking to do this in C/C++. I came across Variable Length Arguments, but this suggests a solution with Python and C using libffi. Now, if I want to wrap the printf function with myprintf. I do it like below: void myprintf(char* fmt, ...) { va_list args; va_start(args, fmt); printf(fmt, args); va_end(args); } int _tmain(int argc, _TCHAR* argv[]) { int a = 9; int b = 10; char v = 'C'; myprintf("This is a number: %d and \nthis is a character: %c and \n another number: %d\n", a, v, b); return 0; } But the results are not as expected! This is a number: 1244780 and this is a character: h and another number: 29953463 What did I miss?
The problem is that you cannot pass a va_list to printf. You must use vprintf if you are using variable argument lists: vprintf, vsprintf, vfprintf, etc. (there are also 'safe' versions in Microsoft's C runtime that will prevent buffer overruns, etc.) Your sample works as follows: void myprintf(char* fmt, ...) { va_list args; va_start(args, fmt); vprintf(fmt, args); va_end(args); } int _tmain(int argc, _TCHAR* argv[]) { int a = 9; int b = 10; char v = 'C'; myprintf("This is a number: %d and \nthis is a character: %c and \n another number: %d\n", a, v, b); return 0; }
41,453
34,551,440
How can I add reflection to a C++ application?
I'd like to be able to introspect a C++ class for its name, contents (i.e. members and their types) etc. I'm talking native C++ here, not managed C++, which has reflection. I realise C++ supplies some limited information using RTTI. Which additional libraries (or other techniques) could supply this information?
Ponder is a C++ reflection library, in answer to this question. I considered the options and decided to make my own since I couldn't find one that ticked all my boxes. Although there are great answers to this question, I don't want to use tonnes of macros, or rely on Boost. Boost is a great library, but there are lots of small bespoke C++0x projects out there that are simpler and have faster compile times. There are also advantages to being able to decorate a class externally, like wrapping a C++ library that doesn't (yet?) support C++11. It is a fork of CAMP, using C++11, that no longer requires Boost.
41,583
41,591
How to do a sample rate conversion in Windows (and OSX)
I am about to write an audio file converter for my side job at the university. As part of this I would need sample rate conversion. However, my professor said that it would be pretty hard to write a sample rate converter that was both of good quality and fast. During my research on the subject, I found some functions in the OS X CoreAudio framework that could do a sample rate conversion (AudioConverter.h). After all, an OS has to have some facilities to do that for its own audio stack. Do you know of similar facilities for C/C++ on Windows that are either part of the OS or open source? I am pretty sure that this function exists within DirectX Audio (XAudio2?), but I seem to be unable to find a reference to it in the MSDN library.
Try Secret Rabbit Code (= SRC = Sample Rate Conversion). It's GPL, it's fast, and it's high quality. http://www.mega-nerd.com/SRC/license.html
41,590
41,598
How do you properly use namespaces in C++?
I come from a Java background, where packages are used, not namespaces. I'm used to putting classes that work together to form a complete object into packages, and then reusing them later from that package. But now I'm working in C++. How do you use namespaces in C++? Do you create a single namespace for the entire application, or do you create namespaces for the major components? If so, how do you create objects from classes in other namespaces?
Namespaces are packages essentially. They can be used like this: namespace MyNamespace { class MyClass { }; } Then in code: MyNamespace::MyClass* pClass = new MyNamespace::MyClass(); Or, if you want to always use a specific namespace, you can do this: using namespace MyNamespace; MyClass* pClass = new MyClass(); Edit: Following what bernhardrusch has said, I tend not to use the "using namespace x" syntax at all, I usually explicitly specify the namespace when instantiating my objects (i.e. the first example I showed). And as you asked below, you can use as many namespaces as you like.
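For the "major components" part of the question: namespaces nest, so one common layout looks like this (names illustrative):

namespace myapp {
    namespace net {
        class Socket { };
    }
}

myapp::net::Socket sock;  // fully qualified
using myapp::net::Socket; // or import just this one name -- narrower than "using namespace"
Socket sock2;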
41,654
41,668
C++ libraries to manipulate images
Do you know any open source/free software C++ libraries to manipulate images in these formats: .jpg .gif .png .bmp ? The more formats it supports, the better. I am implementing a free program in C++ which hides a text file into one or more images, using steganography. I am working under Unix.
ImageMagick can manipulate just about anything and has interfaces for a dozen languages, including the Magick++ API for C++.
41,676
41,710
Is there a way of getting the process id of my C++ application?
Is there a way of getting the process id of my C++ application? I am using the Carbon framework, but not Cocoa…
Can you use the getpid() function found in unistd.h? (See the OS X reference.)
41,832
41,859
Maintain the correct version for a COM dll referenced in a .NET project
I want to reference a COM DLL in a .NET project, but I also want to make sure that the interop DLL created will have the correct version (so that patches will know when the DLL must be changed). If I use TlbImp I can specify the required version with the /asmversion flag but when I add it directly from Visual Studio it gets a version that has nothing to do with the original COM DLL's version. I tried changing the version in the .vcproj file <ItemGroup> <COMReference Include="MYDLLLib"> <Guid>{459F8813-D74D-DEAD-BEEF-00CAFEBABEA5}</Guid> <!-- I changed this --> <VersionMajor>1</VersionMajor> <!-- This too --> <VersionMinor>0</VersionMinor> <Lcid>0</Lcid> <WrapperTool>tlbimp</WrapperTool> <Isolated>False</Isolated> </COMReference> </ItemGroup> But then the project failed to build with the following error: error CS0246: The type or namespace name 'MYDLLLib' could not be found (are you missing a using directive or an assembly reference?) Is there any way to get this done without creating all my COM references with TlbImp in advance? If the answer is yes is there a way to specify a build number in addition to the major and minor versions? (e.g 1.2.42.0)
The Guid refers to the Guid for the TypeLib, not the DLL directly. The version numbers refer to the TypeLib's version, not the DLL's. The version number will come from your idl file, and I believe it only supports a major and minor version and not a build version. Is this version changing when you modify the typelib? The version numbers will appear in the registry under: HKEY_CLASSES_ROOT\Typelib\{typelib uuid}\Major.Minor If the minor version is set to 0 then I believe it will import the 'latest' version that matches the major version, but the major version must be set to something.
42,126
42,327
C++ Compiler Error C2371 - Redefinition of WCHAR
I am getting C++ Compiler error C2371 when I include a header file that itself includes odbcss.h. My project is set to MBCS. C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\odbcss.h(430) : error C2371: 'WCHAR' : redefinition; different basic types 1> C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\winnt.h(289) : see declaration of 'WCHAR' I don't see any defines in odbcss.h that I could set to avoid this. Has anyone else seen this?
This is a known bug - see the Microsoft Connect website: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=98699 The error doesn't occur if you compile your app as Unicode instead of MBCS.
42,308
42,513
Tool to track #include dependencies
Any good suggestions? Input will be the name of a header file and output should be a list (preferably a tree) of all files including it directly or indirectly.
If you have access to GCC/G++, then the -M option will output the dependency list. It doesn't do any of the extra stuff that the other tools do, but since it is coming from the compiler, there is no chance that it will pick up files from the "wrong" place.
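A hypothetical invocation (the file name is made up; -M and -MM are documented GCC flags):

g++ -M main.cpp   # every header pulled in, including system headers
g++ -MM main.cpp  # user headers only, which is often the more readable list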
42,446
42,968
How do I convert an IntPtr to a Stream?
class Foo { static bool Bar(Stream^ stream); }; class FooWrapper { bool Bar(LPCWSTR szUnicodeString) { return Foo::Bar(??); } }; MemoryStream will take a byte[] but I'd like to do this without copying the data if possible.
You can avoid the copy if you use an UnmanagedMemoryStream() instead (class exists in .NET FCL 2.0 and later). Like MemoryStream, it is a subclass of IO.Stream, and has all the usual stream operations. Microsoft's description of the class is: Provides access to unmanaged blocks of memory from managed code. which pretty much tells you what you need to know. Note that UnmanagedMemoryStream() is not CLS-compliant.
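A sketch of how the wrapper might build the stream over the existing buffer without copying. This assumes the wrapped string's byte length is the intended data length — adjust to your real buffer size:

bool FooWrapper::Bar(LPCWSTR szUnicodeString)
{
    unsigned char* buffer = (unsigned char*)szUnicodeString;
    long long byteCount = wcslen(szUnicodeString) * sizeof(wchar_t);
    UnmanagedMemoryStream^ stream = gcnew UnmanagedMemoryStream(buffer, byteCount);
    bool result = Foo::Bar(stream);
    delete stream; // calls Dispose; the unmanaged memory itself is not freed
    return result;
}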
42,531
42,544
How do I call ::CreateProcess in c++ to launch a Windows executable?
Looking for an example that: Launches an EXE Waits for the EXE to finish. Properly closes all the handles when the executable finishes.
Something like this: STARTUPINFO info={sizeof(info)}; PROCESS_INFORMATION processInfo; if (CreateProcess(path, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo)) { WaitForSingleObject(processInfo.hProcess, INFINITE); CloseHandle(processInfo.hProcess); CloseHandle(processInfo.hThread); }
42,770
42,806
Writing/Using C++ Libraries
I am looking for basic examples/tutorials on: How to write/compile libraries in C++ (.so files for Linux, .dll files for Windows). How to import and use those libraries in other code.
The code r.cc : #include "t.h" int main() { f(); return 0; } t.h : void f(); t.cc : #include<iostream> #include "t.h" void f() { std::cout << "OH HAI. I'M F." << std::endl; } But how, how, how?! ~$ g++ -fpic -c t.cc # get t.o ~$ g++ -shared -o t.so t.o # get t.so ~$ export LD_LIBRARY_PATH="." # make sure t.so is found when dynamically linked ~$ g++ r.cc t.so # get an executable The export step is not needed if you install the shared library somewhere along the global library path.
43,194
43,304
Mixing C/C++ Libraries
Is it possible for gcc to link against a library that was created with Visual C++? If so, are there any conflicts/problems that might arise from doing so?
Some of the comments in the answers here are slightly too general. Whilst it's true that, in the specific case mentioned, gcc binaries won't link with a VC++ library (AFAIK), the actual means of interlinking code/libraries is a question of the ABI standard being used. An increasingly common standard in the embedded world is the EABI (or ARM ABI) standard (based on work done during Itanium development, http://www.codesourcery.com/cxx-abi/). If compilers are EABI compliant they can produce executables and libraries which will work with each other. An example of multiple toolchains working together is ARM's RVCT compiler, which produces binaries that will work with GCC ARM ABI binaries. (The CodeSourcery link is down at the moment but can be Google cached.)
43,322
43,325
What's safe for a C++ plug-in system?
Plug-in systems in C++ are hard because the ABI is not properly defined, and each compiler (or version thereof) follows its own rules. However, COM on Windows shows that it's possible to create a minimal plug-in system that allows programmers with different compilers to create plug-ins for a host application using a simple interface. Let's be practical, and leave the C++ standard, which is not very helpful in this respect, aside for a minute. If I want to write an app for Windows and Mac (and optionally Linux) that supports C++ plug-ins, and if I want to give plug-in authors a reasonably large choice of compilers (say less than 2 year old versions of Visual C++, GCC or Intel's C++ compiler), what features of C++ could I count on? Of course, I assume that plug-ins would be written for a specific platform. Off the top of my head, here are some C++ features I can think of, with what I think is the answer: vtable layout, to use objects through abstract classes? (yes) built-in types, pointers? (yes) structs, unions? (yes) exceptions? (no) extern "C" functions? (yes) stdcall non-extern "C" functions with built-in parameter types? (yes) non-stdcall non-extern "C" functions with user-defined parameter types? (no) I would appreciate any experience you have in that area that you could share. If you know of any moderately successful app that has a C++ plug-in system, that's cool too. Carl
Dr Dobb's Journal has an article Building Your Own Plugin Framework: Part 1 which is pretty good reading on the subject. It is the start of a series of articles which covers the architecture, development, and deployment of a C/C++ cross-platform plugin framework.
44,693
44,762
Efficient alternatives for exposing a Collection
In C++, what alternatives do I have for exposing a collection, from the point of view of performance and data integrity? My problem is that I want to return an internal list of data to the caller, but I don't want to generate a copy. That leaves me with either returning a reference to the list, or a pointer to the list. However, I'm not crazy about letting the caller change the data; I just want to let it read the data. Do I have to choose between performance and data integrity? If so, is it in general better to go one way or is it particular to the case? Are there other alternatives?
RichQ's answer is a reasonable technique, if you're using an array, vector, etc. If you're using a collection that isn't indexed by ordinal values... or think you might need to at some point in the near future... then you might want to consider exposing your own iterator type(s), and associated begin()/end() methods: class Blah { public: typedef std::vector<mydata> mydata_collection; typedef mydata_collection::const_iterator mydata_const_iterator; // ... mydata_const_iterator data_begin() const { return myPreciousData.begin(); } mydata_const_iterator data_end() const { return myPreciousData.end(); } private: mydata_collection myPreciousData; }; ...which you can then use in the normal fashion: Blah blah; for (Blah::mydata_const_iterator itr = blah.data_begin(); itr != blah.data_end(); ++itr) { // ... }
44,821
45,170
Default smart device project can't find dependencies
When running the default C++ project in Visual Studio for a Windows CE 5.0 device, I get an error complaining about missing resources. Depends says that my executable needs aygshell.dll (the Windows Mobile shell) and CoreDll.dll. Does this mean that my executable can only be run on Windows Mobile devices, instead of any generic Windows CE installation? If that's the case, how do I create an executable targeting generic WinCE?
Depends what you mean by a generic Windows CE installation. Windows CE itself is a modularised operating system, so different devices can have different modules included. Therefore each Windows CE device can have a radically different OS installed (headless even). Coredll is the standard "common" library that gets included in a Windows CE installation, however it can contain different components depending on the other modules in the system. If you want to target a relatively standard version of Windows CE either target the Standard SDK set of components, or go for a Windows Mobile platform. If you have an SDK then install and use that. If none is available then you can generate an SDK using Platform Builder and the OS project files. To get your application to work on a non-Windows Mobile installation of Windows CE you just have to remove the code that uses the aygshell library, and not link to those libraries.
45,286
45,316
How can I overwrite the same portion of the console in a Windows native C++ console app, without using a 3rd Party library?
I have a console app that needs to display the state of items, but rather than having text scroll by like mad I'd rather see the current status keep showing up on the same lines. For the sake of example: Running... nn% complete Buffer size: bbbb bytes should be the output, where 'nn' is the current percentage complete, and 'bbbb' is a buffer size, updated periodically on the same lines of the console. The first approach I took simply printed the correct number of backspaces to the console before printing the new state, but this has an obnoxious flicker that I want to get rid of. I also want to stick to either standard library or MS-provided functionality (VC 8) so as not to introduce another dependency for this one simple need.
You can use SetConsoleCursorPosition. You'll need to call GetStdHandle to get a handle to the output buffer.
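A minimal sketch of the two calls together (the output format is illustrative); trailing spaces overwrite leftovers from longer previous values:

#include <windows.h>
#include <cstdio>

void update_status(int percent, int bufferBytes)
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    COORD home = { 0, 0 }; // or the saved coordinates of your status lines
    SetConsoleCursorPosition(hOut, home);
    printf("Running... %3d%% complete   \n", percent);
    printf("Buffer size: %6d bytes   \n", bufferBytes);
}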
45,528
45,587
Simplest way to change listview and treeview colours
I'm trying to find a simple way to change the colour of the text and background in listview and treeview controls in WTL or plain Win32 code. I really don't want to have to implement full owner drawing for these controls, simply change the colours used. I want to make sure that the images are still drawn with proper transparency. Any suggestions?
Have a look at the following macros: ListView_SetBkColor ListView_SetTextColor TreeView_SetBkColor TreeView_SetTextColor
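Usage is a single call per control; hwndList and hwndTree stand in for your existing control handles:

ListView_SetBkColor(hwndList, RGB(0, 0, 64));
ListView_SetTextColor(hwndList, RGB(255, 255, 255));
ListView_SetTextBkColor(hwndList, RGB(0, 0, 64)); // usually wanted too, so the text background matches
TreeView_SetBkColor(hwndTree, RGB(0, 0, 64));
TreeView_SetTextColor(hwndTree, RGB(255, 255, 255));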
45,658
63,430
How do I retrieve IPIEHTMLDocument2 interface on IE Mobile
I wrote an Active X plugin for IE7 which implements IObjectWithSite besides some other necessary interfaces (note no IOleClient). This interface is queried and called by IE7. During the SetSite() call I retrieve a pointer to IE7's site interface which I can use to retrieve the IHTMLDocument2 interface using the following approach: IUnknown *site = pUnkSite; /* retrieved from IE7 during SetSite() call */ IServiceProvider *sp = NULL; IHTMLWindow2 *win = NULL; IHTMLDocument2 *doc = NULL; if(site) { site->QueryInterface(IID_IServiceProvider, (void **)&sp); if(sp) { sp->QueryService(IID_IHTMLWindow2, IID_IHTMLWindow2, (void **)&win); if(win) { win->get_document(&doc); } } } if(doc) { /* found */ } I tried a similiar approach on PIE as well using the following code, however, even the IPIEHTMLWindow2 interface cannot be acquired, so I'm stuck: IUnknown *site = pUnkSite; /* retrieved from PIE during SetSite() call */ IPIEHTMLWindow2 *win = NULL; IPIEHTMLDocument1 *tmp = NULL; IPIEHTMLDocument2 *doc = NULL; if(site) { site->QueryInterface(__uuidof(*win), (void **)&win); if(win) { /* never the case */ win->get_document(&tmp); if(tmp) { tmp->QueryInterface(__uuidof(*doc), (void **)&doc); } } } if(doc) { /* found */ } Using the IServiceProvider interface doesn't work either, so I already tested this. Any ideas?
I found the following code in the Google Gears code, here. I copied the functions I think you need to here. The one you need is at the bottom (GetHtmlWindow2), but the other two are needed as well. Hopefully I didn't miss anything, but if I did the stuff you need is probably at the link. #ifdef WINCE // We can't get IWebBrowser2 for WinCE. #else HRESULT ActiveXUtils::GetWebBrowser2(IUnknown *site, IWebBrowser2 **browser2) { CComQIPtr<IServiceProvider> service_provider = site; if (!service_provider) { return E_FAIL; } return service_provider->QueryService(SID_SWebBrowserApp, IID_IWebBrowser2, reinterpret_cast<void**>(browser2)); } #endif HRESULT ActiveXUtils::GetHtmlDocument2(IUnknown *site, IHTMLDocument2 **document2) { HRESULT hr; #ifdef WINCE // Follow path Window2 -> Window -> Document -> Document2 CComPtr<IPIEHTMLWindow2> window2; hr = GetHtmlWindow2(site, &window2); if (FAILED(hr) || !window2) { return false; } CComQIPtr<IPIEHTMLWindow> window = window2; CComPtr<IHTMLDocument> document; hr = window->get_document(&document); if (FAILED(hr) || !document) { return E_FAIL; } return document->QueryInterface(__uuidof(*document2), reinterpret_cast<void**>(document2)); #else CComPtr<IWebBrowser2> web_browser2; hr = GetWebBrowser2(site, &web_browser2); if (FAILED(hr) || !web_browser2) { return E_FAIL; } CComPtr<IDispatch> doc_dispatch; hr = web_browser2->get_Document(&doc_dispatch); if (FAILED(hr) || !doc_dispatch) { return E_FAIL; } return doc_dispatch->QueryInterface(document2); #endif } HRESULT ActiveXUtils::GetHtmlWindow2(IUnknown *site, #ifdef WINCE IPIEHTMLWindow2 **window2) { // site is javascript IDispatch pointer. return site->QueryInterface(__uuidof(*window2), reinterpret_cast<void**>(window2)); #else IHTMLWindow2 **window2) { CComPtr<IHTMLDocument2> html_document2; // To hook an event on a page's window object, follow the path // IWebBrowser2->document->parentWindow->IHTMLWindow2 HRESULT hr = GetHtmlDocument2(site, &html_document2); if (FAILED(hr) || !html_document2) { return E_FAIL; } return html_document2->get_parentWindow(window2); #endif }
45,972
6,383,253
mmap() vs. reading blocks
I'm working on a program that will be processing files that could potentially be 100GB or more in size. The files contain sets of variable length records. I've got a first implementation up and running and am now looking towards improving performance, particularly at doing I/O more efficiently since the input file gets scanned many times. Is there a rule of thumb for using mmap() versus reading in blocks via C++'s fstream library? What I'd like to do is read large blocks from disk into a buffer, process complete records from the buffer, and then read more. The mmap() code could potentially get very messy since mmap'd blocks need to lie on page sized boundaries (my understanding) and records could potentially lie across page boundaries. With fstreams, I can just seek to the start of a record and begin reading again, since we're not limited to reading blocks that lie on page sized boundaries. How can I decide between these two options without actually writing up a complete implementation first? Any rules of thumb (e.g., mmap() is 2x faster) or simple tests?
I was trying to find the final word on mmap / read performance on Linux and I came across a nice post (link) on the Linux kernel mailing list. It's from 2000, so there have been many improvements to IO and virtual memory in the kernel since then, but it nicely explains the reason why mmap or read might be faster or slower. A call to mmap has more overhead than read (just like epoll has more overhead than poll, which has more overhead than read). Changing virtual memory mappings is a quite expensive operation on some processors for the same reasons that switching between different processes is expensive. The IO system can already use the disk cache, so if you read a file, you'll hit the cache or miss it no matter what method you use. However, Memory maps are generally faster for random access, especially if your access patterns are sparse and unpredictable. Memory maps allow you to keep using pages from the cache until you are done. This means that if you use a file heavily for a long period of time, then close it and reopen it, the pages will still be cached. With read, your file may have been flushed from the cache ages ago. This does not apply if you use a file and immediately discard it. (If you try to mlock pages just to keep them in cache, you are trying to outsmart the disk cache and this kind of foolery rarely helps system performance). Reading a file directly is very simple and fast. The discussion of mmap/read reminds me of two other performance discussions: Some Java programmers were shocked to discover that nonblocking I/O is often slower than blocking I/O, which made perfect sense if you know that nonblocking I/O requires making more syscalls. Some other network programmers were shocked to learn that epoll is often slower than poll, which makes perfect sense if you know that managing epoll requires making more syscalls. Conclusion: Use memory maps if you access data randomly, keep it around for a long time, or if you know you can share it with other processes (MAP_SHARED isn't very interesting if there is no actual sharing). Read files normally if you access data sequentially or discard it after reading. And if either method makes your program less complex, do that. For many real world cases there's no sure way to show one is faster without testing your actual application and NOT a benchmark. (Sorry for necro'ing this question, but I was looking for an answer and this question kept coming up at the top of Google results.)
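For reference, the mmap side of such a test is only a few lines (POSIX; error handling omitted, file name illustrative):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int fd = open("records.dat", O_RDONLY);
struct stat st;
fstat(fd, &st);
const char* data = static_cast<const char*>(
    mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
// ... random access anywhere in data[0 .. st.st_size) ...
munmap(const_cast<char*>(data), st.st_size);
close(fd);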
47,538
47,554
Where's the Win32 resource for the mouse cursor for dragging splitters?
I am building a custom win32 control/widget and would like to change the cursor to a horizontal "splitter" symbol when hovering over a particular vertical line in the control, i.e. I want to drag this vertical line (splitter bar) left and right (WEST and EAST). Of the system cursors (OCR_*), the only cursor that makes sense is the OCR_SIZEWE. Unfortunately, that is the big, awkward cursor the system uses when resizing a window. Instead, I am looking for the cursor that is about 20 pixels tall and around 3 or 4 pixels wide with two small arrows pointing left and right. I can easily draw this and include it as a resource in my application, but the cursor itself is so prevalent that I wanted to be sure I wasn't missing something. For example: when you use the COM drag and drop mechanism (CLSID_DragDropHelper, IDropTarget, etc) you implicitly have access to the "drag" icon (little box under the pointer). I didn't see an explicit OCR_* constant for this guy ... so likewise, if I can't find this splitter cursor outright, I am wondering if it is part of a COM object or something else in the win32 lib.
There are all sorts of icons, cursors, and images in use throughout the Windows UI which are not publicly available to 3rd-party software. Of course, you could still load up the module in which they reside and use them, but there's really no guarantee your program will keep working after a system update / upgrade. Include your own. The last thing you want is adding an extra dependency over a tiny little cursor.
47,901
47,921
Can UDP data be delivered corrupted?
Is it possible for UDP data to come to you corrupted? I know it is possible for it to be lost.
UDP packets use a 16-bit checksum (technically optional in IPv4, but almost always enabled). It is not impossible for UDP packets to arrive corrupted, but it's pretty unlikely. In any case it is not more susceptible to corruption than TCP.
47,975
70,695
Is it possible to develop DirectX apps in Linux?
More out of interest than anything else, but can you compile a DirectX app under linux? Obviously there's no official SDK, but I was thinking it might be possible with wine. Presumably wine has an implementation of the DirectX interface in order to run games? Is it possible to link against that? (edit: This is called winelib) Failing that, maybe a mingw cross compiler with the app running under wine. Half answered my own question here, but wondered if anyone had heard of anything like this being done?
I've had some luck with this. I've managed to compile this simple Direct3D example. I used winelib for this (wine-dev package on Ubuntu). Thanks to alastair for pointing me to winelib. I modified the source slightly to convert the wchars to chars (1 on line 52, 2 on line 55, by removing the L before the string literals). There may be a way around this, but this got it up and running. I then compiled the source with the following: wineg++ -ld3d9 -ld3dx9 triangle.cpp This generates an a.out.exe.so binary, as well as an a.out script to run it under wine.
47,980
47,983
Deciphering C++ template error messages
I'm really beginning to understand what people mean when they say that C++'s error messages are pretty terrible in regards to templates. I've seen horrendously long errors for things as simple as a function not matching its prototype. Are there any tricks to deciphering these errors? EDIT: I'm using both gcc and MSVC. They both seem to be pretty terrible.
You can try the following tool to make things more sane: http://www.bdsoft.com/tools/stlfilt.html
47,981
47,990
How do I set, clear, and toggle a single bit?
How do I set, clear, and toggle a bit?
Setting a bit Use the bitwise OR operator (|) to set a bit. number |= 1UL << n; That will set the nth bit of number. Use n = 0 to set the 1st bit, and so on up to n - 1 to set the nth bit. Use 1ULL if number is wider than unsigned long; the promotion of 1UL doesn't happen until after 1UL << n is evaluated, and shifting by more than the width of a long is undefined behaviour. The same applies to all the rest of the examples. Clearing a bit Use the bitwise AND operator (&) to clear a bit. number &= ~(1UL << n); That will clear the nth bit of number. You must invert the bit string with the bitwise NOT operator (~), then AND it. Toggling a bit The XOR operator (^) can be used to toggle a bit. number ^= 1UL << n; That will toggle the nth bit of number. Checking a bit You didn't ask for this, but I might as well add it. To check a bit, shift the number n to the right, then bitwise AND it: bit = (number >> n) & 1U; That will put the value of the nth bit of number into the variable bit. Changing the nth bit to x Setting the nth bit to either 1 or 0 can be achieved with the following on a 2's complement C++ implementation: number ^= (-x ^ number) & (1UL << n); Bit n will be set if x is 1, and cleared if x is 0. If x has some other value, you get garbage. x = !!x will booleanize it to 0 or 1. To make this independent of 2's complement negation behaviour (where -1 has all bits set, unlike on a 1's complement or sign/magnitude C++ implementation), use unsigned negation. number ^= (-(unsigned long)x ^ number) & (1UL << n); or unsigned long newbit = !!x; // Also booleanize to force 0 or 1 number ^= (-newbit ^ number) & (1UL << n); It's generally a good idea to use unsigned types for portable bit manipulation. or number = (number & ~(1UL << n)) | (x << n); (number & ~(1UL << n)) will clear the nth bit and (x << n) will set the nth bit to x. It's also generally a good idea not to copy/paste code, and so many people use preprocessor macros (like the community wiki answer further down) or some sort of encapsulation.
48,017
48,031
What is a jump table?
Can someone explain the mechanics of a jump table and why is would be needed in embedded systems?
A jump table can be either an array of pointers to functions or an array of machine code jump instructions. If you have a relatively static set of functions (such as system calls or virtual functions for a class) then you can create this table once and call the functions using a simple index into the array. This would mean retrieving the pointer and calling a function, or jumping to the machine code, depending on the type of table used. The benefits of doing this in embedded programming are: Indexes are more memory-efficient than machine code or pointers, so there is a potential for memory savings in constrained environments. For any particular function the index will remain stable, and changing the function merely requires swapping out the function pointer. It does cost you a tiny bit of performance for accessing the table, but this is no worse than any other virtual function call.
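A minimal sketch of the function-pointer flavour (handler names are illustrative):

typedef void (*handler_t)(void);

static void on_reset(void) { /* ... */ }
static void on_start(void) { /* ... */ }
static void on_stop(void)  { /* ... */ }

/* built once; dispatch is an indexed load plus an indirect call */
static const handler_t handlers[] = { on_reset, on_start, on_stop };

void dispatch(unsigned index)
{
    if (index < sizeof handlers / sizeof handlers[0])
        handlers[index]();
}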
48,053
48,103
Is there any alternative to using % (modulus) in C/C++?
I read somewhere once that the modulus operator is inefficient on small embedded devices like 8 bit micro-controllers that do not have an integer division instruction. Perhaps someone can confirm this, but I thought it is 5-10 times slower than an integer division operation. Is there another way to do this other than keeping a counter variable and manually overflowing to 0 at the mod point? const int FIZZ = 6; for(int x = 0; x < MAXCOUNT; x++) { if(!(x % FIZZ)) print("Fizz\n"); // slow on some systems } vs: The way I am currently doing it: const int FIZZ = 6; int fizzcount = 1; for(int x = 1; x < MAXCOUNT; x++) { fizzcount++; if(fizzcount >= FIZZ) { print("Fizz\n"); fizzcount = 0; } }
Ah, the joys of bitwise arithmetic. A side effect of many division routines is the modulus - so in few cases should division actually be faster than modulus. I'm interested to see the source you got this information from. Processors with multipliers have interesting division routines using the multiplier, but you can get from the division result to the modulus with just another two steps (multiply and subtract) so it's still comparable. If the processor has a built-in division routine you'll likely see it also provides the remainder. Still, there is a small branch of number theory devoted to Modular Arithmetic which requires study if you really want to understand how to optimize a modulus operation. Modular arithmetic, for instance, is very handy for generating magic squares. So, in that vein, here's a very low level look at the math of modulus for an example of x, which should show you how simple it can be compared to division: Maybe a better way to think about the problem is in terms of number bases and modulo arithmetic. For example, your goal is to compute DOW mod 7 where DOW is the 16-bit representation of the day of the week. You can write this as: DOW = DOW_HI*256 + DOW_LO DOW%7 = (DOW_HI*256 + DOW_LO) % 7 = ((DOW_HI*256)%7 + (DOW_LO % 7)) %7 = ((DOW_HI%7 * 256%7) + (DOW_LO%7)) %7 = ((DOW_HI%7 * 4) + (DOW_LO%7)) %7 Expressed in this manner, you can separately compute the modulo 7 result for the high and low bytes. Multiply the result for the high by 4 and add it to the low and then finally compute result modulo 7. Computing the mod 7 result of an 8-bit number can be performed in a similar fashion. You can write an 8-bit number in octal like so: X = a*64 + b*8 + c Where a, b, and c are 3-bit numbers. X%7 = ((a%7)*(64%7) + (b%7)*(8%7) + c%7) % 7 = (a%7 + b%7 + c%7) % 7 = (a + b + c) % 7 since 64%7 = 8%7 = 1 Of course, a, b, and c are c = X & 7 b = (X>>3) & 7 a = (X>>6) & 7 // (actually, a is only 2-bits). The largest possible value for a+b+c is 7+7+3 = 17. So, you'll need one more octal step. The complete (untested) C version could be written like: unsigned char Mod7Byte(unsigned char X) { X = (X&7) + ((X>>3)&7) + (X>>6); X = (X&7) + (X>>3); return X==7 ? 0 : X; } I spent a few moments writing a PIC version. The actual implementation is slightly different from that described above Mod7Byte: movwf temp1 ; andlw 7 ;W=c movwf temp2 ;temp2=c rlncf temp1,F ; swapf temp1,W ;W= a*8+b andlw 0x1F addwf temp2,W ;W= a*8+b+c movwf temp2 ;temp2 is now a 6-bit number andlw 0x38 ;get the high 3 bits == a' xorwf temp2,F ;temp2 now has the 3 low bits == b' rlncf WREG,F ;shift the high bits right 4 swapf WREG,F ; addwf temp2,W ;W = a' + b' ; at this point, W is between 0 and 10 addlw -7 bc Mod7Byte_L2 Mod7Byte_L1: addlw 7 Mod7Byte_L2: return Here's a little routine to test the algorithm clrf x clrf count TestLoop: movf x,W RCALL Mod7Byte cpfseq count bra fail incf count,W xorlw 7 skpz xorlw 7 movwf count incfsz x,F bra TestLoop passed: Finally, for the 16-bit result (which I have not tested), you could write: uint16 Mod7Word(uint16 X) { return Mod7Byte(Mod7Byte(X & 0xff) + Mod7Byte(X>>8)*4); } Scott
48,094
48,102
C++ deleting a pointer to a pointer
So I have a pointer to an array of pointers. If I delete it like this: delete [] PointerToPointers; Will that delete all the pointed to pointers as well? If not, do I have to loop over all of the pointers and delete them as well, or is there an easier way to do it? My google-fu doesn't seem to give me any good answers to this question. (And yeah, I know I need to use a vector. This is one of those "catch up on C++" type assignments in school.)
Yes you have to loop over the pointers, deleting individually. Reason: What if other code had pointers to the objects in your array? The C++ compiler doesn't know if that's true or not, so you have to be explicit. For an "easier way," two suggestions: (1) Make a subroutine for this purpose so at least you won't have to write the code more than once. (2) Use the "smart pointer" design paradigm where you hold an array of objects with reference-counters, then the objects are deleted when the objects are no longer referenced by any code.
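In code, the "loop, then delete the array itself" pattern looks like this (count stands for however many pointers the array holds):

for (size_t i = 0; i < count; ++i)
    delete PointerToPointers[i]; // destroy each pointed-to object
delete [] PointerToPointers;     // then release the array of pointers itself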
48,338
48,343
Am I allowed to run a javascript runtime (like v8) on the iPhone?
According to this discussion, the iphone agreement says that it doesn't allow "loading of plugins or running interpreted code that has been downloaded". Technically, I would like to download scripts from our server (embedded in a proprietary protocol). Does this mean I wouldn't be allowed to run a runtime like v8 in an iphone app? This is probably more of a legal question.
I think your interpretation is correct - You would not be allowed to download and execute JavaScript code in v8. If there were some way to run the code in an interpreter already on the iPhone (i.e. the javascript engine in MobileSafari) then that would be permitted I think.
48,390
48,587
Eclipse spelling engine does not exist
I'm using Eclipse 3.4 (Ganymede) with CDT 5 on Windows. When the integrated spell checker doesn't know some word, it proposes (among others) the option to add the word to a user dictionary. If the user dictionary doesn't exist yet, the spell checker offers then to help configuring it and shows the "General/Editors/Text Editors/Spelling" preference pane. This preference pane however states that "The selected spelling engine does not exist", but has no control to add or install an engine. How can I put a spelling engine in existence? Update: What solved my problem was to install also the JDT. This solution was brought up on 2008-09-07 and was accepted, but is now missing.
Are you using the C/C++ Development Tools exclusively? The Spellcheck functionality is dependent upon the Java Development Tools being installed also. The spelling engine is scheduled to be pushed down from JDT to the Platform, so you can get rid of the Java related bloat soon enough. :)
48,426
112,078
How could I graphically display the memory layout from a .map file?
My gcc build toolchain produces a .map file. How do I display the memory map graphically?
Here's the beginnings of a script in Python. It loads the map file into a list of Sections and Symbols (first half). It then renders the map using HTML (or do whatever you want with the sections and symbols lists). You can control the script by modifying these lines: with open('t.map') as f: colors = ['9C9F84', 'A97D5D', 'F7DCB4', '5C755E'] total_height = 32.0 map2html.py from __future__ import with_statement import re class Section: def __init__(self, address, size, segment, section): self.address = address self.size = size self.segment = segment self.section = section def __str__(self): return self.section+"" class Symbol: def __init__(self, address, size, file, name): self.address = address self.size = size self.file = file self.name = name def __str__(self): return self.name #=============================== # Load the Sections and Symbols # sections = [] symbols = [] with open('t.map') as f: in_sections = True for line in f: m = re.search('^([0-9A-Fx]+)\s+([0-9A-Fx]+)\s+((\[[ 0-9]+\])|\w+)\s+(.*?)\s*$', line) if m: if in_sections: sections.append(Section(eval(m.group(1)), eval(m.group(2)), m.group(3), m.group(5))) else: symbols.append(Symbol(eval(m.group(1)), eval(m.group(2)), m.group(3), m.group(5))) else: if len(sections) > 0: in_sections = False #=============================== # Gererate the HTML File # colors = ['9C9F84', 'A97D5D', 'F7DCB4', '5C755E'] total_height = 32.0 segments = set() for s in sections: segments.add(s.segment) segment_colors = dict() i = 0 for s in segments: segment_colors[s] = colors[i % len(colors)] i += 1 total_size = 0 for s in symbols: total_size += s.size sections.sort(lambda a,b: a.address - b.address) symbols.sort(lambda a,b: a.address - b.address) def section_from_address(addr): for s in sections: if addr >= s.address and addr < (s.address + s.size): return s return None print "<html><head>" print " <style>a { color: black; text-decoration: none; font-family:monospace }</style>" print "<body>" print "<table cellspacing='1px'>" for sym in symbols: section = section_from_address(sym.address) height = (total_height/total_size) * sym.size font_size = 1.0 if height > 1.0 else height print "<tr style='background-color:#%s;height:%gem;line-height:%gem;font-size:%gem'><td style='overflow:hidden'>" % \ (segment_colors[section.segment], height, height, font_size) print "<a href='#%s'>%s</a>" % (sym.name, sym.name) print "</td></tr>" print "</table>" print "</body></html>" And here's a bad rendering of the HTML it outputs:
48,496
48,508
How to teach a crash course on C++?
In a few weeks, we'll be teaching a crash course on C++ for Java programmers straight out of college. They have little or no experience yet with C or C++. Previous editions of this course were just 1 or 2 half-day sessions and covered topics including: new language features, e.g. header vs. implementation pointers and references memory management operator overloading templates the standard libraries, e.g. the C library headers basic iostreams basic STL using libraries (headers, linking) they'll be using Linux, so Basic Linux console commands GCC and how to interpret its error messages Makefiles and Autotools basic debugger commands any topic they ask about During the course, each person individually writes, compiles, runs, and debugs simple programs using the newly introduced features. Is this the best way to learn? Which topics do you consider most crucial? Which topics should be added or removed? Which topics just can't be covered adequately in a short time?
I can only once again point to Stroustrup and preach: Don't teach the C subset! It's important, but not for beginners! C++ is complex enough as it is, and the standard library classes, especially the STL, are much more important and (at least superficially) easier to understand than the C subset of C++. Same goes for pointers and heap memory allocation, incidentally. Of course they're important, but only after having taught the STL containers. Another important concept that new students have to get their head around is the concept of different compilation units, the One Definition Rule (because if you don't know it you won't be able to decipher error messages) and headers. This is actually quite a barrier and one that has to be breached early on. Apart from the language features, the most important thing to be taught is how to understand the C++ compiler and how to get help. Getting help (i.e. knowing how to search for the right information) in my experience is the single most important thing that has to be taught about C++. I've had quite good experiences with this order of teaching in the past. /EDIT: If you happen to know any German, take a look at http://madrat.net/coding/cpp/skript, part of a very short introduction used in one of my courses.
48,647
48,663
Does ScopeGuard use really lead to better code?
I came across this article written by Andrei Alexandrescu and Petru Marginean many years ago, which presents and discusses a utility class called ScopeGuard for writing exception-safe code. I'd like to know if coding with these objects truly leads to better code or if it obfuscates error handling, in that perhaps the guard's callback would be better presented in a catch block? Does anyone have any experience using these in actual production code?
It definitely improves your code. Your tentatively formulated claim, that it's obscure and that code would benefit from a catch block, is simply not true in C++ because RAII is an established idiom. Resource handling in C++ is done by resource acquisition, and cleanup is done by implicit destructor calls. On the other hand, explicit catch blocks would bloat the code and introduce subtle errors because the code flow gets much more complex and resource handling has to be done explicitly. RAII (including ScopeGuards) isn't an obscure technique in C++ but firmly established best practice.
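For reference, the core of the idiom is small. A C++11-flavoured sketch of a scope guard (the article's original predates lambdas and builds the same thing from template machinery instead):

#include <utility>

template <typename F>
class ScopeGuard {
public:
    explicit ScopeGuard(F f) : f_(std::move(f)), active_(true) {}
    ~ScopeGuard() { if (active_) f_(); } // rollback runs on scope exit
    void dismiss() { active_ = false; }  // call after success to cancel the rollback
    ScopeGuard(ScopeGuard&& o) : f_(std::move(o.f_)), active_(o.active_) { o.active_ = false; }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    F f_;
    bool active_;
};

template <typename F>
ScopeGuard<F> makeGuard(F f) { return ScopeGuard<F>(std::move(f)); }

// usage sketch: auto guard = makeGuard([&]{ connection.rollback(); });
//               ... do the work ...; guard.dismiss(); // success path keeps the changes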
49,035
49,056
What does the const operator mean when used with a method in C++?
Given a declaration like this: class A { public: void Foo() const; }; What does it mean? Google turns up this: Member functions should be declared with the const keyword after them if they can operate on a const (this) object. If the function is not declared const, it cannot be applied to a const object, and the compiler will give an error message. But I find that somewhat confusing; can anyone out there put it in better terms? Thanks.
Consider a variation of your class A. class A { public: void Foo() const; void Moo(); private: int m_nState; // Could add mutable keyword if desired int GetState() const { return m_nState; } void SetState(int val) { m_nState = val; } }; const A *A1 = new A(); A *A2 = new A(); A1->Foo(); // OK A2->Foo(); // OK A1->Moo(); // Error - Not allowed to call non-const function on const object instance A2->Moo(); // OK The const keyword on a function declaration indicates to the compiler that the function is contractually obligated not to modify the state of A. Thus you are unable to call non-const functions within A::Foo nor change the value of member variables. To illustrate, Foo() may not invoke A::SetState as it is declared non-const, A::GetState however is ok because it is explicitly declared const. The member m_nState may not be changed either unless declared with the keyword mutable. One example of this usage of const is for 'getter' functions to obtain the value of member variables. @1800 Information: I forgot about mutable! The mutable keyword instructs the compiler to accept modifications to the member variable which would otherwise cause a compiler error. It is used when the function needs to modify state but the object is considered logically consistent (constant) regardless of the modification.
49,046
49,055
Different sizeof results
Why does n not equal 8 in the following function? void foo(char cvalue[8]) { int n = sizeof (cvalue); } But n does equal 8 in this version of the function: void bar() { char cvalue[8]; int n = sizeof (cvalue); }
Because you can't pass entire arrays as function parameters in C. You're actually passing a pointer to it; the brackets are syntactic sugar. There are no guarantees the array you're pointing to has size 8, since you could pass this function any character pointer you want. // These all do the same thing void foo(char cvalue[8]) void foo(char cvalue[]) void foo(char *cvalue)
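If you want the compiler to actually see (and enforce) the size, C++ lets you pass the array by reference instead — a sketch:

// C++ only: binds solely to arrays of exactly 8 chars
void foo(char (&cvalue)[8])
{
    int n = sizeof cvalue; // n == 8
}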
49,098
49,130
Can cout alter variables somehow?
So I have a function that looks something like this: float function(){ float x = SomeValue; return x / SomeOtherValue; } At some point, this function overflows and returns a really large negative value. To try and track down exactly where this was happening, I added a cout statement so that the function looked like this: float function(){ float x = SomeValue; cout << x; return x / SomeOtherValue; } and it worked! Of course, I solved the problem altogether by using a double. But I'm curious as to why the function worked properly when I couted it. Is this typical, or could there be a bug somewhere else that I'm missing? (If it's any help, the value stored in the float is just an integer value, and not a particularly big one. I just put it in a float to avoid casting.)
Welcome to the wonderful world of floating point. The answer you get will likely depend on the floating point model you compiled the code with. This happens because of the difference between the IEEE spec and the hardware the code is running on. Your CPU likely has 80 bit floating point registers that get used to hold the 32-bit float value. This means that there is far more precision while the value stays in a register than when it is forced to a memory address (also known as 'homing' the register). When you passed the value to cout the compiler had to write the floating point to memory, and this results in a loss of precision and interesting behaviour WRT overflow cases. See the MSDN documentation on VC++ floating point switches. You could try compiling with /fp:strict and seeing what happens.
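If you want that rounding effect without the printing, one common trick (a sketch, and not guaranteed to be the right fix for your particular overflow) is to force the value through memory yourself with a volatile store:

float function() {
    volatile float x = SomeValue; // the store rounds to 32-bit precision, as the cout call did
    return x / SomeOtherValue;
}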
49,137
49,148
Calling python from a c++ program for distribution
I would like to call Python script files from my C++ program. I am not sure that the people I will distribute to will have Python installed.
Boost has a python interface library which could help you. Boost.Python
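A minimal embedding sketch using Boost.Python (the script name is illustrative). Note that embedding still links against the Python runtime, so you would ship the Python DLL/shared library alongside your program to cover users without Python installed:

#include <boost/python.hpp>

int main()
{
    Py_Initialize(); // start the embedded interpreter
    boost::python::object main_module = boost::python::import("__main__");
    boost::python::object main_ns = main_module.attr("__dict__");
    boost::python::exec_file("script.py", main_ns, main_ns);
    return 0;
}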
49,168
49,172
How can I embed Perl inside a C++ application?
I would like to call Perl script files from my C++ program. I am not sure that the people I will distribute to will have Perl installed. Basically I'm looking for a .lib file that I can use that has an Apache-like distribution license.
You can embed perl into your app. Perl Embedding by John Quillan C++ wrapper around Perl C API
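A minimal embedding sketch along the lines of the standard perlembed example (script name illustrative). As with any embedded interpreter, you would redistribute the Perl runtime library with your application so users don't need Perl preinstalled:

#include <EXTERN.h>
#include <perl.h>

static PerlInterpreter *my_perl;

int main(int argc, char **argv, char **env)
{
    char *embedding[] = { "", "script.pl" };
    PERL_SYS_INIT3(&argc, &argv, &env);
    my_perl = perl_alloc();
    perl_construct(my_perl);
    perl_parse(my_perl, NULL, 2, embedding, NULL);
    perl_run(my_perl);
    perl_destruct(my_perl);
    perl_free(my_perl);
    PERL_SYS_TERM();
    return 0;
}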
49,211
345,696
How can I use a key blob generated from Win32 CryptoAPI in my .NET application?
I have an existing application that is written in C++ for Windows. This application uses the Win32 CryptoAPI to generate a TripleDES session key for encrypting/decrypting data. We're using the exponent of one trick to export the session key out as a blob, which allows the blob to be stored somewhere in a decrypted format. The question is how can we use this in our .NET application (C#). The framework encapsulates/wraps much of what the CryptoAPI is doing. Part of the problem is the CryptAPI states that the TripleDES algorithm for the Microsoft Enhanced Cryptographic Provider is 168 bits (3 keys of 56 bits). However, the .NET framework states their keys are 192 bits (3 keys of 64 bits). Apparently, the 3 extra bytes in the .NET framework is for parity? Anyway, we need to read the key portion out of the blob and somehow be able to use that in our .NET application. Currently we are not getting the expected results when attempting to use the key in .NET. The decryption is failing miserably. Any help would be greatly appreciated. Update: I've been working on ways to resolve this and have come up with a solution that I will post in time. However, still would appreciate any feedback from others.
Intro I'm finally getting around to posting the solution. I hope it provides some help to others out there that might be doing similar type things. There really isn't much reference to doing this elsewhere. Prerequisites In order for a lot of this to make sense it's necessary to read the exponent of one trick, which allows you to export a session key out to a blob (a well-known byte structure). One can then do what they wish with this byte stream, but it holds the all-important key. MSDN Documentation is Confusing In this particular example, I'm using the Microsoft Enhanced Cryptographic Provider, with the Triple DES (CALG_3DES) algorithm. The first thing that threw me for a loop was the fact that the key length is listed at 168 bits, with a block length of 64 bits. How can the key length be 168? Three keys of 56 bits? What happens to the other byte? So with that information I started to read elsewhere how the last byte is really parity and for whatever reason CryptoAPI strips that off. Is that really the case? Seems kind of crazy that they would do that, but OK. Consumption of Key in .NET Using the TripleDESCryptoServiceProvider, I noticed the remarks in the docs indicated that: This algorithm supports key lengths from 128 bits to 192 bits in increments of 64 bits. So if CryptoAPI has key lengths of 168, how will I get that into .NET, which only supports multiples of 64? Therefore, the .NET side of the API takes parity into account, where the CryptoAPI does not. As one could imagine... confused was I. So with all of this, I'm trying to figure out how to reconstruct the key on the .NET side with the proper parity information. Doable, but not very fun... let's just leave it at that. Once I got all of this in place, everything ended up failing with a CAPITAL F. Still with me? Good, because I just fell off my horse again. Light Bulbs and Fireworks Lo and behold, as I'm scraping MSDN for every last bit of information I find a conflicting piece in the Win32 CryptExportKey function. Lo and behold I find this piece of invaluable information: For any of the DES key permutations that use a PLAINTEXTKEYBLOB, only the full key size, including parity bit, may be exported. The following key sizes are supported. Algorithm Supported key size CALG_DES 64 bits CALG_3DES_112 128 bits CALG_3DES 192 bits So it does export a key that is a multiple of 64 bits! Woohoo! Now to fix the code on the .NET side. .NET Import Code Tweak The byte order is important to keep in mind when importing a byte stream that contains a key that was exported as a blob from the CryptoAPI. The two APIs do not use the same byte order; therefore, as @nic-strong indicates, reversing the byte array is essential before actually trying to use the key. Other than that, things work as expected. Simply solved: Array.Reverse( keyByteArray ); Conclusion I hope this helps somebody out there. I spent way too much time trying to track this down. Leave any comments if you have further questions and I can attempt to help fill in any details. Happy Crypto!
49,258
49,266
What is the cleanest way to direct wxWidgets to always use wxFileConfig?
I am writing my first serious wxWidgets program. I'd like to use the wxConfig facility to make the program's user options persistent. However I don't want wxConfigBase to automatically use the Windows registry. Even though I'm initially targeting Windows, I'd prefer to use a configuration (eg .ini) file. Does anyone know a clean and simple way of doing this ? Thanks.
According to the source of wx/config.h file, all you need is to define the wxUSE_CONFIG_NATIVE symbol to 0 in your project and then it will always use wxFileConfig.
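An alternative sketch, if you'd rather not touch the build configuration: install a wxFileConfig explicitly at startup, after which wxConfigBase::Get() returns the file-based config (app and file names are illustrative):

wxFileConfig *config = new wxFileConfig(wxT("MyApp"), wxEmptyString,
                                        wxT("myapp.ini"), wxEmptyString,
                                        wxCONFIG_USE_LOCAL_FILE);
wxConfigBase::Set(config);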
50,120
50,168
Database abstraction layers for (Visual) C++
What options exist for accessing different databases from C++? Put differently, what alternatives are there to ADO? What are the pros and cons?
Microsoft ODBC. The MFC ODBC classes such as CDatabase. OleDB (via COM). And you can always go through the per-RDBMS native libraries (for example, the SQL Server native library) DAO (don't). 3rd party ORM providers. I would recommend going through ODBC or OleDB by default. Native libraries really restrict you, DAO is no fun, there aren't a lot of great 3rd-party ORM for C++/Windows.
50,311
56,899
create and stream large XML document in C++
I have some code that creates a fairly large XML DOM and writes it off to a file (up to 50-100 MB). It basically creates the DOM and then calls toString on it and writes it out with ofstream. Is there a way to get streaming output of the generated DOM so that it doesn't create the whole structure in memory all at once and then copy it, etc.? I will not modify any node after I create it, so it could write the node out and free up the memory right away. I could write my own XML class that does the XML construction, but... I don't think that's a good idea, since I'll probably miss something when it comes down to escaping etc.
OK, it turns out libxml2 has a streaming API: http://xmlsoft.org/examples/testWriter.c It's a little old-style (very C-ish), but you can write your own wrapper around it.
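To give an idea, a minimal sketch modeled on that example (error checks elided, file name illustrative):

#include <libxml/xmlwriter.h>

void write_big_doc(void)
{
    xmlTextWriterPtr writer = xmlNewTextWriterFilename("out.xml", 0);
    xmlTextWriterStartDocument(writer, NULL, "UTF-8", NULL);
    xmlTextWriterStartElement(writer, BAD_CAST "root");
    for (int i = 0; i < 1000000; ++i) {
        // Each element is escaped and written out as you go; nothing like a
        // full DOM is ever held in memory.
        xmlTextWriterWriteElement(writer, BAD_CAST "item", BAD_CAST "value");
    }
    xmlTextWriterEndElement(writer);   /* closes <root> */
    xmlTextWriterEndDocument(writer);
    xmlFreeTextWriter(writer);
}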
51,032
51,080
Is there a difference between foo(void) and foo() in C++ or C?
Consider these two function definitions: void foo() { } void foo(void) { } Is there any difference between these two? If not, why is the void argument there? Aesthetic reasons?
In C: void foo() means "a function foo taking an unspecified number of arguments of unspecified type" void foo(void) means "a function foo taking no arguments" In C++: void foo() means "a function foo taking no arguments" void foo(void) means "a function foo taking no arguments" By writing foo(void), therefore, we achieve the same interpretation across both languages and make our headers multilingual (though we usually need to do some more things to the headers to make them truly cross-language; namely, wrap them in an extern "C" if we're compiling C++).
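For illustration, the usual multilingual-header pattern looks like this (the header and function names are hypothetical):

/* mylib.h */
#ifdef __cplusplus
extern "C" {
#endif

void foo(void); /* reads as "no arguments" to both C and C++ compilers */

#ifdef __cplusplus
}
#endif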
51,266
51,427
High availability and scalable platform for Java/C++ on Solaris
I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state. We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design. The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured. Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context. I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change. We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this. Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match. Besides scaling for performance, we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage. If you had to undertake the task I've been given, what would you do? EDIT: Based on the data provided by @john channing, I'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be Java-only.
The first thing I would do is construct a model of the system to map the data flow and try to understand precisely where the bottleneck lies. If you can model your system as a pipeline, then you should be able to use the theory of constraints (most of the literature is about optimising business processes but it applies equally to software) to continuously improve performance and eliminate the bottleneck. Next I would collect some hard empirical data that accurately characterises the performance of your system. It is something of a cliché that you cannot manage what you cannot measure, but I have seen many people attempt to optimise a software system based on hunches and fail miserably. Then I would use the Pareto Principle (80/20 rule) to choose the small number of things that will produce the biggest gains and focus only on those. To scale a Java application horizontally, I have used Oracle Coherence extensively. Although some dismiss it as a very expensive distributed hashtable, the functionality is much richer than that and you can, for example, directly access data in the cache from C++ code. Other alternatives for horizontally scaling your Java code would be GigaSpaces, IBM ObjectGrid or GemStone GemFire. If your C++ code is stateless and is used purely for number crunching, you could look at distributing the process using IceGrid, which has bindings for all of the languages you are using.
51,436
51,460
How to host licensed .Net controls in unmanaged C++ app?
I need to host and run managed controls inside of a purely unmanaged C++ app. How do I do this? Running unlicensed controls is typically simple:

if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj)))
{
    // do something with obj
}

When using a licensed control, however, we need to somehow embed a .licx file into the project (ref application licensing). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource, but either we were not using the correct invocation, or it failed silently. Any help would be appreciated.
The answer depends on the particular component you're using. Contact your component help desk or read the documentation on what it takes to deploy the component. Basically, component developers are free to implement licensing as they deem fit. With the .licx file, the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted). So if GetKey checks for a .licx file in the component directory, you just need to make sure it's there. AFAIK the client assembly doesn't need to do anything except instantiate the control. Also, if you post the name of the component and the lc.exe command you're using, people could take a look.
51,592
51,602
Is there a need to destroy char * = "string" or char * = new char[6]?
I assume that char* = "string" is the same as char* = new char[6]. I believe these strings are created on the heap instead of the stack. So do I need to destroy them or free their memory when I'm done using them, or do they get destroyed by themselves?
The two are not the same, and that changes the answer. You only need to manually release memory that you allocated yourself using the malloc function (in C) or the new operator (in C++); so char* p = new char[6] does require a matching delete[] p. A string literal like "string", on the other hand, is not on the heap: it lives in static (typically read-only) storage for the lifetime of the program, and you must never free or delete a pointer to it.
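A short sketch of the cases side by side:

void demo()
{
    const char* a = "string"; // points at a string literal in static storage:
                              // never free or delete it
    char* b = new char[7];    // heap allocation: you own it, so release it
    delete[] b;               // with delete[] (not plain delete)
    char c[] = "string";      // stack array initialized by copying the literal:
                              // cleaned up automatically at scope exit
}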
51,687
54,341
Lightbox style dialogs in MFC App
Has anyone implemented Lightbox-style background dimming on a modal dialog box in an MFC/non-.NET app? I think the procedure would have to be something like:
1. Get the dialog's parent HWND or CWnd*.
2. Get the rect of the parent window and draw a translucent overlay over that window.
3. Allow the dialog to do its modal draw routine, e.g. DoModal().
Are there any existing libraries/frameworks to do this, or what's the best way to drop a translucent overlay in MFC? Edit: Here's a mockup of what I'm trying to achieve, if you don't know what 'lightbox style' means. Some App: with a lightbox dialog box
Here's what I did*, based on Brian's links.

First create a dialog resource with the properties: border FALSE; 3D look FALSE; client edge FALSE; Popup style; static edge FALSE; Transparent TRUE; Title bar FALSE. You should end up with a dialog window with no frame or anything, just a grey box.

Override the Create function to look like this:

BOOL LightBoxDlg::Create(UINT nIDTemplate, CWnd* pParentWnd)
{
    if (!CDialog::Create(nIDTemplate, pParentWnd))
        return false;

    RECT rect;
    RECT size;
    GetParent()->GetWindowRect(&rect);
    size.top = 0;
    size.left = 0;
    size.right = rect.right - rect.left;
    size.bottom = rect.bottom - rect.top;
    SetWindowPos(m_pParentWnd, rect.left, rect.top, size.right, size.bottom, NULL);

    HWND hWnd = m_hWnd;
    SetWindowLong(hWnd, GWL_EXSTYLE, GetWindowLong(hWnd, GWL_EXSTYLE) | WS_EX_LAYERED);

    typedef DWORD (WINAPI *PSLWA)(HWND, DWORD, BYTE, DWORD);
    PSLWA pSetLayeredWindowAttributes;
    HMODULE hDLL = LoadLibrary(_T("user32"));
    pSetLayeredWindowAttributes = (PSLWA)GetProcAddress(hDLL, "SetLayeredWindowAttributes");
    if (pSetLayeredWindowAttributes != NULL)
    {
        /*
         * The second parameter, RGB(255,255,255), sets the color key to white.
         * The LWA_COLORKEY flag indicates that the color key is valid;
         * LWA_ALPHA indicates that the alpha-blend parameter is valid
         * (here 100 is used).
         */
        pSetLayeredWindowAttributes(hWnd, RGB(255,255,255), 100, LWA_COLORKEY | LWA_ALPHA);
    }
    return true;
}

Then create a small black bitmap in an image editor (say 48x48) and import it as a bitmap resource (in this example IDB_BITMAP1).

Override the WM_ERASEBKGND message with:

BOOL LightBoxDlg::OnEraseBkgnd(CDC* pDC)
{
    BOOL bRet = CDialog::OnEraseBkgnd(pDC);

    RECT rect;
    RECT size;
    m_pParentWnd->GetWindowRect(&rect);
    size.top = 0;
    size.left = 0;
    size.right = rect.right - rect.left;
    size.bottom = rect.bottom - rect.top;

    CBitmap cbmp;
    cbmp.LoadBitmapW(IDB_BITMAP1);
    BITMAP bmp;
    cbmp.GetBitmap(&bmp);
    CDC memDc;
    memDc.CreateCompatibleDC(pDC);
    memDc.SelectObject(&cbmp);
    pDC->StretchBlt(0, 0, size.right, size.bottom, &memDc, 0, 0, bmp.bmWidth, bmp.bmHeight, SRCCOPY);
    return bRet;
}

Instantiate it in the DoModal of the desired dialog, like a modal dialog, i.e. on the stack (or heap if desired), call its Create manually, show it, then create your actual modal dialog over the top of it:

INT_PTR CAboutDlg::DoModal()
{
    LightBoxDlg Dlg(m_pParentWnd); // make sure to pass in the parent of the new dialog
    Dlg.Create(LightBoxDlg::IDD);
    Dlg.ShowWindow(SW_SHOW);
    BOOL ret = CDialog::DoModal();
    Dlg.ShowWindow(SW_HIDE);
    return ret;
}

This results in something exactly like my mock-up above.

*There are still places for improvement, like doing it without making a dialog box to begin with, and some other general tidy-ups.
51,859
51,920
Using Makefile instead of Solution/Project files under Visual Studio (2005)
Does anyone have experience using makefiles for Visual Studio C++ builds (under VS 2005) as opposed to using the project/solution setup? For us, the way that projects/solutions work is not intuitive and leads to configuration explosion when you are trying to tweak builds with specific compile-time flags. Under Unix, it's pretty easy to set up a makefile that has its default options overridden by user settings (or other configuration settings). But doing these types of things seems difficult in Visual Studio. By way of example, we have a project that needs to be built for 3 different platforms. Each platform might have several configurations (for example debug, release, and several others). One of my goals on a newly formed project is to have a solution that can have all platform builds living together, which makes building and testing code changes easier since you aren't having to open 3 different solutions just to test your code. But Visual Studio will require 3 * (number of base configurations) configurations, i.e. PC Debug, X360 Debug, PS3 Debug, etc. It seems like a makefile solution is much better here. Wrapped with some basic batch files or scripts, it would be easy to keep the configuration explosion to a minimum and only maintain a small set of files for all of the different builds that we have to do. However, I have no experience with makefiles under Visual Studio and would like to know if others have experiences or issues they can share. Thanks. (Post edited to mention that these are C++ builds.)
I've found some benefits to makefiles with large projects, mainly related to unifying the location of the project settings. It's somewhat easier to manage the list of source files, include paths, preprocessor defines and so on, if they're all in a makefile or other build config file. With multiple configurations, adding an include path means you need to make sure you update every config manually through Visual Studio's fiddly project properties, which can get pretty tedious as a project grows in size. Projects which use a lot of custom build tools can be easier to manage too, such as if you need to compile pixel / vertex shaders, or code in other languages without native VS support. You'll still need to have various different project configurations however, since you'll need to differentiate the invocation of the build tool for each config (e.g. passing in different command line options to make). Immediate downsides that spring to mind: Slower builds: VS isn't particularly quick at invoking external tools, or even working out whether it needs to build a project in the first place. Awkward inter-project dependencies: It's fiddly to set up so that a dependee causes the base project to build, and fiddlier to make sure that they get built in the right order. I've had some success getting SCons to do this, but it's always a challenge to get working well. Loss of some useful IDE features: Edit & Continue being the main one! In short, you'll spend less time managing your project configurations, but more time coaxing Visual Studio to work properly with it.
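For a flavor of what this buys you, here is a minimal NMAKE sketch (all names are illustrative): the defaults live in one place, and a user or wrapper script can override them from the command line, e.g. nmake EXTRA_CXXFLAGS=/DPLATFORM_X360.

# Makefile (for nmake)
CXX = cl
CXXFLAGS = /nologo /EHsc /W4 $(EXTRA_CXXFLAGS)
OBJS = main.obj device.obj

app.exe: $(OBJS)
	link /nologo /out:app.exe $(OBJS)

# Inference rule: how to turn any .cpp into a .obj
.cpp.obj:
	$(CXX) $(CXXFLAGS) /c $<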
51,949
52,009
How to get file extension from string in C++
Given a string "filename.conf", how do I verify the extension part? I need a cross-platform solution.
You have to make sure you take care of file names with more than one dot. Example: c:\.directoryname\file.name.with.too.many.dots.ext would not be handled correctly by a naive strchr or find, which stop at the first dot; you want the last one (strrchr / rfind), and only if it comes after the last path separator. My favorite would be the Boost Filesystem library, which has an extension(path) function.
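If you don't want the Boost dependency, a hand-rolled sketch that handles the dotted-directory case could look like this:

#include <string>

std::string get_extension(const std::string& path)
{
    std::string::size_type sep = path.find_last_of("/\\");
    std::string::size_type dot = path.rfind('.');
    if (dot == std::string::npos)
        return "";                      // no dot at all
    if (sep != std::string::npos && dot < sep)
        return "";                      // the last dot is in a directory name
    return path.substr(dot);            // includes the '.', e.g. ".ext"
}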
52,357
52,365
What is the point of clog?
I've been wondering, what is the point of clog? As near as I can tell, clog is the same as cerr but with buffering so it is more efficient. Usually stderr is the same as stdout, so clog is the same as cout. This seems pretty lame to me, so I figure I must be misunderstanding it. If I have log messages going out to the same place I have error messages going out to (perhaps something in /var/log/messages), then I probably am not writing too much out (so there isn't much lost by using non-buffered cerr). In my experience, I want my log messages up to date (not buffered) so I can help find a crash (so I don't want to be using the buffered clog). Apparently I should always be using cerr. I'd like to be able to redirect clog inside my program. It would be useful to redirect cerr so that when I call a library routine I can control where cerr and clog go to. Can some compilers support this? I just checked DJGPP and stdout is defined as the address of a FILE struct, so it is illegal to do something like "stdout = freopen(...)". Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr? Is the only difference between clog and cerr the buffering? How should I implement (or find) a more robust logging facility (links please)?
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr? Yes. You want the rdbuf function, which works the same way for clog and cerr:

ofstream ofs("logfile");
streambuf* oldbuf = cout.rdbuf(ofs.rdbuf()); // rdbuf returns the old buffer
cout << "Goes to file." << endl;
cout.rdbuf(oldbuf); // restore it before ofs is destroyed

Is the only difference between clog and cerr the buffering? As far as I know, yes.
52,506
52,617
C++ Template Ambiguity
A friend and I were discussing C++ templates. He asked me what this should do:

#include <iostream>

template <bool>
struct A {
    A(bool) { std::cout << "bool\n"; }
    A(void*) { std::cout << "void*\n"; }
};

int main() {
    A<true> *d = 0;
    const int b = 2;
    const int c = 1;
    new A< b > (c) > (d);
}

The last line in main has two reasonable parses. Is 'b' the template argument, or is b > (c) the template argument? Although it is trivial to compile this and see what we get, we were wondering what resolves the ambiguity.
AFAIK it would be compiled as new A<b>(c) > d. This is the only reasonable way to parse it, IMHO: if the parser couldn't assume under normal circumstances that a > ends a template argument list, that would result in much more ambiguity. If you want it the other way, you should have written: new A<(b > c)>(d);
52,557
57,553
profile-guided optimization (C)
Does anyone know this compiler feature? It seems GCC supports it. How does it work? What is the potential gain? In which cases is it good? Inner loops? (This question is specific, not about optimization in general. Thanks.)
It works by placing extra code to count the number of times each codepath is taken. When you compile a second time, the compiler uses the knowledge gained about the execution of your program that it could only guess at before. There are a couple of things PGO can work toward: Deciding which functions should be inlined or not, depending on how often they are called. Deciding how to place hints about which branch of an "if" statement should be predicted, based on the percentage of calls going one way or the other. Deciding how to optimize loops, based on how many iterations get taken each time the loop is called. You never really know how much these things can help until you test it.
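With GCC the workflow looks roughly like this (file names are placeholders):

gcc -O2 -fprofile-generate -o myprog main.c   # 1. build an instrumented binary
./myprog typical_input.dat                    # 2. run it on representative input
gcc -O2 -fprofile-use -o myprog main.c        # 3. rebuild using the recorded profile

The quality of the result depends heavily on step 2: the training run should exercise the codepaths your real workload takes.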
52,714
52,734
STL vector vs map erase
In the STL almost all containers have an erase function. The question I have is: in a vector, the erase function returns an iterator pointing to the next element in the vector, but the map container does not do this; instead it returns void. Does anyone know why there is this inconsistency?
See http://www.sgi.com/tech/stl/Map.html "Map has the important property that inserting a new element into a map does not invalidate iterators that point to existing elements. Erasing an element from a map also does not invalidate any iterators, except, of course, for iterators that actually point to the element that is being erased." The reason vector::erase returns an iterator is so that you can iterate over the vector, erasing elements as you go. If erasing an item doesn't invalidate existing iterators, there is no need to do this.
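For illustration, here is the standard erase-while-iterating idiom for both containers (a sketch):

#include <map>
#include <vector>

void prune(std::map<int, int>& m, std::vector<int>& v)
{
    // map::erase returns void here, but iterators to *other* elements stay
    // valid, so advance past the element before erasing it:
    for (std::map<int, int>::iterator it = m.begin(); it != m.end(); )
    {
        if (it->second == 0)
            m.erase(it++);      // post-increment: erase the old position
        else
            ++it;
    }

    // vector::erase invalidates iterators at and after the erased element,
    // so you must continue from the iterator it returns:
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); )
    {
        if (*it == 0)
            it = v.erase(it);
        else
            ++it;
    }
}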
53,757
53,759
Which compiles to faster code: "n * 3" or "n+(n*2)"?
Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"? Assume that n is either an int or a long, and that it is running on a modern Win32 Intel box. Would this be different if there was some dereferencing involved; that is, which of these would be faster? long a; long *pn; long ans; ... *pn = some_number; ans = *pn * 3; Or ans = *pn+(*pn*2); Or is it something one need not worry about, as optimizing compilers are likely to account for this in any case?
IMO such micro-optimization is not necessary unless you work with some exotic compiler. I would put readability first.
53,811
53,826
How do you normally set up your compiler's optimization settings?
Do you normally set your compiler to optimize for maximum speed or smallest code size? Or do you manually configure individual optimization settings? Why? I notice that most of the time people tend to just leave the compiler optimization settings in their default state, which with Visual C++ means max speed. I've always felt that the default settings have more to do with looking good on benchmarks, which tend to be small programs that fit entirely within the L2 cache, than with what's best for overall performance, so I normally set it to optimize for smallest size.
As a Gentoo user, I have tried quite a few optimizations on the complete OS, and there have been endless discussions on the Gentoo forums about it. Some good flags for GCC can be found in the wiki. In short, optimizing for size worked best on an old Pentium 3 laptop with limited RAM, but on my main desktop machine with a Core 2 Duo, -O2 gave better results overall. There's also a small script if you are interested in the x86 (32-bit) specific flags that are the most optimized. If you use GCC and really want to optimize a specific application, try ACOVEA. It runs a set of benchmarks, then recompiles them with all possible combinations of compile flags. There's an example using Huffman encoding on the site (lower is better):

A relative graph of fitnesses:
Acovea Best-of-the-Best: ************************************** (2.55366)
Acovea Common Options:   ******************************************* (2.86788)
-O1:                     ********************************************** (3.0752)
-O2:                     *********************************************** (3.12343)
-O3:                     *********************************************** (3.1277)
-O3 -ffast-math:         ************************************************** (3.31539)
-Os:                     ************************************************* (3.30573)

(Note that it found -Os to be the slowest on this Opteron system.)
53,849
53,863
How do I tokenize a string in C++?
Java has a convenient split method: String str = "The quick brown fox"; String[] results = str.split(" "); Is there an easy way to do this in C++?
C++ standard library algorithms are pretty universally based around iterators rather than concrete containers. Unfortunately this makes it hard to provide a Java-like split function in the C++ standard library, even though nobody disputes that this would be convenient. But what would its return type be? std::vector<std::basic_string<…>>? Maybe, but then we're forced to perform (potentially redundant and costly) allocations. Instead, C++ offers a plethora of ways to split strings based on arbitrarily complex delimiters, but none of them is encapsulated as nicely as in other languages. The numerous ways fill whole blog posts. At its simplest, you could iterate using std::string::find until you hit std::string::npos, and extract the contents using std::string::substr. A more fluid (and idiomatic, but basic) version for splitting on whitespace would use a std::istringstream:

auto iss = std::istringstream{"The quick brown fox"};
auto str = std::string{};

while (iss >> str) {
    process(str);
}

Using std::istream_iterators, the contents of the string stream could also be copied into a vector using its iterator range constructor. Multiple libraries (such as Boost.Tokenizer) offer specific tokenizers. More advanced splitting requires regular expressions. C++ provides the std::regex_token_iterator for this purpose in particular:

using namespace std::string_literals; // for the "…"s suffix below

auto const str = "The quick brown fox"s;
auto const re = std::regex{R"(\s+)"};
auto const vec = std::vector<std::string>(
    std::sregex_token_iterator{begin(str), end(str), re, -1},
    std::sregex_token_iterator{}
);