191,814
191,836
Can pipes be used across LAN computers?
Can pipes be used across LAN computers? In particular I'm looking for Windows, but if you have more info on other platforms, that will also help others who find this thread later.
Yes, they can. Named pipes can be used to provide communication between processes on the same computer or between processes on different computers across a network. If the server service is running, all named pipes are accessible remotely. See http://msdn.microsoft.com/en-us/library/aa365590.aspx
191,980
191,992
In an MFC application, what's the easiest way to copy a file from one directory to another?
Should I create two CFile objects and copy one into the other character by character? Or is there something in the library that will do this for me?
I would just use the CopyFile Win32 API function, but the example code in the CFile::Open documentation shows how to copy files with CFile (using pretty much the method you suggest).
192,021
192,046
Why are pipes considered dangerous to use in Windows/Unix/Linux?
Why are pipes considered dangerous to use? What can be done to avoid these security issues? I'm mostly interested in Windows, but if you have other OS information, please provide.
(assuming you're talking about Unix named pipes from the mention of 'c' and 'IPC'. Windows named pipes work somewhat differently) Anyone with permissions can write to a named pipe, so you have to be careful with permissions and locking (see flock()). If an application trusts the input it's getting from the named pipe (which will usually be the case unless you explicitly build input validation into it) then a malicious user can write any desired data into the named pipe if they have permission. Also, any user with permissions can read from the pipe and intercept data coming out of it if you have not exclusively locked it. The data is then missing from the input stream that the reader is expecting.
192,124
192,568
How do I load and save an image from an SQL Server database using GDI+ and C++?
I need specifically to load a JPG image that was saved as a blob. GDI+ makes it very easy to retrieve images from files but not from databases...
Take a look at Image::Image(IStream *, BOOL). This takes a pointer to a COM object implementing the IStream interface. You can get one of these by allocating some global memory with GlobalAlloc and then calling CreateStreamOnHGlobal on the returned handle. It'll look something like this:

```cpp
shared_ptr<Image> CreateImage(BYTE *blob, size_t blobSize)
{
    HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, blobSize);
    BYTE *pImage = (BYTE*)::GlobalLock(hMem);

    for (size_t iBlob = 0; iBlob < blobSize; ++iBlob)
        pImage[iBlob] = blob[iBlob];

    ::GlobalUnlock(hMem);

    CComPtr<IStream> spStream;
    HRESULT hr = ::CreateStreamOnHGlobal(hMem, TRUE, &spStream);

    shared_ptr<Image> image(new Image(spStream));
    return image;
}
```

But with error checking and such (omitted here to make things clearer).
192,940
192,979
Storing PCRE compiled regexes in C/C++
Is there an efficient way to store the compiled regexes (compiled via regcomp(), PCRE) in a binary file, so that later I can just read from the file and call regexec()? Or is it just a matter of dumping the compiled regex_t structs to the file and reading them back when needed?
Unless you have a super-complex regex, I hardly see any advantage in serializing the compiled regex; the compilation time shouldn't be that long. Unless you are on a super-tight embedded system? In any case, dumping the structure might indeed be a solution; at least you can try... [EDIT] I just looked at the source I have (6.7) and, as I feared, it is not so simple: the structure starts with a void *... You can't serialize pointers; they have to be recomputed.
193,469
193,770
How do I make Visual Studio pause after executing a console application in debug mode?
I have a collection of Boost unit tests I want to run as a console application. When I'm working on the project and I run the tests I would like to be able to debug the tests, and I would like to have the console stay open after the tests run. I see that if I run in release mode the console window stays up after the program exits, but in debug mode this is not the case. I do not want to add 'system("pause");' or any other hacks like reading a character to my program. I just want to make Visual Studio pause after running the tests with debugging like it would if I were running in release mode. I would also like it if the output of tests were captured in one of Visual Studio's output windows, but that also seems to be harder than it should be. How can I do this?
Boost Test offers the following usage recommendations for Visual Studio that would enable you to run the unit tests automatically at the end of compilation and capture the output into the build window. The nice side effect of this trick is that it enables you to treat test failures as compilation errors: "...you could jump through these errors using usual keyboard shortcuts/mouse clicks you use for compilation error analysis..."
193,471
687,302
Which is better BOOST_MPL_ASSERT or BOOST_STATIC_ASSERT?
As I recall BOOST_MPL_ASSERT was once preferred. Is this still true? Anyone know why?
[Answering my own question] It depends. This is an apples-to-oranges comparison. Although similar, these macros are NOT interchangeable. Here's a summary of how each works:

BOOST_STATIC_ASSERT( P ) generates a compilation error if P != true.
BOOST_MPL_ASSERT(( P )) generates a compilation error if P::type::value != true.

The latter form, despite requiring double parentheses, is especially useful because it can generate more informative error messages if one uses Boolean nullary metafunctions from Boost.MPL or TR1's <type_traits> as predicates. Here is an example program that demonstrates how to use (and misuse) these macros:

```cpp
#include <boost/static_assert.hpp>
#include <boost/mpl/assert.hpp>
#include <type_traits>

using namespace ::boost::mpl;
using namespace ::std::tr1;

struct A {};
struct Z {};

int main()
{
    // boolean predicates
    BOOST_STATIC_ASSERT( true );                            // OK
    BOOST_STATIC_ASSERT( false );                           // assert
//  BOOST_MPL_ASSERT( false );                              // syntax error!
//  BOOST_MPL_ASSERT(( false ));                            // syntax error!
    BOOST_MPL_ASSERT(( bool_< true > ));                    // OK
    BOOST_MPL_ASSERT(( bool_< false > ));                   // assert

    // metafunction predicates
    BOOST_STATIC_ASSERT(( is_same< A, A >::type::value ));  // OK
    BOOST_STATIC_ASSERT(( is_same< A, Z >::type::value ));  // assert, line 19
    BOOST_MPL_ASSERT(( is_same< A, A > ));                  // OK
    BOOST_MPL_ASSERT(( is_same< A, Z > ));                  // assert, line 21

    return 0;
}
```

For comparison, here are the error messages my compiler (Microsoft Visual C++ 2008) generated for lines 19 and 21 above:

```
1>static_assert.cpp(19) : error C2027: use of undefined type 'boost::STATIC_ASSERTION_FAILURE<x>'
1>        with
1>        [
1>            x=false
1>        ]
1>static_assert.cpp(21) : error C2664: 'boost::mpl::assertion_failed' : cannot convert parameter 1 from 'boost::mpl::failed ************std::tr1::is_same<_Ty1,_Ty2>::* ***********' to 'boost::mpl::assert<false>::type'
1>        with
1>        [
1>            _Ty1=A,
1>            _Ty2=Z
1>        ]
1>        No constructor could take the source type, or constructor overload resolution was ambiguous
```

So if you're using metafunctions (as defined here) as predicates, then BOOST_MPL_ASSERT is both less verbose to code and more informative when it asserts. For simple boolean predicates, BOOST_STATIC_ASSERT is less verbose to code, although its error messages may be less clear (depending on your compiler).
193,703
193,752
How is tr1::reference_wrapper useful?
Recently I've been reading through Scott Meyers's excellent Effective C++ book. In one of the last tips he covered some of the features from TR1; I knew many of them via Boost. However, there was one that I definitely did NOT recognize: tr1::reference_wrapper. How and when would I use tr1::reference_wrapper?
It's like boost::ref, as far as I know. Basically, a reference which can be copied. Very useful when binding to functions where you need to pass parameters by reference. For example (using Boost syntax):

```cpp
void Increment( int& iValue )
{
    iValue++;
}

int iVariable = 0;
boost::function< void () > fIncrementMyVariable =
    boost::bind( &Increment, boost::ref( iVariable ));
fIncrementMyVariable();
```

This Dr. Dobb's article has some info. Hope this is right, and helpful. :)
193,708
193,742
What is a good use case for tr1::result_of?
I hear that tr1::result_of gets used frequently inside of Boost... I'm wondering if there are any good (simple) use cases for tr1::result_of I can use at home.
A description of result_of is given at open-std.org. Microsoft has a quick example of a unit test wrapper that uses result_of.
193,965
194,042
Are there any good open source BDD tools for C/C++?
I love the Ruby RSpec BDD development style. Are there any good tools for doing this with C/C++?
cspec is for C. Presumably it will work with C++. There is a list of tools for various languages on the Behavior Driven Development Wikipedia page.
194,465
11,354,496
How to parse a string to an int in C++?
What's the C++ way of parsing a string (given as char *) into an int? Robust and clear error handling is a plus (instead of returning zero).
In the new C++11 there are functions for that: stoi, stol, stoll, stoul and so on.

```cpp
int myNr = std::stoi(myString);
```

It will throw an exception on conversion error. Even these new functions still have the same issue as noted by Dan: they will happily convert the string "11x" to the integer 11. See more: http://en.cppreference.com/w/cpp/string/basic_string/stol
194,492
194,640
Accessing protected members from subclasses: gcc vs msvc
In Visual C++, I can do things like this:

```cpp
template <class T>
class A {
protected:
    T i;
};

template <class T>
class B : public A<T> {
    T geti() { return i; }
};
```

If I try to compile this in g++, I get an error. I have to do this:

```cpp
template <class T>
class B : public A<T> {
    T geti() { return A<T>::i; }
};
```

Am I not supposed to do the former in standard C++? Or is something misconfigured with gcc that's giving me errors?
This used to be allowed, but it changed in gcc 3.4. In a template definition, unqualified names will no longer find members of a dependent base (as specified by [temp.dep]/3 in the C++ standard). For example:

```cpp
template <typename T> struct B {
    int m;
    int n;
    int f ();
    int g ();
};

int n;
int g ();

template <typename T> struct C : B<T> {
    void h ()
    {
        m = 0;  // error
        f ();   // error
        n = 0;  // ::n is modified
        g ();   // ::g is called
    }
};
```

You must make the names dependent, e.g. by prefixing them with this->. Here is the corrected definition of C::h:

```cpp
template <typename T> void C<T>::h ()
{
    this->m = 0;
    this->f ();
    this->n = 0;
    this->g ();
}
```
194,499
194,509
How to parallelize a search for a string in a file with the help of fork? (GNU Linux/g++)
I've got a text file with a couple of lines, and I am looking for a string in this file. I need to pass the following command-line parameters to the program: the file path, the string I am looking for, and the maximum number of processes the program is allowed to "fork" in order to complete this task. How should such a program be constructed?
A couple of thoughts. You will have to open the file separately from each process, otherwise they will share a single file descriptor and thus have a shared position in the file (or not, see the comments, as this may be system specific...). You may not see the speed increase you are hoping for due to disk access and/or cache miss patterns. You might be able to beat both issues by memory mapping the file (well you still risk an increased cache miss rate)... How badly do you need this? It runs a real risk of being premature optimization. I would recommend against touching the problem without a compelling need. Really.
194,676
194,741
What language/platform would you recommend for CPU-bound application?
I'm developing a non-interactive, CPU-bound application which does only computations, almost no IO. Currently it runs too long, and while I'm working on improving the algorithm, I'm also wondering whether changing the language or platform could bring any benefit. Currently it is C++ (no OOP, so it is almost C) on Windows, compiled with the Intel C++ compiler. Can switching to ASM help, and by how much? Can switching to Linux and GCC help?
Just to be thorough: the first thing to do is to gather profile data and the second thing to do is consider your algorithms. I'm sure you know that, but they've got to be #included into any performance-programming discussion. To be direct about your question "Can switching to ASM help?" the answer is "If you don't know the answer to that, then probably not." Unless you're very familiar with the CPU architecture and its ins and outs, it's unlikely that you'll do a significantly better job than a good optimizing C/C++ compiler on your code. The next point to make is that significant speed-ups in your code (aside from algorithmic improvements) will almost certainly come from parallelism, not linear increases. Desktop machines can now throw 4 or 8 cores at a task, which has much more performance potential than a slightly better code generator. Since you're comfortable with C/C++, OpenMP is pretty much a no-brainer; it's very easy to use to parallelize your loops (obviously, you have to watch loop-carried dependencies, but it's definitely "the simplest parallelism that could possibly work"). Having said all that, code generation quality does vary between C/C++ compilers. The Intel C++ compiler is well-regarded for its optimization quality and has full support not just for OpenMP but for other technologies such as the Threading Building Blocks. Moving into the question of what programming languages might be even better than C++, the answer would be "programming languages that actively promote / facilitate concepts of parallelism and concurrent programming." Erlang is the belle of the ball in that regard, and is a "hot" language right now and most people interested in performance programming are paying at least some attention to it, so if you want to improve your skills in that area, you might want to check it out.
194,912
194,971
Any C++ libraries available to convert between floating point representations?
I recently had a need to interpret a DEC 32-bit floating point representation. It differs from the IEEE floating point representations in the number of bits allocated to the exponent and mantissa. Here's a description of a bunch of floating point formats: http://www.quadibloc.com/comp/cp0201.htm I managed to roll my own C++ code to stuff a DEC 32-bit float into an IEEE 32-bit float but was wondering if there were any libraries out there that had tackled this conversion and others already. I checked boost.org but didn't see any in there.
You mean like libvaxdata? I've never used it - I just found it by googling. But it looks like what you're looking for. It's a C library rather than C++, but converting floating point data should work fine from extern "C" calls.
195,207
195,209
Unresolved external symbol on static class members
Very simply put: I have a class that consists mostly of static public members, so I can group similar functions together that still have to be called from other classes/functions. Anyway, I have defined two static unsigned char variables in my class's public scope. When I try to modify these values in the same class's constructor, I get an "unresolved external symbol" error at compilation.

```cpp
class test {
public:
    static unsigned char X;
    static unsigned char Y;
    ...
    test();
};

test::test()
{
    X = 1;
    Y = 2;
}
```

I'm new to C++, so go easy on me. Why can't I do this?
If you are using C++17 you can just use the inline specifier (see https://stackoverflow.com/a/11711082/55721). If you are using older versions of the C++ standard, you must add definitions to match your declarations of X and Y:

```cpp
unsigned char test::X;
unsigned char test::Y;
```

somewhere. You might also want to initialize a static member:

```cpp
unsigned char test::X = 4;
```

and again, you do that in the definition (usually in a .cpp file), not in the declaration (which is often in a .h file).
195,323
195,350
What is the most elegant way to read a text file with C++?
I'd like to read the whole content of a text file into a std::string object with C++. With Python, I can write:

```python
text = open("text.txt", "rt").read()
```

It is very simple and elegant. I hate ugly stuff, so I'd like to know: what is the most elegant way to read a text file with C++? Thanks.
There are many ways; you pick which is the most elegant for you.

Reading into a char*:

```cpp
ifstream file("file.txt", ios::in | ios::binary | ios::ate);
if (file.is_open())
{
    file.seekg(0, ios::end);
    size_t size = (size_t)file.tellg();
    char *contents = new char[size];
    file.seekg(0, ios::beg);
    file.read(contents, size);
    file.close();
    // ... do something with it
    delete[] contents;
}
```

Into std::string:

```cpp
std::ifstream in("file.txt");
std::string contents((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
```

Into vector<char>:

```cpp
std::ifstream in("file.txt");
std::vector<char> contents((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
```

Into a string, using a stringstream:

```cpp
std::ifstream in("file.txt");
std::stringstream buffer;
buffer << in.rdbuf();
std::string contents(buffer.str());
```

file.txt is just an example; everything works fine for binary files as well, just make sure you use ios::binary in the ifstream constructor.
195,682
195,747
How do you mock classes that use RAII in C++?
Here's my issue: I'd like to mock a class that creates a thread at initialization and closes it at destruction. There's no reason for my mock class to actually create and close threads. But to mock a class, I have to inherit from it. When I create a new instance of my mock class, the base class's constructor is called, creating the thread. When my mock object is destroyed, the base class's destructor is called, attempting to close the thread. How does one mock an RAII class without having to deal with the actual resource?
You instead make an interface that describes the type, and have both the real class and the mock class inherit from it. So if you had:

```cpp
class RAIIClass {
public:
    RAIIClass(Foo* f);
    ~RAIIClass();
    bool DoOperation();
private:
    ...
};
```

you would make an interface like:

```cpp
class MockableInterface {
public:
    MockableInterface(Foo* f);
    virtual ~MockableInterface();
    virtual bool DoOperation() = 0;
};
```

And go from there.
195,714
196,248
Design: Large archive file editor, file mapping
I'm writing an editor for large archive files (see below) of 4GB+, in native&managed C++. For accessing the files, I'm using file mapping (see below) like any sane person. This is absolutely great for reading data, but a problem arises in actually editing the archive. File mapping does not allow resizing a file while it's being accessed, so I don't know how I should proceed when the user wants to insert new data in the file (which would exceed the file's original size, when it was mapped.) Should I remap the whole thing every time? That's bound to be slow. However, I'd want to keep the editor real-time with exclusive file access, since that simplifies the programming a lot, and won't let the file get screwed by other applications while being modified. I wouldn't want to spend an eternity working on the editor; It's just a simple dev-tool for the actual project I'm working on. So I'd like to hear how you've handled similar cases, and what other archiving software and especially other games do to solve this? To clarify: This is not a text file, I'm writing a specific binary archive file format. By which I mean a big file that contains many others, in directories. Custom archive files are very common in game usage for a number of reasons. With my format, I'm aiming to a similar (but somewhat simpler) structure as with Valve Software's GCF format - I would have used the GCF format as it is, but unfortunately no editor exists for the format, although there are many great implementations for reading them, like HLLib. Accessing the file must be fast, as it is intended for storing game resources. So it's not a database. Database files would be contained inside it, along with GFX, SFX etc. files. 
"File mapping" as talked here is a specific technique on the Windows platform, which allows direct access to a large file through creating "views" to parts of it, see here: http://msdn.microsoft.com/en-us/library/aa366556(VS.85).aspx - This technique allows minimal latency and memory usage and is a no-brainer for accessing any large files. So this does not mean reading the whole 4GB file into memory, it's exactly the contrary.
What I do is close the view handle(s) and the file-mapping handle, set the file size, then reopen the mapping/view handles.

```cpp
// Open memory-mapped file
HANDLE FileHandle = ::CreateFileW(file_name, GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, OPEN_EXISTING, 0, NULL);
size_t Size = ::GetFileSize(FileHandle, 0);
HANDLE MappingHandle = ::CreateFileMapping(FileHandle, NULL, PAGE_READWRITE,
                                           0, Size, NULL);
void* ViewHandle = ::MapViewOfFile(MappingHandle, FILE_MAP_ALL_ACCESS,
                                   0, 0, Size);

...

// Increase the size of the file
UnmapViewOfFile(ViewHandle);
CloseHandle(MappingHandle);

Size += 1024;
LARGE_INTEGER offset;
offset.QuadPart = Size;
LARGE_INTEGER newpos;
SetFilePointerEx(FileHandle, offset, &newpos, FILE_BEGIN);
SetEndOfFile(FileHandle);

MappingHandle = ::CreateFileMapping(FileHandle, NULL, PAGE_READWRITE,
                                    0, Size, NULL);
ViewHandle = ::MapViewOfFile(MappingHandle, FILE_MAP_ALL_ACCESS, 0, 0, Size);
```

The above code has no error checking and does not handle 64-bit sizes, but that's not hard to fix.
195,842
809,259
Capture which step of an animated system cursor is being shown on Windows
I want to capture as a bitmap the system cursor on Windows OSes, as accurately as possible. The provided API for this is, to my knowledge, GetCursorInfo and DrawIconEx. The simple chain of actions is: get the cursor by using GetCursorInfo, then paint the cursor into a memory DC by using DrawIconEx. Here is roughly how the code looks:

```cpp
CURSORINFO CursorInfo;
(VOID)memset(&CursorInfo, 0, sizeof(CursorInfo));
CursorInfo.cbSize = sizeof(CursorInfo);

if (GetCursorInfo(&CursorInfo) && CursorInfo.hCursor)
{
    // ... create here the memory DC, memory bitmap
    boError |= !DrawIconEx(hCursorDC,          // device context
                           0,                  // xLeft
                           0,                  // yTop
                           CursorInfo.hCursor, // cursor handle
                           0,                  // width, use system default
                           0,                  // height, use system default
                           0,                  // step of animated cursor !!!!!!!!!
                           NULL,               // flicker-free brush, don't use it now
                           DI_MASK | DI_DEFAULTSIZE); // flags
    // ... do whatever we want with the cursor in our memory DC
}
```

Now, does anyone know how I could get which step of the animated cursor is being drawn (I need the value that can then be passed to the istepIfAniCur parameter of DrawIconEx)? Currently the above code obviously always renders only the first step of an animated cursor. I suspect this cannot be easily done, but it's worth asking anyway.
Unfortunately, I don't think there's a Windows API that discloses the current frame of the cursor animation. I assume that's what you're after: the look of the cursor at the instant you make the snapshot.
196,088
196,151
Coercing template class with operator T* when passing as T* argument of a function template
Assume I have a function template like this:

```cpp
template<class T>
inline void doStuff(T* arr)
{
    // stuff that needs to use sizeof(T)
}
```

Then in another .h file I have a template class Foo that has:

```cpp
public:
    operator T*() const;
```

Now, I realize that those are different Ts. But if I have a variable Foo<Bar> f on the stack, the only way to coerce it to any kind of pointer would be to invoke operator T*(). Yet, if I call doStuff(f), GCC complains that doStuff can't take Foo<Bar>, instead of automatically using operator T*() to coerce to Bar* and then specializing the function template with Bar as T. Is there anything I can do to make this work with two templates? Or does either the argument of the template function have to be a real pointer type, or the template class with the coercion operator be passed to a non-template function?
GCC is correct. In template argument deduction only exact matches are considered; type conversions are not. This is because otherwise an infinite (or at least exponential) number of conversions could have to be considered. If Foo<T> is the only other template that you're going to run into, the best solution would be to add:

```cpp
template<typename T>
inline void doStuff(const Foo<T>& arr)
{
    doStuff(static_cast<T*>(arr));
}
```

If you are having this issue with a lot of templates, this one should fix it:

```cpp
#include <boost/type_traits/is_convertible.hpp>
#include <boost/utility/enable_if.hpp>

template<template <typename> class T, typename U>
inline typename boost::enable_if<
    typename boost::is_convertible<T<U>, U*>::type>::type
doStuff(const T<U>& arr)
{
    doStuff(static_cast<U*>(arr));
}
```

It's a bit verbose though ;-)
196,390
196,397
Can you write a block of C++ code inside C#?
I heard somewhere that you can drop down to C++ directly inside C# code. How is this done? Or did I hear wrong? Note: I do not mean C++ / CLI.
You might be thinking of unsafe blocks where you can write code that looks a lot like C++, since you can use pointers.
196,522
196,545
In C++ what are the benefits of using exceptions and try / catch instead of just returning an error code?
I've programmed C and C++ for a long time and so far I've never used exceptions and try / catch. What are the benefits of using that instead of just having functions return error codes?
Possibly an obvious point - a developer can ignore (or not be aware of) your return status and go on blissfully unaware that something failed. An exception needs to be acknowledged in some way - it can't be silently ignored without actively putting something in place to do so.
196,733
197,157
How can I use covariant return types with smart pointers?
I have code like this:

```cpp
class RetInterface {...};
class Ret1 : public RetInterface {...};

class AInterface {
public:
    virtual boost::shared_ptr<RetInterface> get_r() const = 0;
    ...
};

class A1 : public AInterface {
public:
    boost::shared_ptr<Ret1> get_r() const {...}
    ...
};
```

This code does not compile. In Visual Studio it raises C2555: "overriding virtual function return type differs and is not covariant". If I do not use boost::shared_ptr but return raw pointers, the code compiles (I understand this is due to covariant return types in C++). I can see the problem is that boost::shared_ptr<Ret1> is not derived from boost::shared_ptr<RetInterface>. But I want to return a boost::shared_ptr<Ret1> for use in other classes; otherwise I must cast the returned value after the return. Am I doing something wrong? If not, why is the language like this? It should be extensible to handle conversion between smart pointers in this scenario. Is there a desirable workaround?
Firstly, this is indeed how it works in C++: the return type of a virtual function in a derived class must be the same as in the base class. There is the special exception that a function that returns a reference/pointer to some class X can be overridden by a function that returns a reference/pointer to a class that derives from X, but as you note this doesn't allow for smart pointers (such as shared_ptr), just for plain pointers.

If your interface RetInterface is sufficiently comprehensive, then you won't need to know the actual returned type in the calling code. In general it doesn't make sense anyway: the reason get_r is a virtual function in the first place is because you will be calling it through a pointer or reference to the base class AInterface, in which case you can't know what type the derived class would return. If you are calling this with an actual A1 reference, you can just create a separate get_r1 function in A1 that does what you need:

```cpp
class A1 : public AInterface {
public:
    boost::shared_ptr<RetInterface> get_r() const { return get_r1(); }
    boost::shared_ptr<Ret1> get_r1() const {...}
    ...
};
```

Alternatively, you can use the visitor pattern or something like my Dynamic Double Dispatch technique to pass a callback in to the returned object, which can then invoke the callback with the correct type.
197,048
198,063
Idiomatic use of std::auto_ptr or only use shared_ptr?
Now that shared_ptr is in tr1, what do you think should happen to the use of std::auto_ptr? They both have different use cases, but all use cases of auto_ptr can be solved with shared_ptr, too. Will you abandon auto_ptr or continue to use it in cases where you want to express explicitly that only one class has ownership at any given point? My take is that using auto_ptr can add clarity to code, precisely by adding nuance and an indication of the design of the code; but on the other hand, it adds yet another subtle issue when training new programmers: they need to understand smart pointers and the fine details of how they work. When you use only one smart pointer everywhere, you can just lay down a rule 'wrap all pointers in shared_ptr' and be done with it. What's your take on this?
To provide a little more ammunition to the 'avoid std::auto_ptr' camp: auto_ptr is being deprecated in the next standard (C++0x). I think this alone is good enough ammunition for any argument to use something else. However, as Konrad Rudolph mentioned, the default replacement for auto_ptr should probably be boost::scoped_ptr. The semantics of scoped_ptr more closely match those of auto_ptr, and it is intended for similar uses. The C++0x standard will have something similar called unique_ptr. However, using shared_ptr anywhere that scoped_ptr should be used will not break anything; it'll just add a very slight bit of inefficiency to deal with the reference count if the object is never actually going to be shared. So for private member pointers that will never be handed out to another object, use scoped_ptr. If the pointer will be handed out to something else (this includes using them in containers, or if all you want to do is transfer ownership and not keep or share it), use shared_ptr.
197,121
198,542
How do I write a for loop that iterates over a CAtlMap selectively deleting elements as it goes?
I'm trying to do the following without too much special-case code to deal with invalidated POSITIONs etc. What's the best way to fill in the blanks?

```cpp
void DeleteUnreferencedRecords(CAtlMap<Record>& records)
{
    for(____; ____; ____)
    {
        if( NotReferencedElsewhere(record) )
        {
            // Delete record
            _______;
        }
    }
}
```
According to http://msdn.microsoft.com/en-us/library/0h4c3zkw(VS.80).aspx, RemoveAtPos has these semantics: "Removes the key/value pair stored at the specified position. The memory used to store the element is freed. The POSITION referenced by pos becomes invalid, and while the POSITION of any other elements in the map remains valid, they do not necessarily retain the same order." The problem is that the order can change, which means that GetNext() won't really continue the iteration. So although removing a POSITION does not invalidate the other POSITION objects, it looks like you need to collect the POSITIONs you want to delete in one pass and delete them in a second.
197,135
207,796
Getting notifications when the user tries sending an SMS
My application is implemented as a service (running under services.exe). I am adding a new feature which requires being notified when the user sends an SMS. I have tried using IMAPIAdviseSink, registering with both IMAPISession and IMsgStore, but I do not get any notifications. The other options I can see are to create a Short Message Service Provider or to implement the IFormProviderEx interface, but I am not sure about the impact this might have on SMS functionality and the user experience. Is there any way in which my application can reliably get notifications of SMSs being created in the Outbox? edit: The app is written in native C++. I've looked into RIL and several other APIs, but I can only find information about getting notified of incoming SMSs. OK, some more information: The same code for registering my IMAPIAdviseSink works in a stand alone app. It's only failing to get notifications in the service. Is there anyway to get notifications in my service? Or do I need a separate process to monitor SMS events and notify my service? Mark
You can't use IMAPIAdviseSink from a service. You need to use it from a separate process and notify the service of the events you're interested in.
197,140
197,251
What is the best implementation of STL for VS2005?
I'm currently using default implementation of STL for VS2005 and I'm not really satisfied with it. Perhaps there is something better?
The Dinkumware STL implementation (supplied with VS2005) is actually quite good. The STL is a general purpose library and so it is almost always possible to write something better for very specific use cases. I'm aware of the following alternative implementations, but I've never used them with VS2005: SGI Standard Template Library 3.3: http://www.sgi.com/tech/stl/ STLport 5.1.6 (derived from SGI implementation): http://www.stlport.org/ Both SGI and STLport implement the size/length check as the first test in operator== and so you might like it. They are also both free to download and use. Changing from one STL implementation to another, in theory, should be easy. However, I've heard from some colleagues that it is not always so. They've tripped over compiler bugs, found that they've inadvertently used non-standard features, or unknowingly relied on some behaviour specific to a particular STL implementation. One good thing about the Dinkumware STL implementation is that it has been well tested with the VS2005 C++ compiler. If you decide to try out these alternatives, good luck! And let us know how it goes.
197,211
197,478
Class design with vector as a private/public member?
What is the best way to put a container class or some other class inside a class as a private or public member? Requirements: 1. Vector<someclass> inside my class. 2. Add and count operations on the vector are the needed interface.
If the container's state is part of the class's invariant, then it should, if possible, be private. For example, if the container represents a three dimensional vector then part of the invariant might be that it always contains exactly 3 numbers. Exposing it as a public member would allow code external to the class to change the containers size, which in turn could cause problems for any routine which requires the container's size to be constant. Keeping the container private limits the places in your software where the container's size can be modified to the class's member functions.
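A minimal sketch of the design being described: the vector stays private so outside code can never resize it, and only the Add/Count operations asked for in the question are exposed. SomeClass and the member names are placeholders of my own:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct SomeClass { int value; };  // placeholder element type

class Container {
public:
    // The only two operations the requirements call for.
    void Add(const SomeClass& item) { items_.push_back(item); }
    std::size_t Count() const { return items_.size(); }

private:
    std::vector<SomeClass> items_;  // private: callers can't break the invariant
};
```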
197,258
197,273
Calling a static member function of a C++ STL container's value_type
I'm trying to get my head around why the following doesn't work. I have a std::vector and I want to call a static member function of its contained value_type like so: std::vector<Vector> v; unsigned u = v.value_type::Dim(); where Vector is in fact a typedef for a templated type: template <typename T, unsigned U> class SVector; typedef SVector<double, 2> Vector; //two-dimensional SVector containing doubles and the static member function Dim() actually inlines the dimensionality U of the Vector. Now the compiler returns an error message saying: error: ‘SVector<double, 2u>’ is not a base of ‘std::vector<SVector<double, 2u>, std::allocator<SVector<double, 2u> > > which puzzles me. I can replace the apparently offending line by unsigned u = Vector::Dim(); and that works, but is obviously ugly as it hardcodes assumptions about the value_type of v... Thanks!
You are accessing the value_type through the variable instance and not the variable type. Method 1 - this works: typedef std::vector<Vector> MyVector; MyVector v; unsigned u = MyVector::value_type::Dim(); Method 2 - or this: std::vector<Vector> v; unsigned u = std::vector<Vector>::value_type::Dim(); If you typedef as in method 1 you do not hardcode assumptions about the vector template parameter and you write clean code. Edit: Expanded to explain the behavior for this issue as requested by question owner: The scope resolution operator :: has higher precedence than any other C++ operator. This includes the member access from an object . operator. Thus when you write something like: unsigned u = v.value_type::Dim(); this resolves to the following C++ code: unsigned u = v.SVector<double, 2>::Dim(); and ultimately what is resolved first is the SVector<double, 2>::Dim() part. This would force the vector instance declared through variable v to have a templatized inner class named SVector. And because this does not happen this results in error: error C2039: 'SVector<double,2>' : is not a member of 'std::vector<_Ty>' STL vector would have to be "expanded" for each usage of this pattern (accessing value_type through the variable instance and not the variable type). This is not a good solution as it leads to lots of boilerplate and unnecessary and unmaintainable code. By following the above mentioned solutions you avoid all this and can easily do what you wanted.
197,375
197,382
Visual c++ "for each" portability
I only just recently discovered that Visual C++ 2008 (and perhaps earlier versions as well?) supports for each syntax on stl lists et al to facilitate iteration. For example: list<Object> myList; for each (Object o in myList) { o.foo(); } I was very happy to discover it, but I'm concerned about portability for the dreaded day when someone decides I need to be able to compile my code in say, gcc or some other compiler. Is this syntax widely supported and can I use it without worrying about portability issues?
For each is not standard C or C++ syntax. If you want to be able to compile this code in gcc or g++, you will need to create an iterator and use a standard for loop. QuantumPete [edit] This seems to be a new feature introduced into MS Visual C++, so this is definitely not portable. Ref: http://msdn.microsoft.com/en-us/library/xey702bw%28VS.80%29.aspx [/edit]
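The portable spelling of the question's loop is an explicit iterator, as the answer suggests. (Since then, C++11 has standardized a similar range-based form, `for (Object& o : myList)`.) The Object type and the counter here are stand-ins of my own so the sketch is self-contained:

```cpp
#include <cassert>
#include <list>

struct Object {
    int calls;
    void foo() { ++calls; }
};

// Portable equivalent of MSVC's `for each (Object o in myList) o.foo();`
// Note it also differs in one respect: going through an iterator mutates
// the real elements, while `for each` as written copies each one.
int callFooOnAll(std::list<Object>& myList) {
    int n = 0;
    for (std::list<Object>::iterator it = myList.begin();
         it != myList.end(); ++it) {
        it->foo();
        ++n;
    }
    return n;
}
```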
197,379
197,440
How do I create a symlink in Windows Vista?
I am looking to create symlinks (soft links) from Java on a Windows Vista/ 2008 machine. I'm happy with the idea that I need to call out to the JNI to do this. I am after help on the actual C code though. What is the appropriate system call to create the link? Pointers to some good documentation on this subject would be very much appreciated.
Symbolic links in Windows are created using the CreateSymbolicLink API Function, which takes parameters very similar to the command line arguments accepted by the Mklink command line utility. Assuming you're correctly referencing the JNI and Win32 SDK headers, your code could thus be as simple as: JNIEXPORT jboolean JNICALL Java_ClassName_MethodName (JNIEnv *env, jobject obj, jstring symLinkName, jstring targetName) { const char *nativeSymLinkName = env->GetStringUTFChars(symLinkName, 0); const char *nativeTargetName = env->GetStringUTFChars(targetName, 0); jboolean success = (CreateSymbolicLink(nativeSymLinkName, nativeTargetName, 0) != 0); env->ReleaseStringUTFChars(symLinkName, nativeSymLinkName); env->ReleaseStringUTFChars(targetName, nativeTargetName); return success; } (Note the jobject obj parameter: every JNI native method receives the receiver object — or the jclass, for a static method — right after the JNIEnv pointer.) Note that this is just off the top of my head, and I haven't dealt with JNI in ages, so I may have overlooked some of the finer points of making this work...
197,461
197,464
What is the difference between "VC++" and "C++"?
Someone asked me how familiar I am with VC++ and how familiar I am with C++. What is the difference?
C++ is the actual language, VC++ is Microsoft's Visual C++, an IDE for C++ development. From stason.org: C++ is the programming language, Visual C++ is Microsoft's implementation of it. When people talk about learning Visual C++, it usually has more to do with learning how to use the programming environment, and how to use the Microsoft Foundation Classes (MFCs) for Windows rather than any language issues. Visual C++ can and will compile straight C and C++.
197,468
253,883
Does ATL/WTL still require the use of a global _Module variable?
I'm just starting up a new ATL/WTL project and I was wondering if the global _Module variable is still required? Back a few years when I started working with WTL it was required (at least for ATL 3.0) that you define a global variable such as: CAppModule _Module; To get ATL to work correctly. But recently I've read somewhere that this may not be required anymore (yet the wizard generated code still uses it). Also I did a search through the Visual C++ include directories and it only picked up _Module in a few places - most notably the ATL COM registry stuff. So do I still need to define a global variable to use ATL these days?
Technically you do not need a global _Module instance since ATL/WTL version 7. Earlier ATL/WTL code referenced _Module by this specific name and expected you to declare a single instance of this object. This has since been replaced by a single instance object named _AtlBaseModule that is automatically declared for you in atlcore.h. Having said that, though, some of the best WTL features are contained within CAppModule and its base class CComModule. Automatic COM registration, message loop handling, etc. So most non-trivial WTL based apps will still want a singleton instance of a CComModule base class. However, it doesn't need to be named _Module.
197,675
197,699
How does delete[] "know" the size of the operand array?
Foo* set = new Foo[100]; // ... delete [] set; You don't pass the array's boundaries to delete[]. But where is that information stored? Is it standardised?
When you allocate memory on the heap, your allocator will keep track of how much memory you have allocated. This is usually stored in a "head" segment just before the memory that you get allocated. That way when it's time to free the memory, the de-allocator knows exactly how much memory to free.
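The "head segment" idea can be illustrated with a toy allocator; this is purely a sketch of the concept (real runtimes differ and may ignore alignment subtleties this ignores too), with function names invented here:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Allocate room for `count` elements plus a hidden header holding the count.
void* alloc_array(std::size_t count, std::size_t elem_size) {
    std::size_t* head = static_cast<std::size_t*>(
        std::malloc(sizeof(std::size_t) + count * elem_size));
    if (!head) return nullptr;
    *head = count;        // remember the count in the header...
    return head + 1;      // ...and hand the caller the memory just after it
}

// The deallocator can recover the count by peeking just before the pointer.
std::size_t stored_count(void* p) {
    return *(static_cast<std::size_t*>(p) - 1);
}

void free_array(void* p) {
    std::free(static_cast<std::size_t*>(p) - 1);  // free from the real start
}
```

This is roughly why `delete[]` needs the pointer originally returned by `new[]`: only that address lets the runtime find its bookkeeping again.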
197,839
197,900
Determine the size of a C++ array programmatically?
This question was inspired by a similar question: How does delete[] “know” the size of the operand array? My question is a little different: Is there any way to determine the size of a C++ array programmatically? And if not, why? Every function I've seen that takes an array also requires an integer parameter to give it the size. But as the linked question pointed out, delete[] must know the size of the memory to be deallocated. Consider this C++ code: int* arr = new int[256]; printf("Size of arr: %d\n", sizeof(arr)); This prints "Size of arr: 4", which is just the size of the pointer. It would be nice to have some function which prints 256, but I don't think one exists in C++. (Again, part of the question is why it doesn't exist.) Clarification: I know that if I declared the array on the stack instead of the heap (i.e. "int arr[256];") that the sizeof operator would return 1024 (array length * sizeof(int)).
delete [] does know the size that was allocated. However, that knowledge resides in the runtime or in the operating system's memory manager, meaning that it is not available to the compiler during compilation. And sizeof() is not a real function, it is actually evaluated to a constant by the compiler, which is something it cannot do for dynamically allocated arrays, whose size is not known during compilation. Also, consider this example: int *arr = new int[256]; int *p = &arr[100]; printf("Size: %d\n", sizeof(p)); How would the compiler know how much memory p points to? The root of the problem is that arrays in C and C++ are not first-class objects. They decay to pointers, and there is no way for the compiler or the program itself to know whether a pointer points to the beginning of a chunk of memory allocated by new, or to a single object, or to some place in the middle of a chunk of memory allocated by new. One reason for this is that C and C++ leave memory management to the programmer and to the operating system, which is also why they do not have garbage collection. Implementation of new and delete is not part of the C++ standard, because C++ is meant to be used on a variety of platforms, which may manage their memory in very different ways. It may be possible to let C++ keep track of all the allocated arrays and their sizes if you are writing a word processor for a Windows box running on the latest Intel CPU, but it may be completely infeasible when you are writing an embedded system running on a DSP.
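For the stack-array case mentioned in the clarification, the size really is part of the type, and a small template can recover it at compile time; it will not bind to a plain pointer, which is exactly the distinction the answer is making:

```cpp
#include <cassert>
#include <cstddef>

// Binds only to a real array of known bound N, never to a decayed pointer.
template <typename T, std::size_t N>
std::size_t array_size(T (&)[N]) {
    return N;
}
```

So `array_size(arr)` compiles for `int arr[256];` but is a compile error for `int* p = new int[256];` — the information simply isn't in the pointer's type.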
197,983
197,989
What is a basic example of "low-level" multi-threading in C++?
I'm a kinda newbie developer with a few years under my belt. Recently I interviewed at a game company and was asked "have you done any multi-threading?" I told them about having a C# app with a few Threads... and then I said a bit about transactions and locking etc in Sql. The interviewer politely told me that this was too high-level and they are looking for someone with experience doing multi-threading in C++. So what is a basic example of "low-level" multi-threading in C++ ?
The canonical implementation of "low level threads" is pthreads. The most basic examples of threading problems that are usually taught along with pthreads are some form of readers and writers problem. That page also links to more classical threading problems like producers/consumers and dining philosophers.
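A minimal taste of what the interviewer likely meant by "low-level": two pthreads incrementing a shared counter, serialized by a mutex so no increments are lost. The struct and function names are my own; error checking is omitted for brevity:

```cpp
#include <cassert>
#include <pthread.h>

struct Shared {
    pthread_mutex_t lock;
    long counter;
};

void* worker(void* arg) {
    Shared* s = static_cast<Shared*>(arg);
    for (int i = 0; i < 10000; ++i) {
        pthread_mutex_lock(&s->lock);    // enter the critical section
        ++s->counter;
        pthread_mutex_unlock(&s->lock);  // leave it
    }
    return nullptr;
}

long run_two_threads() {
    Shared s;
    pthread_mutex_init(&s.lock, nullptr);
    s.counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, &s);
    pthread_create(&t2, nullptr, worker, &s);
    pthread_join(t1, nullptr);           // wait for both workers to finish
    pthread_join(t2, nullptr);
    pthread_mutex_destroy(&s.lock);
    return s.counter;
}
```

Without the mutex the two `++s->counter` operations can interleave and drop updates — that lost-update race is the classic starting point for the readers/writers material linked above.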
197,987
198,308
Multiple Interchangeable Views (MFC/C++)
I have a main frame with a splitter. On the left I have my (imaginatively named) CAppView_Leftand on the right I have CAppView_Right_1and CAppView_Right_2. Through the following code I initialise the two primary views correctly: if (!m_wndSplitter.CreateStatic(this, 1, 2)) { TRACE0("Failed to CreateStaticSplitter\n"); return FALSE; } else if (!m_wndSplitter.CreateView(0, 0, RUNTIME_CLASS(CAppView_Left), CSize(300, 200), pContext)) { TRACE0("Failed to create left pane\n"); return FALSE; } else if (!m_wndSplitter.CreateView(0, 1, RUNTIME_CLASS(CAppView_Right_1), CSize(375, 200), pContext)) { TRACE0("Failed to create first right pane\n"); return FALSE; } ... What I would like to do is create a second view inside the right frame, however when I try to add this: if (!m_wndSplitter.CreateView(0, 1, RUNTIME_CLASS(CAppView_Right_2), CSize(375, 200), pContext)) { TRACE0("Failed to create first right pane\n"); return FALSE; } VS compiles but fails to run the application, raising an exception telling me I have already defined the view. Can someone suggest how I do this? Also, how to change between the views from either a view or the document class?
There is a CodeProject article that should help you achieve what you want: http://www.codeproject.com/KB/splitter/usefulsplitter.aspx I have replaced views in a splitter before, so if the above doesn't help I'll post some of my own code.
197,999
198,025
Need Advice on Implementing a Time-limited Trial
I'm developing a shareware desktop application. I'm to the point where I need to implement the trial-use/activation code. How do you approach something like this? I have my own ideas, but I want to see what the stackoverflow community thinks. I'm developing with C++/Qt. The intended platform is Windows/Mac/Linux. Thanks for your advice!
What to protect against and what not to protect against: Keep in mind that people will always find a way to get around your trial period. So you want to make it annoying to get around your trial period, but it doesn't need to be impossible. Most people will think it's too much work to try and get around your trial period if there is even a simple mechanism. For example people can always use filemon/regmon to see which files and registry entries change upon installing your software. That being said, a simple mechanism is best, because it wastes less of your time. Here are some ideas: You can do a tick count somewhere in the registry for every unique day the app is run. If the tick count > 30 then show them an expired message. You can store the install date, but take heed to check whether they have used more days than your trial is supposed to allow; if so, tell them the trial has expired. This will protect against people setting their clock to a future date before installing. I would recommend making your uninstall remove your "days running" count. This is because people may re-evaluate your product months later and eventually buy. But if they can't evaluate it, they won't buy. No serious user would have time to uninstall/re-install just to gain extra use of your product. Extending trials: For us, when a customer requests a trial extension, we send them an automated email that contains a program "TrialExtend.exe" and a trial extend code. This program contacts our server with the trial extend code to validate it. If the code is validated, their trial period is reset.
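The "tick count of unique days" idea boils down to a tiny piece of logic; here is a platform-neutral sketch (class and method names are mine). How the day set is persisted — registry on Windows, a QSettings file with Qt, a dotfile on Linux/Mac — is a separate concern:

```cpp
#include <cassert>
#include <set>

class TrialTracker {
public:
    explicit TrialTracker(int trial_days) : trial_days_(trial_days) {}

    // `day` is any monotonic day number, e.g. days since some epoch.
    // Inserting the same day twice is harmless: std::set keeps one copy,
    // which is exactly the "unique day" behavior the answer describes.
    void recordRun(int day) { days_used_.insert(day); }

    bool expired() const {
        return static_cast<int>(days_used_.size()) > trial_days_;
    }

private:
    int trial_days_;
    std::set<int> days_used_;  // one entry per unique day the app ran
};
```

Counting distinct days used (rather than comparing against the install date) also means a user who runs the app twice in one day, or skips a week, is not penalized.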
198,051
198,065
Why "delete [][]... multiDimensionalArray;" operator in C++ does not exist
I was always wondering if there is operator for deleting multi dimensional arrays in the standard C++ language. If we have created a pointer to a single dimensional array int *array = new int[size]; the delete looks like: delete [] array; That's great. But if we have two dimension array, we can not do delete [][] twoDimenstionalArray; Instead, we should loop and delete the items, like in this example. Can anybody explain why?
Technically, there aren't two dimensional arrays in C++. What you're using as a two dimensional array is a one dimensional array with each element being a one dimensional array. Since it doesn't technically exist, C++ can't delete it.
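The usual pattern the question alludes to looks like this — an array of row pointers, where each row is its own one-dimensional array, which is why each row needs its own `delete[]`. A small self-contained sketch (helper names are mine):

```cpp
#include <cassert>
#include <cstddef>

int** make2d(std::size_t rows, std::size_t cols) {
    int** a = new int*[rows];        // one 1-D array of row pointers...
    for (std::size_t r = 0; r < rows; ++r)
        a[r] = new int[cols]();      // ...each element is itself a 1-D array,
                                     // value-initialized to zero
    return a;
}

void free2d(int** a, std::size_t rows) {
    for (std::size_t r = 0; r < rows; ++r)
        delete[] a[r];               // delete each row first
    delete[] a;                      // then the array of pointers itself
}
```

A common alternative is to allocate one flat `new int[rows * cols]` and index it as `a[r * cols + c]`, which needs only a single `delete[]`.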
198,071
198,162
Code for checking write-permissions for directories in Win2K/XP
Greetings! I am trying to check directory write-permissions from within a Windows MFC/ATL program using C++. My first guess is to use the C-standard _access function, e.g.: if (_access("C:\mydir", 2) == -1) // Directory is not writable. But apparently on Windows 2000 and XP, _access can't determine directory permissions. (i.e. the Security tab in the Properties dialog when you right-click on a directory in Explorer) So, is there an elegant way to determine a directory's write-permissions in Windows 2000/XP using any of the Windows C++ libraries? If so, how? Thanks Evan
You can call CreateFile with GENERIC_WRITE access to check this. http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx It's not a C++ library but it still counts as elegant because it directly does what you want...
198,124
199,834
How can I get the path of a Windows "special folder" for a specific user?
Inside a service, what is the best way to determine a special folder path (e.g., "My Documents") for a specific user? SHGetFolderPath allows you to pass in a token, so I am assuming there is some way to impersonate the user whose folder you are interested in. Is there a way to do this based just on a username? If not, what is the minimum amount of information you need for the user account? I would rather not have to require the user's password. (Here is a related question.)
I would mount the user's registry hive and look for the path value. Yes, it's a sub-optimal solution, for all the reasons mentioned (poor forwards compatibility, etc.). However, like many other things in Windows, MS didn't provide an API way to do what you want to do, so it's the best option available. You can get the SID (not GUID) of the user by using LookupAccountName. You can load the user's registry hive using LoadUserProfile, but unfortunately this also requires a user token, which is going to require their password. Fortunately, you can manually load the hive using RegLoadKey into an arbitrary location, read the data, and unload it (I think). Yes, it's a pain, and yes, it's probably going to break in future versions of Windows. Perhaps by that time MS will have provided an API to do it, back-ported it into older versions of Windows, and distributed it automatically through Windows Update... but I wouldn't hold my breath. P.S. This information is intended to augment the information provided in your related question, including disclaimers.
198,199
198,264
How do you reverse a string in place in C or C++?
How do you reverse a string in C or C++ without requiring a separate buffer to hold the reversed string?
The standard algorithm is to use pointers to the start / end, and walk them inward until they meet or cross in the middle. Swap as you go. Reverse ASCII string, i.e. a 0-terminated array where every character fits in 1 char. (Or other non-multibyte character sets). void strrev(char *head) { if (!head) return; char *tail = head; while(*tail) ++tail; // find the 0 terminator, like head+strlen --tail; // tail points to the last real char // head still points to the first for( ; head < tail; ++head, --tail) { // walk pointers inwards until they meet or cross in the middle char h = *head, t = *tail; *head = t; // swapping as we go *tail = h; } } // test program that reverses its args #include <stdio.h> int main(int argc, char **argv) { do { printf("%s ", argv[argc-1]); strrev(argv[argc-1]); printf("%s\n", argv[argc-1]); } while(--argc); return 0; } The same algorithm works for integer arrays with known length, just use tail = start + length - 1 instead of the end-finding loop. (Editor's note: this answer originally used XOR-swap for this simple version, too. Fixed for the benefit of future readers of this popular question. XOR-swap is highly not recommended; hard to read and making your code compile less efficiently. You can see on the Godbolt compiler explorer how much more complicated the asm loop body is when xor-swap is compiled for x86-64 with gcc -O3.) Ok, fine, let's fix the UTF-8 chars... (This is XOR-swap thing. Take care to note that you must avoid swapping with self, because if *p and *q are the same location you'll zero it with a^a==0. XOR-swap depends on having two distinct locations, using them each as temporary storage.) Editor's note: you can replace SWP with a safe inline function using a tmp variable. 
#include <bits/types.h> #include <stdio.h> #define SWP(x,y) (x^=y, y^=x, x^=y) void strrev(char *p) { char *q = p; while(q && *q) ++q; /* find eos */ for(--q; p < q; ++p, --q) SWP(*p, *q); } void strrev_utf8(char *p) { char *q = p; strrev(p); /* call base case */ /* Ok, now fix bass-ackwards UTF chars. */ while(q && *q) ++q; /* find eos */ while(p < --q) switch( (*q & 0xF0) >> 4 ) { case 0xF: /* U+010000-U+10FFFF: four bytes. */ SWP(*(q-0), *(q-3)); SWP(*(q-1), *(q-2)); q -= 3; break; case 0xE: /* U+000800-U+00FFFF: three bytes. */ SWP(*(q-0), *(q-2)); q -= 2; break; case 0xC: /* fall-through */ case 0xD: /* U+000080-U+0007FF: two bytes. */ SWP(*(q-0), *(q-1)); q--; break; } } int main(int argc, char **argv) { do { printf("%s ", argv[argc-1]); strrev_utf8(argv[argc-1]); printf("%s\n", argv[argc-1]); } while(--argc); return 0; } Why, yes, if the input is borked, this will cheerfully swap outside the place. Useful link when vandalising in the UNICODE: http://www.macchiato.com/unicode/chart/ Also, UTF-8 over 0x10000 is untested (as I don't seem to have any font for it, nor the patience to use a hexeditor) Examples: $ ./strrev Räksmörgås ░▒▓○◔◑◕● ░▒▓○◔◑◕● ●◕◑◔○▓▒░ Räksmörgås sågrömskäR ./strrev verrts/.
198,625
200,260
Determine if IP is blocked
Does anyone know if it is possible to reliably determine (programmatically, in C/C++...) whether or not a firewall or IP filtering software is installed on a Windows PC? I need to detect whether a certain server IP is being blocked in my client software by the host OS. I don't need to worry about external hardware firewalls in this situation as I have full control of those. It is only software firewalls that I am concerned with. My hope was that I could iterate the Windows network stack or NDIS interfaces and determine this.
After reading some of your comments in reply to other answers, I think this might actually be closer to what you're looking for. It might not catch every type of firewall but any major firewall vendor should be registered with the Security Center and therefore detected with this method. You could also combine this with some of the other answers here to give yourself a second level of verification. Detecting running firewalls in windows It's an Expert's Exchange post so you may not be able to read the thread. Just in case, I've copied and pasted the relevant info. It's in VBScript but it should point you in the right direction as far as what WMI namespaces you can use. KemalRouge: I've just solved this problem with some help from a colleague. He pointed me in the direction of a knowledge base article, which pointed out that this information was stored in the WMI database Basically, it's possible to query the WMI in a few lines of code to find out what firewalls/anti-virus software is being monitored by the Security Center, and the status of this software (i.e. enabled or not). Anyway, if you're interested, here's some VB code I used to test this out (you'll need a reference to "Microsoft WMI Scripting V1.2 Library"): Private Sub DumpFirewallInfo() Dim oLocator As WbemScripting.SWbemLocator Dim oService As WbemScripting.SWbemServicesEx Dim oFirewalls As WbemScripting.SWbemObjectSet Dim oFirewall As WbemScripting.SWbemObjectEx Dim oFwMgr As Variant Set oFwMgr = CreateObject("HNetCfg.FwMgr") Debug.Print "Checking the Windows Firewall..." Debug.Print "Windows Firewal Enabled: " & oFwMgr.LocalPolicy.CurrentProfile.FirewallEnabled Debug.Print "" Set oFwMgr = Nothing Debug.Print "Checking for other installed firewalls..." 
Set oLocator = New WbemScripting.SWbemLocator Set oService = oLocator.ConnectServer(".", "root\SecurityCenter") oService.Security_.ImpersonationLevel = 3 Set oFirewalls = oService.ExecQuery("SELECT * FROM FirewallProduct") ' This could also be "AntivirusProduct" For Each oFirewall In oFirewalls Debug.Print "Company: " & vbTab & oFirewall.CompanyName Debug.Print "Firewall Name: " & vbTab & oFirewall.DisplayName Debug.Print "Enabled: " & vbTab & Format$(oFirewall.Enabled) Debug.Print "Version: " & vbTab & oFirewall.versionNumber Debug.Print "" Next oFirewall Set oFirewall = Nothing Set oFirewalls = Nothing Set oService = Nothing Set oLocator = Nothing End Sub
199,403
199,419
C++: What's the simplest way to read and write BMP files using C++ on Windows?
I would like to load a BMP file, do some operations on it in memory, and output a new BMP file using C++ on Windows (Win32 native). I am aware of ImageMagick and it's C++ binding Magick++, but I think it's an overkill for this project since I am currently not interested in other file formats or platforms. What would be the simplest way in terms of code setup to read and write BMP files? The answer may be "just use Magick++, it's the simplest." Related Question: What is the best image manipulation library?
When developing just for Windows I usually just use the ATL CImage class.
199,418
199,422
Using C++ library in C code
I have a C++ library that provides various classes for managing data. I have the source code for the library. I want to extend the C++ API to support C function calls so that the library can be used with C code and C++ code at the same time. I'm using the GNU tool chain (gcc, glibc, etc.), so language and architecture support are not an issue. Are there any reasons why this is technically not possible? Are there any gotchas that I need to watch out for? Are there resources, example code and/or documentation available regarding this? Some other things that I have found out: Use the following to wrap your C++ headers that need to be used by C code. #ifdef __cplusplus extern "C" { #endif // // Code goes here ... // #ifdef __cplusplus } // extern "C" #endif Keep "real" C++ interfaces in separate header files that are not included by C. Think PIMPL principle here. Using #ifndef __cplusplus #error stuff helps here to detect any craziness. Be careful of C++ identifiers as names in C code. Enums can vary in size between C and C++ compilers. Probably not an issue if you're using the GNU tool chain, but still, be careful. For structs follow the following form so that C does not get confused: typedef struct X { ... } X; Then use pointers for passing around C++ objects; they just have to be declared in C as struct X, where X is the C++ object. All of this is courtesy of a friend who's a wizard at C++.
Yes, this is certainly possible. You will need to write an interface layer in C++ that declares functions with extern "C": extern "C" int foo(char *bar) { return realFoo(std::string(bar)); } Then, you will call foo() from your C module, which will pass the call on to the realFoo() function which is implemented in C++. If you need to expose a full C++ class with data members and methods, then you may need to do more work than this simple function example.
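For the "expose a full C++ class" case mentioned at the end, the usual approach is an opaque handle: the shared header declares `extern "C"` create/use/destroy functions over a struct that C callers never see inside. A compressed sketch (the Greeter type and function names are invented for illustration; in a real project the header and implementation would be separate files):

```cpp
#include <cassert>
#include <string>

// --- what would live in the shared header ---
extern "C" {
    typedef struct Greeter Greeter;   // opaque to C callers
    Greeter* greeter_create(const char* name);
    int      greeter_name_length(const Greeter* g);
    void     greeter_destroy(Greeter* g);
}

// --- what would live in the C++ implementation file ---
struct Greeter {                      // the real C++ object behind the handle
    std::string name;
};

Greeter* greeter_create(const char* name) { return new Greeter{name}; }
int greeter_name_length(const Greeter* g) {
    return static_cast<int>(g->name.size());
}
void greeter_destroy(Greeter* g) { delete g; }
```

C code only ever holds a `Greeter*` and calls the three functions; all the C++ machinery (std::string here, or your library's classes) stays on the C++ side of the boundary. One extra caveat: don't let C++ exceptions propagate out of the `extern "C"` functions — catch them and convert to error codes.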
199,555
201,229
How would you go about implementing the game reversi? (othello)
I have been thinking about starting a side project at home to exercise my brain a bit. Reversi looks like a simply game, where mobility has a profound effect on game play. It is at least a step up from tic tac toe. This would be a single player against an AI of some sort. I am thinking to try this in C++ on a PC. What issues am I likely to run into? What graphics library would you recommend? What questions am I not smart enough to ask myself?
Overall, the issues you end up running into will depend on you and your approach. A friend of mine likes to say that what is complex is simple from a different perspective. The choice of graphics library depends on what kind of game you are going to write. OpenGL is a common choice for this kind of project, but you could also use some GUI library, or directly use Windows' or Xorg's own libraries. If you are going to do fancy effects, just use OpenGL. Questions you ought to ask: Is C++ a sensible choice for this project? Consider C and/or Python as well. My answer would be that if you just want to write reversi, go with Python. But if you want to learn a low-level language, do C first. C++ is an extension to C, so there's more to learn there than there is in C, and in my judgment the extra material isn't worth the effort here. How do you use the graphics library? If you are going to do fancy effects, go for a scene graph. Otherwise you can just render the reversi grid with buttons on it. How should you implement the UI — should you use the common UI concepts? The usual UI concepts (windowing, frames, buttons, menu bars, dialogs) aren't as good as people think they are; there's a lot of work in implementing them properly. Apply the scene graph for interpreting input and try different, clever ways of controlling the game. Avoid intro menus (they are dumb and useless work); use command line arguments for most configuration. Here are some ideas to get you started: The Othello board is 8x8, 64 cells overall. You can assign a byte per cell, which makes 64 bytes per board state. That's 8 long ints, not very much at all! You can store the whole progress of the game and the player won't even notice. It is therefore advisable to implement the Othello board as an immutable structure which you copy whenever you change state. This will also help you later with your AI and with implementing an 'undo' feature.
Because one byte can store more information than just three states (EMPTY, BLACK, WHITE), I advise also providing additional states (BLACK_ALLOWED, WHITE_ALLOWED, BOTH_ALLOWED). You can calculate these values while you copy the new state. An algorithm for checking where you can place a piece could go through the board one cell at a time, then trace from empty cells in every direction looking for regex-like patterns: B+W => W^, W+B => B^ This way you can encapsulate the game rules inside a simple interface that takes care of it all.
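The 64-bytes-per-position idea can be sketched in a few lines; copying the whole board per move is cheap enough that an "immutable" style (return a new board instead of mutating) falls out naturally. All names here are my own, and the move function just places a piece — flipping logic would go in the same place:

```cpp
#include <array>
#include <cassert>

// One byte per cell, 64 cells: a whole position is 64 bytes.
enum Cell : unsigned char { EMPTY, BLACK, WHITE };

struct Board {
    std::array<Cell, 64> cells{};    // value-initialized to EMPTY
    Cell& at(int row, int col)       { return cells[row * 8 + col]; }
    Cell  at(int row, int col) const { return cells[row * 8 + col]; }
};

// "Immutable" use: take the board by value, modify the copy, return it.
// The caller's board is untouched, which makes undo and AI search trivial.
Board withMove(Board b, int row, int col, Cell who) {
    b.at(row, col) = who;            // real rules would also flip captures here
    return b;
}
```

Keeping every position of a game costs at most 60 moves × 64 bytes ≈ 4 KB, which is why the answer says the player "can't even notice it".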
199,606
199,618
How should C bitflag enumerations be translated into C++?
C++ is mostly a superset of C, but not always. In particular, while enumeration values in both C and C++ implicitly convert into int, the reverse isn't true: only in C do ints convert back into enumeration values. Thus, bitflags defined via enumeration declarations don't work correctly. Hence, this is OK in C, but not in C++: typedef enum Foo { Foo_First = 1<<0, Foo_Second = 1<<1, } Foo; int main(void) { Foo x = Foo_First | Foo_Second; // error in C++ return 0; } How should this problem be handled efficiently and correctly, ideally without harming the debugger-friendly nature of using Foo as the variable type (it decomposes into the component bitflags in watches etc.)? Consider also that there may be hundreds of such flag enumerations, and many thousands of use-points. Ideally some kind of efficient operator overloading would do the trick, but it really ought to be efficient; the application I have in mind is compute-bound and has a reputation of being fast. Clarification: I'm translating a large (>300K) C program into C++, so I'm looking for an efficient translation in both run-time and developer-time. Simply inserting casts in all the appropriate locations could take weeks.
Why not just cast the result back to a Foo? Foo x = Foo(Foo_First | Foo_Second); EDIT: I didn't understand the scope of your problem when I first answered this question. The above will work for doing a few spot fixes. For what you want to do, you will need to define a | operator that takes 2 Foo arguments and returns a Foo: Foo operator|(Foo a, Foo b) { return Foo(int(a) | int(b)); } The int casts are there to prevent undesired recursion.
199,627
199,911
Converting C source to C++
How would you go about converting a reasonably large (>300K), fairly mature C codebase to C++? The kind of C I have in mind is split into files roughly corresponding to modules (i.e. less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e. module) level tests. I have in mind a general strategy: Compile everything in C++'s C subset and get that working. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working. Convert huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working. Now, approach the project as an ill-factored OO application, and write unit tests where dependencies are tractable, and decompose into separate classes where they are not; the goal here would be to move from one working program to another at each transformation. Obviously, this would be quite a bit of work. Are there any case studies / war stories out there on this kind of translation? Alternative strategies? Other useful advice? Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option. Note 2: the source is nearly 20 years old, and has perhaps 30% code churn (lines modified + added / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals would be to increase maintainability. [For the sake of the question, assume that translation into C++ is mandatory, and that leaving it in C is not an option. 
The point of adding this condition is to weed out the "leave it in C" answers.]
Having just started on pretty much the same thing a few months ago (on a ten-year-old commercial project, originally written with the "C++ is nothing but C with smart structs" philosophy), I would suggest using the same strategy you'd use to eat an elephant: take it one bite at a time. :-) As much as possible, split it up into stages that can be done with minimal effects on other parts. Building a facade system, as Federico Ramponi suggested, is a good start -- once everything has a C++ facade and is communicating through it, you can change the internals of the modules with fair certainty that they can't affect anything outside them. We already had a partial C++ interface system in place (due to previous smaller refactoring efforts), so this approach wasn't difficult in our case. Once we had everything communicating as C++ objects (which took a few weeks, working on a completely separate source-code branch and integrating all changes to the main branch as they were approved), it was very seldom that we couldn't compile a totally working version before we left for the day. The change-over isn't complete yet -- we've paused twice for interim releases (we aim for a point-release every few weeks), but it's well on the way, and no customer has complained about any problems. Our QA people have only found one problem that I recall, too. :-)
199,629
199,767
IDL declaration (in C++) for a function that will take a C-style array from C#
I am working with an existing code base made up of some COM interfaces written in C++ with a C# front end. There is some new functionality that needs to be added, so I'm having to modify the COM portions. In one particular case, I need to pass an array (allocated from C#) to the component to be filled. What I would like to do is to be able to pass an array of int to the method from C#, something like: // desired C# signature void GetFoo(int bufferSize, int[] buffer); // desired usage int[] blah = ...; GetFoo(blah.Length, blah); A couple of wrenches in the works: C++/CLI or Managed C++ can't be used (COM could be done away with in this case). The C# side can't be compiled with /unsafe (using Marshal is allowed). The COM interface is only used (and will only ever be used) by the C# part, so I'm less concerned with interoperability with other COM consumers. Portability between 32 and 64 bit is also not a concern (everything is being compiled and run from a 32 bit machine, so code generators are converting pointers to integers). Eventually, it will be replaced by just C++/CLI, but that is a ways off. My initial attempt is something similar to: HRESULT GetFoo([in] int bufferSize, [in, size_is(bufferSize)] int buffer[]); And the output TLB definition is (seems reasonable): HRESULT _stdcall GetFoo([in] int bufferSize, [in] int* buffer); Which is imported by C# as (not so reasonable): void GetFoo(int bufferSize, ref int buffer); Which I could use with int[] b = ...; fixed(int *bp = &b[0]) { GetFoo(b.Length, ref *bp); } ...except that I can't compile with /unsafe. At the moment I am using: HRESULT GetFoo([in] int bufferSize, [in] INT_PTR buffer); Which imports as: void GetFoo(int bufferSize, int buffer); And I need to use it like: int[] b = ...; GCHandle bPin = GCHandle.Alloc(b, GCHandleType.Pinned); try { GetFoo(b.Length, (int)Marshal.UnsafeAddrOfPinnedArrayElement(b, 0)); } finally { bPin.Free(); } Which works... but I'd like to find a cleaner way. 
So, the question is Is there an IDL definition that is friendly to the C# import from TLB generator for this case? If not, what can be done on the C# side to make it a little safer?
Hmmm... I've found some information that gets me closer... Marshaling Changes - Conformant C-Style Arrays This IDL declaration (C++) HRESULT GetFoo([in] int bufferSize, [in, size_is(bufferSize)] int buffer[]); Is imported as (MSIL) method public hidebysig newslot virtual instance void GetFoo([in] int32 bufferSize, [in] int32& buffer) runtime managed internalcall And if changed to (MSIL) method public hidebysig newslot virtual instance void GetFoo([in] int32 bufferSize, [in] int32[] marshal([]) buffer) runtime managed internalcall Can be used like (C#) int[] b = ...; GetFoo(b.Length, b); Exactly what I was gunning for! But, are there any other solutions that don't require fixing up the MSIL of the runtime callable wrapper that is generated by tlbimport?
199,876
200,121
Setting Background Color CMDIFrameWnd
Is there a way to change the background color of an MDI parent window in MFC (2005)? I have tried intercepting ON_WM_CTLCOLOR and ON_WM_ERASEBKGND, but neither works as hoped: OnEraseBkgnd does get called, but whatever it paints then gets overwritten by the standard WM_CTLCOLOR handling. Cheers
The CMDIFrameWnd is actually covered up by another window called the MDIClient window. Here is a Microsoft article on how to subclass this MDIClient window and change the background colour. I just tried it myself and it works great. http://support.microsoft.com/kb/129471
199,959
200,034
What is the equivalent of Thread.SetApartmentState in C++?
In C# there is a method SetApartmentState in the class Thread. How do I do the same thing in C++?
For unmanaged processes, you control the apartment model used for a thread by passing appropriate parameters to CoInitializeEx(). Larry Osterman wrote up a great little guide to these: ... When a thread calls CoInitializeEx (or CoInitialize), the thread tells COM which of the two apartment types it’s prepared to host. To indicate that the thread should live in the MTA, you pass the COINIT_MULTITHREADED flag to CoInitializeEx. To indicate that the thread should host an STA, either call CoInitialize or pass the COINIT_APARTMENTTHREADED flag to CoInitializeEx. ... -- http://blogs.msdn.com/larryosterman/archive/2004/04/28/122240.aspx
199,999
200,025
With scons, how do you link to prebuilt libraries?
I recently started using scons to build several small cross-platform projects. One of these projects needs to link against pre-built static libraries... how is this done? In make, I'd just append "link /LIBPATH:wherever libstxxl.lib" on windows, and "stxxl.a" on unix.
Just to document the answer, as I already located it myself. Program( 'foo', ['foo.cpp'], LIBS=['foo'], LIBPATH='.' ) Adding the LIBS & LIBPATH parameters add the correct arguments to the build command line. More information here.
200,032
200,054
Are C++ meta-templates required knowledge for programmers?
In my experience meta-templates are really fun (when your compilers are compliant), and can give good performance boosts, and luckily I'm surrounded by seasoned C++ programmers that also grok meta-templates; however, occasionally a new developer arrives and can't make heads or tails of some of the meta-template tricks we use (mostly Andrei Alexandrescu stuff) for a few weeks, until he gets initiated appropriately. So I was wondering what's the situation for other C++ programmers out there? Should meta-template programming be something C++ programmers should be "required" to know (excluding entry level students of course), or not? Edit: Note my question is related to production code and not little samples or prototypes
If you can find enough candidates who really know template meta-programming then by all means, require it. You will be showing a lot of qualified and potentially productive people the door (there are plenty of legitimate reasons not to know how to do this, namely that if you do it on a lot of platforms, you will create code that can't compile, or that average developers will have trouble understanding). Template meta-programming is great, but let's face it, it's pushing C++ to the limit. Now, a candidate should probably understand the basics (compute n! at compile time, or at least explain how it works if they are shown the code). If your new developers are reliably becoming productive within a few weeks, then your current recruiting is probably pretty good.
200,093
200,160
UTF usage in C++ code
What is the difference between UTF and UCS? What are the best ways to represent non-European character sets (using UTF) in C++ strings? I would like to know your recommendations for: Internal representation inside the code For string manipulation at run-time For using the string for display purposes Best storage representation (i.e. in a file) Best on-wire transport format (transfer between applications that may be on different architectures and have different standard locales)
What is the difference between UTF and UCS. UCS encodings are fixed width, and are marked by how many bytes are used for each character. For example, UCS-2 requires 2 bytes per character. Characters with code points outside the available range can't be encoded in a UCS encoding. UTF encodings are variable width, and marked by the minimum number of bits to store a character. For example, UTF-16 requires at least 16 bits (2 bytes) per character. Characters with large code points are encoded using a larger number of bytes -- 4 bytes for astral characters in UTF-16. Internal representation inside the code Best storage representation (i.e. In file) Best on wire transport format (Transfer between application that may be on different architectures and have a different standard locale) For modern systems, the most reasonable storage and transport encoding is UTF-8. There are special cases where others might be appropriate -- UTF-7 for old mail servers, UTF-16 for poorly-written text editors -- but UTF-8 is most common. Preferred internal representation will depend on your platform. In Windows, it is UTF-16. In UNIX, it is UCS-4. Each has its good points: UTF-16 strings never use more memory than a UCS-4 string. If you store many large strings with characters primarily in the basic multi-lingual plane (BMP), UTF-16 will require much less space than UCS-4. Outside the BMP, it will use the same amount. UCS-4 is easier to reason about. Because UTF-16 characters might be split over multiple "surrogate pairs", it can be challenging to correctly split or render a string. UCS-4 text does not have this issue. UCS-4 also acts much like ASCII text in "char" arrays, so existing text algorithms can be ported easily. Finally, some systems use UTF-8 as an internal format. This is good if you need to inter-operate with existing ASCII- or ISO-8859-based systems because NULL bytes are not present in the middle of UTF-8 text -- they are in UTF-16 or UCS-4.
200,237
200,425
Where can I learn more about C++0x?
I would like to learn more about C++0x. What are some good references and resources? Has anyone written a good book on the subject yet?
Articles on: Lambda Expressions; The Type Traits Library; The rvalue Reference; Concepts; Variadic Templates; shared_ptr; Regular Expressions; Tuples; Multi-Threading. General discussion: The C/C++ Users Journal's "The New C++" article series. Videos: Google tech talk overview of various features; overview at Wikipedia. Library: Boost
200,550
200,645
Trapping messages in MFC - What's the difference?
I was just wondering what (if any) the difference was between the following two message traps in MFC for the function, OnSize(..). 1 - Via Message map: BEGIN_MESSAGE_MAP(CClassWnd, CBaseClassWnd) ... ON_WM_SIZE() .. END_MESSAGE_MAP() 2 - Via afx_message: afx_msg type OnSize(...); They seem to be used interchangeably, which one should be used or does it depend on other factors?
Both parts are necessary to add a message handler to a class. The message map should be declared inside your class, together with declarations for any message handler functions (e.g., OnSize). class CClassWnd : public CBaseClassWnd { ... afx_msg void OnSize(UINT nType, int cx, int cy); DECLARE_MESSAGE_MAP() }; afx_msg is just an empty placeholder macro - it doesn't actually do anything, but is always included by convention. The message map is then defined in the class's .cpp file: BEGIN_MESSAGE_MAP(CClassWnd, CBaseClassWnd) ON_WM_SIZE() END_MESSAGE_MAP() These macros generate a lookup table for the class which allows messages received by the window to be dispatched to the corresponding handler functions. The ON_WM_SIZE macro allows the wParam and lParam message parameters in the WM_SIZE message to be decoded into more meaningful values for the message handler function (nType, cx, and cy in this case). MFC provides macros for most window messages (WM_LBUTTONDOWN, WM_DESTROY, etc). You can find more information on how message maps work in MFC here on MSDN.
201,141
203,250
How to create a JNIEnv mock in C/C++
I am writing some JNI code in C that I wish to test using cunit. In order to call the JNI functions, I need to create a valid JNIEnv struct. Does anyone know if there is a mocking framework for such a purpose, or who can give me some pointers on how to create a mock JNIEnv struct myself?
jni.h contains the complete structure for JNIEnv_, including the "jump table" JNINativeInterface_. You could create your own JNINativeInterface_ (pointing to mock implementations) and instantiate a JNIEnv_ from it. Edit in response to comments: (I didn't look at the other SO question you referenced) #include "jni.h" #include <iostream> jint JNICALL MockGetVersion(JNIEnv *) { return 23; } JNINativeInterface_ jnini = { 0, 0, 0, 0, //4 reserved pointers MockGetVersion }; // class Foo { public static native void bar(); } void Java_Foo_bar(JNIEnv* jni, jclass) { std::cout << jni->GetVersion() << std::endl; } int main() { JNIEnv_ myjni = {&jnini}; Java_Foo_bar(&myjni, 0); return 0; }
201,593
201,795
Is there a simple way to convert C++ enum to string?
Suppose we have some named enums: enum MyEnum { FOO, BAR = 0x50 }; What I googled for is a script (any language) that scans all the headers in my project and generates a header with one function per enum. char* enum_to_string(MyEnum t); And an implementation with something like this: char* enum_to_string(MyEnum t){ switch(t){ case FOO: return "FOO"; case BAR: return "BAR"; default: return "INVALID ENUM"; } } The gotcha is really with typedefed enums, and unnamed C style enums. Does anybody know something for this? EDIT: The solution should not modify my source, except for the generated functions. The enums are in an API, so using the solutions proposed until now is just not an option.
You may want to check out GCCXML. Running GCCXML on your sample code produces: <GCC_XML> <Namespace id="_1" name="::" members="_3 " mangled="_Z2::"/> <Namespace id="_2" name="std" context="_1" members="" mangled="_Z3std"/> <Enumeration id="_3" name="MyEnum" context="_1" location="f0:1" file="f0" line="1"> <EnumValue name="FOO" init="0"/> <EnumValue name="BAR" init="80"/> </Enumeration> <File id="f0" name="my_enum.h"/> </GCC_XML> You could use any language you prefer to pull out the Enumeration and EnumValue tags and generate your desired code.
201,940
201,998
can realloc Array, then Why use pointers?
This was a job placement interview question I faced. They asked whether we can realloc an array; I said yes. Then they asked: in that case, why do we need pointers, since most people's reason for them is that arrays waste memory space? I was not able to give a satisfactory answer. If anybody can give a satisfactory answer, I'll be obliged. Please mention any situation where the above statement can be contradicted. Thank you.
You can only reallocate an array that was allocated dynamically. If it was allocated statically, it cannot be reallocated [safely].* Pointers hold addresses of data in memory. They can be allocated, deallocated, and reallocated dynamically using the new/delete operators in C++ and malloc/free in C. I would strongly suggest that you read The C Programming Language by Kernighan and Ritchie and a solid C++ text like C++ From the Ground Up by Herbert Schildt. Using dynamic memory, pointers, offsets, etc are all fundamental to using these two languages. Not knowing how they work, and why they exist will be a likely reason for potential employers to turn you down. *the compiler shouldn't let you reallocate anything that's been allocated statically, but if it does, the behavior is undefined
202,007
581,579
Creating Docking Panes in CView instead of CMainFrame
When creating an MDI application with "Visual Studio" style using the AppWizard of VS2008 (plus Feature Pack), the CMainFrame class gets a method CreateDockingWindows(). Since I don't want all panes to be always visible but want to display them depending on the type of the active document, I made those windows members of my views and also moved their creation to OnInitialUpdate(). I create those panes in the same manner as was done by the CMainFrame, including setting the main frame as parent window. The positions of the docking windows get saved to the registry automatically, but they won't be restored because the docking windows don't yet exist when the frame is initialized. Is it a good idea to create the docking windows with the views, or should I expect more problems? Is there a better way to accomplish what I want? Thanks in advance!
The following solution turned out to work pretty well for me. The MainFrame still owns all the panes, thus keeping all the existing framework functionality. I derive the panes from a class which implements the "CView-like" behavior I need: /** * \brief Mimics some of the behavior of a CView * * CDockablePane derived class which stores a pointer to the document and offers * a behavior similar to CView classes. * * Since the docking panes are child windows of the main frame, * they have a longer lifetime than a view. Thus the (de-)initialization code * cannot reside in the CTor/DTor. */ class CPseudoViewPane : public CDockablePane { DECLARE_DYNAMIC(CPseudoViewPane) public: /// Initializes the pane with the specified document void Initialize(CMyDoc* pDoc); void DeInitialize(); /// Checks if window is valid and then forwards call to pure virtual OnUpdate() method. void Update(const LPARAM lHint); protected: CPseudoViewPane(); virtual ~CPseudoViewPane(); CMyDoc* GetDocument() const { ASSERT(NULL != m_pDocument); return m_pDocument; } CMainFrame* GetMainFrame() const; /** * This method is called after a document pointer has been set with #Initialize(). * Override this in derived classes to mimic a view's "OnInitialUpdate()-behavior". */ virtual void OnInitialUpdate() = 0; /** * Called by #Update(). Override to mimic a view's "OnUpdate()-behavior". * This method has a simplified parameter list. Enhance this if necessary. */ virtual void OnUpdate(const LPARAM lHint) = 0; DECLARE_MESSAGE_MAP() private: CMyDoc* m_pDocument; };
202,549
202,943
Parsing iCal/vCal/Google calendar files in C++
Can anyone recommend a ready-to-use class/library compatible with C/C++/MFC/ATL that would parse iCal/vCal/Google calendar files (with recurrences)? It can be free or commercial.
There is a parser in PHP for iCal; you can download it and check the code to adapt it to your language. For vCal/vCard parsing there's a C library. For Google Calendar I couldn't find an exact answer, so try to Google it.
202,718
202,738
Can the result of a function call be used as a default parameter value?
Is there a good method for writing C / C++ function headers with default parameters that are function calls? I have some header with the function: int foo(int x, int y = 0); I am working in a large code base where many functions call this function and depend on this default value. This default value now needs to change to something dynamic and I am looking for a way to do: int foo(int x, int y = bar()); Where bar() is some function that generates the default value based on some system parameters. Alternatively this function prototype would look like: int foo(int x, int y = baz.bar()); Where baz is a function belonging to an object that has not been instantiated within the header file.
Go figure! It does work. Default arguments in C++ functions
203,126
203,136
How does the C++ compiler know which implementation of a virtual function to call?
Here is an example of polymorphism from http://www.cplusplus.com/doc/tutorial/polymorphism.html (edited for readability): // abstract base class #include <iostream> using namespace std; class Polygon { protected: int width; int height; public: void set_values(int a, int b) { width = a; height = b; } virtual int area(void) =0; }; class Rectangle: public Polygon { public: int area(void) { return width * height; } }; class Triangle: public Polygon { public: int area(void) { return width * height / 2; } }; int main () { Rectangle rect; Triangle trgl; Polygon * ppoly1 = &rect; Polygon * ppoly2 = &trgl; ppoly1->set_values (4,5); ppoly2->set_values (4,5); cout << ppoly1->area() << endl; // outputs 20 cout << ppoly2->area() << endl; // outputs 10 return 0; } My question is how does the compiler know that ppoly1 is a Rectangle and that ppoly2 is a Triangle, so that it can call the correct area() function? It could find that out by looking at the "Polygon * ppoly1 = &rect;" line and knowing that rect is a Rectangle, but that wouldn't work in all cases, would it? What if you did something like this? cout << ((Polygon *)0x12345678)->area() << endl; Assuming that you're allowed to access that random area of memory. I would test this out but I can't on the computer I'm on at the moment. (I hope I'm not missing something obvious...)
Each object (that belongs to a class with at least one virtual function) has a pointer, called a vptr. It points to the vtbl of its actual class (which each class with virtual functions has at least one of; possibly more than one for some multiple-inheritance scenarios). The vtbl contains a bunch of pointers, one for each virtual function. So at runtime, the code just uses the object's vptr to locate the vtbl, and from there the address of the actual overridden function. In your specific case, Polygon, Rectangle, and Triangle each has a vtbl, each with one entry pointing to its relevant area method. Your ppoly1 will have a vptr pointing to Rectangle's vtbl, and ppoly2 similarly with Triangle's vtbl. Hope this helps!
203,548
203,550
Undefined Symbol ___gxx_personality_v0 on link
I've been getting this undefined symbol building with this command line: $ gcc test.cpp Undefined symbols: "___gxx_personality_v0", referenced from: etc... test.cpp is simple and should build fine. What is the deal?
Use g++ test.cpp instead, since this is c++ code. Or, if you really want to use gcc, add -lstdc++ to the command line, like so: gcc test.cpp -lstdc++ Running md5 against the a.out produced under each scenario shows that it's the same output. But, yeah, g++ probably makes your world a simpler place.
203,616
203,655
Why does C# not provide the C++ style 'friend' keyword?
The C++ friend keyword allows a class A to designate class B as its friend. This allows Class B to access the private/protected members of class A. I've never read anything as to why this was left out of C# (and VB.NET). Most answers to this earlier StackOverflow question seem to be saying it is a useful part of C++ and there are good reasons to use it. In my experience I'd have to agree. Another question seems to me to be really asking how to do something similar to friend in a C# application. While the answers generally revolve around nested classes, it doesn't seem quite as elegant as using the friend keyword. The original Design Patterns book uses it regularly throughout its examples. So in summary, why is friend missing from C#, and what is the "best practice" way (or ways) of simulating it in C#? (By the way, the internal keyword is not the same thing, it allows all classes within the entire assembly to access internal members, while friend allows you to give a certain class complete access to exactly one other class)
Having friends in programming is more-or-less considered "dirty" and easy to abuse. It breaks the relationships between classes and undermines some fundamental attributes of an OO language. That being said, it is a nice feature and I've used it plenty of times myself in C++; and would like to use it in C# too. But I bet because of C#'s "pure" OOness (compared to C++'s pseudo OOness) MS decided that because Java has no friend keyword C# shouldn't either (just kidding ;)) On a serious note: internal is not as good as friend but it does get the job done. Remember that it is rare that you will be distributing your code to 3rd party developers not through a DLL; so as long as you and your team know about the internal classes and their use you should be fine. EDIT Let me clarify how the friend keyword undermines OOP. Private and protected variables and methods are perhaps one of the most important parts of OOP. The idea that objects can hold data or logic that only they can use allows you to write your implementation of functionality independent of your environment - and that your environment cannot alter state information that it is not suited to handle. By using friend you are coupling two classes' implementations together - which is much worse than if you had just coupled their interfaces.
203,667
203,703
C++ "Named Parameter Idiom" vs. Boost::Parameter library
I've looked at both the Named Parameter Idiom and the Boost::Parameter library. What advantages does each one have over the other? Is there a good reason to always choose one over the other, or might each of them be better than the other in some situations (and if so, what situations)?
Implementing the Named Parameter Idiom is really easy, almost about as easy as using Boost::Parameter, so it kind of boils down to one main point. -Do you already have boost dependencies? If you don't, Boost::parameter isn't special enough to merit adding the dependency. Personally I've never seen Boost::parameter in production code, 100% of the time its been a custom implementation of Named Parameters, but that's not necessarily a good thing.
203,823
203,830
C++ headers - separation between interface and implementation details
One of the classes in my program uses a third-party library. The library object is a private member of my class: // My.h #include <3pheader.h> class My { ... private: 3pObject m_object; }; The problem with this: any other unit in my program that uses the My class has to be configured to include the 3p headers. Moving to another kind of 3p would jeopardize the whole build... I see two ways to fix this - one is to make 3pObject extern and turn m_object into a pointer, initialized in the constructor; the second is to create "interface" and "factory" classes and export them... Could you suggest other ways to solve this?
Use the "pimpl" idiom: // header #include <memory> class My { public: My(); ~My(); // define both in the .cpp, where impl is a complete type private: class impl; std::auto_ptr<impl> _impl; }; // cpp #include <3pheader.h> class My::impl { 3pObject _object; }; My::My() : _impl(new impl) {} My::~My() {}
203,847
204,374
Fast library to replace CDC vector graphics
I have a mature MFC C++ application that displays on screen and prints using CDC wrappings on the Win32 GDI. While it has been optimized over the years, I would like to replace it with something a bit faster. The graphics include rendered triangular surface models, complex polylines and polygons, and lots of text. It needs to meet the following criteria: The number of vectors displayed is likely to be very large. For example, a single surface triangle is likely to generate a number of lines and solid fills when rendered. At present this information is not stored anywhere; it is generated and drawn on the fly. The SDK should support limiting the total number of buffered vectors, or it is liable to run out of memory. The SDK should be able to render to any CWnd derived class, including CView and CScrollView classes. The SDK should support printing to any Windows print device. The SDK should be low level enough to make the port from low level CDC / GDI calls relatively straightforward. Open source is always nice, but a one-off cost of up to say $2k, with optional upgrades/support, would also be OK. A license cost per user is not acceptable. Access to source code would be a big bonus, specifically with the idea of running portions of the SDK on Windows CE / Mobile. I currently handle my own 3d to 2d viewport management. If a decent low level SDK is not available, a higher level SDK must handle 3d well, and work with millions of triangles, polygons and text entities on a 32 bit Windows platform. Any suggestions? Listing the specific pros and cons in your proposed suggestion would be greatly appreciated.
I have once evaluated FastGraph (http://www.fastgraph.com) for a project. I liked it in the small test programs I wrote, it was very fast. We ended up not using it for external reasons (nothing to do with the libraries that I evaluated) so I don't have more practical experience.
203,899
204,342
Use Compiler/Linker for C++ Code Clean-up
I'm using VS2008 for a C++ project. The code is quite old and has passed through many hands. There are several classes hierarchies, functions, enums and so on which are no longer being used. Is there a way to get the compiler/linker to list out identifiers which have been declared or defined but are not being referred to anywhere?
PC-Lint "whole project" analysis (which analyses multiple files together) can do this. Please feel free to contact me if you need help setting it up.
203,990
205,417
How to manage a simple PHP session using C++ cURL (libcurl)
I'm writing a C++ client which is using libcurl for communicating with a PHP script. The communication should be session based, and thus the first task is to login and make the PHP script set up a session. I'm not used to working with sessions either from C++ or PHP. I basically know that it has to do with cookies and communicating session id. I can't find any example on the curl homepage which demonstrates a simple session management use case. I'm assuming it has something to do with one or many of the following options in curl: CURLOPT_COOKIE CURLOPT_COOKIEFILE CURLOPT_COOKIEJAR CURLOPT_COOKIESESSION CURLOPT_COOKIELIST But I can't really see the big picture just from the documentation of CURLOPT_COOKIESESSION for instance. Anybody who has done this, please share a simple piece of code which shows the concept. Regards Robert
As far as I understand it, CURL will handle session cookies automatically for you if you enable cookies, as long as you reuse your CURL handle for each request in the session: CURL *Handle = curl_easy_init(); // Read cookies from a previous session, as stored in MyCookieFileName. curl_easy_setopt( Handle, CURLOPT_COOKIEFILE, MyCookieFileName ); // Save cookies from *this* session in MyCookieFileName curl_easy_setopt( Handle, CURLOPT_COOKIEJAR, MyCookieFileName ); curl_easy_setopt( Handle, CURLOPT_URL, MyLoginPageUrl ); assert( curl_easy_perform( Handle ) == CURLE_OK ); curl_easy_setopt( Handle, CURLOPT_URL, MyActionPageUrl ); assert( curl_easy_perform( Handle ) == CURLE_OK ); // The cookies are actually saved here. curl_easy_cleanup( Handle ); I'm not positive that you need to set both COOKIEFILE and COOKIEJAR, but the documentation makes it seem that way. In any case, you have to set one of the two in order to enable cookies at all in CURL. You can do something as simple as: curl_easy_setopt( Handle, CURLOPT_COOKIEFILE, "" ); That won't read any cookies from disk, but it will enable session cookies for the duration of the curl handle.
204,167
215,834
How to get folder sharing on Windows Mobile emulator to work
I am developing an application using Windows Mobile 5.0, under embedded VC++ 4.0, and using the emulator for debugging. I need to copy some files onto the emulator and planned on using the option to map a directory to the emulator storage card. Problem is, this option is greyed out when I run the emulator. From the emulator help i get 'On the Emulator, run a Windows CE OS that supports the ability to connect to a directory on the development workstation. ' How do I accomplish this? I have seen the command line option /sharedfolder but can't get at this from platform manager under EVC++ 4.0. All comments welcome.
I have the WinMo 5.0 SDK installed on Visual Studio 2005 and the option to map a directory works fine for me. I'd guess it's an issue related to eVC, which is pretty old by now. My recommendation is to try VS 2005 or 2008, there's a free 90-day trial you can download from microsoft: http://msdn.microsoft.com/en-us/vstudio/products/aa700831.aspx Also, I'd note that VS is way better than eVC in many aspects. I used eVC and them moved to VS 2005, many "heavy templates" I had which wouldn't compile in eVC were compiled OK in VS 2005.
204,345
204,353
How to setup a linux C++ project in Eclipse?
I have an existing C++ project on a linux environment, and would like to import it into the Eclipse IDE. Not sure if I should start a new Eclipse C++ project, or if there was some way to import the source files?
You can create a new Eclipse C++ project "in-place", i.e. if you have your sources checked out at /home/joe/mysources, you can select that directory in the new project wizard (uncheck the "use default location" checkbox first). All your source files will show up in the Eclipse project.
204,360
204,364
What's the term for design ala "object.method1().method2().method3()"?
What's the term for this design? object.method1().method2().method3() ..when all methods return *this? I found the term for this a while ago, but lost it meanwhile. I have no clue how to search for this on google :) Also if anyone can think of a better title for the question, feel free to change it. Thanks Update-Gishu: After reading about it, I feel that your question is misleading w.r.t. code snippet provided.. (Feel free to rollback) Method Chaining object.method1().method2().method3() Fluent Interfaces private void makeFluent(Customer customer) { customer.newOrder() .with(6, "TAL") .with(5, "HPK").skippable() .with(3, "LGV") .priorityRush(); }
Looks to me like you are describing a fluent interface. I've also heard it referred to as pipelining or chaining. Update-Gishu: http://martinfowler.com/bliki/FluentInterface.html
204,476
204,483
What should main() return in C and C++?
What is the correct (most efficient) way to define the main() function in C and C++ — int main() or void main() — and why? And how about the arguments? If int main() then return 1 or return 0? There are numerous duplicates of this question, including: What are the valid signatures for C's main() function? The return type of main() function Difference between void main() and int main()? main()'s signature in C++ What is the proper declaration of main()? — For C++, with a very good answer indeed. Styles of main() functions in C Return type of main() method in C int main() vs void main() in C Related: C++ — int main(int argc, char **argv) C++ — int main(int argc, char *argv[]) Is char *envp[] as a third argument to main() portable? Must the int main() function return a value in all compilers? Why is the type of the main() function in C and C++ left to the user to define? Why does int main(){} compile? Legal definitions of main() in C++14?
The return value for main indicates how the program exited. Normal exit is represented by a 0 return value from main. Abnormal exit is signaled by a non-zero return, but there is no standard for how non-zero codes are interpreted. As noted by others, void main() is prohibited by the C++ standard and should not be used. The valid C++ main signatures are: int main() and int main(int argc, char* argv[]) which is equivalent to int main(int argc, char** argv) It is also worth noting that in C++, int main() can be left without a return-statement, at which point it defaults to returning 0. This is also true with a C99 program. Whether return 0; should be omitted or not is open to debate. The range of valid C program main signatures is much greater. Efficiency is not an issue with the main function. It can only be entered and left once (marking the program's start and termination) according to the C++ standard. For C, re-entering main() is allowed, but should be avoided.
204,576
204,662
Splitting a already split pane (MFC)
In my MFC program I am using a splitter to create two panes. I now want to split one of these panes in half again and put in another view, can someone talk me through how to do it or point me in the direction of some code? I would prefer to code it myself so I am not interested in custom derived classes unless they are extremely basic. Thanks!
In CMainFrame::OnCreateClient // Create splitter with 2 rows and 1 col m_wndSplitter.CreateStatic(this, 2, 1); // Create a view in the top row m_wndSplitter.CreateView(0, 0, RUNTIME_CLASS(CView1), CSize(100, 100), pContext); // Create a 2 column splitter that will go in the bottom row of the first m_wndSplitter2.CreateStatic(&m_wndSplitter, 1, 2, WS_CHILD|WS_VISIBLE, m_wndSplitter.IdFromRowCol(1, 0)); // Create views for the bottom splitter m_wndSplitter2.CreateView(0, 0, RUNTIME_CLASS(CView2), CSize(100, 100), pContext); m_wndSplitter2.CreateView(0, 1, RUNTIME_CLASS(CView3), CSize(100, 100), pContext); ...
204,606
204,712
Undefined behavior when exceed 64 bits
In my current 32-bit application, I check (very occasionally) for overflow by doing operations on 64-bit integers. However, on 64-bit systems there does not seem to be a standard 128-bit integer. Is there a simple way of checking for overflow, or a way of getting 128-bit integers, which works on all OSes and compilers? I tried using GMP as a more generic solution, but it is a little heavyweight for my requirements. Efficiency is not too important, no processor specific-ASM is.
Much of the discussion in this question applies: How to detect integer overflow? Many of the techniques used for 32-bit overflow checking apply to 64 bits as well (not all of the techniques discussed use the next larger integer type to handle the overflow).
204,683
205,034
How do I input 4-byte UTF-8 characters?
I am writing a small app which I need to test with utf-8 characters of different number of byte lengths. I can input unicode characters to test that are encoded in utf-8 with 1,2 and 3 bytes just fine by doing, for example: string in = "pi = \u3a0"; But how do I get a unicode character that is encoded with 4-bytes? I have tried: string in = "aegan check mark = \u10102"; Which as far as I understand should be outputting . But when I print that out I get ᴶ0 What am I missing? EDIT: I got it to work by adding leading zeros: string in = "\U00010102"; Wish I had thought of that sooner :)
There's a longer form of escape in the pattern \U followed by eight digits, rather than \u followed by four digits. This is also used in Java and Python, amongst others: >>> '\xf0\x90\x84\x82'.decode("UTF-8") u'\U00010102' However, if you are using byte strings, why not just escape each byte like above, rather than relying on the compiler to convert the escape to a UTF-8 string? This would seem to be more portable as well - if I compile the following program: #include <iostream> #include <string> int main() { std::cout << "narrow: " << std::string("\uFF0E").length() << " utf8: " << std::string("\xEF\xBC\x8E").length() << " wide: " << std::wstring(L"\uFF0E").length() << std::endl; std::cout << "narrow: " << std::string("\U00010102").length() << " utf8: " << std::string("\xF0\x90\x84\x82").length() << " wide: " << std::wstring(L"\U00010102").length() << std::endl; } On win32 with my current options cl gives: warning C4566: character represented by universal-character-name '\UD800DD02' cannot be represented in the current code page (932) The compiler tries to convert all unicode escapes in byte strings to the system code page, which unlike UTF-8 cannot represent all unicode characters. Oddly it has understood that \U00010102 is \uD800\uDD02 in UTF-16 (its internal unicode representation) and mangled the escape in the error message... When run, the program prints: narrow: 2 utf8: 3 wide: 1 narrow: 2 utf8: 4 wide: 2 Note that the UTF-8 bytestrings and the wide strings are correct, but the compiler failed to convert "\U00010102", giving the byte string "??", an incorrect result.
204,841
204,959
Where can I look at the C++ standard
Possible Duplicate: Where do I find the current C or C++ standard documents? I want to use STL with the current program I'm working on and the vendor doesn't support what I feel is a reasonable STL, working is not my idea of reasonable. I have been unable to find a C++ Standard or an STL standard that is not just an API that leaves me wondering if my interpretation is correct or if the vendor interpretation is correct. I've already spent a great deal of time at SGI's site. Any reccomendations? Also, is there any document that's not an API that would be considered the standard?
Information on where to get the current standard document: Where do I find the current C or C++ standard documents? Other responses in that question have information on downloads of various drafts of the standards which can be obtained free (the actual ratified standards cannot be obtained free).
204,983
205,000
static const Member Value vs. Member enum : Which Method is Better & Why?
If you want to associate some constant value with a class, here are two ways to accomplish the same goal: class Foo { public: static const size_t Life = 42; }; class Bar { public: enum {Life = 42}; }; Syntactically and semantically they appear to be identical from the client's point of view: size_t fooLife = Foo::Life; size_t barLife = Bar::Life; Is there any reason other than just pure style concerns why one would be preferable to another?
The enum hack used to be necessary because many compilers didn't support in-place initialization of the value. Since this is no longer an issue, go for the other option. Modern compilers are also capable of optimizing this constant so that no storage space is required for it. The only reason for not using the static const variant is if you want to forbid taking the address of the value: you can't take an address of an enum value while you can take the address of a constant (and this would prompt the compiler to reserve space for the value after all, but only if its address is really taken). Additionally, the taking of the address will yield a link-time error unless the constant is explicitly defined as well. Notice that it can still be initialized at the site of declaration: struct foo { static int const bar = 42; // Declaration, initialization. }; int const foo::bar; // Definition.
205,059
205,083
Is there a C++ decompiler?
I have a program in which I've lost the C++ source code. Are there any good C++ decompilers out there? I've already ran across Boomerang.
You can use IDA Pro by Hex-Rays. You will usually not get good C++ out of a binary unless you compiled in debugging information. Prepare to spend a lot of manual labor reversing the code. If you didn't strip the binaries there is some hope as IDA Pro can produce C-alike code for you to work with. Usually it is very rough though, at least when I used it a couple of years ago.
205,127
206,352
CMapStringToOb::Lookup Won't Work with Japanese Characters
Does anyone know why CMapStringToOb::Lookup doesn't work in Japanese? The code loads a string from the string table, and puts it into a CMapStringToOb object. Later it loads the same string from the string table (so it is guaranteed to be exactly the same) and calls CMapStringToOb::Lookup to find it. It works in all languages that we've translated to and tested, except for Japanese which can't find the string in the CMapStringToOb object. Thanks
Is your app Unicode or MBCS? Not that I would have an explanation in one of those cases. Just trying to narrow down the scope.
205,270
785,307
ActiveX plugin causes ASSERT to fail on application exit in VS2008
My MFC application using the "ESRI MapObjects LT2" ActiveX plugin throws an ASSERT at me when closing it. The error occurs in cmdtarg.cpp: CCmdTarget::~CCmdTarget() { #ifndef _AFX_NO_OLE_SUPPORT if (m_xDispatch.m_vtbl != 0) ((COleDispatchImpl*)&m_xDispatch)->Disconnect(); ASSERT(m_dwRef <= 1); //<--- Fails because m_dwRef is 3 #endif m_pModuleState = NULL; } I built the (native C++) application with VC9. When I compile the application with VC6, it behaves nicely. What could be the reason for this?
The following solved it for me: In the window that contains the control, add an OnDestroy() handler: void CMyWnd::OnDestroy() { // Apparently we have to disconnect the (ActiveX) Map control manually // with this undocumented method. COleControlSite* pSite = GetOleControlSite(MY_DIALOG_CONTROL_ID); if(NULL != pSite) { pSite->ExternalDisconnect(); } CWnd::OnDestroy(); }
205,945
205,985
Why does the C++ STL not provide any "tree" containers?
Why does the C++ STL not provide any "tree" containers, and what's the best thing to use instead? I want to store a hierarchy of objects as a tree, rather than use a tree as a performance enhancement...
There are two reasons you could want to use a tree: You want to mirror the problem using a tree-like structure: For this we have boost graph library Or you want a container that has tree like access characteristics For this we have std::map (and std::multimap) std::set (and std::multiset) Basically the characteristics of these two containers is such that they practically have to be implemented using trees (though this is not actually a requirement). See also this question: C tree Implementation
206,045
206,054
How do you mark a struct template as friend?
I have code like this: template <typename T, typename U> struct MyStruct { T aType; U anotherType; }; class IWantToBeFriendsWithMyStruct { friend struct MyStruct; //what is the correct syntax here ? }; What is the correct syntax to give friendship to the template ?
class IWantToBeFriendsWithMyStruct { template <typename T, typename U> friend struct MyStruct; }; Works in VS2008, and allows MyStruct to access the class.
206,106
206,200
Is !! a safe way to convert to bool in C++?
[This question is related to but not the same as this one.] If I try to use values of certain types as boolean expressions, I get a warning. Rather than suppress the warning, I sometimes use the ternary operator (?:) to convert to a bool. Using two not operators (!!) seems to do the same thing. Here's what I mean: typedef long T; // similar warning with void * or double T t = 0; bool b = t; // performance warning: forcing 'long' value to 'bool' b = t ? true : false; // ok b = !!t; // any different? So, does the double-not technique really do the same thing? Is it any more or less safe than the ternary technique? Is this technique equally safe with non-integral types (e.g., with void * or double for T)? I'm not asking if !!t is good style. I am asking if it is semantically different than t ? true : false.
The argument of the ! operator and the first argument of the ternary operator are both implicitly converted to bool, so !! and ?: are IMO silly redundant decorations of the cast. I vote for b = (t != 0); No implicit conversions.
206,172
206,300
How do you programmatically determine whether a Windows computer is a member of a domain?
I need a way to determine whether the computer running my program is joined to any domain. It doesn't matter what specific domain it is part of, just whether it is connected to anything. I'm coding in vc++ against the Win32 API.
Straight from Microsoft: How To Determine If a Windows NT/Windows 2000 Computer Is a Domain Member This approach uses the Windows API. From the article summary: This article describes how to determine if a computer that is running Windows NT 4.0 or Windows 2000 is a member of a domain, is a member of a workgroup, or is a stand-alone computer using the Local Security Authority APIs. The article also provides sample code for a small program that outputs whether the computer the program is running on is part of a domain, part of a workgroup, or a standalone computer.
206,257
206,308
Freeing memory allocated to an array of void pointers
I am declaring an array of void pointers. Each of which points to a value of arbitary type. void **values; // Array of void pointers to each value of arbitary type Initializing values as follows: values = (void**)calloc(3,sizeof(void*)); //can initialize values as: values = new void* [3]; int ival = 1; float fval = 2.0; char* str = "word"; values[0] = (void*)new int(ival); values[1] = (void*)new float(fval); values[2] = (void*)str; //Trying to Clear the memory allocated free(*values); //Error: *** glibc detected *** simpleSQL: free(): invalid pointer: 0x080611b4 //Core dumped delete[] values*; //warning: deleting 'void*' is undefined //Similar Error. Now how do I free/delete the memory allocated for values ( the array of void pointers)?
You have 3 things that are dynamically allocated that need to be freed in 2 different ways: delete reinterpret_cast<int*>( values[0]); delete reinterpret_cast<float*>( values[1]); free( values); // I'm not sure why this would have failed in your example, // but it would have leaked the 2 items that you allocated // with new Note that since str is not dynamically allocated it should not (actually cannot) be freed. A couple of notes: I'm assuming that the sizeof(void) was meant to be sizeof(void*) since what you have won't compile I'm not going to say anything about your seemingly random casting except that it looks like code that's ready for disaster in general
206,272
206,279
Hiding private data members? (C++)
Is there a way to hide private data members of a C++ class away from its users, in the cpp file? I think of the private members as part of the implementation and it seems a little backwards to declare them in the header file.
The "pimpl" idiom is how this is generally handled. See http://www.gotw.ca/gotw/024.htm http://www.gotw.ca/gotw/028.htm http://herbsutter.com/gotw/_100/ (updated for C++11)
206,498
206,533
need access to Class Object via Function Pointer - Binary Search Tree Class Creation Related
Creating Traversals for Binary Search Tree with Recursion. void inOrder(void (*inOrderPtr)(T&)) { if(this->left != NULL) inOrder((*inOrderPtr)(this->left)); inOrderPtr(this->data); if(this->right != NULL) inOrder((*inOrderPtr)(this->right)); } Here is the function. Now this is obviously wrong. This function is called like this: first.inOrder(print_vals); first is the object, and print vals is simply a function that prints what is the data in the object. There are three values for each object, data, left, and right. How do I actually access those items with the function?
It looks like the call to inOrderPtr(this->data) is passing just the data member of the tree node to the print_vals function. If you would like to access the left and right elements, use inOrderPtr(*this). You will have to change various declarations in order for this to compile, such as the declarations for inOrderPtr and print_vals. Without seeing the rest of your code it's hard to say what you need to change them to. On another note, it seems to me that you might want to write the recursive calls more like this: this->left->inOrder(inOrderPtr); I am making assumptions about your implementation, though.
206,564
206,639
What is the performance implication of converting to bool in C++?
[This question is related to but not the same as this one.] My compiler warns about implicitly converting or casting certain types to bool whereas explicit conversions do not produce a warning: long t = 0; bool b = false; b = t; // performance warning: forcing long to bool b = (bool)t; // performance warning b = bool(t); // performance warning b = static_cast<bool>(t); // performance warning b = t ? true : false; // ok, no warning b = t != 0; // ok b = !!t; // ok This is with Visual C++ 2008 but I suspect other compilers may have similar warnings. So my question is: what is the performance implication of casting/converting to bool? Does explicit conversion have better performance in some circumstance (e.g., for certain target architectures or processors)? Does implicit conversion somehow confuse the optimizer? Microsoft's explanation of their warning is not particularly helpful. They imply that there is a good reason but they don't explain it.
I was puzzled by this behaviour, until I found this link: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=99633 Apparently, coming from the Microsoft Developer who "owns" this warning: This warning is surprisingly helpful, and found a bug in my code just yesterday. I think Martin is taking "performance warning" out of context. It's not about the generated code, it's about whether or not the programmer has signalled an intent to change a value from int to bool. There is a penalty for that, and the user has the choice to use "int" instead of "bool" consistently (or more likely vice versa) to avoid the "boolifying" codegen. [...] It is an old warning, and may have outlived its purpose, but it's behaving as designed here. So it seems to me the warning is more about style and avoiding some mistakes than anything else. Hope this will answer your question... :-p
206,811
206,889
Code standard refactoring on large codebase
My studio has a large codebase that has been developed over 10+ years. The coding standards that we started with were developed with few developers in house and long before we had to worry about any kind of standards related to C++. Recently, we started a small R&D project in house and we updated our coding conventions to be more suitable for our environment. The R&D work is going to be integrated into existing project code. One major problem facing us is that we now have two standards for the two areas of work, and now the code bases will cross. I don't want two standards at the studio, and I'm actually quite happy to move forward with a single standard. (The 'how' of how we got into this situation isn't important -- just that we are and I had hoped that we wouldn't be.) The problem is refactoring existing code. I'm not very keen on having two code bases (one relatively small and one very large) looking different. I am interested in doing some refactoring of one of the existing codebases to make it conform to the other standard. The problem is, the smaller code base is (IMO) the more desirable standard. I started looking around for a tool that could do large scale refactoring for me. I'm not interested in rearranging and tightening code. I'm interested in changing things like class my_class {} .... class my_class A; to class MyClass {} .... class MyClass A; Basically doing function/variable level renaming. I'd prefer not to use something like Visual Assist because that will take a long time. I have upwards of 10000 source/header files with hundreds of thousands of lines of code. Using VA one class at a time would be a time killer and not worth the effort. I did run across Vera in another post on SO. That seems like it might do the job and do it well. I'd like to know if anyone has specific experience using Vera for the situation that I'm in, or has any other recommendations for tools that might get the job done. I think that it's important that this tool actually understands code structure so that we don't wind up just renaming variables in a search/replace manner, because that will lead to subtle bugs if not done carefully. EDIT: While my example shows I'm going from using _ between names to camelcase type notation, it might be more beneficial for us to move the other way. I'm really looking for a generic solution that will help with large scale renaming. Thanks.
My process would be to rename each time someone touches a given module. Eventually, all modules would be refactored, but the incremental approach would result in less code breakage(assuming you have a complete set of tests. ;) )
206,857
206,926
How to implement blocking read using POSIX threads
I would like to implement a producer/consumer scenario that obeys interfaces that are roughly: class Consumer { private: vector<char> read(size_t n) { // If the internal buffer has `n` elements, then dequeue them // Otherwise wait for more data and try again } public: void run() { read(10); read(4839); // etc } void feed(const vector<char> &more) { // Safely queue the data // Notify `read` that there is now more data } }; In this case, feed and run will run on separate threads and read should be a blocking read (like recv and fread). Obviously, I will need some kind of mutual exclusion on my deque, and I will need some kind of notification system to inform read to try again. I hear condition variables are the way to go, but all my multithreading experience lies with Windows and am having a hard time wrapping my head around them. Thanks for any help! (Yes, I know it's inefficient to return vectors. Let's not get into that.)
This code is not production ready. No error checking is done on the results of any library calls. I have wrapped the lock/unlock of the mutex in LockThread so it is exception safe. But that's about it. In addition if I was doing this seriously I would wrap the mutex and condition variables inside objects so they cannot be abused inside other methods of Consumer. But as long as you take note that the lock must be acquired before you use the condition variable (in any way) then this simple situation can stand as is. Out of interest have you checked the boost threading library? #include <iostream> #include <vector> #include <pthread.h> class LockThread { public: LockThread(pthread_mutex_t& m) :mutex(m) { pthread_mutex_lock(&mutex); } ~LockThread() { pthread_mutex_unlock(&mutex); } private: pthread_mutex_t& mutex; }; class Consumer { pthread_mutex_t lock; pthread_cond_t cond; std::vector<char> unreadData; public: Consumer() { pthread_mutex_init(&lock,NULL); pthread_cond_init(&cond,NULL); } ~Consumer() { pthread_cond_destroy(&cond); pthread_mutex_destroy(&lock); } private: std::vector<char> read(size_t n) { LockThread locker(lock); while (unreadData.size() < n) { // Must wait until we have n char. // This is a while loop because feed may not put enough in. // pthread_cond_wait() releases the lock. // Thread will not be allowed to continue until // signal is called and this thread reacquires the lock. pthread_cond_wait(&cond,&lock); // Once released from the condition you will have re-acquired the lock. // Thus feed() must have exited and released the lock first. } /* * Not sure if this is exactly what you wanted. * But the data is copied out of the thread safe buffer * into something that can be returned. */ std::vector<char> result(n); // init result with size n std::copy(&unreadData[0], &unreadData[n], &result[0]); unreadData.erase(unreadData.begin(), unreadData.begin() + n); return (result); } public: void run() { read(10); read(4839); // etc } void feed(const std::vector<char> &more) { LockThread locker(lock); // Once we acquire the lock we can safely modify the buffer. std::copy(more.begin(),more.end(),std::back_inserter(unreadData)); // Only signal the thread if you have the lock // Otherwise race conditions happen. pthread_cond_signal(&cond); // destructor releases the lock and thus allows read thread to continue. } }; int main() { Consumer c; }
206,998
207,004
What does "const class" mean?
After some find and replace refactoring I ended up with this gem: const class A { }; What does "const class" mean? It seems to compile ok.
What does "const class" mean? It seems to compile ok. Not for me it doesn't. I think your compiler's just being polite and ignoring it. Edit: Yep, VC++ silently ignores the const, GCC complains.
207,069
281,253
How to link using GCC without -l nor hardcoding path for a library that does not follow the libNAME.so naming convention?
I have a shared library that I wish to link an executable against using GCC. The shared library has a nonstandard name not of the form libNAME.so, so I can not use the usual -l option. (It happens to also be a Python extension, and so has no 'lib' prefix.) I am able to pass the path to the library file directly to the link command line, but this causes the library path to be hardcoded into the executable. For example: g++ -o build/bin/myapp build/bin/_mylib.so Is there a way to link to this library without causing the path to be hardcoded into the executable?
There is the ":" prefix that allows you to give different names to your libraries. If you use g++ -o build/bin/myapp -l:_mylib.so other_source_files should search your path for the _mylib.so.
207,312
207,325
C++ Recursive Traversals with Function Pointers
template <class T> void BT<T>::inOrder(void (*inOrderPtr)(T&)) { inOrderPtr(inOrder(this->root)); } template <class T> void BT<T>::inOrder(Node<T>* root) const { if (root->left != NULL) inOrder(root->left); //something here if (root->right != NULL) inOrder(root->right); } Ok I am trying to create this Traversal via recursion. I actually posted this problem before but I was going about it wrong due to me having to use a function pointer. I don't understand what I'm suppose to do. I've got the public wrapper that calls on the private one... but the public one is the one with the function being passed in so what do I even do with it?? I feel retarded so even if someone were to give me a small hint I'm sure I'd get it. I just don't know where to go from here. an example of a code that calls on it is this: first.inOrder(print_val)
This is how to do it properly, but Node::GetItem needs implementing in order for this to be 100% correct: template <class T> T& Node<T>::GetItem() const { // TODO - implement getting a T& from a Node<T> return m_item; // possible implementation depending on Node's definition } template <class T> void BT<T>::inOrder(void (*inOrderPtr)(T&)) { inOrder(this->root, inOrderPtr); } template <class T> void BT<T>::inOrder(Node<T>* root, void (*inOrderPtr)(T&)) const { if (root->left != NULL) inOrder(root->left, inOrderPtr); inOrderPtr(root->GetItem()); if (root->right != NULL) inOrder(root->right, inOrderPtr); }
207,496
207,503
C++ Binary Search Tree Insert via Recursion
So my code is below. I'm not getting any errors and it places everything in the node just fine. But based on my debug statements Everytime anything is inserted it's finding the root. I'm not sure if that is right. But according to output file for the assignment, my answers are different when it comes to the height of the tree, the traversals, and I just flat am still having troubles with my leaf count function. Another story though. Based on the debug statements it looks like everything is going right where they should. But I figure I might need fresh eyes. I don't see how my traversals could change at all since it is really only a matter of where I'm proccessing the node that should effect the Inorder, preorder, and postorder. template <class T> void BT<T>::insert(const T& item) { Node<T>* newNode; newNode = new Node<T>(item); insert(root, newNode); } template <class T> void BT<T>::insert(struct Node<T> *&root, struct Node<T> *newNode) { if (root == NULL) { cout << "Root Found" << newNode->data << endl; root = newNode; } else { if (newNode->data < root->data) { insert(root->left, newNode); cout << "Inserting Left" << newNode-> data << endl; } else { insert(root->right, newNode); cout << "Inserting Right" << newNode->data << endl; } } } My height function is as follows just in case my insert is actually fine. template <class T> int BT<T>::height() const { return height(root); } template <class T> int BT<T>::height(Node<T>* root) const { if (root == NULL) return 0; else { if (height(root->right) > height(root->left)) return 1 + height(root-> right); return 1 + height(root->left); } }
You need to change the wording of your debug statements. Really it should read (not "Root Found"):

cout << "Leaf Node Found" << newNode->data << endl;

It is only the root the first time insert is called; after that, any call with node->left or node->right makes it an intermediate node.

To write height() I would do this:

template <class T>
int BT<T>::height(Node<T>* root) const
{
    if (root == NULL) { return 0; }
    return 1 + max(height(root->left), height(root->right));  // std::max, from <algorithm>
}
207,662
207,711
Writing utf16 to file in binary mode
I'm trying to write a wstring to a file with ofstream in binary mode, but I think I'm doing something wrong. This is what I've tried:

ofstream outFile("test.txt", std::ios::out | std::ios::binary);
wstring hello = L"hello";
outFile.write((char *) hello.c_str(), hello.length() * sizeof(wchar_t));
outFile.close();

Opening test.txt in, for example, Firefox with the encoding set to UTF-16, it shows as:

h�e�l�l�o�

Could anyone tell me why this happens?

EDIT: Opening the file in a hex editor I get:

FF FE 68 00 00 00 65 00 00 00 6C 00 00 00 6C 00 00 00 6F 00 00 00

It looks like I get two extra bytes in between every character for some reason?
I suspect that sizeof(wchar_t) is 4 in your environment - i.e. it's writing out UTF-32/UCS-4 instead of UTF-16. That's certainly what the hex dump looks like. That's easy enough to test (just print out sizeof(wchar_t)) but I'm pretty sure it's what's going on. To go from a UTF-32 wstring to UTF-16 you'll need to apply a proper encoding, as surrogate pairs come into play.
207,730
207,743
Templated superclass linking problem
I'm trying to create a C++ class with a templated superclass. The idea being, I can easily create lots of similar subclasses from a number of superclasses which have similar characteristics. I have distilled the problematic code as follows:

template_test.h:

template<class BaseClass>
class Templated : public BaseClass
{
public:
    Templated(int a);
    virtual int Foo();
};

class Base
{
protected:
    Base(int a);
public:
    virtual int Foo() = 0;
protected:
    int b;
};

template_test.cpp:

#include "template_test.h"

Base::Base(int a)
    : b(a+1)
{
}

template<class BaseClass>
Templated<BaseClass>::Templated(int a)
    : BaseClass(a)
{
}

template<class BaseClass>
int Templated<BaseClass>::Foo()
{
    return this->b;
}

main.cpp:

#include "template_test.h"

int main()
{
    Templated<Base> test(1);
    return test.Foo();
}

When I build the code, I get linker errors saying that the symbols Templated<Base>::Templated(int) and Templated<Base>::Foo() cannot be found. A quick Google suggests that adding the following to template_test.cpp will solve the problem:

template<> Templated<Base>::Templated(int a);
template<> int Templated<Base>::Foo();

But this does not solve the problem. Adding the lines to main.cpp does not work either. (Though, interestingly, adding them to both gives 'multiply defined symbol' errors from the linker, so they must be doing something...)

However, putting all the code in one source file does solve the problem. While this would be OK for the noddy example above, the real application I'm looking at would become unmanageable very fast if I was forced to put the whole lot in one cpp file.

Does anyone know if what I'm doing is even possible? (How) can I solve my linker errors? I would assume that I could make all the methods in class Templated inline and this would work, but this doesn't seem ideal either.
With templated classes, the definitions must be available for each translation unit that uses it. The definitions can go in a separate file, usually with .inl or .tcc extension; the header file #includes that file at the bottom. Thus, even though it's in a separate file, it's still #included for each translation unit; it cannot be standalone. So, for your example, rename template_test.cpp to template_test.inl (or template_test.tcc, or whatever), then have #include "template_test.inl" (or whatever) at the bottom of template_test.h, just before the #endif of the include guard. Hope this helps!
207,768
207,777
How to fill a vector with non-trivial initial values?
I know how to fill an std::vector with non-trivial initial values, e.g. sequence numbers:

void IndexArray(unsigned int length, std::vector<unsigned int>& v)
{
    v.resize(length);
    for (unsigned int i = 0; i < length; ++i)
    {
        v[i] = i;
    }
}

But this is a for-loop. Is there an elegant way to do this in fewer lines of code using STL functionality (and not using Boost)?
You can use the generate algorithm for a more general way of filling up containers:

#include <iostream>
#include <algorithm>
#include <vector>

using namespace std;  // needed for vector, generate, cout

struct c_unique
{
    int current;
    c_unique() { current = 0; }
    int operator()() { return ++current; }
} UniqueNumber;

int main()
{
    vector<int> myvector(8);
    generate(myvector.begin(), myvector.end(), UniqueNumber);

    cout << "\nmyvector contains:";
    for (vector<int>::iterator it = myvector.begin(); it != myvector.end(); ++it)
        cout << " " << *it;
    cout << endl;

    return 0;
}

This was shamelessly lifted and edited from cplusplusreference.